Quality Zooming

In the time since the COVID-19 pandemic hit us, I have been working on a new project with my GEMH lab colleagues, one that we had to start from the very ground up. This project, which I’m excited to tell you more about in a later blog post, offered me the opportunity to engage in a kind of research I’d had no experience with whatsoever.

We’re building an app to help youth reflect on their digital tech use, and in order to get a better idea of what to build for our target audience, and how, I set out to conduct focus groups. Focus groups are like tiny studies, in which you try to answer a number of questions with a small group of people from your target population. I had never done this sort of group study before, and although I have interviewed youth one on one (which was a great experience), I was worried about how this would pan out, especially since we’d have to do these focus groups online.

In the space of a couple of months, our wonderful GEMH intern Denise and I have conducted 8 focus groups, with 4-7 participants in each. These are the things I’ve learned so far.

  • Being forced to do it online has pushed us to broaden our reach, in ways that we otherwise might have neglected. We have recruited young people from all over the world, and I’m incredibly happy that youth from so many places have participated; from India to Malta, from Spain to the UK, from Albania to Poland. With these diverse youth we’ve had awesome conversations.
  • Yes, conversations! I continue to be pleasantly surprised at how willing these young people are to talk about their lives and their digital tech use. There I was, worried about whether the online group dynamic would allow people to open up. It turns out that each focus group has yielded wonderful insights into young people’s relationship with their smartphone, not just for Denise and me, but also for the youth themselves!
  • Opening up is so much easier for people when you yourself, as a researcher, open up as well. This is why I increasingly feel that questionnaires are a one-way street we don’t want to go down if we want to find out more about youth’s tech use. Sharing my own experiences, although often different from theirs, has helped our participants feel free and comfortable enough to share their own stories. I feel this is especially important since I’m sure that online focus groups feel different from real-life ones. It’s hard enough to get to know each other in a short amount of time as it is, and not being in the same physical space makes that even harder. So, my ultimate tip is to really engage: don’t only expect your participants to share with you, but also share with them yourself!

Looking forward to continuing this qualitative journey!

Person-centric AI

During my time at the GEMH Lab I’ve become more and more interested in the relationship that people have with technology. Not only in terms of ‘how do we use it’ and ‘what can we do with it’, but also in terms of how technology has been inspired by us.

In particular, artificial intelligence is meant to mimic us in some ways (and to do better than us in others). Recently, however, I’ve come to believe that we’re not letting ourselves inspire artificial intelligence and digital social technology to a large enough extent, and I’ll explain what I mean in a second.

When you Google ‘person-centric’ (or ‘human-centric’/’people-centric’) AI, you get pages about making AI understandable for humans. Although this is an important task (especially in the light of worries that we might be losing our grip on artificial intelligence and what it does), I was surprised that this is apparently all the Internet means by ‘person-centric AI’.

What about the psychology that goes into these tools, though? How do we strive to make sure that what we build is not just convenient and efficient, but also in line with our understanding of human psychology? Is the field of AI sufficiently sensitive to theories about human psychology? I’m getting the sense that it isn’t (but feel free to prove me wrong, see the end of my post ;).

Why does it matter? Because in order for human beings to benefit the most from technology, it needs to be sensitive to their needs. Preservation of the sense of agency, for instance, seems to be rarely taken into account; instead, many tools follow an ‘aren’t you happy someone/something is doing it for you?’ approach. However, a loss of the sense of agency can not only lead to reluctance to use a certain technological tool, it can also lead to decreased life satisfaction. In a world where we are increasingly surrounded by (AI) technologies and tools, such considerations are as vital as ever.

I’m thinking… that I’d really like to work towards building AI tools that make sense from a human psychological perspective. We need to consider the human mind behind every AI tool, and this is something that – at least, so it seems – is still underrepresented in the field of artificial intelligence. However, if you read this and are working on such an approach to AI, hit me up — I’d love to hear about your work!

Outliers

About a year ago, I started listening to a podcast that was recommended to me by my office mate, Jan. We both share an interest in machine learning, and he recommended that I listen to (among others) a show called Linear Digressions. I pretty much instantly fell in love with this podcast, as it discusses practical, easy-to-relate-to applications of and stories around machine learning.

One episode in particular, though, has stayed with me ever since I listened to it. Podcast hosts Katie and Ben discussed outliers in data: values that fall outside the general range of values that you expect or find in your dataset.
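To make that definition a bit more concrete: one common rule of thumb flags any value that lies more than 1.5 times the interquartile range beyond the quartiles. Here’s a minimal sketch of that idea in Python (the numbers are made up purely for illustration):

```python
import numpy as np

# A small, invented dataset with one suspicious value.
values = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 9.7, 3.3])

# The 1.5 * IQR rule of thumb: anything beyond these fences
# falls outside the "general range" of the data.
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = values[(values < lower) | (values > upper)]
print(outliers)  # [9.7]
```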

As a researcher, I am confronted with outliers in my own data now and again, and in my education I was taught (like many others) that outliers are generally something to get rid of as soon as possible, since they can violate the assumptions of some of the most-used analyses in the social sciences, rendering those analyses inappropriate for your data.
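To see why a single stray value can throw off the usual analyses, it helps to remember how sensitive a statistic like the mean is to one extreme point, while a robust one like the median barely moves. A quick sketch, again with invented numbers:

```python
import numpy as np

scores = np.array([4.0, 5.0, 5.5, 4.5, 5.0])
with_outlier = np.append(scores, 50.0)  # one wild value, e.g. a typo

print(np.mean(scores), np.mean(with_outlier))      # 4.8 vs ~12.3
print(np.median(scores), np.median(with_outlier))  # 5.0 vs 5.0
```

That single value drags the mean far away from where most of the data live, which is exactly the kind of distortion that makes people reach for the delete key.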

In Linear Digressions, though, Katie told a fascinating story about why outliers don’t always deserve the flak they catch. It’s true: these strange values might be caused by typing or measurement errors. Sometimes, however, these oddballs can tell us a great deal about the world out there, the one we’re trying to study. In fact, in Katie’s story, an outlier helped solve a public health crisis in 19th century London! For anyone who is intrigued, I highly recommend the episode. It’s as informative about data as it is about 19th century history.

This opened my eyes to how fascinating outliers in data can really be, and actually made me wish for more outliers in our data! Their potential for important new insights has captivated my imagination ever since.

I hope that such stories can make more people aware of the beauty of outliers. I can’t help but draw a parallel between societies and data: in both cases, outliers deserve much more care and attention than they have been getting.