Personalizing Interactions

2024 ACM Athena Lecturer Award recipient and University of Southern California professor Maja Matarić is not afraid to get personal. In her quest to design socially assistive robots—robots that provide social, not physical, support in realms like rehabilitation, education, and therapy—she realized that personalizing interactions would boost both engagement and outcomes. Artificial Intelligence (AI) has made that easier, though as always, surprises are never far off when human beings are involved. Here, Matarić shares what she’s learned about meeting people where they are.

Let’s talk about your work on socially assistive robots. You’ve said that having kids inspired you to build robots that help people. How did that interest develop into the mission of supporting specific behavioral interventions in health, wellness, and education?

It was a confluence of events in my life. I had two small kids, and I really wanted my work to have impact beyond academia in ways that even children could understand.

I did a lot of reading, and I immersed myself in a bunch of communities, because I was trying to understand how to develop agents that could help people in the ways they needed help. Identifying that niche—that place in the user journey where something is difficult, and where behavioral interventions could support people—is not at all obvious. It remains not obvious, because we engineers tend to think, “Here’s a problem. And this is how it should be solved.” And often, we don’t even recognize the right problem, much less the right solution. The hard part isn’t remembering to take your medicine or figuring out how to do your stroke rehabilitation exercises; the hard part is that doing those things reminds people that they’re not well, that exercises are often stigmatizing, boring, and repetitive, or that there are more nuanced motivations we need to uncover before we can find solutions.

It’s not hard to imagine the possibilities for human-machine interactions now, in the post-ChatGPT era, but you saw the potential far earlier. I’m curious to hear your perspective on what’s changed—and what has not changed—in the twenty-odd years you’ve been working in the field.

One thing that’s changed is that machines now talk to us like humans talk, and we perceive machines as if they were human—we agentify, or ascribe agency, to machines. It’s marvelous to see that the technology is at a level where we don’t have to write dialogue trees because the agents are actually smart. Of course, there’s still plenty of work left to do. There are also really hard questions to answer about how agents should interact with users, how they should adapt and personalize their behavior, and how we can ensure that they are ethical, safe, and privacy-protective. But the underlying technological substrate has accelerated tremendously.

What has not changed?

There are fundamental issues in robotics that remain unsolved, like how robots can effectively manipulate the world physically. From my perspective, though, that’s not the biggest challenge. I don’t need my robots to physically manipulate people. I need them to provide social, emotional, and psychological support, which means that accessibility is the far larger unsolved problem. Our goal is to put physically embodied agents into people’s lives—and they need to be in their lives, which means they need to be affordable, safe, and accessible. None of that exists on the consumer market. There are no such platforms.

I’m a little surprised by that, given the extent to which you’ve been able to demonstrate the efficacy of your robotic interventions, and given that the alternative is often engaging a trained human being. Why isn’t there more funding?

There has been a surge in funding in robotics, at least in startups and industry. Most of the money has gone to robots that manipulate things in the world, because ultimately, people are interested in automating manufacturing, and they’re not seeing the opportunity for socially assistive systems. The National Science Foundation tries, but they have a tiny budget. It’s not the mission of the Department of Defense. I recently received a grant from the National Institutes of Health, which is an honor, but NIH very rarely funds technologies for health interventions.

Still, I want to be optimistic, both because people are starting to understand the societal implications of talking machines, and because, fortunately and finally, the diversity of innovators who are contributing is expanding.

In the meantime, you created an open-source kit to help college and high school students build their own “robot friend.”

My lab started with Blossom, a platform developed in Professor Guy Hoffman’s lab at Cornell, and then we redid the structure to make it 3D-printed and much cheaper. Finally, we designed some exterior patterns that one can sew or crochet to customize the robot’s appearance.

Now, we have a robot platform that costs maybe $230 to build, and then you make a customized skin for it, and it’s really inexpensive and completely open-source, so hopefully anybody can do it.

These robots are very cute. I imagine that’s part of the point.

We and many others have done studies on this issue of embodiment. What happens when you interact with a screen versus when you interact with a physically embodied agent? There’s very clear evidence that physical embodiment is fundamental to improving both engagement and outcomes. That’s not to say that screen agents can’t do useful things. But the question is, how do they compare? It turns out, largely unfavorably.

We’re also working in contexts where things are really hard. This isn’t about video game engagement. It’s about helping children with autism learn new skills or supporting people with anxiety and depression in learning emotion regulation. We did a study in college dorms in which we compared a chatbot that provided LLM-based therapy with a physically embodied robot that provided the same LLM-based therapy. Students engaged with and used both, but only the students who used the robot measurably reduced their psychiatric distress.

What are some of the things that surprise you about the way people interact with robots?

We’re always surprised by people. Early on, we were surprised when people tried to cheat or trick the robot. Now, we’re surprised by how people react to the idea of interacting with a robot. About seven years ago, we were doing a study with elderly people, and one of the participants said, “It’s cute, but why can’t it do as many things as my iPad can?” Some people absolutely love the robot and others are very grumpy, and the question is, what can we learn from that about our own stereotypes and cognitive biases, and about personalizing the interaction?

Personalization is why these interventions work. We need to be able to find out what someone needs right now, as opposed to simply telling them, “Here are your steps, and you need to go do them.” Even with physical health, it turns out that a lot depends on the state you’re in on a given day, on your metabolism, and so on. Why wouldn’t that be the case with your behavior, which relates to your mental and physical health, and also your social context?

It’s very multi-layered, isn’t it? It also alleviates the burden people often feel in therapeutic settings, where their health is tied to individual choices and the broader social context figures in, if at all, only in an indirect, amorphous way.

Exactly. However, I do worry that creating intelligent agents risks making vulnerable people even more isolated, because they’ll be told to just rely on their agent. What these agents should be doing is connecting people socially and serving as this interstitial network. It can’t be a binary choice between human-agent and human-human interaction. It has to be human-human-agent.

