
Our Human Affection For Robots

Asimo, Honda's recently discontinued humanoid robot. Photo by Luca Mascaro.

On Aug. 5, Mars’ Curiosity rover turned 6 years old.

I’m not an astrophysicist, but I remember this date because in 2013, the robot vibrated to the tune of “Happy Birthday,” triggering an outpouring of pathos as people pictured the world’s (or rather, Mars’) loneliest birthday party. And while Curiosity doesn’t actually “sing” every year, the internet is there to wish a complex scientific instrument a happy birthday anyhow.

We humans are an empathetic bunch. A recent episode of the podcast “Radiolab” revisited a small experiment the show had conducted a few years prior. In it, children were presented with a Barbie doll (human-looking but inanimate), a Furby (the robotic Christmas toy craze of years past) and a gerbil named Gerbie. The kids, mostly around age 6 or 7, held each upside down for as long as they felt comfortable doing so.

The participants could hold a Barbie upside down more or less indefinitely, at least until their arms got tired. And none of the kids wanted to hold Gerbie upside down against her will for more than a few seconds. But interestingly, even though the participants were old enough to understand that Furby and Gerbie are fundamentally different, the toy’s cries and protests of “Me scared” when it was inverted eventually made the kids uneasy in a way the Barbie did not. The average time for an upside-down Furby was about a minute – more than Gerbie, but much less than Barbie.

It’s not only kids who are affected by robots simulating discomfort. Social psychologists scanned the brains of several adult volunteers while they watched experimenters treat one of three targets – a woman in a green T-shirt, a green box and a green robot shaped like a dinosaur – with either affection or abuse. Researchers said they were surprised by how similar the viewers’ responses were for the human and the robot, though the concern for the human target of simulated violence was greater. While both this experiment and Radiolab’s trial involved very small sample sizes, they still suggest that, under the right circumstances, we can view robots as more “being” than “object.”

Humans don’t get attached to robots by default, of course. Starship Technologies, a company that has developed small robots to deliver food in Silicon Valley, recently reported that while most people seem to like the robots, a minority sometimes deliver a bit of abuse in the form of a kick. Company cofounder Ahti Heinla assured Business Insider that “if people have such anger management techniques that’s fine by us, our robot just drives on.” A hitchhiking robot known as hitchBOT made it through Germany and Canada, but met a messy end in Philadelphia, where vandals dismembered it.

We can also direct misplaced animosity at machines much less complex than robots. Just ask anyone who has ever felt the silent malice of a vending machine that takes their dollar and refuses to give them a snack in return. But “robot rage” doesn’t seem to be as widespread as, say, people fondly naming their Roombas.

I recently started to wonder how the tendency to anthropomorphize and bond with robots might look in an increasingly automated workplace. My colleague Ben Sullivan wrote in this space a few years ago about how artificial intelligence and robotics are likely to continue to expand into niches formerly filled by humans. A lot of the discussions I’ve encountered about robots versus humans at work frame the question as a binary: the workplace may use both for a while, but eventually a given industry will be all human or all robot. In some instances, this is likely true. But I wonder how things will look in workplaces that eventually become a blend of the two.

Consider health care. The demand for home care workers is expected to grow in the next few decades, but these are already difficult positions to fill. We aren’t yet at a point where robots can supplement, or even replace, these workers, but we are progressing in that direction. It is easy to imagine what such a blended workplace might look like.

In Zoe Kazan’s play “After the Blast,” which I saw staged in New York City last year, a woman nurtures, and ultimately deeply bonds with, a robot designed to become a companion to an aging adult once it completes a socialization process. The 2012 film “Robot & Frank,” starring Frank Langella, presented a playful take on a robot’s potential role in elder care. But such robots don’t only exist in fiction; in Japan, bear-faced robots already help long-term care facility residents get out of bed, reach objects and keep their memories sharp through games and engagement. Similar elder care robots are available or in development in a variety of countries, including Germany and the United States.

Not every robot we bond with was designed to elicit that bond, though. As Florence Tan, the deputy chief technologist at NASA’s Science Mission Directorate, pointed out when a journalist asked why Curiosity only hummed itself a birthday tune once, “there is no scientific gain from the rover playing music or singing ‘Happy Birthday’ on Mars.” In other words, even if we emotionally bond with Curiosity, there’s no utilitarian reason for it. It isn’t a function of the machine’s design.

Some people even worry about how much we care about certain robots. Sherry Turkle, a psychologist at the Massachusetts Institute of Technology, observed that research supported the conclusion that “we are vulnerable to seeing robots as appropriate for things they are not appropriate for.” Turkle has argued that human-machine relationships can be unhealthy, and that developers should work to make robots less human, rather than more. But for good or ill, bonding is a feature, not an unintended side effect, of robots like those working in long-term care facilities in Japan.

I think the workplace robots most likely to be deliberately designed to trigger human empathy will be those doing jobs where the humans involved feel vulnerable. Consider the inflatable medical robots under development at Carnegie Mellon. The project inspired the design of Baymax, an inflatable medical robot that appeared in the 2014 film “Big Hero 6” and arguably stole the show. Like the big-screen version, the real robot in development is meant to be not only safe, sanitary and smart, but also social. As the project website says: “Baymax cares. That’s what he was designed to do.”

The question of whether robots can “care” about us is complicated, and one I’m certainly not qualified to answer. But I think it is abundantly clear that we can care, deeply, about robots. People mourned NASA’s Cassini probe, which ended its 13-year mission at Saturn last year by disintegrating in the planet’s atmosphere; interviews with military personnel have found that soldiers sometimes feel sad when the robots they use to disarm bombs are destroyed in action, in ways that go beyond frustration at losing an expensive piece of equipment.

Learning to get along with intelligent machines does not answer the question of what to do for the humans left behind when those machines can capably take over their work. But the potential for human-robot bonds fascinates me anyway, because it raises the issue of what it means to be a person, and how human empathy paired with technology may one day accomplish more than pure robot intelligence alone.
