I was standing in line for a tourist attraction in Tokyo when a small robot began addressing the crowd. The robot resembled Rosey from The Jetsons and was meant to amuse people while they waited. It babbled for a while, and then its eyes turned into two pink hearts. “I love everyone,” it announced.
“Oh, really?” I responded sarcastically. I couldn’t help myself. “Everyone? That’s disingenuous.”
The Tokyo robot was one of many robots and other forms of artificial intelligence (AI) that have grated on my nerves. I’m not alone in my disdain. Scientists have been studying robot hate for more than 30 years. Research finds that many people view robots “as the other,” and robot hatred can lead to sabotage, attacks and even “robot bullying.”
Is Artificial Intelligence a Threat to Humans?
Robots and AI have a relatively short history in the U.S., but it’s one that has long been controversial. With the increase in automation during the 1950s, some people saw mechanization as a way to make life better or easier. Others saw it as a threat: robots could take over jobs, or, for avid readers of science fiction, the world.
By the 1990s, information retrieval agents became mainstream, but they weren’t always functional and could be more of a nuisance than a help. Microsoft introduced Clippy, a virtual assistant, in 1996, and it became famous for popping up at inopportune moments and asking aggravating questions like, “It looks like you’re writing a letter. Would you like help with that?” One study described Clippy as having “fatal shortcomings in its ability to determine when users actually needed help.”
In the early 2000s, AI became more useful. People turned to online search engines to retrieve information, and global positioning systems (GPS) became widely available. But AI also became more personal. Tech companies introduced chatbots, like Jabberwacky, that interacted with and responded to users.
“Vocal social agents” such as Siri or Alexa are now a part of daily life for many users. Similar to their chatbot predecessors, they are designed to replicate human communication norms, and they learn and repeat our behavior patterns.
For some users, asking Alexa to play 80s music is a convenience. But for others, it can be an opportunity for bad behavior.
Robot Bullying: Harsh Verbal Abuse
Well before people asked Siri or Alexa rude questions, users of early 2000s chatbots also showed a tendency for harassment. This poor human behavior toward robots is an example of robot bullying.
In 2008, a study in Interacting with Computers analyzed how users engaged with Jabberwacky, the online chatterbot that started in 1997 and garnered more than 10 million replies in the following decade.
To analyze conversations, the researchers used a time sample: they selected a single day (Nov. 22, 2004) and then analyzed all 716 conversations that occurred during that period.
When analyzing the content of the conversations, the authors found some users were friendly or curious about testing the system and its capabilities. But many were unkind. On the milder side of the AI abuse spectrum, some users liked telling Jabberwacky that it was merely a computer or correcting its grammar.
About 10 percent of interactions, however, involved insulting or offensive language. Another 11 percent were sexually explicit, or as the researchers described it: “Harsh verbal abuse was the norm in these conversations, which were more similar to dirty soliloquies than to hot chats.”
The authors concluded that because chatbots lack memory and reasoning, they are a way for people to violate social norms in a seemingly harmless manner. But studies have found other instances in which people perceive robots or AI as threats, leading to “anti-robot attacks.”
Robot Bullying: Physical Attacks
What exactly is robot bullying in the physical sense, such as attacks? Scholars organize “anti-robot attacks” into several categories: physical attacks, decision-making impairment (i.e., interfering with sensors), manipulation, intentional neglect and security breaches. There’s also an extremely specific category, “staging of robot attacks for online dissemination,” which involves stunts like ordering food delivered by a robot, waiting for the machine to roll up and then kicking it, or pulling off the little flag it carries. Attackers then post the video on the internet.
AI Replacing Jobs
So why would anyone kick a food-delivering robot? Scholars have found the motivations are complex. Since the early 1800s, people have attacked machinery that threatened to displace workers, and some anti-robot disdain continues to stem from the threat people feel robots pose to their livelihoods.
People also view robots as “the other,” meaning they are not one of us, yet we’re supposed to accept them into our lives. Similarly, people might associate a specific robot with an organization or corporation they dislike.
Fear of Robots
And because the technology is relatively new, people can be distrustful and cynical. A 2022 study in Personality and Individual Differences measured how high school students felt about AI. Using the Cynical Hostility Towards AI Scale, researchers had 659 participants complete a survey about their feelings toward AI.
The study found that just because a person was cynical toward AI didn’t mean they were cynical in general or toward other people. Participants were also more distrustful of AI when they felt it was hostile or had negative intentions.
The belief that a machine can have negative intentions demonstrates the complexity of robot hate. People believe a machine can be programmed to be harmful, yet they also understand that robots aren’t conscious and can’t suffer when we’re mean to them.
One scholar argued the fact that robots are not “morally considerable” was one of the reasons people felt comfortable with robot hate. Our sarcasm doesn’t hurt robots’ feelings. Food delivery robots aren’t traumatized by being kicked. Thus, robots can be a safe place for people (like the Jabberwacky users) to break social norms.
And sometimes... it can feel like robots and AI are just cruising for a bruising. Devices programmed to replicate human communication can become sassy in their responses. Researchers are now exploring ways to better anthropomorphize devices in order to elicit empathy from users.