Interacting with Robots
February 2017

A previous Scan™ article, Social Robots, explores how robot developers are creating a new generation of robots that engage with people in human-dominated environments. In combination with increasing computational power, progress in machine learning, vision processing, and artificial intelligence is enabling rapid advancement in robotics and automated systems. Because of this technological progress, many companies—including Sony Corporation (Tokyo, Japan), Sharp Corporation (Hon Hai Precision Industry Co./Foxconn Technology Group; New Taipei City, Taiwan), Toyota Motor Corporation (Toyota, Japan), and Boston Dynamics (Alphabet; Mountain View, California)—have a renewed interest in creating robots that interact with people in their everyday lives. Potential application areas for social robots include hospitality, caregiving, education, entertainment, and companionship. To excel in these areas, robots will need to interact with humans in complex ways, understanding and reacting appropriately to human behavior and emotions.
Many researchers are now studying relationships between humans and robots in an effort to understand how the two should interact in the future. Researchers commonly use the term affective computing to describe human–machine interactions in which the machine can detect and respond to a human's emotions. Researchers at the Fraunhofer Society for the Advancement of Applied Research's (Munich, Germany) Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE; Wachtberg, Germany) are developing a diagnostic system that will enable machines to evaluate a person's state and determine whether the person is capable of performing a task or needs assistance. FKIE researcher Jessica Schwarz developed a holistic model that provides a comprehensive view of human states and their causes, taking into account external factors such as the task a person must perform, the time of day, and the environment in which the person must function. This model serves as the basis for the diagnostic system, which uses several technologies to collect a broad array of physiological data. The system could enable more effective and efficient human–machine interactions in complex situations such as those that social robots might encounter; a simplified sketch of this kind of state assessment appears below.

Researchers from University College London (London, England) and the University of Bristol (Bristol, England) recently conducted an experiment and found that individuals preferred working with an imperfect emotional robot over working with a perfect robot that lacks emotions. During the experiment, participants prepared a meal with assistance from a robot that would either perform the task perfectly with an unchanging smile on its face or make mistakes and verbally apologize while making a sad facial expression. The majority of participants preferred the emotional, mistake-prone robot.
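FKIE has not published implementation details of its diagnostic system, so the following is only a minimal Python sketch of the general idea behind such holistic state models: fusing physiological readings with contextual factors such as task demands and time of day into a single assessment. Every input name, weight, and threshold here is an invented placeholder, not part of Schwarz's actual model.

```python
from dataclasses import dataclass

@dataclass
class OperatorSnapshot:
    heart_rate_bpm: float      # physiological sensor reading
    blink_rate_per_min: float  # eye-tracker reading (fatigue proxy)
    task_difficulty: float     # 0.0 (trivial) to 1.0 (very demanding)
    hour_of_day: int           # 0-23, circadian context

def needs_assistance(s: OperatorSnapshot) -> bool:
    """Combine physiological and contextual cues into one workload
    score and flag the operator if it crosses a threshold.
    All weights and thresholds are illustrative placeholders."""
    score = 0.0
    if s.heart_rate_bpm > 100:        # elevated arousal or stress
        score += 0.4
    if s.blink_rate_per_min > 25:     # fatigue indicator
        score += 0.3
    score += 0.4 * s.task_difficulty  # external task demand
    if s.hour_of_day < 6 or s.hour_of_day >= 22:
        score += 0.2                  # circadian low point
    return score >= 0.7

# Example: a demanding task late at night with an elevated heart rate
print(needs_assistance(OperatorSnapshot(108, 18, 0.8, 23)))  # True
```

A production system would replace the hand-tuned rules with a model trained on labeled operator data, but the structure, that is, sensor fusion plus contextual weighting feeding a single decision, is the same.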
A number of companies—including Affectiva (Waltham, Massachusetts) and Eyeris (Palo Alto, California)—offer software that identifies a person's emotions on the basis of the person's facial expressions; such software is already in use in market research for advertisements, trailers, and movies. Some robot manufacturers are incorporating basic emotion-recognition technology into service-robot platforms. For example, the Pepper humanoid robot developed by SoftBank Group Corp. (Tokyo, Japan) subsidiaries SoftBank Mobile Corp. and Aldebaran Robotics (now SoftBank Robotics) uses image-recognition technology to detect basic emotions, but the robot can provide only limited emotional responses.
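Vendors such as Affectiva and Eyeris do not publish their pipelines, but facial emotion-recognition systems commonly follow a two-stage pattern: locate faces, then classify each face's expression. The Python sketch below illustrates that pattern using OpenCV's bundled Haar-cascade face detector; the classify_expression stub stands in for a trained expression model, and the emotion categories listed are the commonly used basic-emotion labels. This is an illustrative sketch, not any vendor's actual implementation.

```python
import cv2  # OpenCV; pip install opencv-python

# Haar-cascade face detector that ships with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def classify_expression(face_img):
    """Placeholder for a trained expression classifier. Commercial
    systems use models trained on large labeled face datasets; this
    stub returns 'neutral' so the sketch stays runnable."""
    return EMOTIONS[-1]

def detect_emotions(frame):
    """Locate faces in an image and label each with an emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5)
    return [((x, y, w, h), classify_expression(gray[y:y+h, x:x+w]))
            for (x, y, w, h) in faces]

# Example usage with a single image file:
# frame = cv2.imread("group_photo.jpg")
# print(detect_emotions(frame))
```

A robot such as Pepper would run a loop of this kind over its camera feed and map the resulting labels to scripted responses, which is one reason current platforms can offer only a limited emotional repertoire.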
Issues of trust also create challenges for human–robot interactions. For example, a Georgia Institute of Technology (Atlanta, Georgia) study suggests that humans might be too trusting of robots. The researchers designed an experiment to determine whether people would trust and follow an autonomous guide robot that is supposed to lead people out of a building during a fire or other emergency. During the experiment, participants trusted and followed the guide robot during a simulated emergency even though the robot had performed unreliably beforehand, making strange turns and leading participants into rooms without an exit. The researchers concluded that the subjects trusted the malfunctioning robot because they identified it as an authority figure and assumed it was operating correctly—even though the researchers had informed some participants that the robot had broken down before the simulated emergency. Overtrust in technology is also creating challenges for developers of self-driving systems. For instance, Tesla Motors (Palo Alto, California) equips its vehicles with the optional semiautonomous Autopilot system and specifically states that drivers must continue to monitor the road while Autopilot is active; nevertheless, some users have become overconfident in the system's capabilities and pay insufficient attention to the road, which has resulted in accidents and at least two deaths. Such problems will likely occur with increasing frequency as more companies equip vehicles with self-driving technologies.
When designing robots, developers typically focus on how robots can best serve humans; they often neglect the negative ways in which humans could interact with robots. Previous Scan™ articles, Rights for Robots and Humanoid Psychologies, explore how advances in artificial intelligence and robotics will challenge cultural and behavioral norms. Today, humans treat robots as objects, but as robots become more intelligent and look and act more like humans, treating robots and artificial-intelligence agents as objects may become unacceptable. Many digital assistants—including Alexa (Amazon.com; Seattle, Washington), Cortana (Microsoft; Redmond, Washington), and Siri (Apple; Cupertino, California)—use a female voice by default, and some consumers use inappropriate, crude, and sexist language when interacting with them. Some companies have taken steps to ensure that their digital assistants respond to such language in ways that discourage and shut down inappropriate conversations. Similarly, researchers from the Advanced Telecommunications Research Institute International (Kyoto, Japan) and Osaka University (Osaka, Japan) studied public reactions to a robot that helped elderly people run errands at a shopping mall in Osaka, Japan, and found that many children verbally abused, punched, kicked, or obstructed the robot.
During the past decade, many activists and parents have expressed concerns about violent and sexually explicit video games, claiming that the games can corrupt people and encourage bad behavior. Video games often receive more criticism than movies and television shows do because video-game players are not passive observers but active participants who choose which actions to perform. Conceivably, violent and sexual interactions with humanoid robots (especially robots that simulate emotions or appear to feel pain) will receive even harsher criticism because such interactions are both physical and actively chosen.
As robots become an increasingly prevalent part of everyday life, governments and companies must prepare for significant social and cultural shifts.