Author: Luis F. Castillon
If robots can take on a wide range of physical configurations that go beyond the limits of organic evolution, why would companies still want to design robots that look like humans? Tesla's "Optimus" and Figure's "Figure 01" are good examples.
Historically, researchers and engineers have used biomimicry in robotics as an innovation and design methodology: imitating how our muscles, joints, bone structures and other body parts work to help robots achieve the same functions we perform. This design approach happens to be useful for developing the type of robot often referred to as a "general-purpose humanoid", considered one of robotics' best business opportunities because it could replace human physical labor and housework. But is biomimicry the reason to develop humanoid robots? The answer is no. There are several reasons to consider anthropomorphism when designing for human-robot interaction:
Using familiarity for likeability, affinity and technology adoption

As social psychologists such as Robert Zajonc have shown, we tend to like what is familiar to us, a tendency known as the familiarity principle or the mere-exposure effect. Robots that resemble human traits could be a way to ease technology adoption through likeability. In the early 2000s, Honda's child-like ASIMO and Sony's dancing QRIO were meant to show that robots posed no threat to humans, and could even be perceived as friendly or cute. Building affinity can be part of a robot's job, especially for social robots such as those used in education, psychological support or healthcare. Would you leave your grandparent to be cared for by a robot that resembles a human nurse, with loving eyes that can take the shape of a heart (such as Diligent Robotics' Moxi), or by a faceless eight-armed robot that resembles an alien octopus? There are also challenges in designing resemblance to create affinity. Consider Masahiro Mori's "uncanny valley", which holds that users react more emotionally and empathically when robots either begin to resemble or fully resemble a healthy human, but that there is a middle-resemblance "valley" that can cause aversion, driven by human cognitive mechanisms such as conflicting perceptual cues. Look at Hanson Robotics' Sophia or even Engineered Arts' Mesmer system for building realistic humanoid robots: affinity is still hard to create. When it comes to cost-effective, high-resemblance robots, we are just not there yet.
Usability through non-verbal communication

HRI designers need to develop a common, natural language between robots and users, and that includes non-verbal communication. Robots can communicate in the same ways we have evolved to recognize within our species: reading someone's eyes, gestures, body posture or proximity as part of the message or as emotional cues. This is best represented in anthropomorphic designs. Instead of verbally stating "I don't recognize the command", an anthropomorphic robot can shrug its shoulders, or frown when it is not being handled properly. Robots can also get users to do what they want through non-verbal interfaces. There are some very interesting readings on this topic, such as Akiko Yasuhara and Takuma Takehara's article on how "Robots with tears can convey enhanced sadness and elicit support intentions".
To perform in an anthropocentric world

Our streets, buildings, tools and the entire artificial ecosystem that we inhabit and manipulate are designed for human dimensions and scale. Even the word "manipulate" comes from the Latin for hand, manus. Therefore, robots should be designed to perform at their best in this same anthropocentric scenario: going through human doors, looking out of windows, grabbing a cup, sewing with a needle, and so on. Moreover, some robots, such as collaborative robots, are meant to work side by side with humans, which also means being able to share spaces, activities and tools effectively and safely. Working with us in our own environment is the main reason most companies give for choosing a humanoid form. Agility Robotics states on its website: "Digit was designed for the realities of existing workspaces. Customers have designed facilities around people and how we walk, move, reach, and work. By leveraging a bipedal, dynamically stable design, Digit can operate in tight areas, reach out to similar heights as people can, walk up and down stairs, ramps, and elevators, and interact with the real world similar to a person (unlike conventional wheeled robots). This translates into ease of deployment and, ultimately, cost savings for our customers." Brett Adcock, CEO of Figure, makes a similar case on the company's website for Figure 01: "There are two schools of thought on how to solve real-world robotics: build an environment specifically for robots, or reverse it and build robots for our human environment. We could have either millions of different types of robots serving unique tasks or one humanoid robot with a general interface, serving millions of tasks. At Figure, we believe general purpose humanoid robots built for a human environment is the desired route to have the largest overall impact. For that reason, our humanoid robots resemble the human body in shape — two legs, two arms, hands, and screen for a face. With one product we can meet the complex human environment with human-like capabilities, and provide endless types of support across a variety of circumstances."
Indicative functions and user expectations

When looking at an anthropomorphic robot, users get a quick, intuitive sense of what the robot can and cannot do by direct analogy to our own body parts: hands tell us how the robot will manipulate an object; a face tells us the robot has a "front", or which direction it is walking. We even build expectations about the robot's intelligence and behavior. As Engineered Arts explains of its humanoid AMECA on its website: "Human-like artificial intelligence needs a human-like artificial body (AI x AB)". Although a human form gives us a sense of the robot's indicative functions, HRI designers need to consider that users can quickly prove the analogy wrong, causing unmet expectations (underestimating or overestimating robot functionality). For example, humans have almost 180-degree vision, while a single-camera robot might cover only 90 degrees, failing our expectations because its human form led us to overestimate its capabilities. Or perhaps the robot is equipped with additional cameras at many angles, even at the back of its head, with 3D lidar or infrared vision that we would never assume from a human form, making us feel insecure or vulnerable. We might assume that a manipulator resembling a hand will have the same wrist rotation as a human hand, while robots can have different rotation restrictions, and even different degrees of freedom, than humans. Look at videos of the next generation of Boston Dynamics' Atlas rotating its joints in ways that recall the scary character from the movie The Exorcist. "Scary" is not a word you want attached to your user experience. HRI designers need to balance and meet expectations to avoid consumer dissatisfaction. When designing a humanoid form, users may also assume the location of component types, which can benefit usability when it comes to repair and maintenance.
Think of it this way: if you had to save humanity from a robot apocalypse with a gun holding just one bullet, would you go for a headshot? You probably would. That decision assumes the main components controlling the entire robot are located in the head, just as our brain is, but that is not necessarily the case (and often isn't). SoftBank Robotics' NAO robot has two eyes, but they are not its cameras: NAO has one camera in its forehead and one in its mouth to achieve the correct angles and operability. As robots become more widely used by non-technical users, HRI design should also help communicate intuitively how to repair, maintain, or simply understand the technology and its components, and placing components at their human-equivalent locations could help usability where there are no operational restrictions. Should cameras be placed as eyes so users intuitively know where they are? Would opening and closing them, like our own eyes, help users know whether they are on or off? Should speakers be placed as a mouth, and microphones in the ears? We can go even further, since body location can also modify meaning: would the message's connotation change if the speakers, through which the robot's words are spoken, were placed in the robot's rear? It would literally illustrate the common phrase "talking out of your a**".
Adequate for human ergonomics, movement and control

Robots that adapt to our body form as part of their function, such as robotic exoskeletons and prosthetics, are anthropomorphic by nature; Ekso Bionics' EksoNR is one example. The same is true of robots controlled by live, natural human movement, just like in the classic robot action movie "Real Steel" (2011), with real applications such as João Ramos' HERMES robot at MIT back in 2015.
As an image of ourselves and our achievements

Humankind has represented (and altered) the human figure throughout the history of art: from the first cave paintings of hunters to Italian sculpture, the human body has always been an iconic image in our creations, even a signature of perfection and a search for how we perceive ourselves. If SpaceX sends robots to terraform Mars, a humanoid robot like Optimus from "sister" company Tesla would probably not be the best fit for that environment, but it sure would be an iconic representation of humankind's conquest of space. The first humans on that planet, facing the obvious emotional shock it might represent, would also be grateful for the familiarity of a humanoid colony of robots, as we pointed out before.

There are also some reasons NOT to create anthropomorphic robot designs:
Having two feet, two arms, a head and an overall vertical posture will not necessarily provide the functionality you are looking for, and can be hard to adapt to a given environment, for example space exploration or aquatic activities. Bipeds are not ideal for balance (a high center of mass) or aerodynamics, and tend to be slow compared to alternatives. Some robots, such as Agility Robotics' Digit, make changes such as a "backward" knee bend (making it look like a humanoid grasshopper), and others simply add wheels, such as 1X's EVE. Apptronik's Apollo is modular: its legs can be detached so the robot can be mounted on any platform, or kept fixed in place. In robot-assisted telesurgery, the most common units are not humanoid but have several arms, one for each surgical tool.
We made a case for general-purpose humanoids replacing human labor and therefore needing our human shape, but if we want robots to do what we actually CAN'T do as humans, then changing size and shape seems like an obvious way to go beyond our abilities. Such is the case of medical nanorobots or large soldering robotic arms in manufacturing, both performing at scales we couldn't possibly reach.
We can falsely assign human traits that a robot does not have: does it have emotions and intelligence the same way humans do? Will it act and react as we would expect of a human? Do robots have an identity and a personality?
Robots are meant to be tools: tools that have an owner who can manipulate them and dispose of them. Giving such a tool a humanoid form can create negative connotations of enslavement.
There are also other topics around anthropomorphic robots that are interesting to analyze from a sociological and cultural perspective, such as body-type inclusion, racial trait selection, gender roles and even robot sexualization.
We can expect a near future in which humanoid robots walk among us, and we can also expect further discussions and controversies about how we perceive and understand our human shape when it is applied to robotics.