There is some controversy about how human-like robots should be,
which arises especially in two new applications of robotics that are
attracting a lot of attention (especially in Japan):
playmates/baby-sitters for young children and companions/caretakers for
the elderly. (These address two important problems in
contemporary society: daycare for children and eldercare. Do you
think these are good ideas?)
You have already seen how programs as crude (and unintelligent) as
Eliza could encourage some people to confide in them and discuss their
problems. Might people put too much trust in robots that look and
act much more like humans? (To see an example of the state of the art in human-looking robots, look at the “Japanese Android Video” <androidvideo.com>. Note that, unlike Leonardo, this robot [named Repliee Q1Expo] is not interacting with the people; “she” is just playing back prerecorded speech.)
Roboticists speak of an “uncanny valley” in people’s responses to robot
appearance and behavior. In general, people prefer (and better
understand) robots that look more like humans, but robots that are very
similar, but not identical, to people in their appearance and behavior
are very disturbing (“uncanny”). The reason seems to be that we
have several hundred thousand years of experience in sensitively
judging people by their appearance and behavior; this has always been
very important in our social interactions. (For example, people
may look or act “creepy,” or suspicious, or trustworthy, or
approachable, etc.) If a robot looks sufficiently like a human,
these brain systems are engaged, and if it then falls short of
comfortable human behavior, warning signals go off. We
have lower expectations for robots that are less humanoid (e.g.,
Leonardo), because these highly tuned brain systems are not engaged by
them. (Similarly, for example, we are less sensitive to the mental
states of chimps than to those of humans, and we are not bothered when
chimps don’t act like humans.)
A second “uncanny valley” occurs in the way people of different ages
respond to very human-like androids (such as Repliee). Children
younger than 3 or 4 years do not find her disturbing, nor do adults
over about 20. But between these ages is an “uncanny valley” in which
they find the android creepy. The reason seems to be that
beginning about 3 or 4, children are forming a detailed and subtle
cognitive model of how humans normally look and behave (an idea of
“humanness,” we might say), and that the imperfect humanness of Repliee
clashes with their developing expectations. (By the time they are
adults, their idea of humanness is more secure.)
So on the one hand, androids might be disturbing if they are not
completely human-like, but if their behavior seems authentically human,
it might encourage people to treat them as humans (relying on them too
much, or seeking empathy or psychological advice from them, for
example). On the other hand, as we have discussed, in human-robot
interaction, it is important for people to be able to “read” the
internal (“mental”) states of robots, and this is facilitated by
greater human-like appearance and behavior.
So here is an issue to think about and discuss: How human-like should robots be?
This is especially an issue for contemporary robots, which are far
below humans in intelligence, do not have emotions, etc.
See Scientific American Mind (June/July 2006) for a discussion of this android and the two “uncanny valleys.”