
An actor robot moves around a playpen trying to catch the visible green food, while an observer machine learns to predict the actor robot’s behavior purely through visual observations. Although the observer can always see the green food, the actor, from its own perspective, sometimes cannot because of occlusions.
Like a longtime couple who can predict each other’s every move, a Columbia Engineering robot has learned to predict its partner robot’s future actions and goals based on just a few initial video frames.
When we primates are cooped up together for a long time, we quickly learn to predict the near-term actions of our roommates, co-workers, and family members. Our ability to anticipate the actions of others makes it easier for us to successfully live and work together. In contrast, even the most intelligent and advanced robots have remained notoriously inept at this sort of social communication. This may be about to change.
The study, conducted at Columbia Engineering’s Creative Machines Lab led by Mechanical Engineering Professor Hod Lipson, is part of a broader effort to endow robots with the ability to understand and anticipate the goals of other robots, purely from visual observations.
The researchers first built a robot and placed it in a playpen roughly 3×2 feet in size. They programmed the robot to seek and move towards any green circle it could see. But there was a catch: Sometimes the robot could see a green circle in its camera and move directly towards it. But other times, the green circle would be occluded by a tall red cardboard box, in which case the robot would move towards a different green circle, or not move at all.
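For readers who want a concrete picture of the actor’s behavior, here is a minimal Python sketch of such a greedy “chase the nearest visible green circle” policy. Everything in it—the function names, the 2-D wall-segment occlusion test, and the fixed speed—is an illustrative assumption, not the controller used in the study.

```python
import numpy as np

def _ccw(a, b, c):
    # Counter-clockwise orientation test for three 2-D points.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def visible_greens(robot, greens, occluders):
    """Green circles whose line of sight from the robot is not cut by
    any occluder (each occluder modeled as a 2-D wall segment, like
    the front face of the red cardboard box)."""
    return [g for g in greens
            if not any(segments_intersect(robot, g, a, b)
                       for a, b in occluders)]

def step(robot, greens, occluders, speed=0.05):
    """One control step: drive toward the nearest visible green circle,
    or stay put if every green circle is hidden from the actor's camera."""
    targets = visible_greens(robot, greens, occluders)
    if not targets:
        return robot  # nothing visible: do not move
    goal = min(targets, key=lambda g: np.linalg.norm(np.subtract(g, robot)))
    direction = np.subtract(goal, robot)
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return robot  # already on top of the target
    return tuple(np.asarray(robot, dtype=float) + speed * direction / dist)
```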
After observing its partner putter around for two hours, the observer robot began to anticipate its partner’s goal and path, eventually predicting both correctly in 98 out of 100 trials across varying situations—without ever being told explicitly about the partner’s visibility handicap.
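How might a score like “98 out of 100” be computed? The snippet below is a hedged guess: it assumes the predicted and actual paths are rendered as binary images and compared by overlap. The intersection-over-union metric and the 0.5 threshold are assumptions for illustration, not the paper’s published evaluation procedure.

```python
import numpy as np

def path_iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary trajectory masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0

def success_rate(pred_masks, true_masks, thresh=0.5):
    """Fraction of trials where the predicted path overlaps the real
    one closely enough to count as a correct prediction."""
    hits = sum(path_iou(p, t) >= thresh
               for p, t in zip(pred_masks, true_masks))
    return hits / len(true_masks)
```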
“Our initial results are very exciting,” says Boyuan Chen, lead author of the study, which was conducted in collaboration with Carl Vondrick, assistant professor of computer science, and published today in the journal Scientific Reports. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”

When they designed the experiment, the researchers expected that the observer robot would learn to make predictions about the actor robot’s near-term actions. What they didn’t expect, however, was how accurately the observer could foresee its colleague’s future “moves” with only a few seconds of video as a cue.
The researchers acknowledge that the behaviors exhibited by the robot in this study are far simpler than the behaviors and goals of humans. They believe, however, that this may be the beginning of endowing robots with what cognitive scientists call “Theory of Mind” (ToM). At about age three, children begin to understand that others may have different goals, needs and perspectives than they do. This can lead to playful activities such as hide and seek, as well as more sophisticated manipulations like lying. More broadly, ToM is recognized as a key distinguishing hallmark of human and primate cognition, and a factor that is essential for complex and adaptive social interactions such as cooperation, competition, empathy, and deception.
In addition, humans are still better than robots at describing their predictions using verbal language. The researchers had the observing robot make its predictions in the form of images, rather than words, in order to avoid becoming entangled in the thorny challenges of human language. Yet, Lipson speculates, the ability of a robot to predict future actions visually is not unique: “We humans also think visually sometimes. We frequently imagine the future in our mind’s eye, not in words.”
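To make that idea concrete, an observer that “answers in pictures” can be built as an image-to-image network. The PyTorch sketch below maps a short stack of input frames to a single image of the actor’s predicted future path. The layer sizes, the four-frame input, and the loss are illustrative assumptions, not the authors’ published architecture.

```python
import torch
import torch.nn as nn

class VisualPredictor(nn.Module):
    """Hedged sketch: a small convolutional encoder-decoder that takes a
    few RGB frames stacked on the channel axis and outputs a per-pixel
    probability map of where the actor's path will go."""

    def __init__(self, in_frames: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_frames * 3, 32, 4, stride=2, padding=1),  # downsample 2x
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),             # downsample 4x total
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # upsample back 2x
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),     # back to input size
            nn.Sigmoid(),  # probability that the path passes through each pixel
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: (batch, in_frames*3, H, W) -> (batch, 1, H, W) path map."""
        return self.decoder(self.encoder(frames))

# Training would minimize pixel-wise binary cross-entropy between the
# predicted map and the actor's actual future trajectory rendered as an image:
#   loss = nn.functional.binary_cross_entropy(model(frames), true_path)
```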
Lipson acknowledges that there are many ethical questions. The technology will make robots more resilient and useful, but when robots can anticipate how humans think, they may also learn to manipulate those thoughts.
“We recognize that robots aren’t going to remain passive instruction-following machines for long,” Lipson says. “Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”
Short high-level video description of the Columbia Engineering “Robot Theory of Mind” project (audio narration included).