A SoftBank Robotics Pepper robot was used in the two robot conditions.
New research has shown that robots can encourage humans to take greater risks in a simulated gambling scenario than they would if nothing were influencing their behaviour. Improving our understanding of whether robots can affect risk-taking has clear ethical, practical and policy implications, which this study set out to explore.
Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank”. The balloons can explode at random, in which case the player loses any money they have accumulated for that balloon; the player can instead “cash in” before this happens and move on to the next balloon.
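The task’s payoff structure can be sketched as a simple simulation. This is an illustrative model only: the explosion threshold here is drawn uniformly per balloon, and the number of pumps is fixed in advance, whereas real BART participants decide pump by pump. The function name and parameters are assumptions, not details from the study.

```python
import random

def play_balloon(pumps, max_pumps=128, seed=None):
    """Simulate one BART balloon.

    Each pump adds 1 penny to a temporary bank. The balloon explodes
    when the bank reaches a randomly drawn threshold, wiping the
    temporary bank; stopping before that point keeps the pennies.
    """
    rng = random.Random(seed)
    explosion_point = rng.randint(1, max_pumps)  # hidden burst threshold
    temporary_bank = 0
    for _ in range(pumps):
        temporary_bank += 1          # 1 penny per pump
        if temporary_bank >= explosion_point:
            return 0                 # balloon burst: winnings lost
    return temporary_bank            # cashed in before bursting

# A cautious player pumps few times and rarely loses; a risk-taker
# pumps many times, losing often but winning big when the balloon holds.
cautious = sum(play_balloon(5, seed=s) for s in range(100))
risky = sum(play_balloon(64, seed=s) for s in range(100))
```

Under this model the risk–reward trade-off is visible directly: five pumps burst only when the threshold falls in the bottom 5/128 of its range, while 64 pumps burst half the time but pay out nearly thirteen times as much when they succeed.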
One-third of the participants took the test in a room on their own (the control group), one-third took the test alongside a robot that provided only the instructions and was otherwise silent, and the final third, the experimental group, took the test with a robot that provided the instructions and also spoke encouraging statements such as “Why did you stop pumping?”
The results showed that the group encouraged by the robot took more risks, exploding their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in behaviour between the students accompanied by the silent robot and those with no robot.
Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
Dr Hanoch concluded, “With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots, and AI, in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”