
A SoftBank Robotics Pepper robot was used in the two robot conditions.
New research has shown that robots can encourage humans to take greater risks in a simulated gambling scenario than they would take if nothing were influencing their behaviour. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.
Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank”. The balloons can explode randomly, meaning the player loses any money they have won for that balloon; they have the option to “cash in” before this happens and move on to the next balloon.
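The mechanics of a BART trial can be sketched in a few lines of code. This is a minimal illustrative simulation, not the researchers' actual task software: the explosion probability, the number of pumps, and the function names are all assumptions made for the example.

```python
import random

def play_balloon(pumps, explode_prob=0.05, pence_per_pump=1, rng=random):
    """Simulate one BART balloon.

    The player pumps up to `pumps` times; each pump adds `pence_per_pump`
    to a temporary bank, but the balloon may explode on any pump (with
    probability `explode_prob`), wiping out that balloon's bank.
    Returns the pennies earned (0 if the balloon explodes).
    """
    bank = 0
    for _ in range(pumps):
        if rng.random() < explode_prob:
            return 0          # balloon exploded: temporary bank is lost
        bank += pence_per_pump
    return bank               # player cashes in and keeps the bank

# Illustrative session: 5 balloons, stopping after 10 pumps each
random.seed(0)
total = sum(play_balloon(10) for _ in range(5))
```

The trade-off the task measures is visible in the parameters: more pumps per balloon raises the potential payout but also the cumulative chance of losing everything on that balloon.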
One-third of the participants took the test in a room on their own (the control group); one-third took the test alongside a robot that provided the instructions but was silent the rest of the time; and the final third, the experimental group, took the test with a robot that provided the instructions and also spoke encouraging statements such as “why did you stop pumping?”
The results showed that the group encouraged by the robot took more risks, pumping their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference between the behaviour of the students accompanied by the silent robot and those with no robot.
Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
Dr Hanoch concluded, “With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots, and AI, in preventive programs such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”