via Stevens Institute of Technology
Stevens Institute of Technology AI project models first impressions based on facial features
When two people meet, they instantly size each other up, making snap judgments about everything from the other person’s age to their intelligence or trustworthiness based solely on the way they look. Those first impressions, though often inaccurate, can be extremely powerful, shaping our relationships and impacting everything from hiring decisions to criminal sentencing.
Researchers at Stevens Institute of Technology, in collaboration with Princeton University and the University of Chicago, have now taught an AI algorithm to model these first impressions and accurately predict how people will be perceived based on a photograph of their face. The work appears today in the April 21 issue of the Proceedings of the National Academy of Sciences.
“There’s a wide body of research that focuses on modeling the physical appearance of people’s faces,” said Jordan W. Suchow, a cognitive scientist and AI expert at the School of Business at Stevens. “We’re bringing that together with human judgments and using machine learning to study people’s biased first impressions of one another.”
Suchow and team, including Joshua Peterson and Thomas Griffiths at Princeton, and Stefan Uddenberg and Alex Todorov at Chicago Booth, asked thousands of people to give their first impressions of over 1,000 computer-generated photos of faces, rating each on criteria such as how intelligent, electable, religious, trustworthy, or outgoing the photograph’s subject appeared to be. The responses were then used to train a neural network to make similar snap judgments about people based solely on photographs of their faces.
“Given a photo of your face, we can use this algorithm to predict what people’s first impressions of you would be, and which stereotypes they would project onto you when they see your face,” Suchow explained.
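The pipeline described above — collect many human ratings of face images, then fit a model that maps a face to its expected ratings — can be sketched with a toy regression. Everything in this sketch is illustrative: random feature vectors stand in for face photos, a closed-form ridge regression stands in for the team's neural network, and the trait count and dimensions are arbitrary, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the study, inputs were computer-generated face
# photos and targets were averaged human first-impression ratings on traits
# such as "trustworthy" or "intelligent". Here we use random vectors.
n_faces, n_features, n_traits = 1000, 64, 5
face_features = rng.normal(size=(n_faces, n_features))      # stand-in for face embeddings
true_weights = rng.normal(size=(n_features, n_traits))
trait_ratings = face_features @ true_weights + 0.1 * rng.normal(size=(n_faces, n_traits))

# Ridge regression (closed form): fit weights mapping features -> trait ratings
lam = 1.0
gram = face_features.T @ face_features + lam * np.eye(n_features)
weights = np.linalg.solve(gram, face_features.T @ trait_ratings)

# Predict first-impression ratings for a new, unseen face
new_face = rng.normal(size=(1, n_features))
predicted = new_face @ weights
print(predicted.shape)  # one predicted score per trait
```

The key design point the article highlights is that the targets are averaged human judgments, so the model learns viewers' stereotypes about a face, not anything about the person behind it.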
Many of the algorithm’s findings align with common intuitions or cultural assumptions: people who smile tend to be seen as more trustworthy, for instance, while people with glasses tend to be seen as more intelligent. In other cases, it’s a little harder to understand exactly why the algorithm attributes a particular trait to a person.
“The algorithm doesn’t provide targeted feedback or explain why a given image evokes a particular judgment,” Suchow said. “But even so it can help us to understand how we’re seen — we could rank a series of photos according to which one makes you look most trustworthy, for instance, allowing you to make choices about how you present yourself.”
Though originally developed to help psychological researchers generate face images for use in experiments on perception and social cognition, the new algorithm could find real-world uses. People carefully curate their public persona, for instance, sharing only the photos they think make them look most intelligent or confident or attractive, and it’s easy to see how the algorithm could be used to support that process, said Suchow. Because there’s already a social norm around presenting yourself in a positive light, that sidesteps some of the ethical issues surrounding the technology, he added.
More troublingly, the algorithm can also be used to manipulate photos to make their subject appear a particular way — perhaps making a political candidate appear more trustworthy, or making their opponent seem unintelligent or suspicious. While AI tools are already being used to create “deepfake” videos showing events that never actually happened, the new algorithm could subtly alter real images in order to manipulate the viewer’s opinion about their subjects.
“With the technology, it is possible to take a photo and create a modified version designed to give off a certain impression,” Suchow said. “For obvious reasons, we need to be careful about how this technology is used.”
To safeguard their technology, the research team has secured a patent and is now creating a startup to license the algorithm for pre-approved ethical purposes. “We’re taking all the steps we can to ensure this won’t be used to do harm,” Suchow said.
While the current algorithm focuses on average responses to a given face across a large group of viewers, Suchow next hopes to develop an algorithm capable of predicting how a single individual will respond to another person’s face. That could give far richer insights into the way that snap judgments shape our social interactions, and potentially help people to recognize and look beyond their first impressions when making important decisions.
“It’s important to remember that the judgments we’re modeling don’t reveal anything about a person’s actual personality or competencies,” Suchow explained. “What we’re doing here is studying people’s stereotypes, and that’s something we should all strive to understand better.”
Original Article: This Algorithm Has Opinions About Your Face