
via Stevens Institute of Technology
Stevens Institute of Technology AI project models first impressions based on facial features
When two people meet, they instantly size each other up, making snap judgments about everything from the other person’s age to their intelligence or trustworthiness based solely on the way they look. Those first impressions, though often inaccurate, can be extremely powerful, shaping our relationships and impacting everything from hiring decisions to criminal sentencing.
Researchers at Stevens Institute of Technology, in collaboration with Princeton University and the University of Chicago, have now taught an AI algorithm to model these first impressions and accurately predict how people will be perceived based on a photograph of their face. The work appears in the April 21 issue of the Proceedings of the National Academy of Sciences.
“There’s a wide body of research that focuses on modeling the physical appearance of people’s faces,” said Jordan W. Suchow, a cognitive scientist and AI expert at the School of Business at Stevens. “We’re bringing that together with human judgments and using machine learning to study people’s biased first impressions of one another.”
Suchow and team, including Joshua Peterson and Thomas Griffiths at Princeton, and Stefan Uddenberg and Alex Todorov at Chicago Booth, asked thousands of people to give their first impressions of more than 1,000 computer-generated faces, rating each on criteria such as how intelligent, electable, religious, trustworthy, or outgoing its subject appeared to be. The responses were then used to train a neural network to make similar snap judgments about people based solely on photographs of their faces.
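The underlying setup — learning to map face images to crowd-averaged trait ratings — can be illustrated with a minimal sketch. The paper's actual model is a neural network trained on real human judgments; here, purely for illustration, synthetic "images" and ratings stand in for the data, and ridge regression on flattened pixels stands in for the network (all names and numbers below are assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "face images" (16x16 grayscale, flattened)
# and a crowd-averaged rating per face on one attribute (e.g. trustworthiness).
n_faces, n_pixels = 200, 16 * 16
X = rng.normal(size=(n_faces, n_pixels))
true_w = rng.normal(size=n_pixels)
y = X @ true_w + rng.normal(scale=0.1, size=n_faces)  # noisy ratings

# Ridge regression: closed-form fit from pixel features to mean rating.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_pixels), X.T @ y)

def predict_rating(image_vec):
    """Predict the crowd's average first-impression rating for a face."""
    return image_vec @ w

preds = X @ w
r = np.corrcoef(preds, y)[0, 1]
print(f"correlation between predicted and observed ratings: {r:.2f}")
```

The same fitted model can then score any new photograph, which is what lets the researchers predict "which stereotypes would be projected onto you" from an image alone.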
“Given a photo of your face, we can use this algorithm to predict what people’s first impressions of you would be, and which stereotypes they would project onto you when they see your face,” Suchow explained.
Many of the algorithm’s findings align with common intuitions or cultural assumptions: people who smile tend to be seen as more trustworthy, for instance, while people with glasses tend to be seen as more intelligent. In other cases, it’s a little harder to understand exactly why the algorithm attributes a particular trait to a person.
“The algorithm doesn’t provide targeted feedback or explain why a given image evokes a particular judgment,” Suchow said. “But even so it can help us to understand how we’re seen — we could rank a series of photos according to which one makes you look most trustworthy, for instance, allowing you to make choices about how you present yourself.”
Though originally developed to help psychological researchers generate face images for use in experiments on perception and social cognition, the new algorithm could find real-world uses. People carefully curate their public persona, for instance, sharing only the photos they think make them look most intelligent or confident or attractive, and it’s easy to see how the algorithm could be used to support that process, said Suchow. Because there’s already a social norm around presenting yourself in a positive light, that sidesteps some of the ethical issues surrounding the technology, he added.
More troublingly, the algorithm can also be used to manipulate photos to make their subject appear a particular way — perhaps making a political candidate appear more trustworthy, or making their opponent seem unintelligent or suspicious. While AI tools are already being used to create “deepfake” videos showing events that never actually happened, the new algorithm could subtly alter real images in order to manipulate the viewer’s opinion about their subjects.
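One way to picture this kind of manipulation is as nudging an image along the direction in feature space that most increases a predicted trait score. The sketch below assumes a hypothetical linear rating model `w` (the actual system operates on generated faces with a neural network; this is only a geometric illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 16 * 16

# Hypothetical linear first-impression model: score = image . w
w = rng.normal(size=n_pixels)

def nudge_toward_trait(image_vec, weights, step=0.5):
    """Shift an image along the model's trait direction to raise its score."""
    direction = weights / np.linalg.norm(weights)
    return image_vec + step * direction

img = rng.normal(size=n_pixels)
score_before = img @ w
score_after = nudge_toward_trait(img, w) @ w
print(score_after > score_before)  # the edited image scores higher on the trait
```

Because the step can be made arbitrarily small, such edits can be subtle enough to pass as an unmodified photo, which is the concern Suchow raises below.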
“With the technology, it is possible to take a photo and create a modified version designed to give off a certain impression,” Suchow said. “For obvious reasons, we need to be careful about how this technology is used.”
To safeguard their technology, the research team has secured a patent and is now creating a startup to license the algorithm for pre-approved ethical purposes. “We’re taking all the steps we can to ensure this won’t be used to do harm,” Suchow said.
While the current algorithm focuses on average responses to a given face across a large group of viewers, Suchow next hopes to develop an algorithm capable of predicting how a single individual will respond to another person’s face. That could give far richer insights into the way that snap judgments shape our social interactions, and potentially help people to recognize and look beyond their first impressions when making important decisions.
“It’s important to remember that the judgments we’re modeling don’t reveal anything about a person’s actual personality or competencies,” Suchow explained. “What we’re doing here is studying people’s stereotypes, and that’s something we should all strive to understand better.”
Original Article: This Algorithm Has Opinions About Your Face
More from: Stevens Institute of Technology | Princeton University | University of Chicago