In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained on ordinary human language available online, can acquire the cultural biases embedded in patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views about race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text searches, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment in which a program essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe and developed by Stanford University researchers, the popular open-source program is the sort a machine learning startup might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words within, say, a 10-word window of text: words that often appear near one another end up more strongly associated than words that seldom do.
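To make the idea concrete, here is a minimal sketch, not GloVe’s actual training procedure (which also applies distance weighting and factorizes the resulting statistics into dense vectors), of how windowed co-occurrence counts can be tallied; the toy sentence is invented for illustration:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Tally how often each pair of words appears within `window`
    tokens of each other -- the raw statistic underlying GloVe."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbor in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbor)))] += 1
    return counts

tokens = "the rose gave a sweet caress while the ant crawled through filth".split()
print(cooccurrence_counts(tokens, window=5).most_common(3))
```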
The Stanford researchers turned GloVe loose on a huge trawl of content from the World Wide Web containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian,” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
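In spirit, that comparison reduces to a simple measurement on word vectors. The sketch below is a simplification of the paper’s Word Embedding Association Test, which aggregates such per-word scores into an effect size with a permutation test; the random vectors here are placeholders for real pretrained GloVe embeddings:

```python
import numpy as np

# Placeholder random vectors stand in for pretrained GloVe embeddings,
# which would normally be loaded from the published 840-billion-word vectors.
rng = np.random.default_rng(0)
words = ("programmer", "nurse", "man", "male", "woman", "female")
embeddings = {w: rng.normal(size=50) for w in words}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b):
    """Mean cosine similarity to attribute set A minus set B; a positive
    score means `word` sits closer in the embedding space to A."""
    vec = embeddings[word]
    return (np.mean([cosine(vec, embeddings[a]) for a in attrs_a])
            - np.mean([cosine(vec, embeddings[b]) for b in attrs_b]))

print(association("programmer", ("man", "male"), ("woman", "female")))
```

With genuine GloVe vectors in place of the placeholders, a consistently positive score across occupation words would be exactly the kind of learned bias the researchers measured.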