In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained with ordinary human language available online, can acquire the cultural biases embedded in patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views on race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text search, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14, 2017, in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) of human subjects asked to pair word concepts displayed on a computer screen. It has repeatedly shown that response times are far shorter when subjects are asked to pair two concepts they find similar than when they pair two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is the sort a startup machine learning company might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words in, say, a 10-word window of text: words that often appear near one another have a stronger association than words that seldom do.
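To make the window-based counting concrete, here is a minimal sketch, in Python, of tallying how often word pairs co-occur within a fixed window. This is our illustration, not the GloVe implementation itself; GloVe fits word vectors to exactly these kinds of counts, so that frequently co-occurring words end up close together in vector space.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each unordered word pair appears within `window`
    tokens of one another. Embedding methods like GloVe are trained on
    such counts, so high-count pairs get similar vectors."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look ahead up to `window` tokens; each nearby pair is one co-occurrence.
        for other in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, other)))] += 1
    return counts

# Toy usage: in this tiny corpus, "rose" co-occurs with "love" more than "ant" does.
text = "rose love caress rose love ant filth ant ugly".split()
print(cooccurrence_counts(text, window=3).most_common(5))
```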
The Stanford researchers turned GloVe loose on a huge trawl of content from the World Wide Web containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian,” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
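The paper quantifies these associations with what it calls the Word-Embedding Association Test (WEAT), which replaces the IAT’s response times with cosine similarities between word vectors. Below is a minimal sketch of its effect-size statistic; the tiny random stand-in vectors are ours for demonstration, whereas the study used GloVe vectors trained on the 840-billion-word crawl.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """Mean similarity of word w to attribute set A minus its mean similarity to B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """WEAT effect size: how much more strongly target words X associate with
    attributes A (versus B) than target words Y do, in pooled-std units."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy usage with random stand-in vectors; a real run would load GloVe embeddings.
rng = np.random.default_rng(0)
words = ["programmer", "engineer", "nurse", "teacher",
         "man", "male", "woman", "female"]
vec = {w: rng.normal(size=50) for w in words}
print(weat_effect_size(["programmer", "engineer"], ["nurse", "teacher"],
                       ["man", "male"], ["woman", "female"], vec))
```

A large positive effect size would mean the first target set sits closer to the first attribute set than the second target set does, the vector-space analogue of the shorter response times the human IAT measures.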