In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
Common machine learning programs, when trained on ordinary human language available online, can acquire the cultural biases embedded in patterns of wording, the researchers found. These biases range from the morally neutral, such as a preference for flowers over insects, to objectionable views about race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text search, image categorization and automated translation.
“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”
The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.
As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.
Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
The Princeton team devised an experiment that essentially functioned as a machine learning version of the Implicit Association Test. It used GloVe, a popular, open-source program developed by Stanford University researchers, of the sort that a machine learning startup might use at the heart of its product. The GloVe algorithm represents the co-occurrence statistics of words within, say, a 10-word window of text: words that often appear near one another acquire a stronger association than words that seldom do.
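In practice, the association between two word vectors is typically measured with cosine similarity. The sketch below illustrates the idea with made-up, low-dimensional vectors standing in for real GloVe embeddings (the numbers are hypothetical and chosen only to show that context-mates end up close together):

```python
import numpy as np

# Toy 3-dimensional vectors standing in for real GloVe embeddings.
# The values are hypothetical, chosen only to illustrate the idea.
vectors = {
    "rose":  np.array([0.9, 0.1, 0.2]),
    "ant":   np.array([0.1, 0.8, 0.3]),
    "love":  np.array([0.8, 0.2, 0.1]),
    "filth": np.array([0.2, 0.9, 0.4]),
}

def cosine(u, v):
    """Cosine similarity, the standard association measure for word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words that co-occur in similar contexts get similar vectors,
# so their cosine similarity is high.
print(cosine(vectors["rose"], vectors["love"]))   # high association
print(cosine(vectors["rose"], vectors["filth"]))  # low association
```

With real embeddings trained on web text, the same measure surfaces the human-like associations the study documents.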
The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
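The paper formalizes this comparison as a Word-Embedding Association Test: each target word's association score is its mean similarity to one attribute set minus its mean similarity to the other, and the test statistic sums those scores across the two target sets. A minimal sketch of that statistic, using hypothetical 2-D stand-ins for real embeddings (all vectors below are invented for illustration):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """How much closer word w sits to attribute set A than to attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_statistic(X, Y, A, B):
    """Difference in total association between the two target word sets."""
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Hypothetical 2-D stand-ins: target sets X ("programmer, engineer, ...")
# and Y ("nurse, teacher, ..."), attribute sets A ("man, male") and
# B ("woman, female"). Real tests use 300-dimensional GloVe vectors.
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]

# A positive statistic means X is more associated with A, and Y with B.
print(weat_statistic(X, Y, A, B))
```

A statistic near zero would indicate no differential association; the study found large, statistically significant values for the word sets above when measured on the real web-trained vectors.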