Software may appear to operate without bias because it reaches its conclusions strictly through computer code. That's why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to find out whether an algorithm used for hiring decisions, loan approvals and comparably weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah's School of Computing, developed a technique to determine whether such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities. The team also devised a method to fix these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the 21st Association for Computing Machinery's SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Machine-learning algorithms
Many companies use algorithms in software programs to help filter job applicants during hiring, typically because sorting through applications by hand becomes overwhelming when many people apply for the same position. A program can do the sorting instead, scanning résumés for keywords or numbers (such as school grade point averages) and assigning each applicant an overall score, as in the sketch below.
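A minimal sketch of that kind of keyword-and-GPA scoring, with invented keywords and weights rather than anything drawn from a real screening product:

```python
import re

# Hypothetical keyword weights; real screening tools use far richer
# criteria. These names and values are invented for illustration.
KEYWORDS = {"python": 2.0, "sql": 1.5, "management": 1.0}

def screen_resume(text: str) -> float:
    """Return a crude screening score from keyword hits plus a GPA bonus."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(KEYWORDS.get(word, 0.0) for word in words)
    match = re.search(r"gpa[:\s]+([0-4]\.\d+)", text.lower())
    if match:
        score += float(match.group(1))  # add GPA directly as a bonus
    return score

print(screen_resume("Experienced in Python and SQL. GPA: 3.8"))  # 7.3
```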
These programs also can learn as they analyze more data. Known as machine-learning algorithms, they change and adapt, much as humans do, to better predict outcomes. Amazon uses similar algorithms to learn customers' buying habits and target ads more accurately, and Netflix uses them to learn users' movie tastes when recommending new viewing choices.
But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Disparate impact
Venkatasubramanian's research tests these software algorithms against the legal definition of disparate impact, a theory in U.S. anti-discrimination law under which a policy may be considered discriminatory if it has an adverse impact on any group based on race, religion, gender, sexual orientation or other protected status.
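In U.S. employment guidance, disparate impact is commonly operationalized with the Equal Employment Opportunity Commission's "four-fifths" rule: if one group's selection rate falls below 80 percent of the most-favored group's rate, that is treated as evidence of adverse impact. A minimal sketch of that check, with invented numbers:

```python
def selection_rate_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's (the favored group)."""
    return (selected_a / total_a) / (selected_b / total_b)

# Invented example: 30 of 100 applicants selected from group A,
# 50 of 100 from group B -> ratio 0.6, below the 0.8 threshold.
ratio = selection_rate_ratio(30, 100, 50, 100)
print(f"ratio = {ratio:.2f}; potential disparate impact: {ratio < 0.8}")
```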
The research shows that a test can reveal whether an algorithm in use may be biased. If the test — which, ironically, uses another machine-learning algorithm — can accurately predict a person's race or gender from the data being analyzed, even though race and gender have been removed from that data, then there is a potential for bias under the definition of disparate impact.
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.
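The test can be sketched roughly as follows: hide the protected attribute, train an auxiliary classifier to predict it from the remaining features, and treat high predictive accuracy as a red flag. The sketch below uses scikit-learn on synthetic data and an illustrative 0.6 accuracy threshold; it is a simplified stand-in for the paper's method, not the authors' actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: a hidden binary protected attribute (say, gender)
# plus three "resume" features, one of which correlates with it.
n = 2000
protected = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, 3))
features[:, 0] += 1.5 * protected  # proxy feature leaks the attribute

X_train, X_test, y_train, y_test = train_test_split(
    features, protected, test_size=0.5, random_state=0
)

# Auxiliary classifier: can the hidden attribute be recovered from
# the features a hiring model would actually see?
aux = LogisticRegression().fit(X_train, y_train)
accuracy = balanced_accuracy_score(y_test, aux.predict(X_test))

# Near 0.5 means the attribute is not recoverable; well above 0.5
# means the data could let a model discriminate indirectly.
print(f"balanced accuracy: {accuracy:.2f}; "
      f"potential for bias: {accuracy > 0.6}")
```

The intuition is that if race or gender can be reconstructed from the remaining features, a model trained on those features could discriminate indirectly through such proxies; broadly speaking, the team's proposed fix works by adjusting the data until the protected attribute can no longer be predicted from it.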