Software may appear to operate without bias because it strictly uses computer code to reach conclusions. That’s why many companies use algorithms to help weed out job applicants when hiring for a new position.
But a team of computer scientists from the University of Utah, the University of Arizona and Haverford College in Pennsylvania has discovered a way to find out whether an algorithm used for hiring decisions, loan approvals and other weighty tasks could be biased like a human being.
The researchers, led by Suresh Venkatasubramanian, an associate professor in the University of Utah’s School of Computing, have discovered a technique to determine whether such software programs discriminate unintentionally and violate the legal standards for fair access to employment, housing and other opportunities. The team has also devised a method to fix these potentially troubled algorithms. Venkatasubramanian presented the findings Aug. 12 at the Association for Computing Machinery’s 21st SIGKDD Conference on Knowledge Discovery and Data Mining in Sydney, Australia.
“There’s a growing industry around doing résumé filtering and résumé scanning to look for job applicants, so there is definitely interest in this,” says Venkatasubramanian. “If there are structural aspects of the testing process that would discriminate against one community just because of the nature of that community, that is unfair.”
Many companies use algorithms in software programs to help filter job applicants during hiring, typically because sorting through applications manually can be overwhelming when many people apply for the same job. A program can do it instead, scanning résumés for keywords or numbers (such as school grade point averages) and then assigning an overall score to each applicant.
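To make that concrete, here is a minimal sketch of how such a keyword-and-GPA scorer might work. The keywords, weights and scoring scheme are invented for illustration and do not come from any particular screening product.

```python
# Minimal sketch of a keyword-and-GPA resume scorer. The keywords,
# weights, and scoring scheme are invented for illustration; they are
# not the logic of any specific commercial screening product.

KEYWORD_WEIGHTS = {
    "python": 2.0,
    "machine learning": 3.0,
    "project management": 1.5,
}

def score_applicant(resume_text: str, gpa: float) -> float:
    """Return an overall score: keyword hits plus GPA."""
    text = resume_text.lower()
    keyword_score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)
    return keyword_score + gpa  # a 4.0 GPA contributes 4.0 points

applicants = [
    ("Built machine learning pipelines in Python.", 3.6),
    ("Led project management for a sales team.", 3.9),
]
# Rank applicants from highest to lowest score.
for resume, gpa in sorted(applicants, key=lambda a: score_applicant(*a), reverse=True):
    print(f"{score_applicant(resume, gpa):.1f}  {resume}")
```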
These programs also can learn as they analyze more data. Known as machine-learning algorithms, they can change and adapt like humans, improving their predictions over time. Amazon uses similar algorithms to learn customers’ buying habits and target ads more accurately, and Netflix uses them to learn users’ movie tastes when recommending new viewing choices.
But there has been a growing debate on whether machine-learning algorithms can introduce unintentional bias much like humans do.
“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” Venkatasubramanian says.
Venkatasubramanian’s research tests whether these algorithms are biased under the legal definition of disparate impact, a theory in U.S. anti-discrimination law holding that a policy may be considered discriminatory if it has an adverse impact on any group based on race, religion, gender, sexual orientation or other protected status.
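Disparate impact is commonly quantified with the EEOC’s “four-fifths rule,” under which a selection process draws scrutiny if a protected group’s selection rate is less than 80 percent of the most-favored group’s rate. A minimal sketch of that check follows; the applicant counts are invented for illustration.

```python
# Sketch of the EEOC "four-fifths" (80 percent) rule, a common way to
# quantify disparate impact. The applicant counts are invented.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

protected_rate = selection_rate(selected=30, applicants=100)
favored_rate = selection_rate(selected=60, applicants=100)

ratio = protected_rate / favored_rate
# A ratio below 0.8 suggests the process has a disparate impact.
print(f"selection-rate ratio = {ratio:.2f}")  # 0.50, well below 0.8
```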
Venkatasubramanian’s research showed that a test can reveal whether the algorithm in question may be biased. If the test, which ironically uses another machine-learning algorithm, can accurately predict a person’s race or gender from the data being analyzed, even though race and gender are hidden from the data, then there is a potential for bias under the definition of disparate impact.
“I’m not saying it’s doing it, but I’m saying there is at least a potential for there to be a problem,” Venkatasubramanian says.
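In outline, the test can be sketched as follows: train a second classifier to predict the hidden protected attribute from the remaining features; if it succeeds well above chance, those features effectively encode the attribute. The synthetic data, the scikit-learn model and the 0.8 accuracy threshold below are illustrative assumptions, not the researchers’ exact procedure.

```python
# Sketch of the certification idea: if a classifier can recover the
# hidden protected attribute (e.g., gender) from the remaining
# features, those features leak it, and decisions based on them can
# have disparate impact. The data, model, and 0.8 threshold are
# illustrative choices, not the paper's exact procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: column 0 is a proxy (e.g., zip code) correlated with
# the protected attribute; column 1 is unrelated noise.
protected = rng.integers(0, 2, size=1000)
features = np.column_stack([
    protected + rng.normal(0, 0.5, size=1000),  # correlated proxy
    rng.normal(0, 1, size=1000),                # unrelated feature
])

X_train, X_test, y_train, y_test = train_test_split(
    features, protected, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
acc = balanced_accuracy_score(y_test, clf.predict(X_test))

# High accuracy despite the protected attribute being "hidden" flags
# potential disparate impact in anything trained on these features.
verdict = "potential for bias" if acc > 0.8 else "no leak detected"
print(f"balanced accuracy = {acc:.2f} -> {verdict}")
```

The intuition behind the design is that if no classifier can recover the protected attribute much better than chance, then the decision algorithm cannot be discriminating on it either, directly or through proxies.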
Read more: PROGRAMMING AND PREJUDICE