via Singularity Hub
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and businesses manage and police the bias of Artificial Intelligence systems towards unethical, and potentially very costly and damaging, commercial choices: an ethical eye on AI.
Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider, for example, using AI to set the prices of insurance products sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to "game" their psychology or their willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty: stakeholders will apply some penalty if they find that such a strategy has been used. Regulators may levy significant fines of billions of dollars, pounds or euros, and customers may boycott you, or both.
Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help businesses and regulators by formulating a new "Unethical Optimization Principle" and providing a simple formula to estimate its impact.
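The intuition behind the principle can be illustrated with a small simulation. The sketch below is a hypothetical Monte Carlo illustration, not the authors' actual formula: it assumes strategy returns are normally distributed, that a small fraction of strategies are unethical, and that unethical strategies carry a modest return advantage. Because an optimizer picks the *maximum*-return strategy, even a small advantage makes unethical strategies heavily over-represented at the extreme.

```python
import random

def unethical_selection_rate(n_strategies=1000, unethical_fraction=0.01,
                             unethical_boost=1.0, trials=2000, seed=0):
    """Illustrative sketch (assumed model, not the paper's formula):
    simulate an optimizer choosing the highest-return strategy from a
    pool in which a small fraction of strategies are unethical and
    enjoy a return boost. Returns the fraction of trials in which the
    chosen (optimal) strategy turns out to be unethical."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        best_return = float("-inf")
        best_is_unethical = False
        for _ in range(n_strategies):
            unethical = rng.random() < unethical_fraction
            # Baseline return ~ N(0, 1); unethical strategies get a boost.
            r = rng.gauss(0.0, 1.0) + (unethical_boost if unethical else 0.0)
            if r > best_return:
                best_return = r
                best_is_unethical = unethical
        hits += best_is_unethical
    return hits / trials

# Although only ~1% of strategies are unethical, the strategy the
# optimizer actually selects is unethical far more often than 1% of
# the time -- which is why naive profit maximization needs an
# "ethical eye" on its output.
rate = unethical_selection_rate()
print(f"Selected strategy is unethical in {rate:.1%} of trials")
```

This is only a toy model of the scenario described above; the point it makes is qualitative, and the specific distributions, fractions and boost are assumptions chosen for illustration.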