Autonomous systems, like driverless cars, perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, Carnegie Mellon University artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations were not designed with these systems in mind and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.
“Currently, we ensure safety on the roads by regulating the performance of the various mechanical systems of vehicles and by licensing drivers,” said London, professor of philosophy and director of the Center for Ethics and Policy in the Dietrich College of Humanities and Social Sciences. “When cars drive themselves we have no comparable system for evaluating the safety and reliability of their autonomous driving systems.”
Danks and London point to the Department of Transportation’s recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.
“Self-driving cars and autonomous systems are rapidly spreading through every part of society, but their successful use depends on whether we can trust and understand them,” said Danks, the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy. “We, as a society, need to find new ways to monitor and guide the development and implementation of these autonomous systems.”
The phased process Danks and London propose would begin with "pre-clinical trials," or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates. This would provide information about how an autonomous system makes decisions across a wide range of contexts, so that regulators can anticipate how it might act in novel situations.
Acceptable performance would permit the system to move on to “in-human” studies through a limited introduction into real-world environments with trained human “co-pilots.” Successful trials in these targeted environments would then lead to monitored, permit-based testing, and further easing of restrictions as performance goals were met.
Danks and London propose that this regulatory system should be modeled and managed similarly to how the Food and Drug Administration regulates the drug approval process.
“Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won’t be realized unless the public has credible assurance that such systems are safe and reliable,” London said.