AI's Ethical Issues Featured Pattern: P1240 August 2018
Abstracts in this Pattern:
Some legal experts, lawmakers, and software engineers are concerned that without sufficient legal and ethical controls, AI could harm societies and be misused in warfare. While announcing a new national AI initiative in France, President Emmanuel Macron said that the strategy included a focus on AI ethics and controls to avoid "opaque privatization of AI or its potentially despotic usage" by foreign governments. Macron also proposed the creation of an independent international AI group akin to the Intergovernmental Panel on Climate Change (Geneva, Switzerland). And Tsinghua University (Beijing, China) law professor Feng Xiang recently argued that capitalist societies will fail to deal properly with the coming wave of automation, because legal systems protect the private ownership of robots and AIs even as unemployment rises.
Militaries are well-established users of artificial intelligence (and indeed have funded much basic research in the field of AI), yet the broadening scope of military AIs and participation in military projects by organizations that traditionally do not associate with the defense sector are attracting controversy. Some Google (Alphabet; Mountain View, California) employees resigned over the company's participation in a US Department of Defense (DoD; Arlington County, Virginia) project that aims to use AI to analyze footage from drones and identify objects and people of interest. In addition, some 4,000 Google employees signed a petition that asks the company to cancel its DoD contract and cease participating in military work. In response, the company decided not to renew the DoD contract. In a similar development, scientists called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST; Daejeon, South Korea) after a now-removed web page announced that KAIST's new Research Center for the Convergence of National Defense and Artificial Intelligence would work with Hanwha Systems (Hanwha Group; Seoul, South Korea) on AI systems and technologies for the military. In a forthcoming research paper, University of Virginia (Charlottesville, Virginia) law professor Ashley Deeks argues that the increasing use of predictive algorithms on the battlefield raises legal and ethical issues. Deeks highlights that the US military will need to mitigate the common criticisms that predictive algorithms are opaque and sometimes use biased data.
The Development of this Pattern
Data Points
- SC-2018-07-11-047
While announcing a new national AI initiative in France, President Emmanuel Macron said that the strategy included a focus on AI ethics and controls.
- SC-2018-07-11-052
Some Google employees resigned over the company's participation in a US Department of Defense project that aims to use AI to analyze footage from drones and identify objects and people of interest.
- SC-2018-07-11-059
University of Virginia law professor Ashley Deeks argues that the increasing use of predictive algorithms on the battlefield raises legal and ethical issues.
Implications
P1240 — AI's Ethical Issues
Academics, politicians, and engineers are seeking stronger legal and ethical frameworks for artificial intelligence (AI).
Previous Alerts
- P0456 — Ethics of Autonomous Transportation (February 2013)
The increasing autonomy of vehicles will create challenging ethical and moral questions for lawmakers, manufacturers, and users alike.
- SoC779 — Morality of Autonomous Vehicles (February 2015)
Autonomous cars will eliminate many common human errors and therefore make driving safer; however, the accidents that do occur will present difficult issues.
- P0936 — Teach AI Morals! (June 2016)
Researchers are considering the need for artificial intelligence (AI) to possess moral traits and devising ways to implant such traits in AI systems.
- SoC888 — Falling into Categorical Holes (August 2016)
A major concern is that individuals could remain in the wrong category permanently after a company categorizes them incorrectly.
- SoC985 — Artificial Intelligence Is Alien Intelligence (December 2017)
Companies and individuals should not equate artificial intelligence's humanlike capabilities with humanlike thought processes.