
AI's Ethical Issues
Featured Pattern: P1240, August 2018

Author: Rob Edmonds

Academics, politicians, and engineers are seeking stronger legal and ethical frameworks for artificial intelligence (AI).

Abstracts in this Pattern:

Some legal experts, lawmakers, and software engineers are concerned that without sufficient legal and ethical controls, AI could harm societies and be misused in warfare. While announcing a new national AI initiative in France, President Emmanuel Macron said that the strategy included a focus on AI ethics and controls to avoid "opaque privatization of AI or its potentially despotic usage" by foreign governments. Macron also proposed the creation of an independent international AI group akin to the Intergovernmental Panel on Climate Change (Geneva, Switzerland). And Tsinghua University (Beijing, China) law professor Feng Xiang recently argued that capitalist societies will fail to deal properly with the coming wave of automation, because their legal systems protect the private ownership of robots and AIs even as unemployment rises.

Militaries are well-established users of artificial intelligence and have funded much basic research in the field. Yet the broadening scope of military AI applications, and the participation in military projects by organizations that traditionally do not associate with the defense sector, are attracting controversy. Some Google (Alphabet; Mountain View, California) employees resigned over the company's participation in a US Department of Defense (DoD; Arlington County, Virginia) project that aims to use AI to analyze footage from drones and identify objects and people of interest. In addition, some 4,000 Google employees signed a petition asking the company to cancel its DoD contract and cease participating in military work. In response, the company decided not to renew the DoD contract. In a similar development, scientists called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST; Daejeon, South Korea) after a now-removed web page announced that KAIST's new Research Center for the Convergence of National Defense and Artificial Intelligence would work with Hanwha Systems (Hanwha Group; Seoul, South Korea) on AI systems and technologies for the military. In a forthcoming research paper, University of Virginia (Charlottesville, Virginia) law professor Ashley Deeks argues that the increasing use of predictive algorithms on the battlefield raises legal and ethical issues. Deeks highlights that the US military will need to mitigate the common criticisms that predictive algorithms are opaque and sometimes rely on biased data.