AI: Reality Check September 2018
Some artificial-intelligence (AI) researchers are speaking out against AI hype. In April 2018, robotics pioneer Rodney Brooks published an essay in which he said that artificial general intelligence (akin to "common sense," rather than the application-specific intelligence of commercial AI systems) "isn't coming any time soon" and that "deep learning" did not imply "deep understanding." The same month, Michael I. Jordan—a professor at the University of California, Berkeley, whose technical reports on computer-science topics are among the world's most frequently cited in other researchers' works—published his own essay on AI hype. In a June interview with the New York Times, Jordan criticized machine learning, saying, "There is no real intelligence there... trusting these brute force algorithms too much is a faith misplaced." Other AI experts, including Filip Piekniewski of Koh Young Technology, have also cautioned about the limits of deep learning and AI.
Although some of the researchers' complaints are esoteric (Jordan talks about statistics academics being relabeled AI academics), their overall concern about inflated expectations is also visible in the commercial world. Commenting on a recent US–European survey on AI security software, Juraj Malcho, of Slovakian security company ESET, said, "It is worrying to see that the hype around AI and ML [machine learning] is causing so many IT decision makers... to regard the technologies as 'the silver bullet' to cybersecurity challenges." Another recent US–European survey (by big-data company Databricks and IDG Research) found that, although many respondents said that their organizations had invested in AI, only a third judged their AI projects successful.
Concern about AI hype is valid, and a course correction of public and enterprise expectations is likely. Despite progress in data-driven machine learning, artificial general intelligence is little closer now than it was decades ago. And narrow, data-driven AI applications (although undoubtedly useful in many situations) are challenged by training data that are sometimes unavailable, by the risk of bias, and by poor transparency.
But any realignment of AI expectations is unlikely to dent practical use of data-driven AI technologies in corporations and governments significantly. For most organizations, AI deployments are still in the early phases (likely the reason many are not yet successful), and organizations are still discovering suitable use cases. Most likely, organizations will stop seeing AI as a "silver bullet" and take a more pragmatic approach—including developing processes to manage the limitations of the software (a recent O'Reilly survey on AI adoption found that some organizations are already checking for bias as a standard process step). Plausibly, a course correction of AI expectations could actually help make a success of real-world commercial AI.
Although data-driven AI is highly capable and continues to improve, machine-learning algorithms excel only at the specific applications for which they are trained and (perhaps more important) can tackle applications only where suitable data are available. Although no simple solution exists for the first of these limitations, the latter may lessen in time if "data-light" AI makes progress. The June 2018 Big Data Viewpoints details emerging techniques—including those relying on generative AI, transfer learning, and capsule networks—that may reduce AI's reliance on specific data.
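Transfer learning, one of the techniques mentioned above, reuses parameters learned on a data-rich task as the starting point for a related, data-poor task. The toy sketch below (pure Python; the tasks, data, and function names are invented purely for illustration, not drawn from any product discussed here) fits a one-parameter linear model by gradient descent and shows that initializing from a related task's learned weight converges in fewer steps than starting from scratch.

```python
# Toy illustration of transfer learning: a weight learned on a data-rich
# "source" task initializes training on a data-poor "target" task.
# All data and names are hypothetical, chosen only to make the idea concrete.

def fit(xs, ys, w=0.0, lr=0.01, tol=1e-6, max_steps=10_000):
    """Fit y ~ w*x by gradient descent on squared error; return (w, steps)."""
    for step in range(max_steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        if abs(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

# Source task: plenty of data drawn from the rule y = 3.0 * x.
source_xs = [i / 10 for i in range(1, 101)]
source_ys = [3.0 * x for x in source_xs]
w_source, _ = fit(source_xs, source_ys)

# Target task: only three samples, from the related rule y = 3.2 * x.
target_xs = [0.5, 1.0, 1.5]
target_ys = [3.2 * x for x in target_xs]

w_cold, steps_cold = fit(target_xs, target_ys, w=0.0)       # from scratch
w_warm, steps_warm = fit(target_xs, target_ys, w=w_source)  # transferred

# The warm (transferred) start reaches the same tolerance in fewer steps.
print(steps_warm, "<", steps_cold)
```

The point of the sketch is only the relative step counts: starting near a related solution, the data-poor task needs less training to converge, which is the sense in which such techniques reduce AI's reliance on task-specific data.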