Dynamics in Fake News
August 2019

During recent years, fake news—false or misleading information that typically sees dissemination through social-media channels—has become a major issue for governments and media outlets. We offer a broad picture of the dynamics underlying fake news and the attempts that companies are making to tackle the phenomenon.
Fake news by its nature evades easy detection, which makes quantitative analysis difficult. For example, Brendan Nyhan of the University of Michigan (Ann Arbor, Michigan) has described most studies of fake-news content as focusing on simple view counts rather than on who encountered fake news and, more important, whether the fake news had a measurable effect. A study by Dr. Nyhan and colleagues found that fake news made up only 2% of the information that US residents consumed from websites that focus on hard-news topics.
Another important factor in understanding and quantifying fake news is measuring how readily it spreads through sharing. Researchers from New York University (New York, New York) and Princeton University (Princeton, New Jersey) studied the tendency of social-media users to share content from fake-news sites during the 2016 US presidential election. The researchers found that only 3% of study participants ages 18 to 29 shared news from these sites, whereas 11% of participants older than age 65 did so. Princeton University assistant professor of politics and public affairs Andrew Guess highlights that recognizing these differing fake-news-sharing tendencies between age groups could aid in the development of interventions that aim to reduce the dissemination of fake news.
Many analysts and news outlets concur that a significant factor in the generation and spread of fake news during recent years has been deliberate campaigns by the government of Russia. For example, in 2018, researchers from Johns Hopkins University (Baltimore, Maryland) discovered a systematic misinformation campaign about vaccinations on Twitter's (San Francisco, California) social network. The researchers looked at more than 250 vaccination-related tweets that linked to accounts under the control of the Russian Internet Research Agency (Saint Petersburg, Russia), which the US Intelligence Community (Office of the Director of National Intelligence; Washington, DC) has described as a group of professional trolls with ties to Russian intelligence. Even the government of Russia itself appears happy to acknowledge the importance of misinformation. In a recent speech, Valery Gerasimov, the chief of the general staff of the Russian Armed Forces (Ministry of Defence of the Russian Federation; Moscow, Russia), described a vision of future warfare in which information operations—including cyberattacks and the spread of misinformation—could have an impact comparable to that of conventional military operations.
An important reason why Russia (and other state actors) have put resources into producing fake news is that social-media platforms have proved to be highly effective tools for spreading ideas. In fact, researchers at the Massachusetts Institute of Technology (MIT; Cambridge, Massachusetts) found that fake news spreads considerably faster than real news does on Twitter's social network. MIT Sloan School of Management professor Sinan Aral reported that "falsehood diffuses significantly farther, faster, deeper and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude" ("Study: On Twitter, false news travels faster than true stories," MIT News Office, 8 March 2018; online).

In addition to enabling the rapid spread of misinformation, some social-media platforms have shown a tendency to encourage users to view increasingly extreme content. Various studies have examined the algorithm that video-sharing company YouTube (Alphabet; Mountain View, California) uses to recommend and autoplay content on the basis of what users choose to watch, and they reveal that the algorithm tends to prioritize watch time. One of the most effective ways to extend watch time is to recommend more extreme versions of content that users have already watched, and this tendency applies to all types of content, not just political content. For example, Zeynep Tufekci—an associate professor at the University of North Carolina at Chapel Hill (Chapel Hill, North Carolina) School of Information and Library Science—found that the algorithm would offer videos about veganism to users who had watched videos about vegetarianism and videos about ultramarathons to users who had watched videos about jogging. Dr. Tufekci surmises that "YouTube may be one of the most powerful radicalizing instruments of the 21st century" ("YouTube, the Great Radicalizer," New York Times, 10 March 2018; online).
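YouTube's actual recommendation system is proprietary, so the following minimal Python sketch only illustrates the dynamic these studies describe: when a ranker optimizes purely for predicted watch time, and more extreme treatments of a topic are assumed to hold attention longer, recommendations drift toward more extreme same-topic content. The video catalog, "intensity" scores, and engagement model below are entirely hypothetical.

```python
# Illustrative sketch only; not YouTube's algorithm. Shows how ranking purely by
# predicted watch time can escalate toward more extreme same-topic content under
# the hypothetical assumption that escalation holds attention longer.
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    topic: str
    intensity: float  # hypothetical 0-1 score; higher = more extreme take on the topic


def predicted_watch_time(video: Video, last_watched: Video) -> float:
    """Toy engagement model: same-topic videos that escalate intensity score highest."""
    topic_match = 1.0 if video.topic == last_watched.topic else 0.3
    escalation = max(0.0, video.intensity - last_watched.intensity)
    return topic_match * (1.0 + escalation)


def recommend(candidates: list[Video], last_watched: Video) -> Video:
    # Rank purely by predicted watch time, with no penalty for extremeness.
    return max(candidates, key=lambda v: predicted_watch_time(v, last_watched))


catalog = [
    Video("Easy vegetarian dinners", "diet", 0.2),
    Video("Why go fully vegan", "diet", 0.6),
    Video("Couch-to-5K jogging plan", "running", 0.2),
    Video("Ultramarathon survival stories", "running", 0.8),
]

watched = Video("Vegetarianism basics", "diet", 0.1)
print("Next up:", recommend(catalog, watched).title)  # picks the more extreme same-topic video
```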
A particularly worrisome development is the rise of deepfakes—AI-generated fake images and videos that can mimic real-world images and videos so closely that distinguishing the fake from the real becomes extremely difficult. Importantly, deepfake-generating tools are readily available to nonexpert users and have been used to generate a wide variety of fake content, including imagery of people, animals, and objects that do not actually exist. An early use for deepfake-generating tools was to graft the faces of celebrities onto the bodies of actors in pornographic video content.
AI and machine learning are not just tools for creating fake content; they are also playing an important role in attempts to tackle fake news. For example, in December 2018, Taiwanese software developer Carol Hsu released Auntie Meiyu—an AI bot that can detect misinformation and inaccurate claims within text conversations and alert users. The bot acts as a fact-checker, interjecting when it identifies false or disputed information in a conversation. Similarly, researchers at the Fraunhofer Institute for Communication, Information Processing and Ergonomics (Fraunhofer Society for the Advancement of Applied Research; Munich, Germany) developed a software tool that analyzes and classifies social-media posts to help detect fake news. The software examines not just the post content but also the associated metadata.
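The Fraunhofer researchers have not published their tool's implementation, so the sketch below only illustrates the general approach described above—classifying posts by combining text with account metadata—using scikit-learn and a tiny hypothetical labeled dataset.

```python
# Minimal sketch of a fake-news classifier that uses both post text and metadata.
# Not the Fraunhofer FKIE tool; all posts, metadata values, and labels are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

posts = pd.DataFrame({
    "text": [
        "BREAKING!!! miracle cure hidden by doctors",
        "City council approves new budget after public hearing",
        "You won't BELIEVE what this politician did!!!",
        "Study finds modest rise in regional employment",
    ],
    "account_age_days": [12, 2400, 30, 1800],     # metadata: newer accounts more suspect
    "posts_per_day":    [180.0, 3.5, 95.0, 2.1],  # metadata: bot-like posting rates
    "label":            [1, 0, 1, 0],             # 1 = fake, 0 = legitimate (toy labels)
})

# Combine a text signal (TF-IDF of the post) with scaled numeric metadata features.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("meta", StandardScaler(), ["account_age_days", "posts_per_day"]),
])

model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
model.fit(posts.drop(columns="label"), posts["label"])

new_post = pd.DataFrame({
    "text": ["SHOCKING!!! secret vaccine truth they don't want you to see"],
    "account_age_days": [5],
    "posts_per_day": [220.0],
})
print("Predicted probability of being fake:", model.predict_proba(new_post)[0, 1])
```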
Major social-media platforms are also taking measures to tackle fake news. For example, in early 2018, YouTube began presenting information from online encyclopedia Wikipedia (Wikimedia Foundation; San Francisco, California) alongside videos about conspiracy theories. Meanwhile, Facebook (Menlo Park, California) is focusing on suppressing the spread of misinformation with a two-pronged approach. Articles that third-party fact-checkers have identified as false appear physically smaller in users' news feeds, making the false information visually less prominent. In addition, Facebook uses machine-learning algorithms to scan new articles for signs that they may be false, and human fact-checkers prioritize articles that receive high falsehood-prediction scores.
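Facebook's internal systems are likewise not public; the following sketch only illustrates the prioritization pattern described above, in which articles with the highest machine-predicted falsehood scores reach human fact-checkers first. The URLs and scores are hypothetical model outputs.

```python
# Hedged sketch of score-based review prioritization; not Facebook's implementation.
# Articles with the highest predicted falsehood scores are popped from the queue first.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedArticle:
    priority: float                # negated falsehood score, so the riskiest article pops first
    url: str = field(compare=False)


def build_review_queue(falsehood_scores: dict[str, float]) -> list[QueuedArticle]:
    """falsehood_scores maps URL -> model-predicted probability that the article is false."""
    queue = [QueuedArticle(-score, url) for url, score in falsehood_scores.items()]
    heapq.heapify(queue)
    return queue


scores = {                          # hypothetical model outputs
    "example.com/miracle-cure": 0.93,
    "example.com/local-budget": 0.07,
    "example.com/celebrity-hoax": 0.81,
}

queue = build_review_queue(scores)
while queue:
    item = heapq.heappop(queue)
    print(f"Send to fact-checkers: {item.url} (predicted falsehood {-item.priority:.2f})")
```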
In many ways, fake news is analogous to an infectious illness: It requires a potent combination of engagement, reproduction, and ease of transmission to flourish. In addition, similar dynamics exist in the fight against fake news and the fight against an infectious illness: Actors who aim to fight fake news are playing a game of cat and mouse with the creators of fake news, adapting to new forms and outbreaks of false information. In the long term, the limiting factor for fake news may be the minds of its consumers. In time, many people may develop a higher level of awareness of fake news and the importance of treating new information with some skepticism.