The Art or Science of Forecasting

October 2014
Market forecasts are extremely popular inputs for both government and commercial organizations. These forecasts provide numbers, and numbers are tools that many planners and strategists are accustomed to and can work with. Forecasts that highlight impressive market developments for emerging technologies are commonplace throughout industry publications because they justify spending and presumably provide a rationale for choosing a strategic direction. For example, a recent article on 3D-printing-news website Inside3DP (www.inside3dp.com) drew attention to a new report from market-research firm IDTechEx (Cambridge, England). The report predicts that the market for 3D-printing applications will grow to $7 billion by 2025, with more than 40% of that market ($3 billion) coming from bioprinting. One can easily infer the conclusions that managers and decision makers in the industry will draw.
Precise but inaccurate forecasts, though, result in incorrect conclusions that lead to misguided strategies. In June 2014, the German Federal Ministry of Transport and Digital Infrastructure (Berlin, Germany) announced that it had compiled traffic forecasts for the year 2030. This development prompted journalists at German news magazine Der Spiegel to examine the accuracy of previous forecasts. Specifically, the journalists looked at the forecasts that researchers made in 2001 for the year 2015. The forecasts were wildly inaccurate. The 2001 forecast predicted that the number of registered cars in Germany would reach 49.8 million in 2015; as of 2013 (the latest year for which numbers are available), only 43.3 million cars were registered in Germany. The 2001 forecasts also predicted that 251 million passengers would pass through Germany's airports annually by 2015, but that number reached only 180.7 million by the end of 2013. Furthermore, the forecasts predicted that annual river-based freight transport would reach 88.6 billion tonne-kilometers in 2015, but by the end of 2013, that figure had reached only 59.7 billion tonne-kilometers. The airline and shipping industries likely will not see the numbers the 2001 forecasts predicted by 2015. Surprisingly, many forecasts provide very precise data—the 2001 forecasts offered figures as specific as 49.8 million cars and 88.6 billion tonne-kilometers—despite the common understanding that such forecasts are likely to be wrong overall. The article in Der Spiegel even cites politicians who reference the "fake precision" of forecasts.
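To put those misses in perspective, one can compute how far each 2001 forecast overshoots the most recent actual figure. The short calculation below uses only the numbers cited above, with the 2013 actuals standing in as the latest available comparison:

```python
# How far the 2001 forecasts for 2015 overshot the 2013 actuals
# (the latest figures available when Der Spiegel ran its comparison).
forecasts_2001 = {
    "registered cars (millions)": (49.8, 43.3),
    "airport passengers (millions)": (251.0, 180.7),
    "river freight (billion tonne-km)": (88.6, 59.7),
}

for metric, (forecast, actual) in forecasts_2001.items():
    overshoot = (forecast - actual) / actual * 100
    print(f"{metric}: forecast {forecast}, actual {actual} "
          f"(forecast exceeds actual by {overshoot:.0f}%)")
```

Even with two years of growth still to come before 2015, the forecasts exceed the actual figures by roughly 15%, 39%, and 48%, respectively.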
Planners for both government and commercial organizations like market-research forecasts because numbers facilitate the creation of both policies and strategies. Building strategies around numbers is easy. For example, if a government agency can state that a certain number of additional cars will be on the road by 2020, then it can easily justify the planning of new infrastructure. Nonnumerical forecasts, in contrast, fail to provide such guidance and will likely just prompt questions about quantities: "How much?" and "How many?" More often than not, forecasts are misleading (and sometimes wildly so), but a clear need for them exists. Forecasts must improve, and planners must manage their expectations about forecasts' accuracy.
Many researchers create forecasts and projections by analyzing existing data or by collecting new data through interviews with industry representatives; however, some researchers are starting to use other approaches. The past decade has seen an explosion of big-data-based approaches to forecasting, but recent developments suggest that a great deal of additional work is still necessary to improve the accuracy of such approaches. In March 2014, professors from Northeastern University (Boston, Massachusetts) released papers that discuss the growing inaccuracy of Google's (Mountain View, California) Google Flu Trends, which predicts influenza activity by analyzing related search queries on the company's search engine. Google Flu Trends performed well initially but eventually began greatly overestimating flu rates. The system has become an exemplar of big data's shortcomings, and its issues illustrate the need for additional work to develop more accurate forecasting tools.

Meanwhile, new models are emerging. For example, Dirk Brockmann from Humboldt University of Berlin (Berlin, Germany) and Dirk Helbing from ETH Zurich (Zurich, Switzerland) created a new model of how diseases spread. The scientists believe that geographic distance between cities has less of an impact on the spread of disease than does effective distance—a measure that takes into account the volume of traffic flowing between airports. The new model uses the connectedness of locations to identify the origin of an outbreak and to predict how it will spread.
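The published definition of effective distance lends itself to a short illustration. In Brockmann and Helbing's formulation, a direct hop from city n to city m has length 1 − ln(P_mn), where P_mn is the fraction of traffic leaving n that is bound for m; heavily trafficked routes are therefore effectively short, rarely used routes effectively long, and the effective distance between two cities is the length of the shortest such path. The Python sketch below computes effective distances over a toy traffic network; the airport codes and passenger volumes are hypothetical, purely for illustration:

```python
import heapq
import math

def effective_distance(flows, source):
    """Shortest effective distance from `source` to every reachable node,
    following Brockmann and Helbing's definition: a hop from n to m has
    length 1 - ln(P_mn), where P_mn is the fraction of traffic leaving n
    that is bound for m. `flows` maps each node to {neighbor: volume}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist.get(n, math.inf):
            continue  # stale queue entry; a shorter path was already found
        total = sum(flows.get(n, {}).values())
        for m, volume in flows.get(n, {}).items():
            p = volume / total            # fraction of traffic n -> m
            step = 1.0 - math.log(p)      # rare routes look "far away"
            if d + step < dist.get(m, math.inf):
                dist[m] = d + step
                heapq.heappush(heap, (d + step, m))
    return dist

# Toy air-traffic network (hypothetical weekly passenger volumes):
flows = {
    "FRA": {"JFK": 50_000, "PEK": 30_000, "TXL": 120_000},
    "TXL": {"FRA": 120_000},
    "JFK": {"FRA": 50_000, "PEK": 10_000},
    "PEK": {"FRA": 30_000, "JFK": 10_000},
}
print(effective_distance(flows, "FRA"))
```

On real air-traffic data, replotting an epidemic in effective-distance coordinates turns complex geographic spreading patterns into roughly concentric waves radiating from the outbreak's source, which is what makes the measure useful for identifying that source.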
Analysis of social media is another big-data-based approach seeing increasing use in forecasting and trend research, although the forecasts these analyses produce mostly address short-term developments. Indeed, the use of algorithms that sift through social-media communications has become common. Dataminr (New York, New York) analyzes the tweets on Twitter's (San Francisco, California) social network daily in efforts to identify events before mainstream media outlets pick up on them. Dataminr's customers include companies in finance, media, and the public sector. During the Boston Marathon in April 2014, authorities used Dataminr in their strategy to prevent events similar to the terror attack that occurred during the marathon the previous year. As few as three tweets can deliver a signal strong enough for Dataminr to alert its clients, which, according to the company, gain a head start of five to ten minutes. The company claims that by evaluating some 30 indicators of significance, it was able to alert its clients about the death of Osama bin Laden 23 minutes before news outlets reported on the event. Of note is that Dataminr identifies short-term developments and events that have already happened but require time to become known; the use of social-media analysis to make long-term forecasts remains unproven. In addition, forecasts that rely on the analysis of social media are susceptible to manipulation of social networks. A group of researchers at the Federal University of Minas Gerais (Belo Horizonte, Brazil) recently released 120 bots on Twitter's social network. The bots used heuristics that the researchers created to retweet items and generate their own posts. Although Twitter actively tries to identify and suspend bots, 69% of the researchers' bots avoided detection. This research suggests that a new generation of intelligent bots could cause significant problems for services that aim to analyze Twitter activity to measure public opinion.
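Dataminr does not disclose how its roughly 30 indicators of significance work, but the general shape of burst-style event detection that underlies services of this kind can be illustrated with a deliberately simplified sketch: count mentions of a term per minute and raise an alert when the latest count jumps well above its trailing baseline. Everything here (the window size, thresholds, and counts) is hypothetical and chosen only to show the mechanism:

```python
from collections import deque

def burst_alerts(counts_by_minute, window=5, threshold=3.0, min_count=3):
    """Toy burst detector: alert when a minute's mention count exceeds
    `threshold` times the trailing-window average (floored at 1 to handle
    quiet streams) and at least `min_count` mentions arrive, echoing the
    'as few as three tweets' signal noted above. All parameters are
    illustrative, not Dataminr's actual method."""
    history = deque(maxlen=window)
    alerts = []
    for minute, count in enumerate(counts_by_minute):
        baseline = sum(history) / len(history) if history else 0.0
        if count >= min_count and count > threshold * max(baseline, 1.0):
            alerts.append((minute, count, baseline))
        history.append(count)
    return alerts

# Hypothetical per-minute mention counts for a keyword:
counts = [1, 0, 2, 1, 1, 0, 9, 14, 30]
print(burst_alerts(counts))  # flags the jump beginning at minute 6
```

A production system would weigh many more signals per candidate event (source credibility, geography, novelty, and so on); the point of the sketch is only that a handful of on-topic messages arriving in an otherwise quiet stream can constitute a statistically loud signal, which is also why coordinated bots of the kind the Minas Gerais researchers deployed pose a direct threat to such measurements.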
Recent developments suggest that organizations and industry commentators are starting to view long-term market forecasts with increased skepticism. Can new analytical tools, algorithms, and emerging approaches enable analysts to refine their forecasts and create more accurate market data? Somewhat ironically, the answer to that question itself remains uncertain. Current evidence suggests that a great deal more research is necessary and that disruptive developments can emerge from many directions. Although technical advances are occurring, predicting the future is still art, not science.