Over the past year or so, I’ve heard two distinct stories about machine learning. At face value, each story illustrates a different aspect of this technology. At times, the two seem inconsistent. Yet, when viewed together, they suggest the nature of machine learning and the way that we need to think about it.
The first story is based on the idea that machine learning is going to replace all human labor with machines. It is common in the public press and often told by the people who are developing autonomous cars. By 2020, we are told, or 2022, or 2025, machine intelligence will be ready to take control of an automobile and will completely replace human drivers. It will make our transportation system safer and more efficient. It will also, to the consternation of many, eliminate one task at which human beings seem to be more skilled than machines.
Behind this story is the great fear that machine intelligence will upend the economy and prevent most people from being able to earn a living and provide for themselves. It suggests that we are replacing our children of flesh and blood with children of silicon and code. To a few individuals, this idea is comforting. To a considerably larger group, it is terrifying. To the rest of the world, it is bewildering.
I heard this story in a colleague’s office recently. My colleague has a doctorate in physics and a sophisticated grasp of how technology interacts with the economy. However, she was deeply concerned about the social implications of machine intelligence. “Don’t you think it is different this time?” she kept saying. “Everyone says that this is nothing new, but don’t you think that we are really changing society?”
I argued against the idea that machine intelligence was going to replace human beings. I pointed to the fact that engineers have repeatedly claimed over the past 200 years that machines were going to replace people and that they have been proven wrong again and again. I also noted that the loudest proponents of machine intelligence are generally trying to capitalize on this technology, and hence they have strong incentives to exaggerate the impact of their work. If they want to attract a large investment, they have to make the case that they have a great technology.
My argument has one flaw that undermines its attempt to reassure others. Even though it claims that machine intelligence will not make human beings obsolete, it acknowledges that the technology will likely disrupt the three major fields of human endeavor: the social, the economic, and the political. It also seems likely to disrupt two other human fields: the psychological and the epistemological—the way we think about ourselves and the world, and the way we organize knowledge.
The second story of machine intelligence tends to come from graduate students, though I have heard it from entrepreneurs, IT directors, and even a 14-year-old student from Sweden. (The Swedish student contacted me because he had somehow found one of my articles published in this magazine two years ago.) All of these individuals are doing something that generates a large amount of data. Each of them said that they wanted to feed that data into a machine learning algorithm to “see what it might find.”
None of these individuals is particularly interested in modeling human intelligence with these algorithms. They don’t think that they are replacing flesh and blood with silicon and code. To them, machine learning is a tool for data analysis, a means of finding patterns in words or numbers or pictures. If they believed that these algorithms were replacing humans, they would say that machine intelligence is replacing not ordinary people but statisticians. In many ways, that assessment is correct and points to the kind of impact that machine learning will ultimately have on society.
The current work on machine learning grew out of two streams of research, though few of the current researchers might acknowledge one of the two. The first of these streams centered on artificial neural networks—efforts to model the actions of the human brain. Neurons had long been of interest to computer scientists. A 1943 paper on the subject, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” by Warren McCulloch and Walter Pitts (Bulletin of Mathematical Biophysics, vol. 5, 1943, pp. 115–133), influenced many early computer scientists, including John von Neumann. However, through the mid-1970s, the idea of an artificial neural network seemed impracticable. No one had been able to propose a means of setting the parameters in the networks. (In general, each network would have one parameter for each input to the network and for each connection between the neurons.)
In 1974, Paul Werbos invented an algorithm, called back propagation, to set the parameters of a neural network (“Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences,” PhD thesis, Harvard University, 1974). This algorithm taught a network to recognize or classify a set of objects by using a set of known objects called a training set. It demonstrated the effectiveness of neural networks both as a means of classifying data and as a means of learning about data. However, back propagation was a computationally intensive algorithm and hence of limited interest for about a decade.
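To make the mechanics concrete, here is a minimal sketch of back propagation training a tiny network on a toy training set. It is an illustration under my own assumptions (the network shape, learning rate, and XOR data are invented), not Werbos's formulation; the squared error it minimizes anticipates the "nonlinear least squares" remark quoted below.

```python
# A minimal back-propagation sketch (illustrative assumptions throughout:
# the network shape, learning rate, and toy XOR data are mine, not Werbos's).
import numpy as np

rng = np.random.default_rng(0)

# Training set of "known objects": XOR inputs and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One parameter per connection, plus a bias per neuron.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Residuals; the loss is their squared sum, which is why a
    # statistician would call this nonlinear least squares.
    err = out - y

    # Backward pass: propagate the error gradient from output to input.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of every parameter.
    lr = 0.5
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```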
Through most of the 1970s and 1980s, the artificial intelligence community was more interested in logical and rule-based systems than in neural networks. These systems used logical propositions to capture the intelligence of experts. They collected statements of the form “If A and B and C and D are true, then you should consider doing Y.” Researchers created such systems for management, training, and medical diagnosis.
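The core mechanism of such a system is small enough to sketch. The rules and symptoms below are invented purely for illustration (real expert systems held thousands of rules elicited from human experts), but the pattern, firing every rule whose conditions all hold, is the one just described.

```python
# A minimal rule-based system sketch. The rules and symptoms are invented
# purely for illustration, not drawn from any actual expert system.

# Each rule: if every condition is true, consider the recommended action.
RULES = [
    ({"fever", "cough", "fatigue"}, "consider testing for influenza"),
    ({"fever", "stiff_neck"}, "consider testing for meningitis"),
    ({"cough"}, "consider a chest examination"),
]

def recommend(facts):
    """Fire every rule whose conditions are all present in the facts."""
    return [action for conditions, action in RULES if conditions <= facts]

print(recommend({"fever", "cough", "fatigue"}))
# -> ['consider testing for influenza', 'consider a chest examination']
```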
Among the groups that became interested in expert systems was the statistical community. Statistics, particularly applied statistical analysis, might be considered a highly technical craft. Statisticians used complex and sophisticated tools to probe data, identify patterns, and test hypotheses. Although the field taught a certain set of principles, it freely acknowledged that its practice was an art. That art required a set of skills held by relatively few experts. So between 1984 and 1994, a group of statisticians and computer scientists tried to build a rule-based system that would learn and behave like an expert statistician.
The task of building a rule-based system for statistical analysis proved harder than most people anticipated. The research community, frustrated by the lack of progress, gradually lost interest in the work. After 1990, few groups continued to work on this project. However, this work had built connections between computer science and statistics, and it had demonstrated the power of advanced statistical modeling techniques. These techniques quickly became the basis of new algorithms for setting the parameters in neural networks. By the mid-1990s, they were proving to be central to the new field of machine learning. “A statistician,” Werbos later wrote, “would say that back propagation is a special case of nonlinear least squares” (“Back Propagation: Past and Future,” Proc. IEEE Int’l Conf. Neural Networks, 1988, pp. 343–353). Looking back on that era, a pioneer of artificial intelligence observed, “the great surprise [of the 1990s] was the tremendous success of statistical algorithms for classifying and identifying data.”
So, if machine learning is replacing human labor, it is first and foremost replacing the labor of statisticians. The entrepreneurs, graduate students, and 14-year-olds who use it to process data are applying complex algorithms to perform work that was once done by statisticians. While these individuals have not entirely replaced statistical analysts, they have been able to accomplish some substantial things without learning the details of multivariate statistical analysis or employing the services of trained statisticians.
Machine learning is far from an automatic technique. It requires its users to learn something about data and classification. Yet it has allowed relatively untrained individuals to identify novel phenomena, interesting patterns, and new ways of looking at the world.
I know that this essay does not answer the question posed by its first story: “Will machine learning make human beings obsolete?” That is a question that can be answered only retrospectively. At best, we can say that machine learning has disseminated one set of skills—those of mathematical classification—to a broad audience and changed the role of one group of workers: statistical analysts. We also know that machine learning has introduced new rigor into data processing and demands that all of us view the world with the same kind of precision that it brings to data. Will that make us obsolete? Perhaps. It has certainly changed the way that we organize our knowledge and seems to be changing the way that we think about ourselves.
About David Alan Grier
David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.