Bridging the Great Divide
By David Alan Grier

Many difficult problems in computer science remain unsolved. Determining the relationship between P and NP—the problems that can be solved in polynomial time and those whose solutions can be verified in polynomial time—is perhaps the most famous, but there are many others. What is the fastest way to multiply two matrices? Is it possible to factor an integer in polynomial time? Will we ever be able to get computer science researchers and computing practitioners to talk to each other peaceably? This last question describes the hardest issue that we face as a field. It will require us to acknowledge that we are a community that would like to believe it is united by a technology, when we are in fact a group of workers divided by our methods of organizing knowledge.

Early this year, I attended an editorial board meeting to develop a new journal that would appeal both to academic computer scientists and practicing software engineers. However, everyone on the board was a professor; not one had worked in industry. This is a fairly common phenomenon. Professors are rewarded for serving on editorial boards whereas industrial computer scientists are not. Most companies believe that such service can distract employees without providing any useful knowledge in return.

The meeting ran into trouble when the group tried to define publication standards. For academic papers, they could articulate the peer review process and standards they wanted to uphold. For applied papers, they could not. Some group members argued that they needed to apply the same standards to both kinds of papers, whereas others felt that they needed to referee applied papers differently from academic papers. However, no one could describe a review process that satisfied everyone.

Finally, one member of the group—a professor at a very prestigious university—said, “We will just have to accept that industrial papers simply are not as rigorous as academic papers.”

The entire board nodded in agreement with this statement, which encouraged me to jump into the conversation. I didn’t have a perfect solution for the group, but I understood the separation in a way that they did not.

Academic and corporate research have been divided for as long as the two have existed. The division between the two has often been confused with the division between theoretical and applied research, but this is not so. Many universities have large and productive applied laboratories that conduct research on physical problems and produce a body of knowledge for applied problems. You might say that they create a theory of applied work. In this idea—a theory of application—we begin to see the difference between academic and corporate research.

Academic work attempts to build an organized body of knowledge. This body of knowledge might concern an abstract set of entities or activities that seem very common and applied. However, academic work isn’t really about the abstract entities or the applied activities; it is about the body of knowledge. Academic researchers are rewarded for articulating the details of that body of knowledge, for identifying its limitations, for bringing new concepts to it, and for showing how to solve new kinds of problems with it. Along the way, they might identify knowledge that helps a practicing software engineer create a new product or service, but that knowledge is a byproduct of academic work.

By contrast, corporate research is about helping computing create those new goods or services. This research’s value is judged by how it might make the company more effective, more flexible, more dominant, or more profitable. Along the way, this work might contribute to a unified body of knowledge, but it doesn’t start with such a goal. It is easy to claim that such research is merely problem solving, but this misses the point as well. Much corporate research has helped a product development team think more clearly about its work or complete work more quickly. However, one would rarely say that such a result truly contributes to the grand body of knowledge that university professors create.

In the late 19th century, industrial engineer Frederick Winslow Taylor identified the difference between the two kinds of research. Taylor performed many studies on how his employees might cut and mill metal because he found little in the academic research that helped him manage production. “Experiments upon the art of cutting metals,” he wrote in an early book, “have been mainly undertaken by scientific men, mostly by professors. It is but natural,” he continued, “that the scientific man should lean toward experiments which require the use of apparatus and that type of scientific observation which is beyond the scope of the ordinary mechanic.”

Taylor’s experiments focused on the things that a mechanic could control at that time: the mill’s speed, the cutting tool’s angle, and the amount of water directed at the cut. He didn’t worry about the materials’ chemical composition or even the tool’s temperature. He wanted to create a system that would allow his workers to produce good products. He dismissed experiments that required “elaborate and expensive apparatus,” which he claimed were “almost barren of useful results.”

The fact that Taylor did his most important work more than 125 years ago suggests that we are not doing a particularly good job of coordinating industrial and academic research. We hold the model of academic research as a standard for all of our research activities and fail to acknowledge that industrial work has different goals.

Recognizing the differences between the two worlds does not automatically provide a way to judge and evaluate industrial work. In building a review process, we need to verify and evaluate industrial research just as we verify and evaluate academic research. When we verify research, we check the work’s results to ensure that they are correct. At some level, the work of verification differs little between the two fields.

However, when reviewing industrial research, the evaluation stage differs substantially from the same step in academic research. Evaluation is the process of assigning a value to research. Both industrial and academic research can produce results that are valid and true but not especially valuable. In academic research, we value work that adds to the body of knowledge and provides new tools for expanding that body of knowledge. In industrial research, we value work that improves the company and its production process. The two are not the same; they require different methods of identifying value.

However, the two kinds of research are not entirely isolated. They do inform one another. Certainly, academic researchers tend to train industrial practitioners, and industrial practitioners regularly identify problems that need an academic treatment. Still, the two groups work in different spheres and have different goals. If we want conferences and periodicals in which representatives of each group participate equally, we must acknowledge a fundamental fact: industry might validate research in the same way that universities do, but it does not necessarily value the results in the same way.


About David Alan Grier

David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee, and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies.