The invention of the computer is a contentious subject, and it encourages me to start this column with a little bit of advice, if you will allow me to offer advice. Should anyone ever ask you to debate the origins, invention, or creation of the electronic stored program computer, politely decline the offer. Tell them that there is something that you need to do or people that you need to see. Don’t get involved in a debate because you will quickly discover that it generates far more heat than light. Too many participants in this discussion are more interested in claiming complete credit for the idea than in helping us understand how computing became central to industrialized society.
Perhaps it is a little sad that we still have angry debates some 70 years after the computer emerged in public life. However, we can appreciate the cause of the debate. The computer was one of the crucial inventions of the 20th century. A large number of people would love to take credit for creating it.
The subject of the invention of the computer is timely because we have just passed one of the symbolic milestones of the computer age. February 16, 2016, was the 70th anniversary of the press conference held at the University of Pennsylvania for one of the important early machines, the Electronic Numerical Integrator and Computer, or ENIAC. Many sources point to this conference as the start of the computing era. Although it proves to be an unsatisfactory beginning for computing, it helps us understand why the invention of the computer remains controversial.
As a machine, the ENIAC is a distant cousin of the modern computer. It is an electronic machine but is quite different from our current processors. At best, it can be described as a collection of digital adding machines that were cabled together. The ENIAC would read data from a punched card reader and send this data to one of these machines. The adding machine would perform calculations and then send the output to another. The original version of the machine could not be programmed as modern computers are programmed, but it did have a control unit that synchronized the different adding machines and moved data from unit to unit. Eventually, the machine was upgraded and could be programmed. Recent scholarship has shown that the programmable version of the machine eventually proved to be quite useful and taught the world a great deal about programming.
If we consider the ENIAC that was operating on February 16, 1946, we find that it doesn’t really support the idea that computing began on that date. First, February 16 was not the first day that the computer did calculations. It had been operational since October or November 1945. Second, as noted above, it was not programmable as we now program machines. Third, other machines had been running programs for at least a year. In April 1944, Harvard University dedicated a mechanical machine that could execute a program. This machine was not a modern computer, but it represented an important step in the development of the computer. It utilized a program that was punched on a loop of paper tape rather than stored in memory. Still, it taught us a number of practical lessons about the nature of programming.
As we continue to look at the ENIAC, we find more and more problems with identifying it as the first computer. The first articles on the machine described it as an all-electronic computer. The New York Times, for example, called it “the electronic speed marvel.” Yet, we can find at least one all-electronic calculator that predated the ENIAC. This machine was a device that was designed to do matrix calculations. It was created by John Atanasoff of Iowa State University, a less prominent school located in the middle of the US. At least one of the ENIAC designers knew of Atanasoff’s machine and had been able to inspect it. He claimed that he had borrowed little from Atanasoff’s work. Indeed, the two machines are quite different.
A number of authors have argued that Atanasoff’s machine was actually the first computer. Although it was an important computing machine, it lacked one key element: it was not programmable. It was managed by an operator who had to follow a fixed set of instructions. At this point, the argument between those who hold that Atanasoff built the first computer and those who take the side of the ENIAC designers becomes loud and hot. The distinctions between the two machines are difficult to explain without delving into technical details. Most of the differences seem a bit arbitrary. The Atanasoff machine was binary. The ENIAC was decimal. The Atanasoff machine had a single processor. The ENIAC had multiple processors. These details indicate the technical accomplishments of the two groups of designers but do not really establish either machine as the first computer.
If we are solely interested in identifying the first computer, we can identify other machines that complicate our endeavor. In the early 19th century, an English mathematician, Charles Babbage, designed a mechanical computer based on the technologies of looms and weaving. Although he never built the machine, he incorporated a program that was similar to the program in the Harvard machine. Indeed, the Harvard designers knew about Babbage’s ideas.
We can point to two other projects that developed the idea of the program and programmable computers. The first was Alan Turing’s 1936 paper on computable numbers. In this paper, he proposed an abstract computing machine that would be controlled by a program. Although he made no attempt to build such a machine at the time, he laid the foundation for much of the discussion of programmable computers. Also, German engineer Konrad Zuse was developing the idea of the program in the late 1930s and early 1940s. At the same time that the University of Pennsylvania team was preparing the ENIAC, Zuse was creating an electromechanical computing device that anticipated many of the ideas of the modern computer.
Ultimately, we cannot identify a first computer without restricting ourselves to a narrow set of ideas that exclude the other claimants. We might be able to identify the first electronic, binary, stored-memory computer; however, by doing that, we exclude many important programming ideas and much of the early use of these machines.
At best, we can identify a meeting that marked the start of the computing era. It occurred in the summer of 1946 at the University of Pennsylvania, the same school that had built the ENIAC. That meeting was a six-week conference that has become known as the Moore School Lectures. It assembled most of the people in the world who had any experience with programmable computing machines, a task that would be impossible to do even just a few years later. The meeting looked backward rather than forward. The participants talked about what they had learned about computers and computing. At the center of the conference was a paper, “First Draft of a Report on the EDVAC,” which had been written by some of the staff from the ENIAC project.
The report described the computer as we know it, although it takes a certain amount of careful study to understand how it conceived of that machine. It defined a machine that had three elements: a processor, a control unit, and memory. The memory would hold both programs and data. The control unit would take instructions from the memory and use them to move data from memory to the processor, perform calculations, and move the results back to memory. The report also suggested one of the key features of the modern computer: the idea that programs could modify programs.
The participants in the Moore School Lectures left the University of Pennsylvania in August 1946 and returned to their labs and schools to build the first generation of computers. This generation included the EDSAC at Cambridge, the EDVAC at the University of Pennsylvania, the IAS Machine at Princeton, and the Small-Scale Experimental Machine at Manchester—the “Manchester Baby.” Yet these machines did not encompass all the computing that was happening in the years after the war. Others built on the ideas of the lectures to construct machines of their own.
There is much to celebrate in the work of the early computing pioneers. Many contributed fundamental ideas that shape the computing world of today. I generally find it best to appreciate all of them and to accept every anniversary of an early innovation as a good excuse to hold a party. After all, who does not like a good party?
As we approach the 70th anniversaries of many computing organizations and many computing accomplishments, we may do well to remember that computing and computers have been the product of a community and that one of the great accomplishments of the 20th century was how well that community was able to work together and share their results with each other. That, perhaps, is the real lesson of the invention of the computer.
About David Alan Grier
David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at email@example.com.