Evaluating the Digital Library
By David Alan Grier

We put great faith in our electronic libraries of technical articles, such as IEEE Xplore. Time and again, people have told me that these libraries have completely replaced the traditional scholarly journal. “Journals are obsolete,” a friend once told me. “You can get more technical content from an electronic library in just a few seconds than you could get from a traditional journal in years.”

Now, I am a devoted user of Xplore and believe that it is a great tool for exploring the technical literature. However, the more I use it, the more I realize that you cannot separate it from the journals and the editorial boards that manage those journals.

We all understand, or at least think we understand, the way that an editorial board works. The board members solicit papers for their journal, read the manuscripts, and then send those papers to referees to get an expert review. When the referees return their reports, the editor, with the help of the board, selects the best papers for publication.

The editorial process is a bit more sophisticated than it appears on the surface. First, the board is doing two things when it reviews papers: it is validating the research and then evaluating the results. These two things are not the same.

When board members validate research, they are checking to see if the work was done correctly. They make sure that the researchers used tools that match the underlying assumptions of the work, used these tools properly, analyzed the data correctly, and reached conclusions that are supported by the data.

When editorial board members evaluate results, they are placing a value on the work. They are determining which papers are more valuable than others. They are identifying papers that solve problems of greater importance than other problems, demonstrate a procedure that is more powerful or more flexible than other methods, or identify questions that are more interesting than others.

Evaluation is always comparative. It is also more subjective than validation. The value that board members give to a paper reflects their ideas about the problems that they believe to be important. It also reflects the vision that they hold for the future of the field.



Editorial board members rarely write about their vision. They don’t identify important problems in the field or publish their plans for the future. When they do commit such ideas to paper, they generally write broad, meaningless statements. They write that their journal publishes papers from a very wide class of research and that they believe almost all research is important.

Even though they might not write about these ideas, editorial boards actually have very specific ideas about the important problems and the future of their field. However, they communicate these ideas in subtle ways. They talk about them at meetings. They discuss them among friends. Many board members express their values only in the concrete work that they do, the work of accepting papers that they consider valuable and rejecting those they believe to be of limited value.

Many researchers can produce valid papers without knowing anyone on an editorial board. Researchers who have been properly trained in their field can generally identify some problem that they can solve. They usually don’t need to consult with anyone else to be confident that they have done their work correctly. However, it is almost impossible to produce a highly valued paper without knowing an editorial board member, or at least someone in the community that the board represents. Without this kind of connection, a researcher may never be able to determine a board’s values, the ideas the board considers important.

The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) provides many examples of how editorial boards both validate and evaluate research. TPAMI is one of the older and more prestigious journals in its field. It began publication in 1979, an important year for that field. Machine intelligence, which is part of the bigger discipline of artificial intelligence (AI), was just starting to rebuild after a period of reevaluation.

About six years before TPAMI’s founding, major research institutions in the United States and the United Kingdom had come to believe that AI research had stalled. These institutions reduced research funding for the field, an event that researchers have since called the “First Artificial Intelligence Winter.” In 1979, these institutions were open to new approaches and new kinds of research in the field. As a result, a group of AI researchers created a new journal with modest, concrete goals: building systems that find patterns in data and draw conclusions from those patterns.

From the start, the TPAMI journal had broad interests. In its first year, it published papers on facial recognition, random walks, and clustering. Yet behind this eclectic list of topics lay a set of methods that the editorial board judged to be valuable.

During its first years, TPAMI published about a half dozen papers on methods for recognizing Chinese characters. This was not a popular topic in other journals. The largest research effort in the field was at IBM’s office in Japan. That office had been working on the problem for nearly 20 years but had published few scholarly papers on the topic. Furthermore, most of this work had involved creating systems for the IBM System/360 that allowed individuals to type, store, process, and print characters; these systems could not, however, recognize handwritten or printed characters.

The TPAMI editorial board likely valued several aspects of the research on character recognition. The problem was hard, yet it was limited; an early paper noted that all characters could be reduced to combinations of 26 different elements. It also had several aspects that could be generalized to more complicated problems such as facial recognition. In all, the journal published a dozen papers on character recognition.

From our current perspective, we might conclude that the board valued this kind of research because it would expand computing technology to a large new market. However, this seems unlikely. TPAMI began publication in the early years of the Deng Xiaoping era. Few computer researchers had regular contact with their Chinese peers. Indeed, most early papers make little reference to China. The majority of researchers came from Japan or Canada. Only in 1984 do we find papers that have any connection to China.

Furthermore, the first papers provided a foundation that spread to other problems. By 1987, authors were using the methods developed for recognizing Chinese characters to analyze Arabic script. Five years later, researchers were extending the methods of these early papers with modifications that could identify incomplete or poorly written characters.

At this distance, we can’t identify the exact reason that the TPAMI board valued these papers highly. However, by looking at the papers as a whole, we can see qualities that the editorial board might have appreciated. The authors built their research on a few common methods. These methods were simple. They could be modified in obvious ways for other scripts and alphabets. The authors analyzed the algorithms carefully and provided measures that could be used to compare alternative algorithms. Ultimately, the editorial board might have valued any or all of these aspects.

When we turn to a digital library for information, we need to review the papers in the context of the editorial boards that provided them. It is tempting to believe that we can get all papers on a given subject and hence know all that there is to know about the topic. Sadly, this is rarely the case. The scholarly literature contains few survey articles that treat an entire subject and show how all the research fits together to address a substantial application.

Instead, we are left with academic articles that solve specific problems. These papers tend to be connected by methodology rather than by a full application. You can read all the papers in Xplore and still not know how to create a practical system that can recognize Chinese characters. Instead, you will know the kinds of methods and approaches that were valued by several editorial boards. The most prominent of these boards was TPAMI’s, but other boards published papers on character recognition as well. Hence, if you want to understand this literature and how its ideas help solve the problem of character recognition, you can’t stop at the digital library. You must consider the boards that reviewed the papers and ask why they valued this work so highly.



About David Alan Grier

David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee, and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.