September 2014 Theme: Human-Computer Interaction: Present and Future Trends
Guest Editors' Introduction: Paolo Montuschi, Andrea Sanna, Fabrizio Lamberti, and Gianluca Paravati

Human-computer interaction (HCI) is a multidisciplinary research area focused on the interaction modalities between humans and computers; sometimes, the more general term human-machine interface (HMI) is used to refer to the user interface in a manufacturing or process-control system. In other words, the HCI discipline investigates and tackles all issues related to the design and implementation of the interface between humans and computers. Due to its nature and goals, HCI innately involves multiple computer science disciplines (image processing, computer vision, programming languages, and so on) as well as disciplines from the human sciences (ergonomics, human factors, cognitive psychology, and so on). Research on HCI primarily concerns the design, implementation, and assessment of new interfaces that improve the interaction between humans and machines, where improvement can concern several aspects, including intuitiveness of use and interface robustness.

An intuitive, natural, efficient, robust, and customizable interface can greatly reduce the gap between a human’s mental model and the way a computer, machine, or robot accomplishes a given task. Although studies of HCI date back to 1975, recent technological advances in consumer electronics have opened exciting new scenarios: gestures, hand and body poses, speech, and gaze are just a few of the natural interaction modes that can be used to design affordable natural user interfaces (NUIs).

The September issue of Computing Now looks at the ongoing evolution of human-machine interaction and the great potential benefits that HCI technologies offer.

Why Is HCI Playing a Key Role in ICT Development?

In the early days of computer science, designers and developers paid much less attention to making hardware and software products usable or “user friendly.” Yet, requests from a growing subset of users for easy-to-use devices eventually focused researchers’ attention on usability.

The International Organization for Standardization (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” Usability thus encompasses a set of criteria, such as efficiency, safety, and utility, that relate mainly to the computer system itself.

In the mid-1990s, another important concept became associated with usability. User experience (UX) focuses mainly on parameters related to the user: satisfaction, enjoyability, emotional fulfillment, aesthetic appeal, and so on. The concept of UX has been extended and refined in several research areas. For instance, web interface designers often leverage the User Experience Honeycomb to set priorities in the design phase. The honeycomb’s seven hexagons represent parameters that must be carefully balanced to give users a satisfactory quality of experience (QoE) by ensuring that an interface is useful, usable, desirable, findable, accessible, credible, and valuable.

Understanding humans’ mental models is another important issue in HCI. Users learn and retain knowledge and skills in different ways, often influenced by their age as well as their cultural and social backgrounds. Thus, HCI studies aim to bridge the gaps between users and new technologies, which now change more quickly than in the past. Efficient, effective, and natural forms of HCI can reduce the skill levels needed to use complex devices, thus potentially reducing inequalities among people by helping to address the “digital divide”: the gap between those who have both access to ICT technologies and the skills to use them, and those who have neither.

New Trends and Opportunities

For many years, humans have sent commands to “machines” primarily via the keyboard-mouse paradigm, also known as WIMP (windows, icons, menus, pointing devices). Here, the term machine is used in a very broad sense: beyond the pointing devices usually associated with computers, we use a keyboard of sorts to dial numbers on a telephone, to interact with a TV, to select a wide range of functions on a car dashboard, and to perform many other activities that employ key-based interaction modalities. In most cases, the machine’s output to the user is then presented on a display device such as a monitor.

As Andy van Dam foresaw in his vision paper, published in IEEE Computer Graphics & Applications’ first issue of the new century: “Post-WIMP interfaces will not only take advantage of more of our senses but also be increasingly based on the way we naturally interact with our environment and with other humans.”

Several affordable sensors have begun to shake up the way people interact with devices. Touch and multitouch screens drove the change from cellular phones to smartphones, and gestures are now the main interaction modality for activating functions on personal devices. At the same time, speech-recognition technologies and CPUs’ increased computational power let users efficiently provide input when they can’t perform gestures.

Personal devices are the most evident example of how new forms of HCI can reduce the gap between humans’ mental models and technology. One market that has led this deep innovation in HCI is entertainment. As users asked game and device makers for new ways to control characters, game console developers proposed controllers that free players from the constraints of a keyboard and mouse. The new interface provides tactile feedback and also acts as a sort of tangible interface: the controller becomes a steering wheel, a gun, or a tennis racket, for instance.

Sensors such as the Microsoft Kinect are a further step toward the implementation of fully natural interfaces in which the human body becomes the controller. The device lets users provide commands to the machine via gestures and body poses as embedded hardware performs real-time processing of raw data from a depth camera, thus obtaining a schematic of a human skeleton comprising a set of bones and joints. Recognizing the position and orientation of bones lets the hardware identify poses and gestures, which can be mapped to commands for the machine.
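
To make the last step of this pipeline concrete, here is a minimal Python sketch that maps tracked joint positions to commands. The joint names, coordinate conventions, and gesture rules are illustrative assumptions for this sketch, not the actual Kinect SDK:

```python
# Minimal sketch of pose-to-command mapping over skeleton data.
# Joint names, coordinates, and rules are illustrative assumptions,
# not the actual Kinect SDK API.
from typing import Dict, Optional, Tuple

Joint = Tuple[float, float, float]  # (x, y, z) in meters, y pointing up

def detect_command(skeleton: Dict[str, Joint]) -> Optional[str]:
    """Map a tracked skeleton (joint name -> position) to a command."""
    head = skeleton["head"]
    right_hand = skeleton["right_hand"]
    left_hand = skeleton["left_hand"]
    if right_hand[1] > head[1]:   # right hand raised above the head
        return "next_slide"
    if left_hand[1] > head[1]:    # left hand raised above the head
        return "previous_slide"
    return None                   # no recognized pose in this frame

# Example frame: the user's right hand is above the head.
frame = {"head": (0.0, 1.6, 2.0),
         "right_hand": (0.3, 1.9, 1.9),
         "left_hand": (-0.3, 1.0, 2.0)}
print(detect_command(frame))      # -> "next_slide"
```

A production recognizer would, of course, smooth the joint stream over time and handle tracking dropouts; the sketch shows only the final pose-to-command mapping.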

Researchers have also proposed sensors that can track a user’s hands. For instance, the Leap Motion can interactively track both of a user’s hands by identifying the positions of the fingertips and the palm center, then computing the finger joints with an inverse kinematics solver. Some car makers already propose hand tracking as an alternative interaction modality in lieu of the traditional touch screens devoted to managing infotainment functions. Similarly, some smart TVs let users control their choices with a set of gestures, replacing the traditional remote control.
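
As a rough illustration of how such hand tracking can drive an interface, the following sketch classifies a short palm-center trajectory as a left or right swipe (for example, to change a radio station). The frame format and thresholds are assumptions for illustration and do not reflect the actual Leap Motion API:

```python
# Sketch: detect a horizontal swipe from successive palm-center positions.
# The trajectory format and thresholds are illustrative assumptions;
# they do not reflect the actual Leap Motion API.
from typing import List, Optional, Tuple

Point = Tuple[float, float, float]  # palm center (x, y, z) in millimeters

def detect_swipe(palm_track: List[Point],
                 min_distance: float = 120.0) -> Optional[str]:
    """Classify a short palm trajectory as a left/right swipe, if any."""
    if len(palm_track) < 2:
        return None
    dx = palm_track[-1][0] - palm_track[0][0]   # net horizontal motion
    if dx > min_distance:
        return "swipe_right"                    # e.g., next station
    if dx < -min_distance:
        return "swipe_left"                     # e.g., previous station
    return None

# Example: the palm moves about 150 mm to the right across a few frames.
track = [(0.0, 200.0, 0.0), (60.0, 202.0, 0.0), (150.0, 205.0, 0.0)]
print(detect_swipe(track))                      # -> "swipe_right"
```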

Found only in science fiction movies just a few years ago, the scenarios described above are now HCI’s present reality. Meanwhile, new and more intriguing scenarios appear imminent: brain interfaces, for instance, seem poised to invert the relationship between humans and machines. This new interaction paradigm’s success will rely on future technological advances that aim to transform interface devices into wearable and embeddable objects. Interfaces based on augmented reality (AR) technologies are clear examples of this transformation. Many AR applications for tourism, entertainment, maintenance, shopping, and social networking are already available for personal devices, but new wearable sensors might soon change our habits. Google Glass will be (massively) marketed in the near future, and new application fields are proposed daily. Human-machine interaction and human-machine “integration” are destined to become very similar concepts; indeed, Google Glass-like solutions could soon be replaced by contact lenses that implement natural eyewear-based interfaces.

New forms of HCI will significantly change our lives. New interaction paradigms offer the chance to improve quality of life for people who can’t take advantage of current interfaces because of physical disabilities, for example. On the other hand, new issues will arise, particularly around privacy, security, and ethics, potentially slowing the diffusion of new hardware and software products based on wearable (and “invisible”) devices. Although some researchers have already investigated the relationships between interface design and legal and privacy issues, national legislation is heterogeneous and not yet ready to cope with present and future advances in HCI.

Theme Articles

For the first article in this month’s theme, we selected “Gestural Technology: Moving Interfaces in a New Direction,” in which author Lee Garber examines the evolution of gesture-based interfaces, with a focus on future technological trends. By detecting the body, hands, face, and eyes, researchers have developed ways to use poses, gestures, and expressions to control devices. The article also investigates the challenges gestural interfaces face, especially regarding usability and robustness.

In “The Road to Natural Conversational Speech Interfaces,” Charles L. Ortiz Jr. discusses the challenges in implementing effective speech interfaces. Once again, efficiency and robustness play key roles in adoption, particularly in identifying context and building a semantic representation of meaning, both of which are crucial for correctly interpreting natural-language sentences.

Identifying context is often related to people’s affective states: emotions can deeply affect the way we interact with other people as well as with devices. Sidney K. D’Mello and his colleagues’ “Unimodal and Multimodal Human Perception of Naturalistic Non-Basic Affective States during Human-Computer Interactions” presents an in-depth study of affective states during HCI. The authors focus on human multimodal affect detection, which characterizes a person’s state in terms of a set of parameters (neutral, boredom, confusion, engagement, and frustration) under unimodal and multimodal stimuli. The article considers the differing impact on human emotions when one (unimodal), two (bimodal), or more than two (multimodal) stimuli, whether olfactory, visual, or auditory, are conveyed. Given that an interface can be designed to stimulate users in such different ways, the study of perceived emotions plays a key role in HCI research.

For a good example of how HCI can be leveraged to overcome physical limitations, we next turn to “Haptic-Assisted Target Acquisition in a Visual Point-and-Click Task for Computer Users with Motion Impairments.” Authors Christopher Asque, Andy Day, and Stephen Laycock present a haptic interface that helps people with motion impairments accomplish point-and-click tasks, thus improving the accessibility of computer software.

The final article focuses on a well-known modality: (multi)touch interaction. In “Understanding Pen and Touch Interaction for Data Exploration on Interactive Whiteboards,” Jagoda Walny and her colleagues investigate how pen and touch solutions can overcome the problem of multiple levels of indirection (implemented by menus, boxes, and panels). The article describes a touch-based interface for interactive whiteboards applied to the information visualization (InfoVis) domain, in which users are asked to interact with three types of charts in a modeless, buttonless way.

These theme articles highlight just some of the current research directions in the field. If you’re looking for further insights on other relevant application areas for HCI, consider some special issues published in IEEE Computer Society magazines and transactions. For instance, IEEE Transactions on Affective Computing recently published a special focus on affective interfaces for games; IEEE Computer Graphics & Applications considered interactive displays in its March-April 2013 issue; and Computer highlighted new HCI paradigms in its April 2012 issue.

Industry Perspectives

This month, we also have an Industry Perspective video from Ivan Tashev of Microsoft, who presents trends in NUIs with a special focus on intelligent audio interfaces that can identify speakers and adapt interactions between humans and machines.

The opportunities for HCI are enormous. Progress toward more usable and natural interfaces for human-machine interaction can yield incalculable advantages and can deeply change everyday life. We invite you to dig into the wealth of possibilities by starting from the articles in this month’s theme.


Citation

P. Montuschi, A. Sanna, F. Lamberti, and G. Paravati, “Human-Computer Interaction: Present and Future Trends,” Computing Now, vol. 7, no. 9, September 2014, IEEE Computer Society [online]; http://www.computer.org/publications/tech-news/computing-now/human-computer-interaction-present-and-future-trends.

Paolo Montuschi is a professor of computer engineering at the Polytechnic University of Turin, Italy. His research interests include computer arithmetic and architectures, computer graphics, electronic publications, and new frameworks for the dissemination of scientific knowledge. Montuschi is an IEEE Fellow and an IEEE Computer Society Golden Core member, and serves as chair of the Computer Society’s Magazine Operations Committee, associate editor in chief of IEEE Transactions on Computers, and a member of both the IEEE Transactions on Emerging Topics in Computing steering committee and the Computing Now advisory board. He is also a member of the IEEE Publications Services and Products Board. Please visit his personal page at http://staff.polito.it/paolo.montuschi and contact him at paolo.montuschi@polito.it.

Andrea Sanna is an associate professor at the Polytechnic University of Turin. He has published several papers in the areas of computer graphics, virtual reality, parallel and distributed computing, scientific visualization, and computational geometry. Sanna is currently involved in several national and international projects concerning distributed architectures and human-machine interaction. He is a Senior Member of ACM and serves as a reviewer for multiple international conferences and journals. Please visit his personal page at http://sanna.polito.it and contact him at andrea.sanna@polito.it.

Fabrizio Lamberti is an assistant professor at the Polytechnic University of Turin. His research interests include computational intelligence, semantic processing, distributed computing, human-computer interaction, computer graphics, and visualization. Lamberti is a Senior Member of IEEE and the IEEE Computer Society. He has published more than 90 papers in international peer-reviewed journals, magazines, and conference proceedings. Please visit his personal page at http://staff.polito.it/fabrizio.lamberti and contact him at fabrizio.lamberti@polito.it.

Gianluca Paravati is an assistant professor at the Polytechnic University of Turin. His research interests include image processing, computer graphics, virtual reality, human-computer interaction, and visualization. Paravati is a member of IEEE and the IEEE Computer Society. He serves as an editorial board member and reviewer in several refereed international journals and conferences. Contact him at gianluca.paravati@polito.it.

Translations

Spanish | Chinese

Translations by Osvaldo Perez and Tiejun Huang