Introduction to the Handbook
Abstract and Keywords
Since its origins some 30 years ago as a subdiscipline of both human factors and cognitive science, cognitive engineering has grown into a diverse yet coherent body of research and practitioner activity focused on informing the human-centered design of engineered systems and workplaces. To introduce this handbook, we first provide a brief statement of the perspectives that gave rise to the selection and organization of the research it presents. We then situate cognitive engineering historically, both as a maturation of cognitive science to embrace applications and as an outgrowth and extension of human factors enabled and required by developments in information technology and automation. Finally, in the concluding sections of the chapter, we apply a cognitive engineering approach to the book itself, using large-scale data collection and analysis, statistical modeling, and pictorial visualization to provide the reader with a set of windows into the contents of this handbook. We hope that these windows function as effective user-centered aids, both to facilitate efficient use and to communicate the research themes comprising contemporary cognitive engineering to a broad audience of students, researchers, and practitioners.
Handbook Contents and Organization
Cognitive engineering is an interdisciplinary approach to the analysis, modeling, and design of engineered systems or workplaces, especially those in which humans and automation jointly operate to achieve system goals. The field emerged in the 1980s in response both to the increased complexity of the challenges that information technology and automation posed for system designers and to the enhanced array of opportunities afforded by these technologies and by the maturation of cognitive science, which better enabled it to inform design.
A brief inspection of the table of contents shows that this handbook is organized into four major sections:
• Cognition in Engineered Systems
• Cognitive Engineering Methods
• Cognitive Engineering Models
• Cognitive Technologies in Engineered Systems
For the reader with at least some acquaintance with cognitive engineering research, the most noteworthy aspect of this organization becomes apparent when contrasted with an alternative framework organized instead around application domains:
• Cognitive Engineering in Health Care
• Cognitive Engineering in Aviation
• Cognitive Engineering in Highway Transportation
• And so forth
There are at least two reasons for being explicit in bringing the domain-general, as opposed to domain-specific, organization chosen for this handbook to the readers’ attention.
First, while it is indeed true that most cognitive engineering projects require the input of substantial domain knowledge or expertise, domain specificity (of theories, models, best practices, design approaches, etc.) has the potential to stand in the way of informative and efficient cross-domain generalization and the development of cognitive engineering into a mature engineering discipline in its own right. We were well aware of the extent to which some of this handbook’s contributors would be content or pleased to identify their research with one application domain or another, as they may have found it professionally reinforcing to identify with an application domain in these times of intensive academic specialization. We were nevertheless delighted to find such a substantial base of cognitive engineering researchers willing to take on the challenge we posed to write their chapters in domain-general terms (that is, every author represented in this handbook).
Second, prior to our many dialogues with contributors to help identify suitable cross-domain topics for their contributions, and seeing the resulting manuscripts, we ourselves were unsure what themes and topic areas would emerge and endure as viable, domain-general themes around which this handbook could be organized. Unsure if our requests would yield productive results, we nevertheless put the challenge to the cognitive engineering research community to communicate useful information to readers in a way that did not presume prior knowledge of the reader’s domain of interest. We found that the community, in our judgment, succeeded admirably. On the basis of these results, we feel confident that this handbook will be of interest and use to a broad audience of practitioners, many of them engineers and computer scientists, involved with designing human-technology systems for a broad array of application domains. We also hope that the benefits to the student and academic communities will be similarly direct. This handbook may aid in selecting research problems that already show strong promise of domain generality, and thus broad relevance and impact, because current research has yet to probe the full range and depth of the issues involved.
We have already mentioned that one of the historical developments that led to the emergence of cognitive engineering is the maturation of cognitive science into a discipline whose theories, models, and methods are capable of guiding application. These developments have come mainly from two directions. The first involved the groundbreaking work toward creating computational models of cognition, initially in the domain of human-computer interaction (Card, Moran, & Newell, 1983). This research has proven seminal in prompting numerous extensions resulting in various “cognitive architectures” and related approaches to modeling cognitive performance in technological interaction. A variety of these approaches are described in the Cognitive Engineering Models section of this handbook, as well as in research volumes devoted to this approach (e.g., Gluck & Pew, 2005). Research such as this continues a longstanding appreciation for the fundamental role played by modeling in the analysis and design of human-technology systems (Elkind, Card, Hochberg, & Huey, 1990; Rouse, 1980; Sheridan & Ferrell, 1974). Research methods grounded in modeling, whether quantitative, computational, or otherwise, are a hallmark of both professional and research activity in engineering. Cognitive engineering is not likely to be different.
A second route by which cognitive science matured into application in a manner that helped spawn the field of cognitive engineering is through the research of Donald A. Norman, first outlined in his chapter titled “Cognitive Engineering” in the 1986 volume User Centered System Design. Here, Norman laid out his influential and intuitive conception of the barriers to good design lying in the “gulf of execution” (how do I get it to work?) and “gulf of evaluation” (is it working as I intended?). It is useful to consider how Norman himself understood the nature of the discipline he was putting forward at the time:
Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. It is a type of applied Cognitive Science, trying to apply what is known from science to the design and construction of machines. It is a surprising business. On the one hand, there is quite a lot known in Cognitive Science that can be applied. On the other hand, our lack of knowledge is appalling. (Norman, 1986, p. 31)
Norman’s comments prompt one to consider why he made a contrast between what he was promoting and the much older discipline of human factors, which by almost any definition involves applying “what is known from science to the design and construction of machines” (e.g., Wickens, Lee, Liu, & Gordon-Becker, 2004). Norman believed there was a gap between what the discipline of human factors was offering at the time and what was needed to provide sufficient guidance for the design of interactive technologies. Many of these gaps have developed into important research themes in this handbook and in the field of cognitive engineering more generally.
Why has cognitive engineering emerged as a separate discipline or as a subdiscipline of human factors? Many perspectives on this question exist. Google Scholar provides one useful path forward. As of the time of this writing, the only publication with more Google Scholar hits using the search term “cognitive engineering” than Norman’s previously cited (1986) chapter (over 1600 citations) is Jens Rasmussen’s book Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, with over 2300 citations. This monograph and Rasmussen’s classic (1983) article “Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models” are taken by many in the cognitive engineering community to be seminal publications and landmarks at the origin of the field.
In both his 1983 article and 1986 monograph, Rasmussen, a control engineer working to ensure the safety of nuclear power plants and operations, observed that, in the crucially important area of interface design, semantics had overtaken syntax as the chief barrier to effective system control and problem diagnosis. It was not that system operators had great difficulty perceiving or attending to proximal information displays, but rather that they struggled to understand what those displays meant. This observation implied that the lion’s share of human factors knowledge on how to present information at an interface to best support perception or attention was, while necessary, far from sufficient to ensure effective human-machine interaction mediated by interface displays. Displaying information that an operator can attend to and perceive was important, but to Rasmussen, design guidance of this kind was insufficient if it did not also foster operator comprehension or understanding.
Rasmussen understood meaning in terms of external reference. The operator’s ultimate task, Rasmussen noted, is to monitor, control, and diagnose (and so forth) a plant or technology “behind” the interface, so to speak. The operator’s task is not merely to attend, perceive, and manipulate the proximal interface itself, although interface manipulation skills have historically served as the object of study for the lion’s share of traditional human factors research. Instead, for Rasmussen, the proximal interface must be considered functionally, not as the ultimate or end target of human interaction, but instead as a window to a distal plant or remote environment comprising the true target of work. Just as Bruner (1973) had characterized cognition as “going beyond the information given,” Rasmussen (1983) described an operator’s cognitive task in terms of exactly the same sort of going beyond, but in this case, going beyond the given interface.
This characterization applies not solely to process control, but equally to modern “knowledge workers” (Zuboff, 1984) more generally, whose windows to the world of work increasingly consist of computer interfaces of one sort or another and, as such, who are rarely able to perceive and manipulate the objects of their work in a direct fashion. Additionally, the heightened emphasis given to knowledge-based behavior in Rasmussen’s research and in an array of related cognitive science research on expertise (e.g., de Groot, 1978; Simon & Chase, 1973) served as one impetus to a line of research focused on the nature of expert decision making in cognitive engineering contexts (Klein, 1989) and the mechanisms associated with human error by otherwise well-trained, well-motivated human operators (Reason, 1990; Senders & Moray, 1991). Cognitive engineering’s primary focus on expert or otherwise knowledgeable humans is evident throughout this handbook. This focus is yet another factor that marks the discipline off from much, but not all, traditional engineering psychology and human factors research, which has historically focused on the behavior of humans with perhaps hours, but rarely months or years, of training and experience.
Rasmussen’s (1983) paper presented a conceptual framework that acknowledged the importance of the large body of research that had grown up around relatively simple technological contexts in which the primary goal was safe and efficient interaction with a proximal interface or workplace, yet also indicated a need for a novel theory and method for better understanding the cognitive activities of knowledge workers. Rasmussen’s observations have proven prescient: The research problems that occupy the lion’s share of the attention of today’s cognitive engineers are those in which technology is not viewed as the end target of human interaction, but rather as an intermediary through which humans interact with the actual objects of work.
It is also worthwhile to consider how Rasmussen (1983) laid out what he believed to be necessary for cognitive engineering to meet its goals:
In our work, concern is with the timely development of models of human performance which can be useful for the design and evaluation of new interface systems. For this purpose, we do not need a single integrated quantitative model of human performance but rather an overall qualitative model which allows us to match categories of performance to types of situations. In addition, we need a number of more detailed and preferably quantitative models which represent selected human functions and limiting properties within the categories. The role of the qualitative model will generally be to guide overall design of the structure of the system including, for example, a set of display formats, while selective, quantitative models can be used to optimize the detailed designs. (p. 264)
Some 30 years after Rasmussen stated these objectives for future research, we expect that the reader will see, as illustrated by this handbook, that the array of contemporary cognitive engineering products consists largely of a toolbox of conceptual or qualitative frameworks together with a set of more formal techniques and quantitative models for detailed performance prediction.
Rasmussen’s research was also influential in providing cognitive engineering’s orientation to a unit of analysis consisting of a human-technology system, or perhaps even a human-technology-environment system, rather than the human in isolation. This was not a new idea within the engineering-oriented, human performance modeling tradition (see Pew, 2008, for a historical overview), yet Rasmussen’s observations on the fundamental ecological nature of cognitive engineering resonated with researchers interested in grounding the psychology of cognitive engineering on a scientific footing other than information processing theory alone. Cognitive engineering researchers such as Vicente (Vicente & Rasmussen, 1990; Vicente, 1999); Woods and Hollnagel (Hollnagel, Mancini, & Woods, 1988; Woods & Hollnagel, 2006); and Flach (1990) have each pursued cognitive engineering approaches influenced by the ecological theory of perceptual psychologist James J. Gibson and, more importantly, centered on a unit of analysis spanning the human, cognitive tools, and the work environment. Along similar lines, and though grounded in a computational rather than an ecological framework, the pioneering research of Hutchins (1995) and his colleagues (Hollan, Hutchins, & Kirsh, 2000) on distributed cognition also brings to cognitive engineering an approach that seeks to account for how cognitive resources both internal and external to the human might combine to enable the types of performance observed in technological systems. At a general level, the guiding theoretical orientation behind all these approaches is that cognitive engineering concerns the analysis and design of integrated, human-technology systems. This general orientation is evident throughout this handbook.
Another research theme central to this handbook concerns the challenge of achieving a safe and productive coupling of humans and automation. To the extent that the discussion above has been useful in communicating the pioneering influence of Jens Rasmussen’s research on cognitive engineering, the research of Thomas B. Sheridan has played a similarly pioneering role in bringing the issues involved with human-automation interaction to the forefront of cognitive engineering research (Sheridan & Johannsen, 1976; Sheridan, 1992; Sheridan, 2002). Although much of the impetus for Rasmussen’s research came from his observations of power plant technicians engaged in troubleshooting tasks (see Vicente, 2001, for a detailed history and overview), Rasmussen was also strongly influenced by the seminal research of Thomas Sheridan, who was actively engaged in the problems of remote control and monitoring of distant vehicles in contexts such as space and undersea exploration. Sheridan coined the term “supervisory control” to describe the situation in which the human is not in direct, manual control of a system or process, but instead inputs commands to automated systems that themselves act directly on the distal system, process, or vehicle.
This seminal research by Sheridan has spawned dozens of studies over the past decades trying to characterize human-automation interaction with models or taxonomies, to understand the consequences of introducing automation into systems or workplaces, to identify and describe human tendencies in dealing with those consequences, and to identify design principles, frameworks, and techniques to support human operators or workers in doing so. As will be seen in the following sections of this chapter, if one had to name a single key topic central to this handbook, the impact of automation and information technology on the human’s role in engineered systems would be that key topic.
Finally, to both close this section on historical foundations and to set the stage for a discussion of the handbook’s contents in detail, it should be mentioned that cognitive engineering has broadened its focus even further in recent years to include a consideration of how teams and organizations communicate and collaborate in the performance of cognitive tasks (Salas & Fiore, 2004). While hardly a new idea, the proliferation of information and communication technologies that increasingly mediate what was once direct human interaction has highlighted the importance of these social factors. A variety of chapters in this handbook—including those on communication, teamwork, conducting experiments with teams, and the design of organizations and communities of practice—illustrate this rapidly growing dimension of cognitive engineering research.
Cognitive Engineering Themes and the Handbook Contents
These historical themes are reflected in the chapter structure and in the associated content. Figure A.1 shows a word cloud based on the contents of the 41 chapters, which provides a simple visualization of the handbook contents. The size of each word is proportional to its frequency of occurrence in the book—the large words occur often. This representation suggests the broad scope of cognitive engineering, spanning the individual operator to teams and organizations, with a focus on how systems of people and technology, often in the form of automation, influence performance. This word cloud provides a holistic view of the handbook contents that can be challenging to extract from reading the individual chapters. While useful, the word cloud is limited because it provides no link back to the chapters. A reader seeing the importance of models and systems from the word cloud would not know what chapters to read to learn more about these topics. Formal analysis of the text represented in the word cloud can help readers to navigate the complex field of cognitive engineering.
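The frequency-proportional sizing that underlies such a word cloud is straightforward to sketch. The analyses in this chapter were carried out in R, but the idea is language independent; the following Python sketch (with an invented toy text and font-size range) shows one minimal way to map word counts to display sizes:

```python
from collections import Counter

def word_cloud_sizes(text, min_pt=10, max_pt=48):
    """Map each word's frequency to a font size, linearly scaled between
    min_pt and max_pt (the most frequent word gets the largest size)."""
    counts = Counter(text.lower().split())
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {w: min_pt + (c - lo) * (max_pt - min_pt) / span
            for w, c in counts.items()}

# Toy text invented for illustration; "system" occurs most often,
# so it receives the largest font size.
sizes = word_cloud_sizes("system design system automation system design")
```

A real word cloud would also lay the words out spatially, but the size mapping above is the part that carries the frequency information described in the text.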
The 41 chapters contained in this handbook demonstrate the diversity of perspectives that define the field of cognitive engineering. The organization of these chapters in the handbook represents one way of compiling this content, and we have made a concerted effort to structure these chapters in a logical fashion by placing related material together. At the same time, reading through the chapters or scanning the table of contents might not convey an adequate understanding of the field and might not lead a reader to a set of chapters of particular interest. Not all readers will want to read all 41 chapters, and readers will approach the handbook with diverse backgrounds and objectives. Different readers will need different tables of contents to satisfy their needs. We apply text analysis to the chapters to identify common themes and connections between the chapters that a single table of contents cannot provide.
We hope these themes and connections will support a focused and individualized reading of the handbook that meets the particular needs of each reader. As an example, a designer interested in situation awareness could start by reading chapters that contain “situation awareness” in the title, but the handbook is not organized to identify related chapters that might also be of interest. There is no “situation awareness” section in the handbook. The titles of other chapters might not reveal their relevance, and chapters located before or after the chapters with “situation awareness” in their title might not be particularly closely related to situation awareness. A reader interested in situation awareness might then be left to search the index. Many readers will approach the handbook with similarly individual perspectives. To support readers who desire a focused reading of the handbook, this chapter provides a systematic analysis and representation of the handbook contents.
Three analyses support a more focused and individualized reading of the handbook. First, we identify groups of similar chapters based on the relative frequency of words occurring in each chapter. One might think of this as identifying clusters of chapters that have similar word clouds. Second, we describe topics contained in these chapters. Even chapters that fall into the same cluster might address different topics, and so the topics contained in each chapter indicate why a chapter belongs to a particular cluster and also reveal chapters that share the same topic even if they belong to different clusters. Third, we describe how shared topics connect chapters into a network of chapters. This network highlights chapters particularly central to the field as those chapters that contain themes that are shared by many other chapters. In combination, these analyses provide an alternate table of contents to the handbook that we hope will help readers navigate the field of cognitive engineering.
The text analysis techniques applied to the chapters in this book are based on the term frequency data represented in Figure A.1. Each chapter is reduced to a vector that tabulates the frequency of each word used in the chapter. The handbook can then be represented as a matrix, with each chapter as a row and each column representing the frequency of occurrence of words, such as those shown in Figure A.1. The relative frequency of occurrence of words across chapters can be analyzed as numeric data using techniques such as cluster analysis. This “bag of words” approach to text analysis does not include any information regarding the meaning of particular words or their relationship to each other within sentences. Even so, analysis of such term-frequency data often provides a surprisingly insightful view into the concepts contained in a set of documents (Landauer & Dumais, 1997; Deerwester, Dumais, Furnas, Landauer, & Harshman, 1990).
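To make the bag-of-words representation concrete, consider the following Python sketch (the chapter's own analysis used R's tm package; the toy "chapters" here are invented for illustration). It builds relative term-frequency vectors over a shared vocabulary and measures the distance between documents:

```python
import math
from collections import Counter

def term_vector(text, vocab):
    """Relative frequency of each vocabulary term in a document."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in vocab]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Toy "chapters" standing in for full chapter texts
docs = {
    "ch_teams":  "team coordination team communication performance",
    "ch_teams2": "team communication team coordination workload",
    "ch_models": "model simulation model prediction performance",
}
vocab = sorted({w for d in docs.values() for w in d.split()})
vecs = {name: term_vector(text, vocab) for name, text in docs.items()}

# Chapters using the same terms with similar relative frequencies lie
# closer together than chapters with different vocabularies.
d_same = euclidean(vecs["ch_teams"], vecs["ch_teams2"])
d_diff = euclidean(vecs["ch_teams"], vecs["ch_models"])
```

Stacking such vectors row by row yields exactly the chapter-by-term matrix described above, ready for clustering or topic modeling.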
The statistical package R 2.14.1 (R Development Core Team, 2011) supported the text analysis of the handbook chapters, with the text mining packages tm (Feinerer, Hornik, & Meyer, 2008) and topicmodels (Grün & Hornik, 2011). The graphics packages ggplot2 (Wickham, 2010) and igraph (Csárdi & Nepusz, 2006) were used to visualize the results.
Applying Ward’s method of hierarchical clustering to the data from the term-document matrix identifies similar chapters. Here, chapter similarity is based on the Euclidean distance between chapters defined by the relative frequency of each term contained in each chapter. Documents that use the same terms with the same relative frequency will be close to each other and so will fall into the same cluster. Figure A.2 shows the hierarchical cluster analysis, with the top of the hierarchy showing two sets of chapters and the bottom of the hierarchy showing individual chapters. A cut point midway in the hierarchy produces 13 clusters of chapters. The chapters generally cluster according to the table of contents of the book. Many clusters include chapters from a similar section of the book, such as chapters III.1, III.2, and III.3 on task analysis, work analysis, and decision-centered design. Others that are not co-located in the book are strongly related, such as chapters II.5 and III.4 on situation awareness and chapters II.4 and IV.6 on judgment. Considering how these clusters combine in the hierarchy shows that the clusters of clusters also correspond to the grouping in the book. The cluster that combines the three clusters beginning with the third cluster from the left is almost exclusively composed of chapters from section III. Importantly, this analysis uses only word frequency, with no reference to the chapter structure, and yet the clusters reflect the structure of the book surprisingly closely.
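The bottom-up merging at the heart of hierarchical clustering can be sketched as follows. This simplified Python illustration uses average linkage rather than Ward's method, and invented two-dimensional coordinates in place of real term-frequency vectors, but it shows the same repeated merge-the-closest-pair logic:

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def agglomerate(points, n_clusters):
    """Greedy agglomerative clustering with average linkage.
    (The handbook analysis used Ward's method; average linkage is a
    simpler stand-in that illustrates the same bottom-up merging.)"""
    clusters = [[name] for name in points]

    def linkage(c1, c2):
        pairs = [(a, b) for a in c1 for b in c2]
        return sum(dist(points[a], points[b]) for a, b in pairs) / len(pairs)

    while len(clusters) > n_clusters:
        # Find and merge the closest pair of clusters.
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

# Toy 2-D "chapter" coordinates (invented): A and B are near each other,
# as are C and D, so cutting at two clusters recovers those pairs.
chapters = {"A": (0.0, 0.0), "B": (0.1, 0.0), "C": (1.0, 1.0), "D": (1.1, 1.0)}
groups = agglomerate(chapters, 2)
```

Recording the order and height of the merges, rather than stopping at a fixed count, is what produces the full dendrogram shown in Figure A.2.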
Topic analysis reveals the themes contained in the chapters that influence cluster membership. Based on a latent Dirichlet allocation approach (Grün & Hornik, 2011), the text of the 41 handbook chapters reveals 22 distinct topics. These topics can be represented by word clouds of the terms most important in defining each topic. Figure A.3 shows the word clouds of the 22 topics, and topic names were derived from these word clouds. These topics and their associated word clouds provide a much richer description of the handbook content than the single word cloud in Figure A.1. These word clouds describe some of the common themes of the field of cognitive engineering, but like the overall word cloud in Figure A.1, by themselves they do not direct readers to particular chapters.
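A fitted topic model assigns each topic a weight over terms, and the topic word clouds and names come from the highest-weight terms. The following Python sketch, with invented weights standing in for an actual latent Dirichlet allocation fit, illustrates that post-processing step:

```python
def top_terms(topic_term_weights, k=3):
    """Return the k highest-weight terms for each topic: these are the
    terms that dominate the topic's word cloud and suggest its name."""
    return {
        topic: [t for t, _ in sorted(weights.items(),
                                     key=lambda tw: tw[1], reverse=True)[:k]]
        for topic, weights in topic_term_weights.items()
    }

# Invented topic-term weights, standing in for a fitted LDA model.
weights = {
    "team coordination": {"team": .30, "coordination": .22,
                          "shared": .10, "model": .02},
    "practice and learning": {"training": .28, "practice": .21,
                              "skill": .12, "team": .03},
}
labels = top_terms(weights)
```

Fitting the model itself (done here with R's topicmodels package) is considerably more involved; this sketch covers only the labeling step that turns a fitted model into the named topics of Figure A.3.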
Figure A.4 shows that each topic occurred in at least three chapters and that a few topics occur in almost one quarter of the chapters. The topics of “decision heuristics,” “simulation for safety,” and “communication” each occur in four or fewer chapters, whereas the topics of “team coordination” and “practice and learning” occur in at least nine chapters. Figure A.5 shows the distribution of these topics across the chapters. Chapters are listed vertically, grouped according to the cluster analysis. The dark and light grays differentiate neighboring clusters. The horizontal axis indicates the topics of the handbook. Three topics describe each chapter, and the size of the circle represents which topic is the primary, secondary, or tertiary topic of each chapter—the large circle represents the primary topic. Generally, chapters in the same cluster share topics—the chapters on judgment share the primary topic of judgment. Chapters in some clusters do not share topics as uniformly, as in the case of the chapter on trust and the chapter on communities of practice. These chapters belong to the same cluster but do not share any topics. The vertical lines highlight the topics of “team coordination” and “practice and learning,” which occur in the greatest number of chapters.
Figure A.5 provides a valuable tool for understanding the content of the handbook by highlighting clusters of chapters and the particular topics those chapters cover. The combination of chapter clusters and topics can help readers identify sets of chapters that address topics of interest that might not be obvious from the chapter titles or from the structure of the table of contents. Locating a topic on the horizontal axis and tracing that column upward to the circles marking chapters that include it identifies a set of chapters addressing that issue. Figure A.5 can thus act as an alternate table of contents that makes it possible for readers to quickly identify the chapters most likely to meet their needs.
Figure A.5 shows that many chapters share topics with other chapters. These shared topics can be considered as links between chapters that form a network of chapters. In this network, a chapter might be connected to one or two other chapters or to many chapters. The structure of this network based on shared topics places each chapter into a rich context of connections with other chapters.
Network analysis measures provide a way to quantify features of the chapter network defined by shared topics. One network analysis measure—degree—counts the number of links a node has to other nodes in the network. Nodes with many connections have high degree, a simple form of centrality. In the network of chapters, highly central nodes are those that share topics with many other chapters. Another network analysis measure—betweenness—counts how often a node lies on the shortest paths connecting other pairs of nodes; such nodes act as bridges between otherwise separate parts of the network. In the network of chapters, chapters with high betweenness link chapters that touch on many of the topics that define cognitive engineering. Multidimensional scaling uses the links between nodes to place nodes that share similar patterns of connections near each other in a two-dimensional space.
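These two measures can be illustrated on a toy chapter network. The sketch below (in Python, with an invented four-node network; the actual analysis used the igraph package in R) computes degree directly and betweenness via Brandes' shortest-path counting algorithm:

```python
from collections import deque

def degree(graph):
    """Number of links each node has to other nodes."""
    return {v: len(nbrs) for v, nbrs in graph.items()}

def betweenness(graph):
    """Brandes' algorithm: how often each node lies on shortest paths
    between other pairs of nodes (undirected, unnormalized)."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        preds = {v: [] for v in graph}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate path dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair is counted twice, once from each endpoint.
    return {v: b / 2 for v, b in bc.items()}

# Toy network: "hub" bridges three chapters that share no direct links,
# so it has both the highest degree and all of the betweenness.
net = {"ch1": ["hub"], "ch2": ["hub"], "ch3": ["hub"],
       "hub": ["ch1", "ch2", "ch3"]}
deg = degree(net)
bc = betweenness(net)
```

In this star-shaped example, every shortest path between two leaf chapters passes through the hub, which is exactly the bridging role that high-betweenness chapters play in Figure A.6.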
Figure A.6 shows the network structure of the handbook defined by shared topics. Highly central chapters are indicated by large nodes, and chapters with high betweenness are indicated by nodes that are dark red. In this network, chapters that are highly central also tend to have high betweenness.
Readers can use the space represented by Figure A.6 to find chapters in the same “neighborhood” and thereby identify a set of chapters of interest. For example, chapter V.3 on ecological interface design, in the lower right of the network, is in the neighborhood of chapter III.2, which addresses the highly related topic of cognitive work analysis, and relatively close to chapters on uncertainty visualization and configural displays. Similarly, chapters on queuing models of cognition, adaptive automation, formal models of automation, attention, and neuroergonomics are all closely clustered in the middle left of the figure. Finding a chapter of interest in this space and then surveying its neighbors can guide a focused exploration of the handbook.
More generally, the space defined by the network in Figure A.6 reflects the broad themes of the historical development of cognitive engineering. The centrality and prominence of team cognition is perhaps the most notable feature of this space. Slightly above these team chapters is a series addressing supervisory control and human-automation interaction. This position suggests an important trend: technology increasingly shares many of the features of a human team member, blurring the distinction between supervisory control of humans and supervisory control of automation. From this center, to the right, many chapters address cognitive engineering in broader organizations and communities, pointing to the need to consider the engineering of organizations and social networks. To the left of the teamwork chapters, many chapters focus on individual cognition, addressing topics of attention, decision making, and multitasking. The horizontal dimension broadly moves from individual cognition to the cognition of teams, and then networks.
In contrast to the individual-to-network span of the horizontal axis, the vertical axis broadly contrasts between cognitive task analysis at the top of the figure and cognitive work analysis at the bottom of the figure. A focus on the ecology and an analysis of the constraints and dynamics of the engineered system anchor the vertical dimension at the bottom of Figure A.6—cognitive work analysis. The top of the figure focuses on the cognitive processes and information needed to support decisions and situation awareness—cognitive task analysis. In some sense, the top of the figure represents the cognitive in cognitive engineering, and the bottom of the figure represents the engineering. The top of the figure represents the application of principles of cognitive psychology, and the bottom represents the application of an engineering perspective to characterize the plant or technology “behind” the interface. Such a simple caricature of the chapters and cognitive engineering as a whole certainly glosses over many important distinctions, but does highlight important themes that have guided the profession since its inception.
Figure A.6 collapses a complex, multidimensional space to two dimensions. Likewise, the topic analysis collapses many subtle points to 22 topics. Like any model, such simplifications distort and fail to completely capture the phenomena of interest. Careful inspection of Figures A.5 and A.6 reveals instances where the model might fail to represent reality well. For example, given the broad dimensions of Figure A.6, with analysis of individuals on the left, teams in the center, and networks on the right, one might expect the chapter on organization design to anchor the horizontal dimension on the right. One explanation for its location might stem from how terms are treated in the analysis, particularly the term “operator,” which is used both to refer to a person and, in the chapter on queuing models, to a cognitive process. The text analysis techniques we used are blind to this distinction, highlighting the limits of any simple model of a complex field. Nevertheless, we hope the necessarily imperfect representations in Figures A.5 and A.6 are useful tools for exploring this handbook and the field of cognitive engineering.
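To make the kind of analysis behind Figures A.5 and A.6 concrete, the following is a purely illustrative sketch, not the pipeline actually used for the handbook (which relied on latent semantic analysis and topic models in R). It represents a few invented stand-in “chapters” as bag-of-words vectors and links them by cosine similarity, yielding the sort of chapter-similarity network that a force-directed layout could then arrange in two dimensions:

```python
# Illustrative sketch only: chapter texts and names below are invented.
# Real analyses would use full chapter texts, stemming, stop-word removal,
# and dimensionality reduction (e.g., LSA or topic models).
import math
from collections import Counter
from itertools import combinations

chapters = {
    "attention": "attention workload multitasking operator decision",
    "teams": "team cognition shared mental models coordination",
    "automation": "automation supervisory control operator trust",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# One word-count vector per chapter.
vectors = {name: Counter(text.split()) for name, text in chapters.items()}

# Weighted edges of a chapter-similarity network; chapters sharing
# vocabulary (here, "operator") end up connected.
edges = {(x, y): cosine(vectors[x], vectors[y])
         for x, y in combinations(vectors, 2)}
for pair, sim in sorted(edges.items(), key=lambda e: -e[1]):
    print(pair, round(sim, 2))
```

Note how this toy example also reproduces the limitation discussed above: a shared surface term like “operator” links chapters regardless of which sense of the word each chapter intends.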
The field of cognitive engineering and this handbook are large and complicated. Consequently, no single chapter structure will serve all readers’ needs. Text analysis reveals clusters of chapters, topics, and connections between chapters based on these topics, providing alternate ways of exploring the material in the handbook and understanding the field. The forces that initiated the emergence of cognitive engineering nearly 30 years ago have not abated, but instead have intensified. Designers face increasingly complex technologies that, properly engineered, enable people to work with and through such technology in new and increasingly productive ways. The chapters in this handbook profile advances and remaining challenges in designing for cognitive work.
Bruner, J. (1973). Going beyond the information given. New York, NY: Norton.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum.
Csárdi, G., & Nepusz, T. (2006). The igraph software package for complex network research. InterJournal Complex Systems, CX.18, 1695.
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391–407.
de Groot, A. D. (1978). Thought and choice in chess (2nd ed.). Paris, France: Mouton De Gruyter.
Elkind, J. I., Card, S. K., Hochberg, J., & Huey, B. M. (1990). Human performance models for computer-aided engineering. New York, NY: Academic Press.
Flach, J. M. (1990). The ecology of human-machine systems I: Introduction. Ecological Psychology, 2(3), 191–205.
Gluck, K. A., & Pew, R. W. (2005). Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Mahwah, NJ: Erlbaum.
Grün, B., & Hornik, K. (2011). topicmodels: An R package for fitting topic models. Journal of Statistical Software, 40(13), 1–30.
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174–196.
Hollnagel, E., Mancini, G., & Woods, D. D. (1988). Cognitive engineering in complex, dynamic worlds. New York, NY: Academic Press.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Klein, G. (1989). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine systems research (pp. 47–92). Greenwich, CT: JAI Press.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211–240.
Norman, D. A. (1983). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User centered system design (pp. 31–62). Hillsdale, NJ: Erlbaum.
Pew, R. W. (2007). Some history of human performance modeling. In W. Gray (Ed.), Integrated models of cognitive systems (pp. 29–44). New York, NY: Oxford University Press.
Pew, R. W. (2008). More than 50 years of history and accomplishments in human performance model development. Human Factors, 50(3), 489–496.
R Core Team. (2012). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3), 257–266.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering (North-Holland Series in Systems Science and Engineering, No. 12). Elsevier.
Reason, J. (1990). Human error. Cambridge, England: Cambridge University Press.
Rouse, W. B. (1980). Systems engineering models of human-machine interaction. Amsterdam, The Netherlands: North-Holland.
Salas, E., & Fiore, S. (2004). Team cognition. Washington, DC: APA Press.
Senders, J. W., & Moray, N. P. (1991). Human error: Cause, prediction, and reduction. Series in applied psychology. Hillsdale, NJ: Erlbaum.
Sheridan, T. B. (1992). Telerobotics, automation, and supervisory control. Cambridge, MA: MIT Press.
Sheridan, T. B. (2002). Humans and automation: System design and research issues. New York, NY: John Wiley & Sons.
Sheridan, T. B., & Ferrell, W. R. (1974). Man-machine systems. Cambridge, MA: MIT Press.
Sheridan, T. B., & Johannsen, G. (1976). Monitoring behavior and supervisory control (NATO special program panel on human factors). New York, NY: Plenum Press.
Simon, H. A., & Chase, W. G. (1973). Skill in chess. American Scientist, 61, 394–403.
Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive and healthy computer-based work. Boca Raton, FL: CRC Press.
Vicente, K. J. (2001). Cognitive engineering research at Risø from 1962–1979. In E. Salas (Ed.), Advances in human performance and cognitive engineering research (Vol. 1, pp. 1–58). Oxford, England: Elsevier.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating “direct perception” in complex work domains. Ecological Psychology, 2(3), 207–249.
Wickens, C. D., Lee, J. D., Liu, Y., & Gordon-Becker, S. (2004). An introduction to human factors engineering (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Wickham, H. (2010). A layered grammar of graphics. Journal of Computational and Graphical Statistics, 19(1), 3–28.
Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. Boca Raton, FL: CRC Press.
Zuboff, S. (1984). In the age of the smart machine. New York, NY: Basic Books.