In recent years, researchers in a variety of specialties have studied human error. These specialties include speech, typing, programming, driving, industrial accidents, and commercial aircraft disasters. Initially, error research in these specialties evolved in parallel, with only limited cross-fertilization. More recently, however, the specialties have been growing closer and converging toward a common set of findings and at least partial theories. Reason [1990] has summarized a great deal of research on human error, and an edited volume by Baars [1992a] has summarized research on speech errors, arguably the most mature human error specialty.
Perhaps the most important aspect of convergence in human error research has been movement toward at least a partial model of human cognition that is consistent with a broad spectrum of human error research. Figure 1, which illustrates the basic elements of this emerging model, is drawn from Reason's [1990] Generic Error-Modelling System (GEMS) and Baars' [1992b] Global Workspace (GW) Theory. Reason [1990, p. 46] cites several related theories.
Figure 1: Emerging Model of Cognition
Before discussing this emerging model of cognition, it is important to understand that this is not a model for errors alone. It is a model for overall cognition. Researchers now agree that both correct performance and errors follow from the same underlying cognitive processes [Reason, 1990, p. 36]. Human cognition uses processes that allow us to be amazingly fast [Reason, 1990], to respond flexibly to new situations [Reason, 1990], and to juggle several tasks at once [Flower & Hayes, 1980]. Unfortunately, these processes inevitably produce occasional errors. As Reason [1990, p. 36] expressed the situation,
"an adequate theory of human action must account not only for correct performance, but also for the more predictable varieties of human error. Systematic error forms and correct performance are seen as two sides of the same theoretical coin."
The occasional errors produced by human cognitive processes rarely cause serious problems. In large spreadsheet models with thousands of cells, however, even a minuscule error rate can lead to disaster.
Figure 1 shows that the model has three interacting subsystems. One is the automatic subsystem, which is characterized by cognition below the level of consciousness. A great deal of our everyday cognition occurs in the automatic subsystem.
Research has shown that the automatic subsystem is characterized by a great deal of parallel operation. For example, when we prepare to speak, we actually generate several competing plans [Baars, 1980]. We flesh them out in parallel with proper syntax, phonemics, and other aspects of speech. At some point during this process of development, we select a single utterance. Evidence for parallel operation comes from errors that blend aspects of competing plans, such as Spoonerisms [Baars, 1980]. Competition among plans is responsible for several types of errors that are incorrect in one or a few particulars but highly lawful in other aspects of language. In typing, our cognitive steps for hitting successive keys overlap when we type several keys in a series [Gentner, 1988]. Because of this parallelism, our automatic subsystem is extremely fast.
A broad spectrum of research indicates that the automatic subsystem uses schemata [Bartlett, 1932; Neisser, 1976]—organized collections of information and response patterns. Related terms are frames [Minsky, 1975] and scripts [Schank, 1990]. When the proper conditions exist, a schema is activated. As Reason [1990, p. 51] explained, the "minutia of mental life are governed by a vast community of specialized processors (schemata), each an ‘expert’ in some recurrent aspect of the world, and each operating over brief time spans in response to very specific triggering conditions (activators)."
The automatic subsystem appears to have an enormous pool of schemata to activate. There are no known limits on schemata in long-term memory, in terms of either their number or the duration of their retention [Reason, 1990, p. 51].
Reason [1990] argues that there are two core mechanisms for selecting schemata to activate. The first is pattern matching: if the situation matches the activating criteria for a schema, that schema is activated.
The second mechanism is frequency gambling [Reason, 1990]. What if there is no perfect match between the situation and a single schema? Or what if the normal process of selection between competing schemata is disrupted? In that case, the schema that has been activated the most frequently under similar conditions in the past will execute. Much of the time, this frequency gambling will produce a winning strategy. In other cases, however, it will result in errors. When frequency gambling fails, we get what Reason [1990, p. 59] called a strong but wrong error. We do something we have done many times before, rather than what we should do.
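The two selection mechanisms can be sketched in code. This is a toy illustration only; the class and function names are my own, not drawn from the GEMS literature:

```python
from dataclasses import dataclass

@dataclass
class Schema:
    name: str
    triggers: frozenset      # activating conditions for this schema
    activation_count: int    # how often it has fired in the past

def select_schema(situation: set, schemata: list) -> Schema:
    # 1. Pattern matching: a schema whose triggering conditions all
    #    hold in the current situation wins outright.
    matches = [s for s in schemata if s.triggers <= situation]
    if matches:
        return max(matches, key=lambda s: len(s.triggers))
    # 2. Frequency gambling: with no exact match, the schema activated
    #    most often under partially similar conditions fires instead --
    #    the source of "strong but wrong" errors.
    similar = [s for s in schemata if s.triggers & situation]
    return max(similar or schemata, key=lambda s: s.activation_count)
```

For example, a habitual "drive to work" schema with a high activation count can capture an ambiguous situation that should have triggered a rarer "drive to the store" schema: we do what we have done many times before.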
The second subsystem in Figure 1 is the attentional subsystem, to draw on terminology from Reason [1990]. This is roughly, but not exactly, what we call consciousness.
The automatic subsystem is simple, relying largely on pattern recognition and frequency gambling. In contrast, the attentional subsystem has powerful logical capabilities. Unfortunately, attentional processing is also "limited, sequential, slow, effortful, and difficult to sustain for more than brief periods" [Reason, 1990, p. 50].
A key characteristic of the attentional subsystem is that its resources are very limited. When we memorize, we can hold only about seven items at a time [Miller, 1956]. When we are solving a logic problem, the resources we can allocate to memory are even more limited. These limitations can easily lead to errors in memory or logical analysis.
Even with logical attentional thinking, there appears to be schematic organization, just as there is in the automatic subsystem. In fact, it is likely that schemata in the attentional subsystem are really schemata from the automatic subsystem that rise to the attentional level [Baars, 1992b].
One aspect of schemata in the attentional subsystem is the reliance of people on "lay theories" (habitual informal theories) when they deal with technical areas such as physics [Furnham, 1988; Owen, 1986; Resnick, 1983]. Even after people receive training in specific areas, such as physics, they often revert to lay theories afterward [Resnick, 1983]. Lay theories are schemata that we have developed over many years. They are very likely to produce errors when we model situations.
Figure 1 shows that the automatic and attentional subsystems do not work independently. First, the attentional subsystem holds goals. These goals influence the activation of automatic subsystem nodes and so at least partially control the automatic subsystem. Baars [1992b] calls this aspect of the attentional subsystem the Global Workspace, after artificial intelligence theories.
If the attentional subsystem loses its goal, the entire cognitive system is likely to err. One could argue that losing the goal amounts to culpable negligence. However, the limited attentional resources must also be allocated to planning future actions [Rasmussen, 1990] and dealing with unexpected conditions. In addition, when we act in real life, we face multiple constraints on what we do, and we have to juggle many tasks and many constraints within individual tasks [Flower & Hayes, 1980]. While goals can be pushed temporarily onto something like a programming stack for later retrieval, they may not be popped from the stack when needed.
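The stack metaphor in the passage above can be made literal. This is a minimal sketch; the class and method names are hypothetical illustrations, not taken from the error literature:

```python
# Hypothetical sketch of the "goal stack" metaphor: deferred goals
# wait for retrieval, but an interruption can displace one so that
# it is never retrieved -- the goal is simply lost.
class GoalStack:
    def __init__(self):
        self._goals = []

    def defer(self, goal: str) -> None:
        # Push the current goal down for later retrieval.
        self._goals.append(goal)

    def resume(self):
        # Pop the deferred goal -- if it is still there.
        return self._goals.pop() if self._goals else None

    def interruption(self) -> None:
        # Limited attentional resources: the interrupting task
        # displaces the deferred goal.
        if self._goals:
            self._goals.pop()

goals = GoalStack()
goals.defer("verify the quarterly totals")
goals.interruption()   # an urgent subtask captures attention
print(goals.resume())  # the original goal never comes back: None
```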
The automatic subsystem can also initiate action. If there is an error, for instance, the automatic subsystem can interrupt the attentional subsystem and demand attention [MacKay, 1992]. In this way, a schema in the automatic subsystem can influence or even grab control of the Global Workspace, so that it can influence the activation of other schemata.
The third subsystem in Figure 1 is the environment. Human cognition is not merely internal; it is situated in the environment surrounding it. Neisser [1976] argued that schemata direct our exploration of the environment. Exploration samples specific aspects of the enormous buzzing confusion surrounding us. This sampling, in turn, modifies our schemata.
Sellen and Norman emphasize that interaction with the environment takes place continuously as we plan and execute an action. First, we form a high-level intention. When we execute it, we constantly adjust our action through feedback from the environment. In most cases, this adjustment takes place without burdening our limited attentional subsystem. In addition, once the adjustment begins, it takes on a life of its own, sometimes going in unforeseen directions.
Thanks to the way that human cognition operates, errors are inevitable. This makes the detection and correction of errors critical. The questions are what percentage of errors are detected and corrected and what is the final residual undetected error rate.
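The relationship among these quantities is simple arithmetic. The rates below are assumed placeholders for illustration, not measured values:

```python
# Residual undetected error rate: errors that survive are those
# committed and then not caught. Assumes, for simplicity, that every
# detected error is successfully corrected.
def residual_error_rate(commission_rate: float, detection_rate: float) -> float:
    return commission_rate * (1.0 - detection_rate)

# An illustrative 5% commission rate with 80% of errors detected
# and corrected still leaves about a 1% residual error rate.
print(round(residual_error_rate(0.05, 0.80), 4))  # -> 0.01
```

The point is that even a high detection rate leaves a residual rate proportional to the commission rate, which is why both quantities matter.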
Several studies have used protocol analysis to observe how people do cognitive work. In this technique, subjects voice what they are thinking as they work. Allwood used this technique to study statistical problem solving, and Hayes and Flower used it to study writing.
Both studies found that as subjects worked, they frequently stopped to check for errors. In some cases, error detection happened immediately after an error or the suspicion of an error. In other cases, it was done systematically, as what might be called "good practice." Kellog describes several studies that measured time spent in writing; they found that about a fifth of all writing time is spent reviewing what has already been written.
Although the emerging model of human error is compelling and fits a broad spectrum of research in a qualitative fashion, it cannot, at this stage, predict error commission rates or error correction rates. Error rates are still a matter for empiricism. Fortunately, however, as this website shows, there is strong convergence in error rates across human cognitive domains, so we can have reasonable expectations about error commission and correction rates in new human cognitive domains.