Approaches to Visual Search: Feature Integration Theory and Guided Search
Abstract and Keywords
In her original Feature Integration Theory, Anne Treisman proposed that we process a limited set of basic, preattentive visual features in parallel across the visual field. Binding those features together into coherent, recognizable objects requires deploying selective attention to one item after another. In Treisman’s original conception, searches were divided into parallel feature searches and serial, self-terminating searches. Wolfe’s Guided Search model added the idea that the deployment of attention could be guided by preattentive information. In this view, the efficiency of search is related to the effectiveness of guidance on a continuum from perfect guidance, in the case of simple feature pop-out, to no guidance when no basic features distinguish target from distractors. This chapter reviews the evidence for different basic, preattentive features and describes the current understanding of the rules of guidance, the mechanics of visual search, and the relationship of these processes to visual awareness.
The visual world is full of objects and it is an interesting fact that we can see more of that world than we can understand at any given moment. Thus, looking at this picture of a park just off Market St in San Francisco (Fig. 2.1), you immediately see a rectangle filled with visual stuff, but you do not immediately know the answers to fairly basic questions that one might ask about the objects in the scene. What colour is the bus? (White.) Are there any people present? (No.) This chapter will be devoted to the investigation of these limits on our perception through the theoretical lenses of Feature Integration Theory (Treisman 1988; Treisman and Gelade 1980) and Guided Search Theory (Wolfe 1994, 2007; Wolfe, Cave, and Franzel 1989). The chapter will be organized around the four terms that make up the names of the theories. Features: What are the basic features that are seen immediately in that rectangle of an image? Integration: How are those features combined into object representations? In particular, we will focus on the need for attention-demanding feature ‘binding’ in object recognition. Guidance: How can unbound features be used to guide the deployment of attention? Search: How do we find those objects or properties of a scene that are not immediately available to us when we look?
Most of the work discussed in this chapter will involve the visual search paradigm in which an observer looks for one or more targets in a display containing some distractor items. Search tasks are ubiquitous in daily life. Where is my coffee cup, the cellphone, the keyboard, the mouse, etc.? Such searches are typically concluded so quickly that we do not even register them as searches. We notice when the search is more prolonged. Where on stage is my child amidst the rest of the school choir? Moreover, our civilization has created socially important search tasks from airport baggage security (Gale, Mugglestone, Purdy, and McClumpha 2000; Rubenstein 2001) to medical image perception (Berbaum et al. 1998; Krupinski, Berger, Dallas, and Roehrig 2003; Kundel and Nodine 2004; Nodine, Krupinski, and Kundel 1993). These typically require trained expert searchers.
Beyond its face validity as a task that we perform all the time, visual search is a useful paradigm in the lab because it gives us a way to quantify the capacity limitations described in our initial example. With the San Francisco image, we can assert that you were not immediately aware of the presence of the bus or the absence of humans. With stimuli like those in Fig. 2.2, we can measure that.
Suppose that we showed observers a succession of displays like this and asked them to press one key if a red square was present and another if no red square was present. We could measure reaction time (or ‘response time’—in either case, abbreviated ‘RT’) and/or accuracy as a function of ‘set size’, the number of items in the display. We would find, to a first approximation, that the set size did not matter. Observers would be as fast and accurate with the 33-item display on the right as they are with the 9-item display on the left. The slope of the RT x set size function would be near zero. We can call such searches ‘efficient’ searches. In contrast, if observers were asked to find the item of medium size amidst the large and small items of Fig. 2.2, we would obtain a very different pattern of results. RTs would increase roughly linearly with set size. The slope of the RT x set size function would probably be in the range of 20–40 msec/item for trials having a medium-sized target. For target-absent trials, the slope would be roughly twice the target-present slope. (In fact, the ratio of absent to present slopes in an experiment with targets on 50% of trials appears to be reliably a bit more than 2:1—a fact that has theoretical importance, to be discussed below (Wolfe 1998).) If the display were presented only briefly or the observer were forced to respond by some very short deadline, we would find that the error rate for the 33-item display would be higher than the error rate for the 9-item display in the medium-target task but not in the red-target task. These speed–accuracy trade-off methods can be powerful tools in the theoretical analysis of search tasks (Dukewich and Klein 2009; Guest and Lamberts 2011; McElree and Carrasco 1999). We can call tasks producing these sorts of results ‘inefficient’.
Feature Integration Theory
The distinction between what we are calling ‘efficient’ (Neisser 1963) and ‘inefficient’ (Atkinson, Holmgren, and Juola 1969) search was central to the development of Treisman’s Feature Integration Theory (Treisman and Gelade 1980). She argued that there was a limited set of basic features that could be processed ‘preattentively’, in parallel. In Fig. 2.2, colour is the example of such a feature. A target, defined by a unique basic feature, would ‘pop out’ of a display. It would be available to awareness and for action without apparent capacity limitations. In contrast, many other search tasks, even if they involved quite simple perceptual discriminations (e.g. the distinction between big, medium, and small), showed a pattern of search results that Treisman argued was consistent with a serial, self-terminating search; a search that proceeded one item after another until the target was discovered or the search was terminated—in the simplest case, after examining every item and determining that each was not the target. In a serial, self-terminating search, observers would need to examine half of the items, on average, when the target was present and all of the items when it was absent. This predicted a 2:1 slope ratio between target-absent and target-present RT x set size functions; hence, the theoretical significance of the finding that those slope ratios may be reliably greater than 2:1.
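The 2:1 prediction of the serial, self-terminating account is easy to check by simulation. The sketch below (a minimal illustration in Python; the 40 msec per-item inspection time is an assumed value for illustration, not a figure from the chapter) draws random inspection orders and recovers target-present and target-absent slopes from two set sizes.

```python
import random

random.seed(1)  # reproducible runs

def serial_self_terminating_rt(set_size, target_present, t_item=40.0):
    """One simulated trial: items are inspected in random order at t_item msec
    each, stopping when the target is found (present trials) or after all
    items have been rejected (absent trials)."""
    if target_present:
        # The target lands at a uniformly random position in the inspection order.
        n_checked = random.randint(1, set_size)
    else:
        n_checked = set_size
    return t_item * n_checked

def mean_rt(set_size, target_present, n_trials=20000):
    return sum(serial_self_terminating_rt(set_size, target_present)
               for _ in range(n_trials)) / n_trials

def slope(target_present, s1=9, s2=33):
    """RT x set size slope estimated from two set sizes (9 vs 33, as in Fig. 2.2)."""
    return (mean_rt(s2, target_present) - mean_rt(s1, target_present)) / (s2 - s1)

present, absent = slope(True), slope(False)
# Absent slope is exactly 40 msec/item; present slope comes out near 20,
# giving the roughly 2:1 absent:present ratio described in the text.
print(round(present), round(absent))
```

Note that empirically observed absent:present ratios a bit above 2:1 are precisely what this simplest model cannot produce, which is the theoretical point made above.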
The description of search tasks as ‘parallel’ and ‘serial’ has proven enduringly popular—to the distress of theoreticians who note that the patterns of results can be explained in many ways (Townsend 1971, 1990; Townsend and Wenger 2004; J. Palmer 1995; J. Palmer and McLean 1995). It is to avoid those theoretical commitments that we use the theory-neutral terms ‘efficient’ and ‘inefficient’.
Conjunction searches like the one illustrated in Fig. 2.3 were critical to Treisman’s originally dichotomous view of search. In Fig. 2.3, the target is defined by the conjunction of two features. It is the small red item. A sole small item in a field of large items or a sole red item in a field of black items would pop out. However, the small red item did not. The phenomenal experience of conjunction search was not a pop-out experience, and in a large body of Treisman’s data, the RT x set size functions had slopes in the inefficient range. Treisman’s conclusion was that while basic features were processed in parallel across the field, serial deployment of attention was required to integrate two or more features together. To change the language a little, serial attention was needed to ‘bind’ features to object representations. This was Treisman’s solution to the ‘binding problem’. There are multiple definitions of the problem (Treisman 1996). A useful way to think about it is in neural terms. If we stay with Fig. 2.3, there would be neurons that respond to an item’s size and others that respond to its colour. How would one know which colours went with which size? Having higher-order cells for all possible combinations of all features at all locations seems implausible. Thus, there was a problem, and a limited-capacity, attention-directed binding process was the solution in Feature Integration Theory.
Here we are talking about the binding of two properties of one object, colour and size. However, as Treisman (2006) explains, binding is a more general issue. She identifies no fewer than seven types of binding. Some of these are conceptually quite similar to ‘property binding’ (her name in this paper for the binding of basic features like colour and size). For instance, knowing that two parts belong to one object is also a binding problem. Other forms could be similar at an abstract or computational level while being rather different phenomenologically. For instance, she notes that the fine discrimination of specific orientations (e.g. 22 deg. from 26 deg.) is thought to rely on ratios of two or more broadly tuned orientation channels (Olzak and Thomas 1986). So, in some sense, the outputs of those two coarse channels must be bound to produce the fine resolution. Unlike the binding of colour and size, the unbound components are not introspectively available in this case of what Treisman calls ‘range binding’.
For those forms of binding, where the unbound properties are available, illusory conjunctions have been an important phenomenon, marshalled in support of Feature Integration Theory. The phenomenon is not easily demonstrated on the static page, but take a quick glance at Fig. 2.4 and then cover it up so that you cannot see it before returning to this text.
If you followed instructions, you are probably in a good position to list many of the colours and shapes that are present in the figure. You are probably quite sure that there was a diamond and, if asked, you are quite sure that there was no circle present. You are in a less confident position to report the conjunctions of colour and shape. What colour was the diamond? Odds are that you would not say ‘blue’ since nothing in the image was blue. However, Treisman found that there was quite a good chance that you would report seeing the diamond in the colour of one of the other shapes (Treisman and Schmidt 1982), perhaps yellow. It was as if the colours and shapes were floating free at some level in your visual system and, especially once the image was no longer present to provide a clear answer, those features could bind to form illusory conjunctions.
The original idea was that features were completely free-floating, but even in the first glimpse, the world doesn’t look like a soup of loose features, and subsequent work reveals a role for location (Prinzmetal and Keysar 1989; Hazeltine, Prinzmetal, and Elliot 1997). The illusory conjunction phenomenon went well beyond simple shapes and colours to include, for example, letters and words (Prinzmetal 1991; Treisman and Souther 1986; Virzi and Egeth 1984) and clock times (Goolkasian 1988). Thus, basic features were not the only units involved. It has been argued that illusory conjunctions are basically a phenomenon of memory (Briand and Klein 1989; Tsal 1989a, 1989b) and, indeed, memory for basic features can be quite terrible. In one experiment, Wolfe et al. showed observers an array of 20+ red and green dots. At one moment, signalled by a tone, one of the dots brightened and either did or did not change colour from red to green or green to red. Observers were close to chance performance when asked if the new colour was the same as the old one (Wolfe, Reinecke, and Brawn 2006). Observers knew that they had been looking at red and green dots, but the binding of colour to dot was clearly very fragile.
Still, binding errors need not be entirely in the remembered past tense. When items are relatively close to each other, especially in the periphery, observers experience ‘crowding’ phenomena (Levi 2008; Pelli and Tillman 2008) in which it may be possible to see features but not know how those features are bound together or even how they are bound to a location. That is, you might know there is a line at a location but you may be unable to determine its orientation. As Treisman would predict, attention allows illusory conjunctions to be resolved (Scolari, Kohnen, Barton, and Awh 2007) but the spatial grain of attention in the periphery is coarse (S. He, Cavanagh, and Intriligator 1996; Intriligator and Cavanagh 2001). So, away from fixation, illusory conjunctions may be a fact of life, even when the image remains continuously visible. Rosenholtz et al. talk about ‘mongrels’ that are created in the periphery by a system with weak spatial localization abilities and in which image statistics (e.g. average orientation) are calculated over multiple items. Portilla and Simoncelli (2000) developed a method for generating natural-looking textures from a set of image statistics. That is, if you took a picture of a forest, extracted the Portilla and Simoncelli statistics, and then synthesized a new image from those statistics, it would not be the original image but it would look forest-like. Rosenholtz et al. note that if this process is run over a standard search display like a search for a T among Ls, it creates new images in which some of the Ls have come to look like Ts. Arguing that this is the situation for vision away from the point of fixation, they have used the phenomenon to develop a theory of search performance (Rosenholtz, Chan, and Balas 2009). Even if crowding and mongrels are not quite the same things as illusory conjunctions, the problem of jumbled, possibly misbound features is similar. 
Attention-demanding binding is the Feature Integration solution to that problem (or, in more recent formulations, a leading solution among several—Treisman 2006).
Integration: Is Attentive Binding Really Necessary?
Various challenges have been presented to the idea that binding requires attention and that binding is required for object identification. Quite early, Houck and Hoffman did an experiment with the McCollough Effect (Houck and Hoffman 1986). The McCollough effect is an orientation-contingent, colour after-effect based on adapting to, say, red vertical and green horizontal gratings. After adaptation, black and white vertical gratings would look greenish while black and white horizontal gratings would look pink (McCollough 1965). Houck and Hoffman found that the effect could be produced without attention to the adapting stimuli even though the effect clearly requires some sort of association of orientation and colour. If attention is required to bind colour to orientation, how could orientation-contingent colour adaptation occur without attention? One could attack the notion of ‘without attention’. It is remarkably difficult to guarantee that a stimulus is unattended and it is virtually impossible to convince reviewers of this claim. As a consequence, more recent papers often refer to the ‘near-absence’ of attention (Reddy and Koch 2006; Reddy, Wilken, and Koch 2004).
Putting aside that methodological issue, there are situations where simple spatial co-occurrence of two features is all that is needed for the task at hand. Houck and Hoffman show one such case. A similar account could be invoked to explain other situations where conjunctive properties influence behaviour, apparently without attentive binding (Mordkoff, Yantis, and Egeth 1990). A particularly interesting case has to do with our remarkable ability to determine if humans or animals are present in scenes apparently without directing attention to the target object (VanRullen and Thorpe 2001; Kirchner and Thorpe 2006; Evans and Treisman 2005).
Figure 2.5 is a cartoon because we cannot easily list or portray unbound ‘animal’ or ‘oven’ features but the point is that some discriminations in laboratory tasks will not require binding. Co-occurrence may be enough, though it must be confessed that other results such as the ability to identify a face in the ‘near-absence’ of attention are harder to explain with this unbound bundle of features idea (Reddy and Koch 2006).
Perhaps the best way to demonstrate the critical role of attentive binding is to create simple stimuli that eliminate the effectiveness of spatial co-occurrence of features. Figure 2.6 shows an example. You are looking for ‘pluses’ with purple vertical and green horizontal components. You will find that this is an inefficient search. (There are three targets. Did you find them all?) Wolfe and Bennett (1997) argued that this search was hard because, before the arrival of attention, each one of these items was an unbound collection of purple, green, vertical, and horizontal features. As all the features of one item are in more or less the same location, associated with the same object, attention and binding are required before it can be determined if purple goes with vertical or horizontal.
One could object that, in fact, a purple-vertical-green-horizontal plus is just a green-vertical-purple-horizontal plus, rotated 90 degrees, and, as a consequence, it is not surprising that one is hard to find amidst the other. Figure 2.7 shows stimuli intended to counter that argument. Here the targets and distractors are clearly very different objects. The targets look like puzzle pieces and the distractors look like unusually rectangular, single-celled organisms with flagella. These are designed to have very similar preattentive, unbound features. Each has a closed region and some straight and curved lines. The result, in the upper of the two search displays of Fig. 2.7, is a relatively inefficient search (Wolfe and Bennett 1997). Very similar stimuli produce an easier search in the second example because the targets are now the only items with the preattentively available attribute of ‘closure’ (Chen 1982; Elder and Zucker 1993, 1998).
Treisman’s fundamental point about binding seems to be valid. If you eliminate the usefulness of spatial co-occurrence and if no basic feature distinguishes targets from distractors, an inefficient search is going to be required as the observer binds one item after another in the effort to identify a target.
Guidance: Efficient Conjunction Search and the Role of Guidance
While it may be true that binding requires attention, it turns out not to be true that search tasks fall into two neat categories: efficient feature, ‘pop-out’ searches that do not require binding and can be done in parallel, and all other searches that do require binding and are thus inefficient. While Treisman’s original data were broadly consistent with this view, it rapidly became clear that conjunction searches, in particular, did not need to be inefficient. In the 1980s, exceptions started to appear (Alkhateeb, Morland, Ruddock, and Savage 1990; Dehaene 1989; McLeod, Driver, and Crisp 1988; Nakayama and Silverman 1986; Sagi 1988; Zohary and Hochstein 1989). At first, it seemed like there might be a set of exceptions to the general rule, rather like irregular verb forms. Maybe stereopsis (Nakayama and Silverman 1986) or motion (McLeod et al. 1988) were special features that operated under different rules of binding. However, by 1989, Wolfe et al. had data showing quite efficient search for colour x orientation conjunctions and enough other examples had surfaced that it was time for a modification of the basic Feature Integration story (Wolfe, Cave, and Franzel 1989).
In retrospect, the key idea was made obvious by Egeth et al. (1984) and illustrated in Fig. 2.8. If you are asked to search for the letter ‘V’, you will perform some sort of relatively inefficient search. (It is complicated because letters are complicated bundles of features.) If you are asked to look for a red ‘T’, you will, again, perform some sort of relatively inefficient search, but this time only through the red items. A black item is simply never going to be a red T and you can use that knowledge about the basic features of the target to guide attention to items that have the features that make them more likely to be the target. If half of the items were red in this example, then the slope of the RT x set size function would be half that of an unguided letter search because half of the letters could be eliminated without ever being attended. The key observation of Guided Search (Wolfe et al. 1989) was that this could be a general property of search tasks. If observers were asked to search for a red vertical item among red horizontals and green verticals, preattentive, basic feature information could be used to guide attention toward red items and toward vertical items. The intersection of the sets of red and vertical items would be an excellent place to look for a red vertical target. Indeed, were guidance perfect, search for a basic conjunction of two features like colour and orientation should have been perfectly efficient. Such searches are more efficient than original Feature Integration Theory predicted but the slopes are greater than zero. We will return to the apparent imperfection of guidance later. For the present, where Feature Integration Theory saw two types of search—parallel and serial—Guided Search saw two ends of a continuum of guidance. Efficient ‘parallel’ search tasks were those where guidance was adequate to allow the deployment of attention to the target item first time, every time. 
Inefficient searches were those where no guidance was possible beyond guidance to the presence of an object in a location. Each item needed to be attended, one after the other, until the target was found or the search was abandoned. In between were guided searches, where some basic feature information could be used to prioritize some items as more worthy of attention than others. The slope of the RT x set size function, in this view, becomes an estimate of the percentage of items that remain candidate targets even after guidance has done its work. If unguided, inefficient search has a standard slope of, say, 40 msec/item, then a search task producing a slope of 10 msec/item would imply that guidance could eliminate 3/4 of the items from consideration.
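The slope arithmetic in this reading of Guided Search can be made explicit. In this minimal sketch (the function name and the 40 msec/item unguided baseline are illustrative assumptions, not part of the model's formal statement), the observed slope is read as the fraction of display items that survive guidance as candidate targets:

```python
def fraction_rejected_by_guidance(observed_slope, unguided_slope=40.0):
    """On the Guided Search reading, the RT x set size slope scales with the
    fraction of items still treated as candidate targets after guidance.
    unguided_slope is the assumed slope of a fully unguided, inefficient search."""
    surviving_fraction = observed_slope / unguided_slope
    return 1.0 - surviving_fraction

# Cases discussed in the text:
print(fraction_rejected_by_guidance(10.0))  # 0.75: guidance eliminates 3/4 of items
print(fraction_rejected_by_guidance(20.0))  # 0.5: e.g. guiding a letter search by colour through half the items
print(fraction_rejected_by_guidance(0.0))   # 1.0: perfect guidance, efficient 'pop-out'
```

On this reading, the continuum from pop-out to fully inefficient search corresponds simply to this fraction moving from 1 to 0.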
Working out the details of this straightforward idea is not entirely straightforward. The remainder of this chapter will discuss the nature of guiding features and the mechanics of Guided Search. While this chapter will focus on Guided Search, it is worth noting that there is a family of models that share aspects of the Feature Integration and/or Guided Search architecture to a greater or lesser extent (Cave 1999; Tsotsos 2011; E. Cohen and Ruppin 1999; Vidyasagar 1999; Hübner 2001; Lee, Buxton, and Feng 2005; Mozer and Baldwin 2008).
The nature of guiding features
In Treisman’s original formulation, ‘the visual scene is initially coded along a number of separable dimensions, such as color, orientation, spatial frequency, brightness, direction of movement’ (Treisman and Gelade 1980). In her terminology, ‘colour’ would be a ‘dimension’ and ‘red’ a feature within that dimension. We often use the term ‘attribute’ for ‘dimension’ since dimension also gets used to talk about depth—which is a preattentive attribute. The cognitive architecture for early Feature Integration and early Guided Search was something like what is shown in Fig. 2.9. (It might or might not map directly onto neural structures.)
The stimulus was decomposed into a set of basic features. With the deployment of selective attention, the features of a particular object could be bound into a recognizable object with bound features in a particular location. In early Guided Search, the main addition to the standard Feature Integration story would be the idea that information about the decomposed features could be used to guide the deployment of attention. Thus, in Fig. 2.9, an intention to look for something horizontal would make use of the orientation map to deploy attention toward items showing some horizontal-ness in that map.
The architecture of the current version of Guided Search is somewhat different. The important change is that the representation that guides attention has been pulled out of the path from early vision to bound, recognized objects. This change was made when it became clear that the properties of the guiding representation were not the same as the building blocks of perception. Guidance is based on representations that do not ‘see’ the world as we experience it. This point is discussed more extensively elsewhere (Wolfe 2005, 2007, 2012; Wolfe and Horowitz 2004; Wolfe, Reijnen, Van Wert, and Kuzmova 2009). A single example will suffice here (Lindsey et al. 2010).
Suppose that your task is to search for desaturated targets. In the example in Fig. 2.10, the distractors would be fully saturated reds or blues and unsaturated whites. The targets would be pinkish on the left and pale blue on the right. This electronic or printed figure will not be colourimetrically precise, but the stimuli in the experiment were designed so that the targets lay perceptually exactly halfway between the distractors (details in Lindsey et al. 2010). This was done for a wide range of colours. Interestingly, searches for the pinkish (maybe ‘skin-coloured’) targets were hundreds of msec faster than searches for other desaturated colours. In perceptual space, the distance from blue to pale blue might be the same as the distance from red to pale red. However, in the guiding representation, the red–pale red difference is, apparently, much more significant.
Consequently, the current version of Guided Search adopts an architecture like that cartooned in Fig. 2.11. Between early visual processes and bound representations of objects, there is a tight bottleneck. Access to that bottleneck is gated by a guiding representation that is itself an abstraction from early vision and, as noted, does not ‘see’ the world exactly as we see it. Once an object is selected, re-entrant processes (Di Lollo, Enns, and Rensink 2000; Hochstein and Ahissar 2002) reach back to make contact with the perceptual properties of the object. Treisman’s current thinking also holds that re-entry is required for binding (Bouvier and Treisman 2010).
What are the features? (Wolfe and Horowitz 2004—revised)
A great deal of work has gone into defining the set of stimulus attributes that guide attention. Not all of this work is done by people who have adopted a Guided Search viewpoint, so in different papers, these attributes might be described as preattentive dimensions, pop-out features, etc. The goal is to determine the set of attributes that support efficient search and the guidance of attention.
Wolfe and Horowitz (2004) created a list of guiding attributes. Here that list is updated in a set of five annotated tables. Tables 2.1–2.3 rank candidates into categories of the ‘undoubted’, the ‘probable and possible’, and the ‘doubtful’ attributes. In addition, Table 2.4 is reserved for ‘complicated’ attributes and Table 2.5 introduces the idea of preattentive properties that modulate other guiding attributes without guiding attention in their own right. The referencing is extensive but not exhaustive.
Table 2.1 The Undoubted Feature Dimensions: ‘Undoubted’ means that the status of these properties is attested by a large body of work with converging methods
(1). Colour is certainly a feature. The currently interesting questions have to do with what aspects of colour signals guide attention, as noted in the example in Fig. 2.10.
(2). It is possible that motion could be decomposed into separate attributes of speed and direction (Driver, McLeod, & Dienes 1992).
(3). We are calling this dimension ‘size’ but it covers a number of properties that, again, might be treated separately. It is possible that spatial frequency should be treated as its own dimension (Bilsky & Wolfe 1995), especially considering its possible role in guidance in scenes (Oliva, Torralba, Castelhano, & Henderson 2003). See also Alvarez & Cavanagh 2008.
Table 2.2 Probable and possible feature dimensions: These items can make a reasonable case for their status as guiding attributes. However, more data would be needed to address dissenting opinions or the possibility of alternative explanations
Probable and Possible Feature Dimensions
Luminance onset (flicker)
Stereoscopic depth & tilt
Pictorial depth cues
Bergen & Julesz 1983; Cheal & Lyon 1992; Chen 1982, 1990; Kristjánsson & Tse 2001; Pilon & Friedman 1998; Pomerantz & Pristach 1989; Treisman & Gormican 1988; Tsal, Meiran, & Lamy 1995; Wolfe & Bennett 1997
Lighting direction (shading)
(4). Luminance polarity clearly supports efficient search but it might be nothing more than the black–white or luminance axis of a 3D colour space. Thus, it could be grouped with colour.
(6). The taxonomy of depth cues as guiding attributes is not clear. Maybe it is a single, broad dimension of something like 3D layout (cf. Oliva & Torralba 2001) combining a variety of depth cues including stereopsis, the various pictorial depth cues, and shading into a representation of the 3D world. The relevant experiments would combine different guiding depth cues in a single display in order to see if they could act independently.
(7). Depth guides attention in the sense that a ‘near’ item will pop out among far items, for example. However, depth also acts to modulate features like size. An item that is little in the image may be big in the world if it is far away. See ‘modulators’ in Table 2.5.
(8). Rather like depth, it is not clear if ‘shape’ is one guiding dimension or many. Here we hedge our bets, listing shape and a collection of other properties that might be part of a family of shape attributes. To see the problem, consider line termination, closure, and curvature. Each supports efficient search, but are they actually independent attributes? Do an ‘O’ and a ‘C’ differ in closure or line termination or both? The issue has been complicated by the failure to settle on a generally accepted set of shape features (Kourtzi & Connor 2011; Logothetis, Pauls, & Poggio 1995; Yamane, Carlson, Bowman, Wang, & Connor 2008; Zhang et al. 2011).
(9). The earlier evidence for guidance by shading (e.g. Ramachandran’s ‘eggs’: Ramachandran 1988) has been undermined somewhat by later work (Cavanagh 1999; Ostrovsky et al. 2004). It is possible that shading information should be grouped with other cues like stereopsis, as part of one, omnibus 3D depth property.
(10). The evidence for shininess or gloss comes from a single experiment on binocular lustre (Wolfe & Franzel 1988). Unpublished work in our lab casts doubt on the generality of the finding.
(11). ‘Expansion’ and/or ‘looming’ cues are somewhat problematic because they might be decomposed into a depth cue, a size cue, a motion cue, or some combination of these, though an ability to deploy attention to something that might hit you in the head seems like a good idea.
(12). Recent evidence shows that numerosity (does this cluster contain more dots than the other clusters?) is, at best, a rather weak feature, requiring large (>3:1) ratios between target and distractor numerosities (Reijnen et al. 2011).
Table 2.3 Doubtful cases and probable non-features: These are the proposed preattentive features where the preponderance of data seems to argue against a role as guiding features. It must be acknowledged that other authors would come to different conclusions about some of these attributes. Moreover, some attributes owe their status to single experiments and would benefit from further study
Doubtful cases & probable non-features
Learned features (e.g. letters)
3D volumes (e.g. geons)
Eye of origin/binocular rivalry
Batty, Cave, & Pauli 2005; Lipp 2006; Notebaert, Crombez, Van Damme, De Houwer, & Theeuwes 2011; Öhman, Flykt, & Esteves 2001; Soares, Esteves, Lundqvist, & Ohman 2009; Tipples, Young, Quinlan, Broks, & Ellis 2002
(13). There are various claims for a guiding role for novelty and/or familiarity. The phenomena seem to be rather weak. For instance, as a general rule, a basic feature will continue to guide attention in the presence of some distractor heterogeneity in an irrelevant attribute. Thus, vertical will pop out among horizontal even if all the items are of different colours. However, while a novel mirror-reversed N might pop out among Ns (Wang et al. 1994), it is not clear that it would pop out among a heterogeneous set of normal letters.
(14). Is it possible to learn a new preattentive feature? This is a long-standing question in visual search. Much of the work involves alphanumeric characters as over-learned sets of stimuli (Czerwinski, Lightfoot, & Shiffrin 1992; Malinowski & Hübner 2001; Sigman & Gilbert 2000; Sireteanu & Rettenbach 1995). The problem is that it is very hard to tell the difference between learning a new feature and learning to better exploit existing signals (e.g. line terminations, closure, etc.). This is a case where reasonable researchers could and do disagree (Shiffrin & Schneider 1977).
(15). It was thought that a letter might pop out among numbers and vice versa but these results (e.g. the ‘zero-oh’ effect) have been hard to replicate.
(16). Intersection once seemed to be a good candidate for feature status but more recent results have demoted it to ‘unlikely’ status (Wolfe & DiMase 2003).
(17). People have a strong belief that they can detect when someone is staring at them. They even believe that they can detect someone staring at them from behind them (Simons & Chabris 2010). However, while we are very good at assessing someone else’s gaze direction, especially if it is toward us (Watt, Craven, & Quinn 2007), we are probably not able to do so in a search setting and/or without attending to that person—but see Stein, Senju, Peelen, & Sterzer 2011.
(18). Correani et al. (2006) found that luminosity did support efficient search. However, a series of control experiments showed this to be attributable to local luminance effects and not to luminosity itself.
(19). Wolfe and Franzel (1988) had argued that eye-of-origin information and binocular rivalry signals were not available to guide search. However, more recent results suggest some sensitivity to those signals.
(20). There is little question that threatening stimuli elicit threat-specific responses as seen, for example, in the responses of phobics to snakes or spiders (LoBue & DeLoache 2008; Rakison & Derringer 2008; Reinecke, Rinck, & Becker 2006). However, there does not appear to be a specific role for ‘threat’ in guiding search once other features are controlled (e.g. snakes have thin, curvy, and pointed attributes as well as possibly frightening ones).
(21). Biological motion is a special stimulus (Blake 1993; Blake & Shiffrar 2007; Johansson 1973), but to date there is not a convincing demonstration that it is capable of supporting efficient visual search. It is possible that motion that implies animacy will have feature status (Gao, McCarthy, & Scholl 2010; Gao, Newman, & Scholl 2009; Gao & Scholl 2011).
Table 2.4 Complicated cases
Faces (familiar, upright, angry, real, schematic, etc.)
D. V. Becker, Anderson, Mortensen, Neufield, & Neel 2011; S. I. Becker, Horstmann, & Remington 2011; Devue, Van der Stigchel, Brèdart, & Theeuwes 2009; Doi & Ueda 2007; Eastwood, Smilek, & Merikle 2001; Frischen, Eastwood, & Smilek 2008; Von Grünau & Anston 1995; Hansen & Hansen 1988; Hershler & Hochstein 2005, 2006; Horstmann, Bergmann, Burghaus, & Becker 2010; Langton, Law, Burton, & Schweinberger 2008; Nothdurft 1993c; D. G. Purcell, Stewart, & Skov 1996; Suzuki & Cavanagh 1995; Tong & Nakayama 1999; VanRullen 2006; M. A. Williams et al. 2002
Other semantic categories (e.g. ‘animal’)
(22). No candidate features have generated more controversy than the family of face features. There are many demonstrations of apparently efficient search for real faces, schematic faces, angry faces, happy faces, and so forth. There are also many papers pointing to feature confounds in these stimuli. There are good reviews in some recent articles on the topic (S. I. Becker et al. 2011; Frischen et al. 2008). In previous versions of this list, faces were placed in the unlikely category (Wolfe and Horowitz 2004) but the literature is so large and so persistent that it seems best to describe the case as ‘complicated’ and leave its resolution to the future.
Table 2.5 Modulators
(23). The entries under this category do not appear to be preattentive features in their own right. However, they are properties that seem to be computed prior to the deployment of attention and that have an influence on other basic features. Thus, apparent depth can change the apparent size of an item and it is the apparent size, rather than the retinal or image size, that is critical in search (Aks & Enns 1996). The amodal completion of contours behind occluders can disrupt a feature. Rensink and Enns (1998) showed that the size of an element in the image could be lost when that element was tied to another element by amodal completion. On the other hand, it is possible to create oriented items that are only oriented if unoriented elements are tied together by amodal completion. Though these oriented bars are visually compelling, this orientation feature does not guide attention (Wolfe et al. 2011).
To summarize, there appear to be one to two dozen guiding attributes. Some of these are more powerful directors of attention than others. It seems clear that attributes like colour and motion guide easily and effectively while an attribute like numerosity may guide but only rather weakly. It would be wonderful if we could rank order attributes in terms of their effectiveness. However, while some direct comparison between features has been done (Nothdurft 1993a), a comprehensive hierarchy does not exist. There are hierarchies within dimensions as well. Red really does seem to be a particularly powerful guiding colour, but again, there is a vast amount we do not know, including why a feature like red should be more effective than some other colour. There is plenty of interesting speculation (Changizi, Zhang, and Shimojo 2006).
Guidance by scene-based features: The next frontier
For all the detail and complexity of the feature list already presented, it has become increasingly clear that these features are not the whole story when it comes to describing sources of guidance. This can be illustrated if you look for people in Fig. 2.12. You will not search randomly and you will rapidly find the man on the sidewalk, halfway down the street. Eye movement data indicate that you will search first in locations where people are likely to be (Ehinger, Hidalgo-Sotelo, Torralba, and Oliva 2009; Torralba, Oliva, Castelhano, and Henderson 2006). People are on horizontal surfaces. They do not generally float. In addition, you know something about the size of people. Because you can very rapidly extract the spatial layout of a scene (Greene and Oliva 2009), you should be able to determine if an object is a candidate human on the basis of the interaction of the size of an object in the image and its apparent depth (Sherman, Greene, and Wolfe 2011). Both of these factors may have kept you from noticing the other man, apparently very small and perched on the ledge of the first window on the right. The pixels and the rough local contrast are the same for each man (courtesy of Photoshop) but one target is far more plausible. If you found the plausible target first, it was probably scene guidance that directed your attention. While the classical features can be useful, the apparent efficiency of search in real-world scenes cannot be based on those features alone (Vickery, King, and Jiang 2005; Wolfe, Alvarez, Rosenholtz, Kuzmova, and Sherman 2011).
Our understanding of scene guidance is relatively young but it seems clear that observers make use of what can be called scene semantic guidance (forks are likely to be next to plates) and scene syntactic guidance (paintings hang on walls) (Castelhano and Heaven 2010; Henderson, Brockmole, Castelhano, and Mack 2007; Henderson and Ferreira 2004; Neider and Zelinsky 2006; Vo and Henderson 2009). Some of this guidance is probably relatively slow. After all, you cannot look for forks next to plates until you have identified the plates. However, the guidance that is based on scene structure and category can be based on information that is available very quickly from the global processing of the ‘gist’ of a scene (Fei-Fei, Iyer, Koch, and Perona 2007; Kirchner and Thorpe 2006; Oliva 2005; Sanocki and Epstein 1997). It is possible to know that you are viewing a man-made, navigable, urban scene before you have selectively attended to the various objects that make up that scene (Greene and Oliva 2009).
Two paths to awareness
The ability to extract some information from scenes without selective attention to objects reflects the working of a non-selective pathway from the stimulus to visual awareness (Wolfe, Vo, Evans, and Greene 2011). The capabilities of this pathway should not be overstated. You will not recognize specific objects without selective attention and binding. However, a non-selective pathway is a useful addition to the diagram in Fig. 2.11. This elaboration is shown in Fig. 2.13. Early visual processing of a scene feeds a non-selective pathway that can provide some information about spatial layout and the gist of the scene. It also feeds the Guiding Representation, here represented by colour, shape, and now gist—a scene guidance component. Finally, early vision provides the input to the selective pathway that supports object recognition. It has a selective attentional bottleneck whose selections are modulated by the Guiding Representation.
At any given moment, the contents of visual awareness include visual ‘stuff’ at all locations, provided by the non-selective pathway and one (or perhaps a few) bound objects, provided by the selective pathway. A more detailed account of this two-pathway architecture can be found in Wolfe et al. (2011).
Guidance: The Rules
The ability of a preattentive feature to guide attention is highly rule-governed. There are general rules that appear to operate over all dimensions and rules specific to a single dimension.
1. The greater the difference, along a preattentive dimension, between the target and the distractors, the more efficient the search (Duncan and Humphreys 1989). Thus, it will be easier to find vertical among 30 deg. tilted items than among 15 deg. tilted items.
2. The greater the differences between the distractors (distractor heterogeneity), the less efficient the search (Duncan and Humphreys 1989). Thus, it may be harder to find vertical among a mix of 15 and 30 deg. tilted distractors than among homogeneous 15 deg. distractors, even though the average difference between target and distractors is greater for the heterogeneous example (Rosenholtz 2001b).
3. The minimum target–distractor difference required to produce efficient search will be much greater than the just noticeable difference for those stimuli. Thus, with attentional scrutiny, it is possible to tell the difference between a vertical line and one tilted 1–2 deg. from vertical. It is not possible to search efficiently for vertical among 2 deg. distractors. Efficient search will require something more like 10 to 15 deg. differences (Foster and Ward 1991b; Foster and Westland 1998).
4. For purposes of guidance, differences are greater across categorical boundaries. It will be easier to find a steep item among shallow items than to find the steepest item among other steep items even if the angular differences are the same in the two conditions (Wolfe, Friedman-Hill, et al. 1992).
5. The detailed properties of guiding attributes need to be worked out separately for each attribute and are not necessarily predicted from conscious perception of the attribute. As noted earlier, for purposes of guidance, pink/peach/skin(?) colours seem to have special status (Lindsey et al. 2010). As another example, in orientation, one might wonder if guidance is represented in a 90 deg. or 180 deg. framework. Obviously, a simple line has the same orientation after a 180 deg. rotation but a polar object (e.g. a Christmas tree or sailboat) does not have the same appearance after a 180 deg. rotation. Visual search ignores object polarity. The biggest orientation difference is 90 deg., not 180 deg. (Wolfe et al. 1999).
6. Guidance will be stronger if the observer sees the actual guiding feature (e.g. the colour ‘red’) rather than merely the name of the feature (e.g. the word ‘red’) just prior to the appearance of the search display. This priming can be produced by a deliberate cue before the onset of the trial (Wolfe, Horowitz, Kenner, Hyle, and Vasan 2004). Similarly, finding the target on one trial effectively primes that feature for the next (Kristjánsson and Driver 2008; Maljkovic and Nakayama 1994).
7. It is easier to find the presence of a feature than to find its absence. This is the root of many search asymmetries (Treisman and Gormican 1988; Treisman and Souther 1985) in which the search for A among B is notably more efficient than the search for B among A. Thus, it is easier to find the presence of a moving target among stationary distractors than to find a stationary target among moving distractors (Dick et al. 1987; Royden et al. 2001; Horowitz et al. 2007). There is a great deal more to be said about asymmetries (Wolfe 2001), much of it first said by Treisman, as noted above. Rosenholtz has noted that the designs of asymmetry experiments are sometimes themselves asymmetrical (Rosenholtz 2001a) and this is worth keeping in mind when evaluating search asymmetry results.
8. Not all search asymmetries are evidence for the preattentive processing of the stimulus property under study. Sometimes it is easier to find A among B than B among A, not because A pops out, but because it is easier to reject a succession of Bs as distractors. In these cases, both A vs B and B vs A tend to produce inefficient slopes. Thus, for example, if there is a real ‘anger superiority effect’ (Hansen and Hansen 1988), it may not be that ‘angry’ pops out, but rather that angry faces hold attention, making it harder to move through them when they are the distractors. As a result, search for an angry target among easily dismissed, happy distractors is more efficient than search for a happy target among hard-to-dismiss, angry distractors, but neither of these searches will be efficient.
9. It is possible to guide by more than one feature at a time. In Guided Search, this is how relatively efficient conjunction search is accomplished. If there are several target features, you can guide your attention to all of them. Indeed, higher-order conjunctions with three or even six defining features can be easier than the classic two-feature conjunction search (Wolfe et al. 1989; Wolfe 2010). There is a debate about how this is accomplished. We have argued that it is possible to simultaneously guide to multiple features (Friedman-Hill and Wolfe 1995). Huang and Pashler argue that guidance to multiple features must be done in a series of nested steps. For example, one might select all red items and then select the vertical items within the red set (Huang and Pashler 2007, 2012).
10. It is not possible to guide to two features from the same dimension/attribute at the same time. Conjunctions of two colours or two orientations are inefficient (Wolfe et al. 1990). Even though it is easy to identify an item that is red and yellow, it is not efficient to search for that item among red/blue and blue/yellow distractors.
11. It is relatively efficient to search for a conjunction of an item of one colour with a part of another colour. (Find the red thing with a yellow part among red things with blue parts and blue things with yellow parts.) This suggests that the preattentive representations of objects have some part–whole structure to them (Wolfe, Friedman-Hill, and Bilsky 1994). Interestingly, search for orientation x orientation conjunctions of items with part–whole structure is not efficient. The vertical item with an oblique part is relatively hard to find amidst vertical items with horizontal parts and horizontal items with oblique parts (Bilsky and Wolfe 1995). The difference between colour and orientation could be due to different susceptibility to rotation. If you tilt your head, a red thing with a yellow part is still a red thing with a yellow part. A vertical thing with a horizontal part might not be rotationally invariant in the same way.
Guided Search 2013: How Do We Search?
Figure 2.14 elaborates on Fig. 2.13 to provide a roadmap of the steps in visual search as imagined by the 2013 incarnation of the Guided Search model. Sections below refer to the letters on the figure:
A. As described in the figure, a set of basic attributes is extracted from the early stages of visual processing. For each attribute, two forms of guidance are possible. Bottom-up guidance is stimulus-driven, based on local differences. Bottom-up guidance is essentially the same as ‘salience’ (Nothdurft 2000; Donk and van Zoest 2008; Lamy and Zoaris 2009). Top-down guidance is user-driven, based on the observer’s current understanding of the task demands. Thus, returning to Fig. 2.8, if you are looking for a red T, colour guidance will be directed to red. If you are looking for a black T, guidance will be directed to black. Nothing has changed in the stimulus or in the bottom-up salience of items. It is the top-down guidance that changes in this case. When neurophysiological studies refer to attention to multiple items in parallel (e.g. enhancing neural responses of all red items), they are generally describing what we refer to as top-down guidance (Bichot, Rossi, and Desimone 2005; Treue and Trujillo 1999; Carrasco, Eckstein, Verghese, Boynton, and Treue 2009).
The top-down/bottom-up terminology is not entirely unambiguous. Consider priming effects. Exposure to red will speed subsequent search for red. We have described this as a form of implicit top-down guidance (Wolfe et al. 2004) because something about the observer—in this case, their history—has changed the guidance of attention. Response to the same stimulus would be different if the observer’s history was different. Others see priming as an automatic process that should be considered to be ‘bottom-up’ (Kristjánsson and Campana 2010).
B. Guidance by different attributes is combined into a ‘priority map’. In the first versions of Guided Search this was called an ‘activation map’. ‘Priority map’ (Serences and Yantis 2006) better captures the role of this representation, which is to prioritize items in the visual input for selective attention, binding, and recognition. In the absence of top-down guidance, a priority map built from pure bottom-up signals would be a ‘salience map’ (Koch and Ullman 1985; Itti and Koch 2000; Parkhurst, Law, and Niebur 2002). Each attribute makes a weighted contribution to the priority map. Thus, if you are looking for red vertical items, the weights on colour and orientation will be set high and other attributes will be de-emphasized. In order to guide to ‘red’, one can imagine either setting a high weight for a separate feature, ‘red’, or enhancing red within the colour module and setting a high weight for guidance by colour. See the work on ‘dimension weighting’ for more detail on this issue (Found and Müller 1996; Müller, Reimann, and Krummenacher 2003; Zehetleitner, Krummenacher, Geyer, Hegenloh, and Müller 2011).
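The weighted combination just described can be given a minimal computational sketch. The function below is illustrative only, not the published Guided Search implementation; the attribute names, the weighting scheme, and the fixed 0.2 bottom-up gain (reflecting the point that bottom-up salience cannot be weighted all the way to zero) are assumptions of the sketch.

```python
import numpy as np

def priority_map(bottom_up, top_down, weights, noise_sd=0.1, rng=None):
    """Combine guidance signals into a single priority map.

    bottom_up / top_down: dicts mapping an attribute name to a 2D array
    with one value per display location. `bottom_up` encodes local
    feature contrast (salience); `top_down` encodes similarity to the
    current target template (e.g. 'how red is this item?' when
    searching for red). `weights` holds the task-set weights: searching
    for a red vertical item would put high weights on 'colour' and
    'orientation'. Names and values are illustrative.
    """
    rng = rng or np.random.default_rng()
    shape = next(iter(bottom_up.values())).shape
    priority = np.zeros(shape)
    for attr in bottom_up:
        # Weighted top-down contribution for this attribute...
        priority += max(weights.get(attr, 0.0), 0.0) * top_down[attr]
        # ...plus an irreducible bottom-up term: salience cannot be
        # weighted down to zero.
        priority += 0.2 * bottom_up[attr]
    return priority + rng.normal(0.0, noise_sd, shape)  # internal noise
```

With the weight on colour set high, a location whose colour matches the target template dominates the map even when its bottom-up salience is no greater than its neighbours'.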
Weights can be adjusted but not to the full extent that the user might wish. Notably, it does not seem possible to set the weight on bottom-up salience signals to zero. There has been a long-standing debate about what sorts of salient signals capture attention. Claims have been made for luminance onsets (Jonides and Yantis 1988; Yantis and Jonides 1990), new objects (Yantis 1993), and for many specific attributes (Franconeri and Simons 2003; Rauschenberger 2003; Turatto and Galfano 2000; Pratt, Radulescu, Guo, and Abrams 2010). There are generally counterclaims (Gibson and Kelsey 1998), but on balance, it seems that a highly salient signal from an irrelevant attribute will have some influence on the course of search, regardless of the desires of the searcher.
Some researchers are not fond of the idea of a priority map (Chan and Hayward 2009; Huang and Pashler 2012). However, while the details vary, some version of such a map is a part of many models of search and has been the subject of much neurophysiological investigation (Bisley and Goldberg 2010; Fecteau and Munoz 2006; Gottlieb, Balan, Oristaglio, and Schneider 2009; Li 2002; Thompson and Bichot 2004).
C. The priority map is so named because its role is to prioritize the selection of items. Since Koch and Ullman (1985), it has been proposed that something like a winner-take-all operation selects the next item for attention. This is straightforward for the first selection in an image but what about subsequent selections? The critical question is whether attention ever revisits the same location/item during the course of a search. The original assumption was that items were not revisited. This was the assumption of Feature Integration and the first versions of Guided Search. The phenomenon of ‘Inhibition of Return’ (IOR) seemed to provide a mechanism to prevent resampling from the display (Klein 1988; Posner 1980) since IOR showed that it was harder to get attention back to a previously attended item than to direct attention to a previously unattended item. However, subsequent results suggested that IOR probably only marked the most recently attended items (Abrams and Pratt 1996; Pratt and Abrams 1995; Tipper, Weaver, and Watson 1996) making it hard to see how this mechanism could prevent revisitations once the set size becomes large.
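A minimal sketch of this selection cycle, assuming a winner-take-all rule on a noisy priority map and an inhibition-of-return tag on only the few most recently attended items (so that resampling is reduced but not prevented), might look like this. The function name, the noise level, and the `ior_span` parameter are illustrative assumptions, not part of any published model.

```python
import numpy as np

def select_sequence(priority, n_selections, ior_span=4, rng=None):
    """Winner-take-all selection with a short-lived inhibition of return.

    Only the `ior_span` most recently attended items are inhibited, so
    with large set sizes earlier items can be revisited: sampling is
    neither perfectly with nor perfectly without replacement.
    """
    rng = rng or np.random.default_rng()
    recent = []    # indices of the most recently attended items
    visited = []
    for _ in range(n_selections):
        p = priority + rng.normal(0.0, 0.05, priority.shape)  # noisy map
        p[recent] = -np.inf            # inhibit only the recent items
        winner = int(np.argmax(p))     # winner-take-all
        visited.append(winner)
        recent.append(winner)
        if len(recent) > ior_span:
            recent.pop(0)              # the inhibition wears off
    return visited
```

Run on a flat priority map, the sequence never revisits any of the last four items but freely resamples older ones, in line with the ‘foraging facilitator’ view.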
Horowitz and Wolfe did a series of experiments designed to directly test if items were being sampled with or without replacement during visual search and came to the conclusion that ‘visual search has no memory’ (Horowitz and Wolfe 1998, 2003)—at least, no memory for the deployments of covert attention. Their data were consistent with sampling with replacement, giving no role to IOR or other means of preventing revisitation. Other evidence suggested that this claim might be too strong (Peterson, Kramer, Wang, Irwin, and McCarley 2001; Shore and Klein 2000; Takeda 2004). Over the short timescale required for many standard laboratory search tasks, it seems most likely that neither extreme position is correct. Items are not sampled entirely without regard to the prior history of search but neither are they inhibited in a manner that can prevent some resampling. In Klein’s words, inhibition of return appears to be a ‘foraging facilitator’ (Klein and MacInnes 1999), one of several mechanisms that bias attention toward new items during search (Klein 2009).
When search is more prolonged, as it is in many real-world tasks, strategic plans can play a role in preventing revisiting. Reading is a simple example. If you search this page of text for the word ‘covert’, you will most likely start at the top left and read the page in a manner that eliminates most resampling from the display without requiring specific inhibition of or memory for rejected items. Your prospective plan of search serves the role that memory would serve (McDaniel, Robinson-Riegler, and Einstein 1998). Similar factors are probably at work in many real-world searches (Hollingworth 2009; Hollingworth and Henderson 2002).
D. In the 2013 version of Guided Search, the recognition/classification of an object is modelled as a diffusion process (Ratcliff 1978). While we have used a Ratcliff-style diffuser, there is, at present, no reason to prefer any particular member of the class of models in which information accumulates toward a threshold over time (Brown and Heathcote 2008; Donkin, Brown, Heathcote, and Wagenmakers 2011; Purcell et al. 2010). The unique aspect of the Guided Search version is that it proposes an asynchronous diffusion process. That is, items are selected one at a time and begin diffusing toward a target or distractor boundary once they are selected. Since the rate of selection (say, 20–40 Hz) is faster than the time required to identify an item (say, 150–300 msec), multiple items are diffusing at the same time. Metaphorically, this can be seen as a ‘pipeline’ or a ‘carwash’ (Moore and Wolfe 2001; Wolfe 2003), albeit a carwash in which one car could enter second and leave first.
This architecture is a hybrid of serial and parallel processing (E. Cohen and Ruppin 1999; Herd and O’Reilly 2005; Thornton and Gilden 2007; Townsend and Wenger 2004; Verghese 2001). Selection is imagined to be strictly serial though nothing very dramatic would change if a small number of items could be selected at one time. Diffusion is parallel in the sense that multiple items are undergoing the process of binding and recognition at the same time.
Diffusion models produce the positively skewed reaction time distributions that are characteristic of visual search (E. M. Palmer, Horowitz, Torralba, and Wolfe 2011; Wolfe, Palmer, and Horowitz 2010). Errors are produced when a distractor reaches the target boundary or a target reaches the distractor boundary. An error could also be generated if the search was terminated with a guess (see next section). Changes in the parameters of the diffuser can be used to model various effects seen in the data. Thus, if the boundaries are brought closer together, the error rate will rise and the RTs will decline; a speed-accuracy trade-off. False alarms and miss errors can be traded off against each other by changing the starting point of the diffuser.
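The asynchronous-diffusion (‘carwash’) idea can likewise be sketched in simulation: items enter the diffuser at a fixed selection rate, several diffuse concurrently, and the trial ends when an item crosses the target boundary. All parameter values below are illustrative rather than fitted, and the simplified stopping rule (run until every item resolves on target-absent trials) ignores the guessing mechanism discussed in the next section.

```python
import numpy as np

def search_trial(n_items, target_present, rng,
                 select_interval=33,   # one new selection every ~33 ms
                 drift=0.02, noise=0.25, bound=10.0):
    """One trial of an asynchronous ('carwash') diffusion search.

    Item i enters the diffuser at time i * select_interval, so several
    items diffuse at once. Each accumulates noisy evidence (in 1-ms
    steps) toward a target (+bound) or distractor (-bound) boundary.
    Returns (reaction time in ms, whether a target was reported).
    """
    is_target = np.zeros(n_items, dtype=bool)
    if target_present:
        is_target[rng.integers(n_items)] = True
    finish_times, said_target = [], []
    for i in range(n_items):
        x, t = 0.0, i * select_interval  # staggered entry: the 'carwash'
        mu = drift if is_target[i] else -drift
        while abs(x) < bound:
            x += mu + rng.normal(0.0, noise)
            t += 1
        finish_times.append(t)
        said_target.append(x >= bound)
    hits = [t for t, h in zip(finish_times, said_target) if h]
    # Search self-terminates at the first 'target' classification;
    # otherwise it runs until the last item resolves.
    return (min(hits), True) if hits else (max(finish_times), False)
```

First-passage times of such diffusers are positively skewed, and target-present trials end, on average, well before target-absent trials, reproducing two signature properties of search RT data.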
When do we stop?
The architecture of Fig. 2.14 makes the process of finding a target reasonably clear. With some preattentive guidance, items are selected into the diffuser and search ends when one of those items goes over the target boundary. But what happens if there is no target or if no item reaches the target boundary or if there are an unknown number of targets in the display? At some point, search must end. When is it time to quit? Feature Integration assumed that observers would quit when all items had been examined on target absent trials (or almost all, with miss errors produced when observers quit without checking the target). Early versions of Guided Search argued that observers searched through all items that received guiding activation above some threshold. Neither of these accounts works well if distractors are not perfectly tracked during search. Moreover, in real scenes, it is completely unclear what the ‘set size’ might be or what it would mean to attend to all items (Wolfe, Alvarez, et al. 2011).
A different approach adds another diffuser to the model. In this diffuser, a signal accumulates over time and the trial is terminated when that signal crosses a quitting threshold. The threshold is set dynamically, based on the observer’s experience. After quitting correctly, the quitting threshold moves down, causing the observer to quit more quickly. After an error, it rises and the observer becomes more cautious about quitting. This adjustment can be observed in the pattern of RTs in search experiments (Chun and Wolfe 1996; Ishibashi, Kita, and Wolfe 2012).
Experiments that manipulate target prevalence are useful in constraining models of quitting behaviour in search (Colquhoun and Baddeley 1967; Fleck and Mitroff 2007; Wolfe, Horowitz, and Kenner 2005; Wolfe et al. 2007; Wolfe and VanWert 2010). When targets are rare, miss errors rise, false alarms fall, and RTs become shorter. If targets are common, miss errors fall, false alarms rise, and RTs become longer. To see this full pattern of behaviour, it is important to use stimuli that are ambiguous enough to produce false-alarm errors. Classic search experiments (find the T among Ls, etc.) tend not to produce false alarms. In contrast, real-world tasks like breast cancer screening (Wolfe, Birdwell, and Evans 2011) or airport baggage screening are characterized by ambiguous stimuli and very low prevalence. Wolfe and VanWert (2010) found that the effects of prevalence could be modelled as a change in criterion (which would be represented by a change in the starting point of the diffuser in Fig. 2.14) and a concurrent change in a quitting threshold. At low prevalence, the starting point moves toward the distractor bound, making miss errors more common, and the quitting threshold drops, making RTs shorter (and also increasing miss errors, if one assumes an ‘absent’ response when the quitting threshold is reached). Models that adjust only the starting point or only the quitting threshold fail to capture the pattern of the data. It should be noted that search termination remains a complex and underinvestigated topic.
What We Didn’t Discuss
Having discussed search termination, it is time to consider chapter termination. Before ending, it is worth noting that there are a number of important topics that have been largely omitted here. Some of these will be discussed elsewhere in this volume. Any complete account of search would include some treatment of:
Eye movements: What is the relationship between covert deployments of attention and overt deployments of the eyes (Hwang, Wang, and Pomplun 2011; Kowler 2011; Malcolm and Henderson 2010; Neider, Boot, and Kramer 2010)?
The psychophysics of simple searches: There is an important body of work on the fine-grained details of simple searches. The tasks used in these studies permit a degree of control that is not common in standard search tasks and not possible in real-world search tasks (Najemnik and Geisler 2005; J. Palmer, Verghese, and Pavel, 2000; Cameron, Tai, Eckstein, and Carrasco 2004; Dosher, Han, and Lu 2010; Baldassi and Verghese 2002).
Memory in repeated search: What happens when the same scene is searched more than once? Contextual cueing shows learning (Brockmole and Henderson 2006; Chun and Jiang 1998; Kunar, Flusberg, Horowitz, and Wolfe 2007). Repeated searches through simple, laboratory-style search displays do not produce an improvement in search efficiency (Wolfe, Klempen, and Dahlen 2000) but there is learning in repeated search through real scenes (Hollingworth 2009; Hollingworth and Henderson 2002; Vo and Wolfe 2012).
The neural basis of search: What might be the neural locus and operation of a priority map (Serences and Yantis 2006; Shipp 2004; Bisley and Goldberg 2010) or a diffuser (Ratcliff, Philiastides, and Sajda 2009) or a serial selection process (Buschman and Miller 2009; Chelazzi 1999) or set size effects (J. Y. Cohen, Heitz, Woodman, and Schall 2009)?
Of course, this is merely a sampling of the topics that have been studied from a neural perspective and a sampling of the topics, important to search, that have been omitted from this chapter.
Abrams, R. A. and Pratt, J. (1996). Spatially diffuse inhibition affects multiple locations: A reply to Tipper, Weaver, and Watson (1996). Journal of Experimental Psychology: Human Perception and Performance 22(5): 1294–1298.
Adams, W. J. (2008). Frames of reference for the light-from-above prior in visual search and shape judgements. Cognition 107(1): 137–150.
Aks, D. J. and Enns, J. T. (1992). Visual search for direction of shading is influenced by apparent depth. Perception & Psychophysics 52(1): 63–74.
Aks, D. J. and Enns, J. T. (1993). Early vision’s analysis of slant-from-texture. Investigative Ophthalmology and Visual Science 34(4): 1185.
Aks, D. J. and Enns, J. T. (1996). Visual search for size is influenced by a background texture gradient. Journal of Experimental Psychology: Human Perception and Performance 22(6): 1467–1481.
Alkhateeb, W. F., Morland, A. B., Ruddock, K. H., and Savage, C. J. (1990). Spatial, colour, and contrast response characteristics of mechanisms which mediate discrimination of pattern orientation and magnification. Spatial Vision 5(2): 143–157.
Alvarez, G. A. and Cavanagh, P. (2008). Visual short-term memory operates more efficiently on boundary features than on surface features. Perception & Psychophysics 70(2): 346–364.
Atkinson, R. C., Holmgren, J. E., and Juola, J. F. (1969). Processing time as influenced by the number of elements in a visual display. Perception & Psychophysics 6(6A): 321–326.
Baldassi, S. and Verghese, P. (2002). Comparing integration rules in visual search. Journal of Vision 2(8): 559–570.
Batty, M. J., Cave, K. R., and Pauli, P. (2005). Abstract stimuli associated with threat due to conditioning cannot be detected preattentively. Emotion 5(4): 418–430.
Bauer, B., Jolicoeur, P., and Cowan, W. B. (1996). Visual search for colour targets that are or are not linearly-separable from distractors. Vision Research 36(10): 1439–1466.
Bauer, B., Jolicoeur, P., and Cowan, W. B. (1998). The linear separability effect in color visual search: Ruling out the additive color hypothesis. Perception & Psychophysics 60(6): 1083–1093.
Becker, D. V., Anderson, U. S., Mortensen, C. R., Neufeld, S. L., and Neel, R. (2011). The Face in the Crowd Effect unconfounded: Happy faces, not angry faces, are more efficiently detected in single- and multiple-target visual search tasks. Journal of Experimental Psychology: General 140(4): 637–659.
Becker, S. I., Horstmann, G., and Remington, R. W. (2011). Perceptual grouping, not emotion, accounts for search asymmetries with schematic faces. Journal of Experimental Psychology: Human Perception and Performance 37(6): 1739–1757.
Berbaum, K. S., Franken, E. A., Jr., Dorfman, D. D., Miller, E. M., Caldwell, R. T., Kuehn, D. M., et al. (1998). Role of faulty visual search in the satisfaction of search effect in chest radiography. Academic Radiology 5(1): 9–19.
Bergen, J. R. and Julesz, B. (1983). Rapid discrimination of visual patterns. IEEE Transactions on Systems, Man, and Cybernetics SMC-13: 857–863.
Bergen, J. R. and Adelson, E. H. (1988). Early vision and texture perception. Nature 333: 363–364.
Bichot, N. P., Rossi, A. F., and Desimone, R. (2005). Parallel and serial neural mechanisms for visual search in macaque area V4. Science 308(5721): 529–534.
Bilsky, A. A. and Wolfe, J. M. (1995). Part–whole information is useful in size × size but not in orientation × orientation conjunction searches. Perception & Psychophysics 57(6): 749–760.
Bisley, J. W. and Goldberg, M. E. (2010). Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience 33(1): 1–21.
Blake, R. (1993). Cats perceive biological motion. Psychological Science 4(1): 54–57.
Blake, R. and Shiffrar, M. (2007). Perception of human motion. Annual Review of Psychology 58: 47–73.
Bouvier, S. and Treisman, A. (2010). Visual feature binding requires reentry. Psychological Science 21(2): 200–204.
Braddick, O. J. and Holliday, I. E. (1991). Serial search for targets defined by divergence or deformation of optic flow. Perception 20(3): 345–354.
Brand, J. (1971). Classification without identification in visual search. Quarterly Journal of Experimental Psychology 23: 178–186.
Braun, J. (1993). Shape-from-shading is independent of visual attention and may be a texton. Spatial Vision 7(4): 311–322.
Bravo, M. J. (1998). A global process in motion segregation. Vision Research 38(6): 853–864.
Brawn, P. and Snowden, R. J. (1999). Can one pay attention to a particular color? Perception & Psychophysics 61(5): 860–873.
Briand, K. A. and Klein, R. M. (1989). Has feature integration come unglued? A reply to Tsal. Journal of Experimental Psychology: Human Perception and Performance 15(2): 401–406.
Brockmole, J. R. and Henderson, J. M. (2006). Using real-world scenes as contextual cues for search. Visual Cognition 13(1): 99–108.
Brown, J. M., Weisstein, N., and May, J. G. (1992). Visual search for simple volumetric shapes. Perception & Psychophysics 51(1): 40–48.
Brown, S. D. and Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology 57(3): 153–178.
Bundesen, C., Kyllingsbæk, S., Houmann, K. J., and Jensen, R. M. (1997). Is visual attention automatically attracted by one’s own name? Perception & Psychophysics 59(5): 714–720.
Burr, D. C., Baldassi, S., Morrone, M. C., and Verghese, P. (2009). Pooling and segmenting motion signals. Vision Research 49(10): 1065–1072.
Buschman, T. J. and Miller, E. K. (2009). Serial, covert shifts of attention during visual search are reflected by the frontal eye fields and correlated with population oscillations. Neuron 63: 386–396.
Czerwinski, M., Lightfoot, N., and Shiffrin, R. (1992). Automatization and training in visual search. American Journal of Psychology 105(2): 271–315.
Cameron, E. L., Tai, J. C., Eckstein, M. P., and Carrasco, M. (2004). Signal detection theory applied to three visual search tasks—identification, yes/no detection and localization. Spatial Vision 17(4–5): 295–325.
Carrasco, M., Eckstein, M., Verghese, P., Boynton, G., and Treue, S. (2009). Visual attention: Neurophysiology, psychophysics and cognitive neuroscience. Vision Research 49(10): 1033–1036.
Carter, R. C. (1982). Visual search with color. Journal of Experimental Psychology: Human Perception and Performance 8: 127–136.
Castelhano, M. S. and Heaven, C. (2010). The relative contribution of scene context and target features to visual search in scenes. Attention, Perception, & Psychophysics 72(5): 1283–1297.
Cavanagh, P., Arguin, M., and Treisman, A. (1990). Effect of surface medium on visual search for orientation and size features. Journal of Experimental Psychology: Human Perception and Performance 16(3): 479–492.
Cavanagh, P. (1999). Pictorial art and vision. In R. A. Wilson and F. C. Keil (eds.), MIT Encyclopedia of Cognitive Science (pp. 648–651). Cambridge, Mass.: MIT Press.
Cave, K. (1999). The FeatureGate model of visual selection. Psychological Research 62(2–3): 182–194.
Champion, R. A. and Warren, P. A. (2008). Rapid size scaling in visual search. Vision Research 48(17): 1820–1830. doi: 10.1016/j.visres.2008.05.012.
Chan, L. K. H. and Hayward, W. G. (2009). Feature integration theory revisited: Dissociating feature detection and attentional guidance in visual search. Journal of Experimental Psychology: Human Perception and Performance 35(1): 119–132.
Changizi, M. A., Zhang, Q., and Shimojo, S. (2006). Bare skin, blood and the evolution of primate colour vision. Biology Letters 2(2): 217–221.
Cheal, M. and Lyon, D. (1992). Attention in visual search: Multiple search classes. Perception & Psychophysics 52(2): 113–138.
Chelazzi, L. (1999). Serial attention mechanisms in visual search: A critical look at the evidence. Psychological Research 62(2–3): 195–219.
Chen, L. (1982). Topological structure in visual perception. Science 218: 699–700.
Chen, L. (1990). Holes and wholes: A reply to Rubin and Kanwisher. Perception & Psychophysics 47: 47–53.
Chen, L. (2005). The topological approach to perceptual organization. Visual Cognition 12(4): 553–637.
Chun, M. M. and Wolfe, J. M. (1996). Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology 30: 39–78.
Chun, M. M. and Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology 36: 28–71.
Cohen, E. and Ruppin, E. (1999). From parallel to serial processing: A computational study of visual search. Perception & Psychophysics 61(7): 1449–1461.
Cohen, J. Y., Heitz, R. P., Woodman, G. F., and Schall, J. D. (2009). Neural basis of the set-size effect in frontal eye field: Timing of attention during visual search. Journal of Neurophysiology 101(4): 1699–1704.
Colquhoun, W. P. and Baddeley, A. D. (1967). Influence of signal probability during pretraining on vigilance decrement. Journal of Experimental Psychology 73(1): 153–155.
Correani, A., Scott-Samuel, N., and Leonards, U. (2006). Luminosity—a perceptual ‘feature’ of light-emitting objects? Vision Research 46(22): 3915–3925.
D’Zmura, M. (1991). Color in visual search. Vision Research 31(6): 951–966.
Daoutis, C. A., Pilling, M., and Davies, I. R. L. (2006). Categorical effects in visual search for colour. Visual Cognition 14(2): 217–240.
Dehaene, S. (1989). Discriminability and dimensionality effects in visual search for featural conjunctions: A functional pop-out. Perception & Psychophysics 46(1): 72–80.
Devue, C., Van der Stigchel, S., Brédart, S., and Theeuwes, J. (2009). You do not find your own face faster; you just look at it longer. Cognition 111(1): 114–122.
Di Lollo, V., Enns, J. T., and Rensink, R. A. (2000). Competition for consciousness among visual events: The psychophysics of reentrant visual processes. Journal of Experimental Psychology: General 129: 481–507.
Dick, M., Ullman, S., and Sagi, D. (1987). Parallel and serial processes in motion detection. Science 237: 400–402.
Doi, H. and Ueda, K. (2007). Searching for a perceived stare in the crowd. Perception 36(5): 773–780.
Donk, M. and van Zoest, W. (2008). Effects of salience are short-lived. Psychological Science 19(7): 733–739.
Donkin, C., Brown, S., Heathcote, A., and Wagenmakers, E.-J. (2011). Diffusion versus linear ballistic accumulation: Different models but the same conclusions about psychological processes? Psychonomic Bulletin & Review 18(1): 61–69.
Donnelly, N., Humphreys, G. W., and Riddoch, M. J. (1991). Parallel computation of primitive shape descriptions. Journal of Experimental Psychology: Human Perception and Performance 17(2): 561–570.
Dosher, B. A., Han, S., and Lu, Z.-L. (2010). Information-limited parallel processing in difficult heterogeneous covert visual search. Journal of Experimental Psychology: Human Perception and Performance 36(5): 1128–1144.
Driver, J., McLeod, P., and Dienes, Z. (1992). Are direction and speed coded independently by the visual system? Evidence from visual search. Spatial Vision 6(2): 133–147.
Dukewich, K. R. and Klein, R. M. (2009). Finding the target in search tasks using detection, localization, and identification responses. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 63(1): 1–7.
Duncan, J. (1983). Category effects in visual search: A failure to replicate the ‘oh-zero’ phenomenon. Perception & Psychophysics 34(3): 221–232.
Duncan, J. (1988). Boundary conditions on parallel processing in human vision. Perception 17: 358.
Duncan, J. and Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review 96: 433–458.
Eastwood, J. D., Smilek, D., and Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics 63(6): 1004–1013.
Egeth, H. E., Virzi, R. A., and Garbart, H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance 10: 32–39.
Ehinger, K. A., Hidalgo-Sotelo, B., Torralba, A., and Oliva, A. (2009). Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition 17(6): 945–978.
Elder, J. and Zucker, S. (1993). The effect of contour closure on the rapid discrimination of two-dimensional shapes. Vision Research 33(7): 981–991.
Elder, J. and Zucker, S. (1994). A measure of closure. Vision Research 34(24): 3361–3369.
Elder, J. and Zucker, S. (1998). Evidence for boundary-specific grouping. Vision Research 38(1): 142–152.
Enns, J. (1986). Seeing textons in context. Perception & Psychophysics 39(2): 143–147.
Enns, J. T. and Rensink, R. A. (1990). Sensitivity to three-dimensional orientation in visual search. Psychological Science 1(5): 323–326.
Enns, J. T., Rensink, R. A., and Douglas, R. (1990). The influence of line relations on visual search. IOVS (Supplement) 31(4): 105.
Enns, J. T. and Rensink, R. A. (1993). A model for the rapid discrimination of line drawing in early vision. In D. Brogan, A. Gale, and K. Carr (eds.), Visual Search 2 (pp. 73–89). London: Taylor & Francis.
Epstein, W., Babler, T., and Bownds, S. (1992). Attentional demands of processing shape in three-dimensional space: Evidence from visual search and precuing paradigms. Journal of Experimental Psychology: Human Perception and Performance 18(2): 503–511.
Evans, K. K. and Treisman, A. (2005). Perception of objects in natural scenes: Is it really attention-free? Journal of Experimental Psychology: Human Perception and Performance 31(6): 1476–1492.
Fahle, M. (1991a). A new elementary feature of vision. Investigative Ophthalmology and Visual Science 32(7): 2151–2155.
Fahle, M. (1991b). Parallel perception of vernier offsets, curvature, and chevrons in humans. Vision Research 31(12): 2149–2184.
Fahle, M. and Harris, J. P. (1998). The use of different orientation cues in vernier acuity. Perception & Psychophysics 60(3): 405–426.
Farmer, E. W. and Taylor, R. M. (1980). Visual search through color displays: Effects of target-background similarity and background uniformity. Perception & Psychophysics 27: 267–272.
Fecteau, J. H. and Munoz, D. P. (2006). Salience, relevance, and firing: A priority map for target selection. Trends in Cognitive Sciences 10(8): 382–390.
Fei-Fei, L., Iyer, A., Koch, C., and Perona, P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision 7(1): 10.
Findlay, J. M. (1973). Feature detectors and vernier acuity. Nature 241(5385): 135–137.
Fleck, M. S. and Mitroff, S. R. (2007). Rare targets rarely missed in correctable search. Psychological Science 18(11): 943–947.
Flowers, J. H. and Lohr, D. J. (1985). How does familiarity affect visual search for letter strings? Perception & Psychophysics 37: 557–567.
Foster, D. H. and Ward, P. A. (1991a). Asymmetries in oriented-line detection indicate two orthogonal filters in early vision. Proceedings of the Royal Society (London B) 243: 75–81.
Foster, D. H. and Ward, P. A. (1991b). Horizontal-vertical filters in early vision predict anomalous line-orientation frequencies. Proceedings of the Royal Society (London B) 243: 83–86.
Foster, D. H. and Westland, S. (1998). Multiple groups of orientation-selective visual mechanisms underlying rapid oriented-line detection. Proceedings of the Royal Society (London B) 265: 1605–1613.
Foster, D. H. and Savage, C. J. (2002). Uniformities and asymmetries of rapid curved-line detection explained by parallel categorical coding of contour. Vision Research 42: 2163–2175.
Found, A. and Müller, H. J. (1996). Searching for unknown feature targets on more than one dimension: Investigating a ‘dimension weighting’ account. Perception & Psychophysics 58(1): 88–101.
Found, A. and Müller, H. J. (2001). Efficient search for size targets on a background texture gradient: Is detection guided by discontinuities in the retinal-size gradient of items? Perception 30(1): 21–48.
Franconeri, S. L. and Simons, D. J. (2003). Moving and looming stimuli capture attention. Perception & Psychophysics 65(7): 999–1010.
Friedman-Hill, S. R. and Wolfe, J. M. (1995). Second-order parallel processing: Visual search for the odd item in a subset. Journal of Experimental Psychology: Human Perception and Performance 21(3): 531–551.
Frischen, A., Eastwood, J. D., and Smilek, D. (2008). Visual search for faces with emotional expressions. Psychological Bulletin 134(5): 662–676.
Frith, U. (1974). A curious effect with reversed letters explained by a theory of schema. Perception & Psychophysics 16(1): 113–116.
Gale, A. G., Mugglestone, M. D., Purdy, K. J., and McClumpha, A. (2000). Is airport baggage inspection just another medical image? In E. A. Krupinski (ed.), Medical Imaging 2000: Image Perception and Performance, Proceedings of SPIE, vol. 3981 (pp. 184–192). Bellingham, Wash.: Society of Photo-Optical Instrumentation Engineers.
Gao, T., Newman, G. E., and Scholl, B. J. (2009). The psychophysics of chasing: A case study in the perception of animacy. Cognitive Psychology 59(2): 154–179.
Gao, T., McCarthy, G., and Scholl, B. J. (2010). The wolfpack effect: Perception of animacy irresistibly influences interactive behavior. Psychological Science 21(12): 1845–1853.
Gao, T. and Scholl, B. J. (2011). Chasing vs. stalking: Interrupting the perception of animacy. Journal of Experimental Psychology: Human Perception and Performance 37(3): 669–684.
Gibson, B. S. and Kelsey, E. M. (1998). Stimulus-driven attentional capture is contingent on attentional set for displaywide visual features. Journal of Experimental Psychology: Human Perception and Performance 24(3): 699–706.
Gilchrist, I. D., Humphreys, G. W., and Riddoch, M. J. (1996). Grouping and extinction: Evidence for low-level modulation of visual selection. Cognitive Neuropsychology 13(8): 1223–1249.
Golcu, D. and Gilbert, C. D. (2009). Perceptual learning of object shape. Journal of Neuroscience 29(43): 13621–13629.
Goolkasian, P. (1988). Illusory conjunctions in the processing of clock times. Journal of General Psychology 115(4): 341–353.
Gottlieb, J., Balan, P. F., Oristaglio, J., and Schneider, D. (2009). Task-specific computations in attentional maps. Vision Research 49(10): 1216–1226.
Green, B. F. and Anderson, L. K. (1956). Color coding in a visual search task. Journal of Experimental Psychology 51: 19–24.
Greene, M. R. and Oliva, A. (2009). The briefest of glances: The time course of natural scene understanding. Psychological Science 20(4): 464–472.
Greene, M. R. and Wolfe, J. M. (2011). Global image properties do not guide visual search. Journal of Vision 11(6). doi: 10.1167/11.6.18.
Grice, G. R. and Canham, L. (1990). Redundancy phenomena are affected by response requirements. Perception & Psychophysics 48(3): 209–213.
Guest, D. and Lamberts, K. (2011). The time course of similarity effects in visual search. Journal of Experimental Psychology: Human Perception and Performance 37(6): 1667–1688.
Gurnsey, R., Humphrey, G. K., and Kapitan, P. (1992). Parallel discrimination of subjective contours defined by offset gratings. Perception & Psychophysics 52(3): 263–276.
Hansen, C. H. and Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology 54(6): 917–924.
Hazeltine, R. E., Prinzmetal, W., and Elliot, K. (1997). If it’s not there, where is it? Locating illusory conjunctions. Journal of Experimental Psychology: Human Perception and Performance 23(1): 263–277.
He, S., Cavanagh, P., and Intriligator, J. (1996). Attentional resolution and the locus of visual awareness. Nature 383(6598): 334–337.
He, Z. J. and Nakayama, K. (1992). Surfaces versus features in visual search. Nature 359: 231–233.
Henderson, J. M. and Ferreira, F. (2004). Scene perception for psycholinguists. In J. M. Henderson and F. Ferreira (eds.), The Interface of Language, Vision, and Action: Eye Movements and the Visual World (pp. 1–58). New York: Psychology Press.
Henderson, J. M., Brockmole, J. R., Castelhano, M. S., and Mack, M. (2007). Image salience versus cognitive control of eye movements in real-world scenes: Evidence from visual search. In R. van Gompel, M. Fischer, W. Murray, and R. Hill (eds.), Eye Movement Research: Insights into Mind and Brain (pp. 537–562). Oxford: Elsevier.
Herd, S. and O’Reilly, R. (2005). Serial search from a parallel model. Vision Research 45(24): 2987–2992.
Hershler, O. and Hochstein, S. (2005). At first sight: A high-level pop-out effect for faces. Vision Research 45(13): 1707–1724.
Hershler, O. and Hochstein, S. (2006). With a careful look: Still no low-level confound to face pop-out. Vision Research 46(18): 3028–3035.
Hochstein, S. and Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron 36: 791–804.
Holliday, I. E. and Braddick, O. J. (1991). Pre-attentive detection of a target defined by stereoscopic slant. Perception 20: 355–362.
Hollingworth, A. and Henderson, J. M. (2002). Accurate visual memory for previously attended objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance 28(1): 113–136.
Hollingworth, A. (2009). Two forms of scene memory guide visual search: Memory for scene context and memory for the binding of target object to scene location. Visual Cognition 17(1): 273–291.
Horowitz, T. S. and Wolfe, J. M. (1998). Visual search has no memory. Nature 394: 575–577.
Horowitz, T. S. and Wolfe, J. M. (2003). Memory for rejected distractors in visual search? Visual Cognition 10(3): 257–298.
Horowitz, T. S., Wolfe, J. M., DiMase, J., and Klieger, S. B. (2007). Visual search for type of motion is based on simple motion primitives. Perception 36: 1624–1634.
Horstmann, G., Bergmann, S., Burghaus, L., and Becker, S. (2010). A reversal of the search asymmetry favoring negative schematic faces. Visual Cognition 18(7): 981–1016.
Houck, M. R. and Hoffman, J. E. (1986). Conjunction of color and form without attention: Evidence from an orientation-contingent color after-effect. Journal of Experimental Psychology: Human Perception and Performance 12: 186–199.
Huang, L. and Pashler, H. (2007). A Boolean map theory of visual attention. Psychological Review 114(3): 599–631.
Huang, L. and Pashler, H. (2012). Distinguishing different strategies of across-dimension attentional selection. Journal of Experimental Psychology: Human Perception and Performance 38(2): 453–464.
Hübner, R. (2001). A formal version of the Guided Search (GS2) model. Perception & Psychophysics 63(6): 945–951.
Hwang, A. D., Wang, H.-C., and Pomplun, M. (2011). Semantic guidance of eye movements in real-world scenes. Vision Research 51(10): 1192–1205.
Intriligator, J. and Cavanagh, P. (2001). The spatial resolution of visual attention. Cognitive Psychology 43(3): 171–216.
Ishibashi, K., Kita, S., and Wolfe, J. M. (2012). The effects of local prevalence and explicit expectations on search termination times. Attention, Perception, & Psychophysics 74: 115–123.
Itti, L. and Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research 40(10–12): 1489–1506.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics 14: 201–211.
Johnston, W. A., Hawley, K. J., and Farnham, J. M. (1993). Novel popout: Empirical boundaries and tentative theory. Journal of Experimental Psychology: Human Perception and Performance 19(1): 140–153.
Jonides, J. and Gleitman, H. (1972). A conceptual category effect in visual search: O as letter or digit. Perception & Psychophysics 12: 457–460.
Jonides, J. and Yantis, S. (1988). Uniqueness of abrupt visual onset in capturing attention. Perception & Psychophysics 43: 346–354.
Julesz, B. (1981). A theory of preattentive texture discrimination based on first order statistics of textons. Biological Cybernetics 41: 131–138.
Julesz, B. and Bergen, J. R. (1983). Textons, the fundamental elements in preattentive vision and perceptions of textures. Bell Systems Technical Journal 62: 1619–1646.
Julesz, B. (1984). A brief outline of the texton theory of human vision. Trends in Neurosciences 7: 41–45.
Julesz, B. and Krose, B. (1988). Features and spatial filters. Nature 333: 302–303.
Kanbe, F. (2009). Which is more critical in identification of random figures, endpoints or closures? Japanese Psychological Research 51(4): 235–245.
Kawahara, J. I. (1993). The effect of stimulus motion on visual search. Japanese Journal of Psychology 64(5): 396–400.
Kinchla, R. A. (1974). Detecting target elements in multielement arrays: A confusability model. Perception & Psychophysics 15(1): 149–158.
Kinchla, R. A. and Collyer, C. E. (1974). Detecting a target letter in briefly presented arrays: A confidence rating analysis in terms of a weighted additive effects model. Perception & Psychophysics 16(1): 117–122.
Kirchner, H. and Thorpe, S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research 46(11): 1762–1776.
Kleffner, D. A. and Ramachandran, V. S. (1992). On the perception of shape from shading. Perception & Psychophysics 52(1): 18–36.
Klein, R. (1988). Inhibitory tagging system facilitates visual search. Nature 334: 430–431.
Klein, R. M. and MacInnes, W. J. (1999). Inhibition of return is a foraging facilitator in visual search. Psychological Science 10: 346–352.
Klein, R. M. (2009). On the control of attention. Canadian Journal of Experimental Psychology 63(3): 240–252.
Koch, C. and Ullman, S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology 4: 219–227.
Kourtzi, Z. and Connor, C. E. (2011). Neural representations for object perception: Structure, category, and adaptive coding. Annual Review of Neuroscience 34: 45–67.
Kovacs, I. and Julesz, B. (1993). A closed curve is much more than an incomplete one: Effect of closure in figure-ground segmentation. Proceedings of the National Academy of Sciences USA 90(16): 7495–7497.
Kowler, E. (2011). Eye movements: The past 25 years. Vision Research 51(13): 1457–1483.
Kristjánsson, A. and Tse, P. U. (2001). Curvature discontinuities are cues for rapid shape analysis. Perception & Psychophysics 63(3): 390–403.
Kristjánsson, A. and Driver, J. (2008). Priming in visual search: Separating the effects of target repetition, distractor repetition and role-reversal. Vision Research 48(10): 1217–1232.
Kristjánsson, A. and Campana, G. (2010). Where perception meets memory: A review of repetition priming in visual search tasks. Attention, Perception, & Psychophysics 72(1): 5–18.
Krueger, L. E. (1984). The category effect in visual search depends on physical rather than conceptual differences. Perception & Psychophysics 35(6): 558–564.
Krupinski, E. A., Berger, W. G., Dallas, W. J., and Roehrig, H. (2003). Searching for nodules: What features attract attention and influence detection? Academic Radiology 10(8): 861–868.
Kunar, M. A., Flusberg, S. J., Horowitz, T. S., and Wolfe, J. M. (2007). Does contextual cueing guide the deployment of attention? Journal of Experimental Psychology: Human Perception and Performance 33(4): 816–828.
Kundel, H. L. and Nodine, C. F. (2004). Modeling visual search during mammogram viewing. Paper presented at the conference Medical Imaging 2004: Image Perception, Observer Performance, and Technology Assessment.
Lamy, D. and Zoaris, L. (2009). Task-irrelevant stimulus salience affects visual search. Vision Research 49(11): 1472–1480.
Langton, S. R. H., Law, A. S., Burton, A. M., and Schweinberger, S. R. (2008). Attention capture by faces. Cognition 107(1): 330–342.
Lee, K. W., Buxton, H., and Feng, J. (2005). Cue-guided search: A computational model of selective attention. IEEE Transactions on Neural Networks 16(4): 910–924.
Levi, D. M. (2008). Crowding: An essential bottleneck for object recognition. A mini-review. Vision Research 48(5): 635–654.
Levin, D. T., Takarae, Y., Miner, A. G., and Keil, F. (2001). Efficient visual search by category: Specifying the features that mark the difference between artifacts and animals in preattentive vision. Perception & Psychophysics 63(4): 676–697.
Li, Z. (2002). A salience map in primary visual cortex. Trends in Cognitive Sciences 6(1): 9–16.
Lindsey, D. T., Brown, A. M., Reijnen, E., Rich, A. N., Kuzmova, Y. I., and Wolfe, J. M. (2010). Color channels, not color appearance or color categories, guide visual search for desaturated color targets. Psychological Science 21(9): 1208–1214.
Lipp, O. (2006). Of snakes and flowers: Does preferential detection of pictures of fear-relevant animals in visual search reflect on fear-relevance? Emotion 6(2): 296–308.
LoBue, V. and DeLoache, J. S. (2008). Detecting the snake in the grass: Attention to fear-relevant stimuli by adults and young children. Psychological Science 19(3): 284–289.
Logothetis, N. K., Pauls, J., and Poggio, T. (1995). Shape representation in the inferior temporal cortex of monkeys. Current Biology 5(5): 552–563.
Malcolm, G. L. and Henderson, J. M. (2010). Combining top-down processes to guide eye movements during real-world scene search. Journal of Vision 10(2): 1–11.
Malinowski, P. and Hübner, R. (2001). The effect of familiarity on visual-search performance: Evidence for learned basic features. Perception & Psychophysics 63(3): 458–463.
Maljkovic, V. and Nakayama, K. (1994). Priming of popout: I. Role of features. Memory & Cognition 22(6): 657–672.
McCollough, C. (1965). Color adaptation of edge-detectors in the human visual system. Science 149: 1115–1116.
McDaniel, M. A., Robinson-Riegler, B., and Einstein, G. O. (1998). Prospective remembering: Perceptually driven or conceptually driven processes? Memory & Cognition 26(1): 121–134.
McElree, B. and Carrasco, M. (1999). The temporal dynamics of visual search: Evidence for parallel processing in feature and conjunction searches. Journal of Experimental Psychology: Human Perception and Performance 25(6): 1517–1539.
McLeod, P., Driver, J., and Crisp, J. (1988). Visual search for conjunctions of movement and form is parallel. Nature 332: 154–155.
McSorley, E. and Findlay, J. M. (2001). Visual search in depth. Vision Research 41(25–26): 3487–3496.
Monnier, P. and Nagy, A. L. (2001). Uncertainty, attentional capacity and chromatic mechanisms in visual search. Vision Research 41(3): 313–328.
Moore, C. M., Elsinger, C. L., and Lleras, A. (2001). Visual attention and the apprehension of spatial relations: The case of depth. Perception & Psychophysics 63(4): 595–606.
Moore, C. M. and Wolfe, J. M. (2001). Getting beyond the serial/parallel debate in visual search: A hybrid approach. In K. Shapiro (ed.), The Limits of Attention: Temporal Constraints on Human Information Processing (pp. 178–198). Oxford: Oxford University Press.
Moraglia, G. (1989a). Display organization and the detection of horizontal line segments. Perception & Psychophysics 45: 265–272.
Moraglia, G. (1989b). Visual search: Spatial frequency and orientation. Perceptual & Motor Skills 69(2): 675–689.
Mordkoff, J. T., Yantis, S., and Egeth, H. E. (1990). Detecting conjunctions of color and form in parallel. Perception & Psychophysics 48(2): 157–168.
Morgan, M. J., Giora, E., and Solomon, J. A. (2008). A single ‘stopwatch’ for duration estimation, a single ‘ruler’ for size. Journal of Vision 8(2): 14.1–14.8.
Mozer, M. C. and Baldwin, D. (2008). Experience-guided search: A theory of attentional control. In D. K. J. Platt and Y. Singer (eds.), Advances in Neural Information Processing Systems (pp. 1033–1040). Cambridge, Mass.: MIT Press.
Muller, H. J., Reimann, B., and Krummenacher, J. (2003). Visual search for singleton feature targets across dimensions: Stimulus- and expectancy-driven effects in dimensional weighting. Journal of Experimental Psychology: Human Perception and Performance 29(5): 1021–1035.
Nagy, A. L. and Sanchez, R. R. (1990). Critical color differences determined with a visual search task. Journal of the Optical Society of America—A 7(7): 1209–1217.
Nagy, A. L., Young, T., and Neriani, K. (2004). Combining information in different color-coding mechanisms to facilitate visual search. Vision Research 44(25): 2971–2980.
Najemnik, J. and Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature 434(7031): 387–391.
Nakayama, K. and Silverman, G. H. (1986). Serial and parallel processing of visual feature conjunctions. Nature 320: 264–265.
Neider, M. B. and Zelinsky, G. J. (2006). Scene context guides eye movements during visual search. Vision Research 46(5): 614–621.
Neider, M. B., Boot, W. R., and Kramer, A. F. (2010). Visual search for real-world targets under conditions of high target–background similarity: Exploring training and transfer in younger and older adults. Acta Psychologica 134(1): 29–39.
Neisser, U. (1963). Decision time without reaction time: Experiments in visual scanning. American Journal of Psychology 76: 376–385.
Nodine, C. F., Krupinski, E. A., and Kundel, H. L. (1993). Visual processing and decision making in search and recognition of targets. In D. Brogan, A. Gale, and K. Carr (eds.), Visual Search 2 (pp. 239–249). London: Taylor & Francis.
Notebaert, L., Crombez, G., Van Damme, S., De Houwer, J., and Theeuwes, J. (2011). Signals of threat do not capture, but prioritize, attention: A conditioning approach. Emotion 11(1): 81–89.
Nothdurft, H.-C. (1991). Different effects from spatial frequency masking in texture segregation and texton detection tasks. Vision Research 31(2): 299–320.
Nothdurft, H.-C. (1993a). The role of features in preattentive vision: Comparison of orientation, motion and color cues. Vision Research 33(14): 1937–1958.
Nothdurft, H.-C. (1993b). The conspicuousness of orientation and visual motion. Spatial Vision 7(4): 341–366.
Nothdurft, H.-C. (1993c). Faces and facial expression do not pop-out. Perception 22: 1287–1298.
Nothdurft, H.-C. (2000). Salience from feature contrast: Additivity across dimensions. Vision Research 40: 1183–1201.
O’Toole, A. J. and Walker, C. L. (1997). On the preattentive accessibility of stereoscopic disparity: Evidence from visual search. Perception & Psychophysics 59(2): 202–218.
Öhman, A., Flykt, A., and Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General 130(3): 466–478.
Oliva, A. and Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision 42(3): 145–175.
Oliva, A., Torralba, A., Castelhano, M. S., and Henderson, J. M. (2003). Top-down control of visual attention in object detection. Paper presented at the IEEE International Conference on Image Processing, 14–17 September, Barcelona, Spain.
Oliva, A. (2005). Gist of the scene. In L. Itti, G. Rees, and J. Tsotsos (eds.), Neurobiology of Attention (pp. 251–257). San Diego, Calif.: Academic Press/Elsevier.
Olzak, L. A. and Thomas, J. P. (1986). Seeing spatial patterns. In K. R. Boff, L. Kaufmann, and J. P. Thomas (eds.), Handbook of Perception and Human Performance (ch. 7). New York: Wiley & Sons.
Ostrovsky, Y., Cavanagh, P., and Sinha, P. (2004). Perceiving illumination inconsistencies in scenes. Perception 34: 1301–1314.
Paffen, C., Hooge, I., Benjamins, J., and Hogendoorn, H. (2011). A search asymmetry for interocular conflict. Attention, Perception, & Psychophysics 73(4): 1042–1053.
Palanica, A. and Itier, R. J. (2011). Searching for a perceived gaze direction using eye tracking. Journal of Vision 11(2): article 19. doi: 10.1167/11.2.19.
Palmer, E. M., Horowitz, T. S., Torralba, A., and Wolfe, J. M. (2011). What are the shapes of response time distributions in visual search? Journal of Experimental Psychology: Human Perception and Performance 37(1): 58–71.
Palmer, J. (1995). Attention in visual search: Distinguishing four causes of a set size effect. Current Directions in Psychological Science 4(4): 118–123.
Palmer, J. and McLean, J. (1995). Imperfect, unlimited-capacity, parallel search yields large set-size effects. Paper presented at the Society for Mathematical Psychology, Irvine, Calif.
Palmer, J., Verghese, P., and Pavel, M. (2000). The psychophysics of visual search. Vision Research 40(10–12): 1227–1268.
Parkhurst, D., Law, K., and Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research 42(1): 107–123.
Pelli, D. G. and Tillman, K. A. (2008). The uncrowded window for object recognition. Nature Neuroscience 11(10): 1129–1135.
Peterson, M. S., Kramer, A. F., Wang, R. F., Irwin, D. E., and McCarley, J. S. (2001). Visual search has memory. Psychological Science 12(4): 287–292.
Pilon, D. and Friedman, A. (1998). Grouping and detecting vertices in 2-D, 3-D, and quasi-3-D objects. Canadian Journal of Experimental Psychology 52(3): 114–127.
Pomerantz, J. R. and Pristach, E. A. (1989). Emergent features, attention, and perceptual glue in visual form perception. Journal of Experimental Psychology: Human Perception and Performance 15(4): 635–649.
Portilla, J. and Simoncelli, E. P. (2000). A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision 40(1): 49–71.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology 32: 3–25.
Pratt, J. and Abrams, R. A. (1995). Inhibition of return to successively cued spatial locations. Journal of Experimental Psychology: Human Perception and Performance 21(6): 1343–1353.
Pratt, J., Radulescu, P. V., Guo, R. M., and Abrams, R. A. (2010). It’s alive! Animate motion captures visual attention. Psychological Science 21(11): 1724–1730.
Prinzmetal, W. and Keysar, B. (1989). Functional theory of illusory conjunctions and neon colors. Journal of Experimental Psychology: General 118(2): 165–190.
Prinzmetal, W. (1991). Automatic processes in word perception: An analysis from illusory conjunctions. Journal of Experimental Psychology: Human Perception and Performance 17(4): 902–923.
Purcell, D. G., Stewart, A. L., and Skov, R. B. (1996). It takes a confounded face to pop out of a crowd. Perception 25(9): 1091–1108.
Purcell, B. A., Heitz, R. P., Cohen, J. Y., Schall, J. D., Logan, G. D., and Palmeri, T. J. (2010). Neurally constrained modeling of perceptual decision making. Psychological Review 117(4): 1113–1143.
Rakison, D. H. and Derringer, J. (2008). Do infants possess an evolved spider-detection mechanism? Cognition 107(1): 381–393.
Ramachandran, V. S. (1988). Perception of shape from shading. Nature 331: 163–165.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review 85(2): 59–108.
Ratcliff, R., Philiastides, M. G., and Sajda, P. (2009). Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG. Proceedings of the National Academy of Sciences of the United States of America 106(16): 6539–6544.
Rauschenberger, R. (2003). Attentional capture by auto- and allo-cues. Psychonomic Bulletin & Review 10(4): 814–842.
Reddy, L., Wilken, P., and Koch, C. (2004). Face-gender discrimination is possible in the near-absence of attention. Journal of Vision 4(2): 106–117.
Reddy, L. and Koch, C. (2006). Face identification in the near-absence of focal attention. Vision Research 46(15): 2336–2343.
Reijnen, E., Krummenacher, J., and Wolfe, J. M. (2011). Coarse guidance by numerosity in visual search. Attention, Perception, & Psychophysics 73(1): 16–28.
Reinecke, A., Rinck, M., and Becker, E. S. (2006). Spiders crawl easily through the bottleneck: Visual working memory for negative stimuli. Emotion 6(3): 438–449.
Rensink, R. A. and Enns, J. T. (1998). Early completion of occluded objects. Vision Research 38: 2489–2505.
Rensink, R. A. and Cavanagh, P. (2004). The influence of cast shadows on visual search. Perception 33(11): 1339–1358.
Rosenholtz, R. (2001a). Search asymmetries? What search asymmetries? Perception & Psychophysics 63(3): 476–489.
Rosenholtz, R. (2001b). Visual search for orientation among heterogeneous distractors: Experimental results and implications for signal-detection theory models of search. Journal of Experimental Psychology: Human Perception and Performance 27(4): 985–999.
Rosenholtz, R., Chan, S., and Balas, B. (2009). A crowded model of visual search. Journal of Vision 9(8): 1197.
Royden, C. S., Wolfe, J., and Klempen, N. (2001). Visual search asymmetries in motion and optic flow fields. Perception & Psychophysics 63(3): 436–444.
Rubenstein, J. (2001). Test and Evaluation Plan: X-ray Image Screener Selection Test (No. DOT/FAA/AR-01/47). Washington, D.C.: Office of Aviation Research.
Rubin, J. M. and Kanwisher, N. (1985). Topological perception: Holes in an experiment. Perception & Psychophysics 37: 179–180.
Rushton, S. K., Bradshaw, M. F., and Warren, P. A. (2007). The pop-out of scene-relative object movement against retinal motion due to self-movement. Cognition 105: 237–245.
Sagi, D. (1988). The combination of spatial frequency and orientation is effortlessly perceived. Perception & Psychophysics 43: 601–603.
Sagi, D. (1990). Detection of an orientation singularity in Gabor textures: Effect of signal density and spatial frequency. Vision Research 30(9): 1377–1388.
Sakai, K., Morishita, M., and Matsumoto, H. (2007). Set-size effects in simple visual search for contour curvature. Perception 36(3): 323–334.
Sanocki, T. and Epstein, W. (1997). Priming spatial layout of scenes. Psychological Science 8: 374–378.
Scolari, M., Kohnen, A., Barton, B., and Awh, E. (2007). Spatial attention, preview, and popout: Which factors influence critical spacing in crowded displays? Journal of Vision 7(2): 1–23.
Serences, J. T. and Yantis, S. (2006). Selective visual attention and perceptual coherence. Trends in Cognitive Sciences 10(1): 38–45.
Sherman, A. M., Greene, M. R., and Wolfe, J. M. (2011). Depth and size information reduce effective set size for visual search in real-world scenes. Journal of Vision 11(11): 1334.
Shiffrin, R. M. and Gardner, G. T. (1972). Visual processing capacity and attentional control. Journal of Experimental Psychology 93(1): 72–82.
Shiffrin, R. M. and Schneider, W. (1977). Controlled and automatic human information processing, II: Perceptual learning, automatic attending, and a general theory. Psychological Review 84: 127–190.
Shipp, S. (2004). The brain circuitry of attention. Trends in Cognitive Sciences 8(5): 223–230.
Shneor, E. and Hochstein, S. (2006). Eye dominance effects in feature search. Vision Research 46(25): 4258–4269.
Shore, D. I. and Klein, R. M. (2000). On the manifestations of memory in visual search. Spatial Vision 14(1): 59–75.
Sigman, M. and Gilbert, C. D. (2000). Learning to find a shape. Nature Neuroscience 3(3): 264–269.
Simons, D. J. and Chabris, C. F. (2010). The Invisible Gorilla. New York: Crown/Random House.
Sireteanu, R. and Rettenbach, R. (1995). Perceptual learning in visual search: Fast enduring but non-specific. Vision Research 35(14): 2037–2043.
Skarratt, P. A., Cole, G. G., and Gellatly, A. R. (2009). Prioritization of looming and receding objects: Equal slopes, different intercepts. Attention, Perception, & Psychophysics 71(4): 964–970.
Smith, S. L. (1962). Color coding and visual search. Journal of Experimental Psychology 64: 434–440.
Soares, S. C., Esteves, F., Lundqvist, D., and Ohman, A. (2009). Some animal specific fears are more specific than others: Evidence from attention and emotion measures. Behaviour Research and Therapy 47(12): 1032–1042.
Sousa, R., Brenner, E., and Smeets, J. B. J. (2009). Slant cues are combined early in visual processing: Evidence from visual search. Vision Research 49(2): 257–261.
Spalek, T. M., Kawahara, J., and Di Lollo, V. (2009). Flicker is a primitive visual attribute. Canadian Journal of Experimental Psychology 63(4): 319–322.
Stein, T., Senju, A., Peelen, M. V., and Sterzer, P. (2011). Eye contact facilitates awareness of faces during interocular suppression. Cognition 119(2): 307–311.
Stuart, G. W. (1993). Preattentive processing of object size: Implications for theories of size perception. Perception 22(10): 1175–1193.
Sun, J. and Perona, P. (1996a). Preattentive perception of elementary three-dimensional shapes. Vision Research 36(16): 2515–2529.
Sun, J. and Perona, P. (1996b). Where is the sun? Investigative Ophthalmology and Visual Science 37(3): S935.
Suzuki, S. and Cavanagh, P. (1995). Facial organization blocks access to low-level features: An object inferiority effect. Journal of Experimental Psychology: Human Perception and Performance 21(4): 901–913.
Symons, L. A., Cuddy, F., and Humphrey, K. (2000). Orientation tuning of shape from shading. Perception & Psychophysics 62(3): 557–568.
Takeda, Y. (2004). Search for multiple targets: Evidence for memory-based control of attention. Psychonomic Bulletin & Review 11(1): 71–76.
Takeuchi, T. (1997). Visual search of expansion and contraction. Vision Research 37(15): 2083–2090.
Taylor, S. and Badcock, D. (1988). Processing feature density in preattentive perception. Perception & Psychophysics 44: 551–562.
Theeuwes, J. and Kooi, J. L. (1994). Parallel search for a conjunction of shape and contrast polarity. Vision Research 34(22): 3013–3016.
Theeuwes, J. (1995). Abrupt luminance change pops out; abrupt color change does not. Perception & Psychophysics 57(5): 637–644.
Thompson, K. G. and Bichot, N. P. (2004). A visual salience map in the primate frontal eye field. Progress in Brain Research 147: 249–262.
Thornton, T. L. and Gilden, D. L. (2007). Parallel and serial processes in visual search. Psychological Review 114(1): 71–103.
Tipper, S. P., Weaver, B., and Watson, F. L. (1996). Inhibition of return to successively cued spatial locations: Commentary on Pratt and Abrams (1995). Journal of Experimental Psychology: Human Perception and Performance 22(5): 1289–1293.
Tipples, J., Young, A., Quinlan, P., Broks, P., and Ellis, A. (2002). Searching for threat. Quarterly Journal of Experimental Psychology 55(3): 1007–1026.
Tong, F. and Nakayama, K. (1999). Robust representations for faces: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance 25(4): 1016–1035.
Torralba, A., Oliva, A., Castelhano, M. S., and Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search. Psychological Review 113(4): 766–786.
Townsend, J. T. (1971). A note on the identification of parallel and serial processes. Perception & Psychophysics 10: 161–163.
Townsend, J. T. (1990). Serial and parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science 1: 46–54.
Townsend, J. T. and Wenger, M. J. (2004). The serial-parallel dilemma: A case study in a linkage of theory and method. Psychonomic Bulletin & Review 11(3): 391–418.
Treisman, A. M. and Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology 12: 97–136.
Treisman, A. M. and Schmidt, H. (1982). Illusory conjunctions in the perception of objects. Cognitive Psychology 14: 107–141.
Treisman, A. M. and Souther, J. (1985). Search asymmetry: A diagnostic for preattentive processing of separable features. Journal of Experimental Psychology: General 114: 285–310.
Treisman, A. M. and Souther, J. (1986). Illusory words: The roles of attention and of top-down constraints in conjoining letters to form words. Journal of Experimental Psychology: Human Perception and Performance 12: 3–17.
Treisman, A. M. (1988). Features and objects: The 14th Bartlett memorial lecture. Quarterly Journal of Experimental Psychology—A 40: 201–237.
Treisman, A. M. and Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review 95: 15–48.
Treisman, A. M. (1996). The binding problem. Current Opinion in Neurobiology 6: 171–178.
Treisman, A. M. (2006). How the deployment of attention determines what we see. Visual Cognition 14(4–8): 411–443.
Treue, S. and Trujillo, J. C. M. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature 399: 575–579.
Tsal, Y. (1989a). Do illusory conjunctions support feature integration theory? A critical review of theory and findings. Journal of Experimental Psychology: Human Perception and Performance 15(2): 394–400.
Tsal, Y. (1989b). Further comments on feature integration theory. A reply to Briand and Klein. Journal of Experimental Psychology: Human Perception and Performance 15(2): 407–410.
Tsal, Y., Meiran, N., and Lamy, D. (1995). Towards a resolution theory of visual attention. Visual Cognition 2(2/3): 313–330.
Tsotsos, J. (2011). A Computational Perspective on Visual Attention. Cambridge, Mass.: MIT Press.
Turatto, M. and Galfano, G. (2000). Color, form, and luminance capture attention in visual search. Vision Research 40(13): 1639–1643.
VanRullen, R. and Thorpe, S. J. (2001). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception 30(6): 655–668.
VanRullen, R. (2006). On second glance: Still no high-level pop-out effect for faces. Vision Research 46(18): 3017–3027.
Verghese, P. and Nakayama, K. (1994). Stimulus discriminability in visual search. Vision Research 34(18): 2453–2467.
Verghese, P. and Pelli, D. G. (1994). The scale bandwidth of visual search. Vision Research 34(7): 955–962.
Verghese, P. (2001). Visual search and attention: A signal detection approach. Neuron 31: 523–535.
Vickery, T. J., King, L.-W., and Jiang, Y. (2005). Setting up the target template in visual search. Journal of Vision 5(1): 81–92.
Vidyasagar, T. R. (1999). A neuronal model of attentional spotlight: Parietal guiding the temporal. Brain Research Reviews 30(1): 66–76.
Virzi, R. A. and Egeth, H. E. (1984). Is meaning implicated in illusory conjunctions? Journal of Experimental Psychology: Human Perception and Performance 10: 573–580.
Vo, M. L.-H. and Henderson, J. M. (2009). Does gravity matter? Effects of semantic and syntactic inconsistencies on the allocation of attention during scene perception. Journal of Vision 9(3): 1–15.
Vo, M. L.-H. and Wolfe, J. M. (2012). When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal of Experimental Psychology: Human Perception and Performance 38(1): 23–41.
Von Grünau, M. and Dubé, S. (1994). Visual search asymmetry for viewing direction. Perception & Psychophysics 56(2): 211–220.
Von Grünau, M. and Anston, C. (1995). The detection of gaze direction: A stare-in-the-crowd effect. Perception 24(11): 1297–1313.
von Muhlenen, A. and Muller, H. (1999). Visual search for motion-form conjunctions: Selective attention to movement direction. Journal of General Psychology 126: 289–317.
Wang, L., Zhang, K., He, S., and Jiang, Y. (2010). Searching for life motion signals: Visual search asymmetry in local but not global biological-motion processing. Psychological Science 21(8): 1083–1089.
Wang, Q., Cavanagh, P., and Green, M. (1994). Familiarity and pop-out in visual search. Perception & Psychophysics 56(5): 495–500.
Watt, R., Craven, B., and Quinn, S. (2007). A role for eyebrows in regulating the visibility of eye gaze direction. Quarterly Journal of Experimental Psychology (Hove) 60(9): 1169–1177.
Wheatley, C., Cook, M. L., and Vidyasagar, T. R. (2004). Surface segregation influences pre-attentive search in depth. NeuroReport 15(2): 303–305.
Williams, D. and Julesz, B. (1992). Perceptual asymmetry in texture perception. Proceedings of the National Academy of Sciences USA 89(14): 6531–6534.
Williams, L. G. (1966). The effect of target specification on objects fixated during visual search. Perception & Psychophysics 1: 315–318.
Williams, M. A., Moss, S. A., and Bradshaw, J. L. (2002). Searching for the eyes and mouth: Is the stare-in-the-crowd effect specific to the eyes? Perception & Psychophysics, MS 02–153.
Wolfe, J. M. and Franzel, S. L. (1988). Binocularity and visual search. Perception & Psychophysics 44: 81–93.
Wolfe, J. M., Cave, K. R., and Franzel, S. L. (1989). Guided Search: An alternative to the Feature Integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance 15: 419–433.
Wolfe, J. M., Yu, K. P., Stewart, M. I., Shorter, A. D., Friedman-Hill, S. R., and Cave, K. R. (1990). Limitations on the parallel guidance of visual search: Color X color and orientation X orientation conjunctions. Journal of Experimental Psychology: Human Perception and Performance 16(4): 879–892.
Wolfe, J. M. and Friedman-Hill, S. R. (1992). Visual search for orientation: The role of angular relations between targets and distractors. Spatial Vision 6(3): 199–208.
Wolfe, J. M., Yee, A., and Friedman-Hill, S. R. (1992). Curvature is a basic feature for visual search. Perception 21: 465–480.
Wolfe, J. M., Friedman-Hill, S. R., Stewart, M. I., and O’Connell, K. M. (1992). The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance 18(1): 34–49.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review 1(2): 202–238.
Wolfe, J. M., Friedman-Hill, S. R., and Bilsky, A. B. (1994). Parallel processing of part/whole information in visual search tasks. Perception & Psychophysics 55(5): 537–550.
Wolfe, J. M. and Bennett, S. C. (1997). Preattentive object files: Shapeless bundles of basic features. Vision Research 37(1): 25–43.
Wolfe, J. M. (1998). What do 1,000,000 trials tell us about visual search? Psychological Science 9(1): 33–39.
Wolfe, J. M., Klempen, N. L., and Shulman, E. P. (1999). Which end is up? Two representations of orientation in visual search. Vision Research 39(12): 2075–2086.
Wolfe, J. M., Klempen, N., and Dahlen, K. (2000). Post-attentive vision. Journal of Experimental Psychology: Human Perception and Performance 26(2): 693–716.
Wolfe, J. M. (2001). Asymmetries in visual search: An introduction. Perception & Psychophysics 63(3): 381–389.
Wolfe, J. M. (2003). Moving towards solutions to some enduring controversies in visual search. Trends in Cognitive Sciences 7(2): 70–76.
Wolfe, J. M. and DiMase, J. S. (2003). Do intersections serve as basic features in visual search? Perception 32(6): 645–656.
Wolfe, J. M., Horowitz, T., Kenner, N. M., Hyle, M., and Vasan, N. (2004). How fast can you change your mind? The speed of top-down guidance in visual search. Vision Research 44(12): 1411–1426.
Wolfe, J. M. and Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience 5(6): 495–501.
Wolfe, J. M. (2005). Guidance of visual search by preattentive information. In L. Itti, G. Rees, and J. Tsotsos (eds.), Neurobiology of Attention (pp. 101–104). San Diego, Calif.: Academic Press/Elsevier.
Wolfe, J. M., Horowitz, T. S., and Kenner, N. M. (2005). Rare items often missed in visual searches. Nature 435: 439–440.
Wolfe, J. M., Reinecke, A., and Brawn, P. (2006). Why don’t we see changes? The role of attentional bottlenecks and limited visual memory. Visual Cognition 14(4–8): 749–780.
Wolfe, J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In W. Gray (ed.), Integrated Models of Cognitive Systems (pp. 99–119). New York: Oxford University Press.
Wolfe, J. M., Horowitz, T. S., VanWert, M. J., Kenner, N. M., Place, S. S., and Kibbi, N. (2007). Low target prevalence is a stubborn source of errors in visual search tasks. Journal of Experimental Psychology: General 136(4): 623–638.
Wolfe, J. M., Reijnen, E., Van Wert, M. J., and Kuzmova, Y. (2009). In visual search, guidance by surface type is different than classic guidance. Vision Research 49(7): 765–773.
Wolfe, J. M. (2010). Bound to guide: A surprising, preattentive role for conjunctions in visual search. Paper presented at the Vision Science Society meeting: Naples, Fla, May 2010.
Wolfe, J. M. and Myers, L. (2010). Fur in the midst of the waters: Visual search for material type is inefficient. Journal of Vision 10(9): 8.
Wolfe, J. M., Palmer, E. M., and Horowitz, T. S. (2010). Reaction time distributions constrain models of visual search. Vision Research 50: 1304–1311.
Wolfe, J. M. and VanWert, M. J. (2010). Varying target prevalence reveals two, dissociable decision criteria in visual search. Current Biology 20(2): 121–124.
Wolfe, J. M., Reijnen, E., Horowitz, T., Pedersini, R., Pinto, Y., and Hulleman, J. (2011). How does our search engine ‘see’ the world? The case of amodal completion. Attention, Perception, & Psychophysics 73(4): 1054–1064.
Wolfe, J. M., Alvarez, G. A., Rosenholtz, R., Kuzmova, Y. I., and Sherman, A. M. (2011). Visual search for arbitrary objects in real scenes. Attention, Perception, & Psychophysics 73(6): 1650–1671.
Wolfe, J. M., Birdwell, R. L., and Evans, K. K. (2011). If you don’t find it often, you often don’t find it: Disease prevalence is a source of miss errors in screening mammography. Paper presented at the Annual Radiological Society of North America meeting: Chicago, Ill., 30 November 2011.
Wolfe, J. M., Vo, M. L., Evans, K. K., and Greene, M. R. (2011). Visual search in scenes involves selective and nonselective pathways. Trends in Cognitive Sciences 15(2): 77–84.
Wolfe, J. M. (2012). The rules of guidance in visual search. Lecture Notes in Computer Science 7143: Indo-Japan Conference on Perception and Machine Intelligence, 1–10.
Yamane, Y., Carlson, E. T., Bowman, K. C., Wang, Z., and Connor, C. E. (2008). A neural code for three-dimensional object shape in macaque inferotemporal cortex. Nature Neuroscience 11(11): 1352–1360.
Yantis, S. and Jonides, J. (1990). Abrupt visual onsets and selective attention: Voluntary versus automatic allocation. Journal of Experimental Psychology: Human Perception and Performance 16(1): 121–134.
Yantis, S. (1993). Stimulus-driven attentional capture. Current Directions in Psychological Science 2(5): 156–161.
Zehetleitner, M., Krummenacher, J., Geyer, T., Hegenloh, M., and Müller, H. (2011). Dimension intertrial and cueing effects in localization: Support for pre-attentively weighted one-route models of saliency. Attention, Perception, & Psychophysics 73(2): 349–363.
Zhang, Y., Meyers, E. M., Bichot, N. P., Serre, T., Poggio, T. A., and Desimone, R. (2011). Object decoding with attention in inferior temporal cortex. Proceedings of the National Academy of Sciences 108(21): 8850–8855.
Zhaoping, L. (2008). Attention capture by eye of origin singletons even without awareness: A hallmark of a bottom-up saliency map in the primary visual cortex. Journal of Vision 8(5): 1–18.
Zhaoping, L. and Frith, U. (2011). A clash of bottom-up and top-down processes in visual search: The reversed letter effect revisited. Journal of Experimental Psychology: Human Perception and Performance 37(4): 997–1006.
Zohary, E. and Hochstein, S. (1989). How serial is serial processing in vision? Perception 18: 191–200.