Philosophy of Neuroscience
Abstract and Keywords
This article indicates problems that can be addressed in the philosophy of neuroscience. The first issue is to understand the shape or nature of the science as a whole. Neuroscience is a data-rich discipline: a science that consists of an abundance of facts but no overarching theories. Another problem concerns how to analyze core concepts, principles, methods, and fundamental questions unique to that science. Neuroscientists follow the principle of functional localization, which states that brain functions are localized to particular anatomical sites. The study of how the brain supports consciousness is a relatively new development. The article also illuminates traditional philosophical questions with attention to empirical results. A broad range of topics has been addressed in neurophilosophy, including many aspects of perception, representation, the emotions, and the nature of pain. Neuroscience has also proved relevant to the philosophy of psychology.
Introduction: Is There a Philosophy of Neuroscience?
Among the beneficiaries of the biological revolution in the latter half of the twentieth century were the fields that study the brain and nervous system. This group of disciplines, often individuated by chosen methodology or “level” of investigation and united by a common focus on the nervous system, has come to be known collectively as “the neurosciences” or just “neuroscience.” A young science, neuroscience has enjoyed a period of astounding expansion in recent years. The Society for Neuroscience, established in 1970 with 500 members, now has a membership of 37,500. The field has a number of dedicated journals; neuroscience papers appear with considerable frequency in Science and Nature; and neuroscience departments and programs in the universities in the United States alone number in the hundreds. Neuroscience has all of the marks of a serious, concerted scientific endeavor with the social structure and the public, private, and governmental support to flourish. Many substantial sciences have led to a subdiscipline in philosophy that focuses on that science. Is there a philosophy of neuroscience?
The answer to this question is both yes and no. There is certainly a growing number of philosophers who are interested in applying the findings of neuroscience to a variety of questions in the philosophy of mind, epistemology, and other branches of philosophy. Many of these philosophers identify themselves as “neurophilosophers,” using the neologism coined by Patricia Churchland (1986), who is credited with founding the field (see also Churchland & Sejnowski 1992; Churchland 2002). However, neurophilosophy is often distinguished from the philosophy of neuroscience, which is taken to be about the philosophical problems internal to neuroscience itself: the analysis of theoretical concepts; the investigation of the science's methodologies; the relation of neuroscience to other sciences; and other such questions. Unlike neurophilosophy, the philosophy of neuroscience, on this conception of the discipline, has not received much attention as a branch of philosophical inquiry. There are but a handful of philosophers of science who focus on neuroscience.
For the purposes of this chapter, however, we will take “philosophy of neuroscience” more broadly, to be any philosophical investigation where neuroscience plays an important role. On this view, there is at least one area—neurophilosophy—in which the philosophy of neuroscience is thriving. Although this is theft rather than honest toil, it permits us to survey a much broader field than we would otherwise be able to do. The aim of this chapter is not to give a summary of this field, which is fairly clearly in its infancy, but rather to follow other philosophers of neuroscience (e.g., Bechtel et al. 2001; Machamer et al. 2001; Hacker & Bennett 2003) and try to indicate something of the range of problems that can be addressed in the philosophy of neuroscience, broadly construed.
In order to elucidate the jobs for a philosophy of neuroscience, one might begin by determining the motivations for doing philosophy of any science. There are at least four major reasons for such an enterprise: (1) to understand the shape or nature of that science as a whole; (2) to analyze or elucidate particular core concepts, principles, methods, and fundamental questions unique to that science. These might also include the assumptions that drive the formulation of experiments, the interpretation of experimental results, and conceptual puzzles particular to that science; (3) to illuminate traditional philosophical questions with attention to empirical results; and (4) to better understand the structure or nature of science as a whole. While the core of philosophy of neuroscience may be best characterized by the first two of these motivations, much more of the work done to date has focused on the third and fourth. This highlights the scope of work yet to be undertaken in philosophy of neuroscience. The discussion below is divided into four sections, one for each of these four issues.
A few explanatory remarks are necessary before we begin. Although our topic is the philosophy of neuroscience, we sometimes discuss the work of philosophers and sometimes the work of neuroscientists themselves. We do this for two reasons. First, some of the debates in the philosophies of physics and of biology include both philosophers and scientists, and this seems to us to be a desirable thing. There is no reason, therefore, that the philosophy of neuroscience ought to be a club for philosophers only. Second, rather than restricting ourselves to questions that philosophers have already addressed, part of our aim here is to point to some of the neuroscientific literature that philosophers might fruitfully investigate in the future. For reasons of space, we usually restrict ourselves to a single illustration or author in each subsection, though we try, where possible, to mention other issues and writers of interest. Our choice of illustration should not be taken to imply that we think this or that bit of philosophy or neuroscience is the most important in the area. Sometimes, our choice is dictated by our own knowledge; sometimes by a sense of what might be interesting to nonspecialists; and sometimes by what seems striking. Finally, we have tried to be ambitious and to cover as wide a territory as possible, but we have not attempted to be exhaustive.
1. Understanding the Shape of Neuroscience
A central motivation for doing philosophy of a particular science is to understand the nature of that science. Physics, for instance, is a field that has long been a subject of philosophical study. Physics has a number of very broad theories, including the theories of special and general relativity, quantum mechanics, electromagnetic field theory, and statistical mechanics. While physics lacks a single overarching theory (the much‐sought‐after “Theory of Everything”) that integrates all of the subtheories, the scope of each of the theories is broad enough and the phenomena subsumed under them diverse enough that physics is considered particularly theory‐rich. Philosophy of physics tends to focus upon the interpretation of theories, such as quantum mechanics, and on elucidating concepts that play a role in these theories, such as time, spacetime, entropy, etc. Similarly, a central focus of the philosophy of biology is evolutionary biology, a particularly theory‐rich domain. The philosophy of evolutionary biology focuses largely on elaborating and elucidating concepts in Darwinian theory and the modern synthesis (see, for example, chapters 1, 2, 9, 12 in this volume). Thus, the most prominent philosophies of a science are characterized by broad and successful theories.
Neuroscience, in contrast, has very few broad theories. It might be said that the field is governed by a few global frameworks—a crude physicalism and perhaps computationalism—but these serve as fundamental or guiding assumptions rather than theories: They don't provide neuroscientists with predictive power in the way that physical theories do. Furthermore, the assumptions of physicalism and computationalism aren't proprietary to neuroscience but are borrowed from other fields and applied to nervous systems. Given its lack of theoretical richness, and the rather local character of the theories that do exist, neuroscience looks quite different from both physics and evolutionary biology. These features raise the following questions: Is a theoretical framework completely lacking in neuroscience? And, if not, to what extent does it have a theory? In what ways does the theory or conceptual framework it has compare to those of more mature sciences? What consequences might these differences have for a philosophy of neuroscience? We might even ask: Can there be a philosophy of neuroscience without a broad or successful theory?
To this last, we think the answer is yes. Certainly it is possible, and perhaps invaluable, to approach philosophical questions about the brain and brain science in the absence of an overarching theory. The character of the philosophy that results may be radically different from the character of the traditional philosophies of science, but there is no mold that the philosophy of a science need fit. A better question may be: What would a philosophy of neuroscience look like, if it lacks a broad theoretical foundation? One promising supposition is that a philosophy of neuroscience will focus more upon the articulation of methodological concepts employed in research than do the philosophies that focus on sciences with rich theories. Further, one might think that the absence of a broad theory might give philosophers of neuroscience greater scope for making a contribution to the conceptual development of neuroscience. Thus, a philosophy of neuroscience may appear to be more piecemeal or unintegrated than the philosophies of physics or of biology, and this may reflect the openness of the inquiry, the lack of an accepted theoretical framework, and even the absence of an accepted set of questions.
Despite the lack of broad theories, neuroscience is a data‐rich discipline: We know many facts about how brains develop, operate, and respond to insult. The rate of accretion of factual knowledge can be appreciated by perusing the many thousands of posters and talks presented each year at the annual meeting for the Society for Neuroscience. On the basis of these data, neuroscientists have developed a large stable of local explanations or domain‐restricted theories. We prefer to reserve the term “theory” for abstract, highly general characterizations of phenomena that provide a big picture of the mechanics or dynamics of a system, and instead to characterize the bulk of the neuroscientific understanding we currently possess in terms of models—relatively domain‐specific sketches of how various processes occur.
So, whereas neuroscience is theory‐poor, it is model‐rich. We understand how neurons integrate inputs from other neurons and what causes them to fire; how synaptic connections are strengthened; and how this process may be involved in learning and memory. We have a coarse understanding of how visual processing works and which brain areas are involved in a number of higher cognitive functions. The list of articulated models is extensive, yet theoretical understanding remains piecemeal.
A further look at the types of fundamental principles that are up for grabs shows neuroscience to be especially interesting philosophically. For example, is it right to think of the brain as a computing device? Do individual neurons compute, and what should neuroscience inherit from computationalism? If neurons compute, how should we characterize their computational tasks? What is the correct level of explanation to seek in understanding brain function? Should we attend to the activities of single neurons, of cell assemblies, or of subcellular processes? What properties of neurons are functionally relevant? Is the functionally significant information coded in action potentials (“spikes”)—the all‐or‐nothing changes in voltage by which neurons affect neighboring neurons—or in spike properties, such as rate, temporal pattern, or temporal correlation? Should neuroscientists be attending to the statistics of neural firing, the dynamical properties of networks of neurons, or other neural properties?
These questions are all on the table, and all have philosophical implications. The fundamental nature of these inquiries leads some to question whether we even enjoy basic knowledge about brain function because of the extent to which our understanding may depend on suspect assumptions and principles (Hardcastle 1999b). Rather than showing that neuroscience lacks the theoretical foundation for philosophical analysis, these issues instead reveal the rich potential that a philosophical approach to neuroscience offers. Indeed, the reluctance of many neuroscientists to theorize much beyond the data (for example, in consciousness studies) may create a niche for philosophers of neuroscience.
The question remains whether neuroscience is the sort of science that is doomed to be theory‐poor, or whether this poverty is due to its relative immaturity as a science. It is worth noting that most methods for studying the brain's anatomy and physiology became available only in the mid‐twentieth century, and many of the most powerful were invented only in the 1990s. The brain is an exceedingly complex biological organ which has evolved to perform a variety of sophisticated tasks. It yields its secrets grudgingly. Nonetheless, there is no principled reason why we cannot expect that, in time, we will be able to formulate more general theories about the neural processing that underlies these diverse functions.
For the present, neuroscience is a science that consists of an abundance of facts, a large number of local models, but no articulated overarching theories. It thus differs substantially from the exemplars of science classically taken as objects for philosophy, such as physics, and for that reason, a philosophy of neuroscience may prove to be especially interesting.
2. Elucidating Concepts, Principles, Methods, and Fundamental Questions
A second goal of a philosophy of neuroscience is to clarify core concepts and principles of neuroscience. In particular, ideas that are not part of neuroscientific theory but which constitute assumptions on which research depends are of particular importance. The list of potential targets for philosophical treatment is long, and it remains to be seen which concepts will prove to be philosophically interesting or important. Some of the concepts we think might be worth investigating are the neural coding of information (e.g., Barlow 1996; Black 1994; Eggermont 1998; Garson 2003); the concept of perceptual binding and its neural mechanisms (e.g., Revonsuo 1999; Clark 2001; Hardcastle 1996, 1997); and the related concepts of plasticity, development, and innateness (Buller & Hardcastle 2000; Quartz 2003). Clear candidates for a philosophical exploration of neuroscientific methods include tissue staining and labeling; tract tracing; lesion studies; single‐cell and multicellular recording techniques; pharmacological manipulations; the various methods of neuroimaging; and genetic and molecular biological techniques, such as knockouts, knockins, and gene silencing. Philosophical investigation of these techniques will have to ask the epistemological question: What is learned by using them? This in turn should yield a clearer conception of the norms that these techniques must satisfy to deliver reliable data.
Here, we provide brief discussions of an important principle, method, and fundamental question in neuroscience, as well as an extended example of the philosophy of one important concept: the receptive field.
2.1. Core principles: Functional localization
Since the early days of the study of the brain, most neuroscientists have been committed to the principle of functional localization—the view that brain functions are localized to particular anatomical sites (e.g., Hardcastle & Stewart 2002; Lloyd 2000; Mundale 2002; Young 1990). Implicit adherence to the principle shapes both the design and interpretation of many neuroscientific experiments. Much of neuroscience is devoted to determining what those functions are and how they are implemented in brain tissue. Numerous techniques, including lesion studies and neuroimaging, confirm that the brain is not equipotential; theoretical arguments from psychology, philosophy, and evolutionary psychology have been developed to support the related notion of modularity in brain systems (Buller & Hardcastle 2000) and the more distantly related notion of psychological modularity (Fodor 1983).
Despite the prevalence of the idea, it remains unclear how we ought to construe the specialization of brain function. What sorts of function are localized to different brain areas, how should they be characterized, and how local are they? These are important questions, at least to the extent that their answers influence both the design and interpretation of many neuroscientific experiments.
For instance, an ongoing debate in the functional‐imaging literature has concerned whether an area in the fusiform gyrus is specialized for faces (Yovel & Kanwisher 2004; Kanwisher & Yovel in press) or for the discrimination of classes of objects for which we have particular expertise (Tarr & Gauthier 2000; Gauthier et al. in press). That debate has unfolded in the context of the shared belief that the ventral visual areas are specialized for recognizing some classes of objects. However, novel methods of analysis in neuroimaging have called even this fundamental principle into question by revealing that numerous cortical areas carry distributed information about many classes of objects and that the identities of objects can be recovered from patterns of low‐level activations across multiple cortical visual areas (Haxby et al. 2001; Hanson et al. 2004). These new developments suggest the need for a rethinking of the nature of widely held commitments about the organization of the visual system as well as about functional specification in the brain.
2.2. Core methods: Functional imaging
The relation of structure to function is central to all of natural science and no less so in the sciences of the mind. Classical artificial intelligence, for example, is famous for arguing that the notions of structure and function are separable: Since the very same computations can be run on different hardware, function can be viewed as only contingently related to its implementation. Neuroscience does not assert a theoretically significant distinction between structure and function but rather sees the two as intertwined. This is not to say that the distinction does not arise. In contemporary cognitive neuroscience, the relation between structure and function is most important in brain‐imaging studies of cognitive function. There are serious doubts about what animal models can tell us about cognition, and, as a result, the advent of brain imaging has heralded a new era in the study of the brain. Functional brain imaging techniques, especially functional magnetic resonance imaging (fMRI), are now widely accessible. In these studies, the brains of participants are monitored while they are doing a cognitive task. The anatomical sites of activation are identified by an increase in metabolism marked by blood flow to that site. Profiles of blood flow changes give insight into the functional anatomy of the task, and studies like this are used to try to make inferences about the cognitive processes involved in task performance.
The central methodological question regarding these studies is: What is the relation between data concerning active brain structures and theories of cognitive function? The brain‐imaging literature is substantial. At the time of this writing, a PubMed search returns almost 7,000 entries for the phrase “brain and f (or functional) MRI” between 1990 and 2006—more than a study a day, on average, for sixteen years. And the practice of brain imaging is becoming so important to the discipline that it is now difficult to get a job in some fields without having done some. Against this background, it is surprising that there are those who think that the answer to the question that opened this paragraph is: none. Further, there are those who believe not only that functional imaging has made no contribution to the understanding of cognitive function but that such a contribution is in principle impossible (see Coltheart 2004; Fodor 1999; Henson 2005; Poeppel 1996; Uttal 2001; Van Orden & Paap 1997).
Van Orden and Paap (1997), for example, argue that functional neuroimaging can contribute to developing cognitive theory only if three assumptions are satisfied, but none of them is likely to be true. The first assumption is that the researcher possesses a correct cognitive theory of the phenomenon in question as well as knowledge of which areas of the brain perform the functions postulated by that theory. One method of carrying out imaging studies is to produce two sets of images, one in which a particular functional cognitive component is activated and a second in which that component alone is not activated. The second image is then subtracted from the first, and the brain area responsible for that cognitive component is identified. If, however, one's cognitive theory is incorrect and does not correctly specify the functional components of the cognitive task, then the contrasting images will fail to isolate the neural basis of the hypothesized cognitive component. Until we have a correct theory of a cognitive function, we cannot begin to identify the neural mechanisms that subserve it.
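The subtractive logic at issue here can be made concrete with a toy example. The sketch below (Python with NumPy; the activation maps and the threshold are invented purely for illustration) subtracts the activation map of a control task from that of an experimental task and thresholds the difference to "localize" the putative cognitive component:

```python
import numpy as np

def subtractive_contrast(task_map, control_map, threshold):
    """Return a boolean map of voxels whose activation difference
    (task minus control) exceeds the threshold."""
    difference = task_map - control_map
    return difference > threshold

# Hypothetical 4x4 activation maps (arbitrary units). The control task
# is assumed to engage every component except the one of interest.
control = np.array([[1.0, 1.0, 1.0, 1.0],
                    [1.0, 2.0, 1.0, 1.0],
                    [1.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, 1.0, 1.0]])

# The experimental task adds activity in one region (bottom-right).
task = control.copy()
task[2:4, 2:4] += 3.0

localized = subtractive_contrast(task, control, threshold=1.0)
print(localized.sum())  # 4 voxels survive the contrast
```

The sketch makes the fragility of the inference visible: if the component of interest fed back and altered activity in the control regions as well, the difference map would no longer isolate a single component, which is precisely Van Orden and Paap's worry.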
The second assumption is that the neural components that implement the cognitive function must be feed‐forward only. Suppose a researcher has a cognitive model that identifies a number of components. If the aim of an imaging task is to isolate the neural locus of a particular component, then the experiment will involve one task in which the component is active and a second which is identical except for the inactivity of that component. As long as components do not interact—that is, the activity of a downstream component does not feed back to earlier components to change their behavior—then subtraction is possible. If, however, there is feedback, then the brain region activated may not represent a single cognitive component but rather the net effect of a change in task as it ramifies through the brain. Since, as Van Orden and Paap argue, feedback is the norm in cognition (as it is in brain function), the subtractive method fails.
The related third assumption is that contrasting tasks must alter the smallest number of cognitive components. Here, the idea is that, in order to identify the neural locus of a cognitive function, one must keep the remaining components fixed. However, Van Orden and Paap argue that the evidence from cognitive psychology points to the relevance of context for cognition and to the interactions among putative cognitive modules. If this is the case, then it may never be possible to isolate a single module with a change in cognitive task. Any change will alter the function of many modules, and brain activation will be impossible to interpret.
A number of scientists and philosophers (including ourselves) believe that Van Orden and Paap's analysis is overly pessimistic and that neuroimaging can yield useful and sometimes indispensable information without meeting these assumptions (see, e.g., Bogen 2002). Nonetheless, given the great importance of functional neuroimaging in contemporary neuroscience, and the enormous resources of money and time that are being spent on it, considerably more attention should be paid to investigating the logic of functional imaging as well as to evaluating its scope and limits. This is a job for the philosopher of neuroscience if anything is.
2.3. Core questions: The neural correlates of consciousness
Philosophy of neuroscience is also in the business of identifying and exploring fundamental questions in neuroscience. There is perhaps no question more fundamental and vexing about the brain than how it supports consciousness. The acceptance of the study of consciousness as a valid scientific pursuit is a relatively new development. Previously, it had been thought that since our only access to consciousness is through subjective report, it could not be scientifically investigated. This attitude has changed in part because of the development of new neuroscientific techniques that enable the noninvasive study of brain activity in humans, and in part because of the realization that finding the neural correlates of consciousness (NCC)—the neural activity patterns that presumably form the brain basis for consciousness—is scientifically tractable even if consciousness itself is not. Scientists and philosophers have investigated the questions of how to find the NCC (Frith et al. 1999; Crick & Koch 1998, 2003) and what kinds of phenomena are significant. For instance, some distinguish the components of consciousness, such as awareness or level of arousal, and the contents of consciousness (Rees et al. 2002; Laureys 2005), while others suggest that there are different kinds of consciousness, such as Block's (2005) access consciousness and phenomenal consciousness. It remains to be seen whether any of these taxonomies carve consciousness at its joints and which provide a fruitful framework for finding the NCC. Fractionating consciousness in the service of pursuing the NCC is an obvious job for philosophy of neuroscience, as is a critical evaluation of scientific results in light of various philosophical distinctions about consciousness. The search for the NCC has proceeded apace (Rees et al. 2002; Crick & Koch 2003), and there seems to be general agreement that activity in medial frontal regions, the posterior cingulate/precuneus, and higher‐order association areas are involved in consciousness. Beyond that, however, there is little agreement even on what seem like relatively straightforward questions, such as whether activity in early sensory cortices contributes to determining the contents of consciousness (Rees et al. 2002; Pins & Ffytche 2003). Nor is it clear whether the neural correlates found thus far correspond to processing subserving awareness itself, perceptual identification, introspection, etc. Aside from the methodological issues attending the search for the NCC, the central question about consciousness, namely, what we will have understood about consciousness if we succeed in finding its neural correlates, thus far remains untouched by neuroscience.
2.4. Core concepts: Receptive fields
In the remainder of this section, we offer a brief discussion of one neuroscientific concept of interest, the concept of the receptive field (RF). The RF may be the most important concept of visual neurophysiology, one of the best‐understood branches of contemporary neuroscience. The concept of a neuron's RF has become central to understanding visual neurophysiology and the physiology of other perceptual modalities, as well as to the construction of models of perceptual function.
In visual neuroscience, the RF of a neuron refers to the location in space relative to the perceiver (or, according to some, the location on the perceiver's retina) at which a stimulus with particular properties will evoke a response in that neuron. Thus, the RF describes spatial and stimulus properties of neurons. Receptive field structure also describes their temporal properties. The response of a neuron to a particular stimulus is the result of the effect of that stimulus over a time interval. Thus, neuronal responses integrate the effect of a stimulus over that interval.
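One standard way this spatial-and-temporal characterization is formalized is the linear RF model, on which a neuron's response at a given moment is a weighted sum of the stimulus over the RF's spatial extent and over a preceding time window. The sketch below (Python with NumPy; the filter shape and stimuli are invented for illustration, and real neurons are at best approximately linear) computes such a response for a model cell preferring a vertical bar:

```python
import numpy as np

def linear_rf_response(stimulus, rf_weights):
    """Response at the final time step: the stimulus over the last
    tau frames, weighted by the spatiotemporal RF and summed.

    stimulus   : array of shape (time, height, width)
    rf_weights : array of shape (tau, height, width)
    """
    tau = rf_weights.shape[0]
    recent = stimulus[-tau:]  # the time window the neuron integrates over
    return float(np.sum(recent * rf_weights))

# Hypothetical RF: excited by brightness in the middle column, with the
# most recent frames weighted most heavily (temporal integration).
rf = np.zeros((3, 5, 5))
rf[:, :, 2] = np.array([0.2, 0.5, 1.0])[:, None]

# A stimulus movie with a vertical bar in the preferred position...
preferred = np.zeros((10, 5, 5))
preferred[:, :, 2] = 1.0
# ...and one with the bar in a non-preferred position.
nonpreferred = np.zeros((10, 5, 5))
nonpreferred[:, :, 0] = 1.0

print(linear_rf_response(preferred, rf))     # strong positive response
print(linear_rf_response(nonpreferred, rf))  # 0.0
```

On this picture, the RF just is the set of weights: it specifies where in space, over what stimulus properties, and across what stretch of time the neuron integrates its input.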
We do not have the space for a discussion of all of the questions that might be addressed in connection with the concept of the RF (see Bair 2005; Wörgötter & Eysel 2000). We mention two. First, the concept of the RF applies to sensory neurons, especially visual neurons. Whether there is a parallel notion for other neurons is an important question. Second, the function of neurons can also be explored by considering their “projective fields”: the neurons to which a particular neuron sends information. There is some reason to think that inferring function solely from the RF can be misleading and that the addition of projective field information may be an important corrective (Lehky & Sejnowski 1988). This is a place where modeling of neural function may be particularly useful.
We hope the reader will agree that the concept of the RF is of sufficient interest to the philosopher of science on its own, but, in case stronger motivation is required, we note that the status of the RF has relevance outside of the technical philosophy of neuroscience. The RF of a neuron is characterized by reference to the spatial and temporal properties of the stimuli that affect that neuron. This presents the philosopher of mind with an interesting question regarding reduction. It has been argued (Burge 1979) that one obstacle to mind‐brain reduction is externalism about mental content. Since neural phenomena are apparently internal—that is, a description of neural facts makes no reference to anything outside the body housing the brain—neural facts are by themselves insufficient to capture, and thus to reduce, mental content. Because RFs are articulated by referring to the properties of the external world, however, they may be examples of neural properties that are externalist. Thus, the interpretation of the concept of the RF may have consequences for mind‐brain reductionism.
The notion of the RF was first formulated by Hartline (1938) and was put to use in the work of later investigators, including David Hubel and Torsten Wiesel. Their Nobel prize–winning work on the primary visual area, known as striate cortex or V1, at the end of the 1950s and early 1960s founded modern visual neurophysiology. Hubel and Wiesel were recording from individual neurons—that is, measuring their change in voltage by means of microelectrodes—in cat visual cortex. Quite by accident, they discovered that certain stimuli caused the cells to respond: As one of the slides from which stimuli were being projected was being taken out of the ophthalmoscope, the neuron from which they had been recording produced a barrage of spikes (see Hubel & Wiesel 1998). They eventually realized that the neuron was responding not to the image on the slide but to the faint image of its moving edge. Subsequent investigation revealed that one class of neurons in V1, which Hubel and Wiesel dubbed “simple cells,” are activated when the stimulus falls on a particular spot on the retina and are deactivated when the stimulus falls on adjacent parts. Other neurons, which Hubel and Wiesel called “complex cells,” have different properties (Ringach 2004).
In the following years, neurons were identified that respond to a wide range of more or less complex and specific stimuli, including the discovery (Quian Quiroga, Reddy, Kreiman, Koch, & Fried 2005) of neurons outside of the visual cortex which (overturning decades of prediction to the contrary) may respond preferentially to particular individuals—in the case of one subject, Halle Berry, and another, Jennifer Aniston. (It is in fact difficult to know to what exactly these neurons are responding. What is significant is that they respond to the same individual in different poses and orientations.)
It was Hubel and Wiesel who first attempted to build simple models of the anatomical structure of the RF. Hubel and Wiesel envisaged RFs as having a hierarchical organization, with the RFs of cells farther away from the retina being built up out of the RFs of ones closer to the retina. Thus, for example, they modeled the RFs of simple cells as arising out of the RFs of neurons in the lateral geniculate nucleus (LGN), a part of the thalamus to which visual information is sent from the retina and, in turn, the RF structure of complex cells as arising out of simple‐cell RFs. What is central to what we will call the “classical” conception of the RF is that RFs are construed as fixed properties of neurons. They are fixed because they arise out of the anatomy of the visual system, which does not change once it has fully developed.
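The hierarchical proposal can be illustrated schematically: summing the center‐surround RFs of several LGN‐like cells whose centers lie along a line yields an elongated, orientation‐selective RF of the kind simple cells display. The sketch below (Python with NumPy; the difference‐of‐Gaussians parameters are invented for illustration, not fitted to any real cell) builds such a composite and checks that it prefers a bar at the implied orientation:

```python
import numpy as np

def center_surround(size, cx, cy, sigma_c=1.0, sigma_s=2.0):
    """A difference-of-Gaussians RF, modeled loosely on LGN cells:
    excitatory center, weaker inhibitory surround."""
    y, x = np.mgrid[0:size, 0:size]
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    center = np.exp(-d2 / (2 * sigma_c ** 2))
    surround = 0.25 * np.exp(-d2 / (2 * sigma_s ** 2))
    return center - surround

# Sum three center-surround RFs whose centers lie on a vertical line;
# the composite is elongated vertically, like a simple-cell RF.
size = 15
composite = sum(center_surround(size, cx=7, cy=cy) for cy in (4, 7, 10))

# The composite responds more strongly to a vertical bar through the
# aligned centers than to a horizontal bar of the same size.
vertical = np.zeros((size, size))
vertical[:, 7] = 1.0
horizontal = np.zeros((size, size))
horizontal[7, :] = 1.0

print(np.sum(composite * vertical) > np.sum(composite * horizontal))  # True
```

The point of the toy model is just that orientation selectivity can fall out of the anatomy of the feed‐forward wiring, which is why, on the classical conception, the RF is taken to be a fixed property of the neuron.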
Other work in visual neurophysiology—though controversial (see Rust & Movshon 2005)—may be beginning to reveal the limitations of the classical conception of the RF and to suggest a different picture. This development has been the result, in part, of methodological changes in visual neurophysiology. Hubel and Wiesel's experiments, and the enormous number of studies that followed over the next forty years, had two important features. First, they recorded from animals (usually cats) that were anesthetized. Second, they made use of very simple stimuli, such as moving bars, gratings, Gabor patches, and the like. Both of these restrictions were adopted in order to better control the experiments. Awake animals move their eyes (as well as their heads, if they are permitted to do so), making it very difficult to be sure which stimulus is driving a particular neuron. And if complex stimuli are used, it is difficult to infer which aspect of the stimulus is driving the neuron's response.
A significant price is paid for these methodological choices. It has long been known that anesthetics can affect the behavior of neurons (Robertson 1965; see also Kayser, Salazar, & König 2003), and the stimuli used in classical visual neurophysiology bear no resemblance to the environments that the visual system is built to deal with. In recent years, therefore, neurophysiologists have taken on the daunting task of recording from neurons exposed to natural stimuli in awake animals, and some of these experiments have begun to provide evidence for at least two ideas. First, RFs are not fixed properties of neurons. They may change, and over periods as short as tens of milliseconds. More important, there is evidence that the responses of neurons in V1 are stimulus‐dependent. That is, the structure of the RF of a V1 cell depends in part on the stimulus to which that cell is exposed.
David, Vinje, and Gallant (2004), for example, recorded from neurons in V1 in awake macaque monkeys. They compared two classes of stimuli. One class had spatial and temporal properties statistically similar to those of natural (but monochromatic) scenes of the sort that would be seen by an animal freely exploring its environment. A second class of stimuli was composed of moving sinusoidal gratings. Two features of the neurons' responses are important. First, about fifty milliseconds after the onset of a natural image, the neuron fires strongly and then continues to fire less strongly while the image is presented. No such temporal response structure is observed in response to the moving gratings; the neuron's firing pattern appears to be relatively random. The neuron thus seems to be highly sensitive to the onset and offset of the natural stimulus; this is not the case with respect to the gratings. Second, the strength of the neuron's response to the natural images varies with their different spatial properties, whereas the neuron's response to the gratings seems to be largely the same for each image. The neuron thus seems to be coding differences in spatial features in the natural images but not doing so in the case of the gratings.
These differences in and of themselves are not the significant findings. It is only to be expected that dramatically different stimuli—moving gratings, on the one hand, and complex natural images, on the other—will elicit different neural responses. The important question is whether the differences in response are due solely to the differences in stimuli or to other factors. To address this question, David et al. developed two models of the neuron's RF, one based on its responses to natural images and a second based on its responses to the gratings. Each of the models was then used to predict the responses of the neuron to novel natural images. If the RF of V1 neurons is static, then the two models of the RF should be the same and should therefore predict the responses of a neuron to novel stimuli equally well; if they are different, they should predict neural responses differently. In fact, the RF model derived from the responses to natural images predicts responses to novel natural images significantly better than the RF model derived from responses to gratings.
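The logic of this model comparison can be sketched in a few lines of code. The following is an illustrative toy, not the authors' actual method (they fitted spatiotemporal RF models to recorded spike trains): we simulate a neuron whose effective linear RF differs across stimulus classes, fit one RF model per class by least squares, and ask which fitted model better predicts responses to novel "natural" stimuli. All names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_trials = 20, 500

# Hypothetical ground truth: the neuron's effective linear RF differs by
# stimulus class, mimicking the stimulus dependence described above.
rf_natural = rng.normal(size=n_pix)
rf_grating = rf_natural + rng.normal(size=n_pix)

def responses(stimuli, rf):
    """A deliberately simple neuron: linear filtering plus a little noise."""
    return stimuli @ rf + rng.normal(scale=0.1, size=len(stimuli))

nat_train = rng.normal(size=(n_trials, n_pix))   # stand-in "natural" stimuli
grat_train = rng.normal(size=(n_trials, n_pix))  # stand-in "grating" stimuli

# Fit one RF model per stimulus class by least squares (reverse correlation).
rf_from_nat, *_ = np.linalg.lstsq(
    nat_train, responses(nat_train, rf_natural), rcond=None)
rf_from_grat, *_ = np.linalg.lstsq(
    grat_train, responses(grat_train, rf_grating), rcond=None)

# Both fitted models then predict the neuron's responses to novel natural stimuli.
nat_test = rng.normal(size=(200, n_pix))
y_test = responses(nat_test, rf_natural)

def pred_corr(rf_model):
    """Correlation between predicted and actual responses on the test set."""
    return np.corrcoef(nat_test @ rf_model, y_test)[0, 1]

# If the RF were truly static, the two models would predict novel natural
# stimuli equally well; here the natural-stimulus model does markedly better.
```

The crucial step is the last one: a static RF predicts no difference between the two fitted models on held-out natural stimuli, so a reliable difference in predictive accuracy is evidence of stimulus dependence.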
Two consequences follow. First, it appears that the picture of V1 activity that derives from responses to artificial stimuli—which is to say, the overwhelming majority of studies since the mid‐twentieth century—may not approximate terribly well the way V1 neurons actually behave when they are contributing to perception. Second, and more important, it seems that the receptive fields of V1 neurons are actually altered by the stimuli to which they are exposed. They are dynamic in the sense that the RF of the neuron changes with the category of stimulus.
On the basis of this and related findings, Bair (2005, 463) has made the following prediction:
[W]e might have to let go of the notion of a unique receptive field, and the idea that there is an appropriate or unbiased stimulus ensemble that will reveal the function of a cell. The primacy of the RF as a concept for embodying the function of V1 neurons will be replaced by a set of circuits and synaptic mechanisms as our computational models begin to explain ever more response properties. The receptive field can then be understood as an emergent property that changes with the statistics of the input.
Whether this is the case is obviously an empirical matter, and a matter for modeling in neurophysiology. If we are witnessing the demise of the classical notion of the RF, however, this is also a matter for the philosopher of neuroscience, who may, if expert enough, participate in the interpretation of the neurophysiological data. She may also participate in the process of trying to construct a new conception of the RF (or a successor concept) and think about what this conceptual change will do to how we think about neuronal function and perceptual representation. (For a discussion of some options, see Chirimuuta & Gold forthcoming.) We very much doubt, for example, that the history of philosophical thinking about mental representation has equipped us at all to think about a representational mechanism that changes with the properties of the thing represented. Finally, if the RF does indeed turn out to be an “emergent” property in Bair's (2005) sense, then it is possible that the argument against reductionism discussed above may no longer pose a problem for the would‐be reductionist. If the RF can be eliminated in favor of the properties of circuitry and synaptic mechanisms, as Bair suggests, and those properties are all internalist in nature, then the externalist obstacle to reductionism disappears.
Although this discussion has focused on a theoretical question about a neuroscientific concept, we believe that this issue, and others like it, would benefit from the participation of philosophers in the debate. Just as philosophers of science have helped to investigate the fundamental concepts of other scientific theories, so the concepts of neuroscience present important and interesting challenges.
3. Illuminating Philosophical Questions
Among the reasons that a philosophy of neuroscience might capture one's interest is that it bears relations to many issues in other branches of philosophy. It is likely that advances in the neurosciences will change how we approach these questions, and it is possible that the close relation to so many philosophical issues distinguishes the philosophy of neuroscience from other branches of the philosophy of science.
3.1. Philosophy of mind
As we noted above, neurophilosophy is the discipline that attempts to make use of neuroscience to illuminate problems about the mind. It seems very likely that neurophilosophy has been a more active area than the philosophy of neuroscience narrowly construed not only because the connections between neuroscience and the philosophy of mind are the most direct but because the philosophers drawn to neuroscience are often philosophers of mind. Although philosophers of mind are typically functionalists who believe that mental entities are not identical to their physical realizers, it nonetheless seems plausible to many that a better understanding of the physical implementation of animal and human mental phenomena will contribute to understanding the mind more broadly. And to many nonfunctionalists, of course, the relevance of neuroscience is even greater.
A broad range of topics has been addressed in neurophilosophy, including many aspects of perception (Akins 1996; Keeley 2002; Clark 1993); representation (Bechtel 2001; Jacobson 2003; Mandik 2003; O'Keefe & Nadel 1978; Rolls 2001; Stufflebeam 2001); the emotions (Hardcastle 1999c); and the nature of pain (Hardcastle 1999a).
One area of considerable interest is the relation between perception and action (Noë 2004; Mandik 2005). Common sense about perception suggests that perceptually guided action is based on our conscious perception of the environment. Thus, for example, when I pick up a coffee cup, I experience my action as depending upon the conscious representation of the properties of the cup and its location in three‐dimensional space. Ungerleider and Mishkin (1982) described two distinct anatomical pathways in the visual system, which they dubbed the “what” and the “where” pathways. The what pathway was thought to subserve the representation of the properties of an object or scene; the where pathway was thought to represent the location of an object in space. In 1995, Milner and Goodale proposed a new way of thinking of this latter pathway. Instead of location in space, they proposed that this pathway is responsible for guiding motor interactions with the object. Rather than “where,” this pathway carries information about “how” to behave visually with respect to an object. Some of the most compelling evidence for their hypothesis comes from their study of a patient, DF, who had suffered carbon monoxide poisoning, leading to localized damage to the what pathway. Milner and Goodale found that DF was effectively blind; she could not, for example, correctly identify the orientation of a line in front of her. Surprisingly, however, she could successfully post a letter through a slot, whatever its two‐dimensional orientation. That is, she exhibited normal visually guided behavior without being able to “see” in the folk sense. Another patient, RV, exhibited the reverse dissociation, known as optic ataxia. RV could describe objects while being unable to grasp them.
Both Merleau‐Ponty (1962) and Gareth Evans (1985) hypothesized that an important part of vision is the perception of the space in which visually guided behavior takes place. However, both took the perception of space to be a specific function of the conscious representations of visual scenes. If Milner and Goodale are right, visually guided behavior depends in large measure on an unconscious system that is functionally separable from the system responsible for producing representations of object properties, including conscious representations. The experience of visually guided behavior as depending on these conscious representations is thus largely illusory. These findings are important not only to the philosophical theory of perception but to a number of other philosophical questions about consciousness, the role of the body in mental life, and the role of representation in cognition.
3.2. Philosophy of psychology
Unsurprisingly, neuroscience has proved to be relevant to the philosophy of psychology. Most important, a good deal of work has addressed the relation between psychology and neuroscience as theories (see, e.g., Gold & Stoljar 1999; Hatfield 2000; Schouten & de Jong 1999). The work of Patricia and Paul Churchland has been central in this area. Famously (or infamously), they have predicted, and advocated for, the elimination of folk psychology and its replacement with the language of neuroscience (see, especially, P. M. Churchland 1981; P. S. Churchland 1986). Other topics of interest include functionalism and multiple realizability (Bechtel & Mundale 1999; Kim 2002; Couch 2004) and the viability of various approaches to modeling the mental (Eliasmith 2003; Gluck et al. 2003).
One psychological phenomenon of considerable interest has been the understanding of other minds. Since the 1990s, there has been a significant debate about how children learn to “mind read”—that is, to interpret and predict the mental states of others (Davies & Stone 1995). Mind reading is an ability that seems essential to social interaction, and it is hypothesized to be absent in autism (Baron‐Cohen, Leslie, & Frith 1985) and lost in schizophrenia (Frith 1992). Two broad hypotheses have been developed to explain this ability. The “theory theory” asserts that children develop a tacit commonsense theory of others' mental lives, which they can deploy to reason about the mental states of others during social interactions. The theory is constituted of causal laws, and mind reading is construed as an instance of theoretical, if unconscious, ratiocination. The second hypothesis is the simulation theory. According to this account, rather than abstractly theorizing about what others will do or what their goals are, we reason about others' mental states by “pretending” to be them. That is, we simulate others' mental lives in the same neural systems that are involved in representing our own mental states, and process them with the neural machinery for decision making and planning we use to generate our own behavior. Rather than issuing in behavior or the adoption of other mental states, however, these systems are taken “offline”: They issue in predictions only. According to simulation theory, offline simulation provides us with a grasp of the mental states of others and the ability to predict what they will do.
Neurophysiology may have made contact with this debate with the discovery of “mirror neurons” (Gallese, Fadiga, Fogassi, & Rizzolatti 1996; Rizzolatti, Fadiga, Gallese, & Fogassi 1996). These neurons are found in an area of the macaque monkey brain known as premotor area F5, and there is evidence that neurons of an analogous kind are present in the human brain (Fadiga, Fogassi, Pavesi, & Rizzolatti 1995). Stimulation of F5 neurons produces orchestrated hand and mouth movements (and not merely muscle contractions), which seems to indicate that these neurons are involved in the execution of goal‐directed actions (see Gallese & Goldman 1998). Intriguingly, mirror neurons, a subpopulation of F5 neurons, are activated both when an action is performed by a monkey and when the monkey observes another individual performing the same action. They are not activated when actions seem accidental or when similar movements are carried out by inanimate entities.
What are mirror neurons for? Gallese and Goldman (1998) hypothesize that mirror neurons are involved in the development of mind reading. In particular, they argue that mirror neurons may be part of the mechanism that allows individuals to retrodict goals from behavior. Suppose someone performs an action, such as picking up a cup. The mirror neurons of an observer will be activated by this action and will lead to the motor plan for the same action that was observed. The observer will thus be in a mental state of offline planning for an action, and this plan includes a goal. On this hypothesis, therefore, mirror neurons give the observer access to the goal of the action and thus put her in the place of the performer of the action. Because mirror neurons put an observer into something like the same neural state as an agent, they may be one implementation of a simulation system devoted to reading the minds of others. If Gallese and Goldman's account is correct, then, it provides evidence that it is simulation that underpins mind reading rather than theory, at least in this limited context.
3.3. Epistemology
Neuroscience seeks to understand the biological system that represents the world and reasons about its representations. Advances in neuroscience may thus have the potential to influence our approach to a number of epistemological questions, including those regarding the nature of knowledge and belief, the justification of belief, and the roles of reason and emotion in grounding knowledge. Much of epistemology is normative, and neuroscience is bound to yield descriptive information only. However, it is not implausible to suppose that our normative views will be affected by our best picture of how the brain works.
An area in which philosophers of neuroscience have made contact with epistemology is the question of the pathological beliefs represented by psychiatric delusions (see, e.g., Davies, Coltheart, Langdon, & Breen 2001; Gerrans 2002; Hohwy 2004). The case for the relevance of delusion to philosophy was first made by Stone and Young (1997), and a number of other philosophers have joined the debate. One question raised by the study of delusion concerns the “structure” of belief. Quine taught us that belief is a web and that a change in epistemic commitments ramifies throughout that web. Delusions, however, seem to violate this principle of epistemology. “Monothematic delusions” represent an especially acute form of the problem. Patients suffering from these delusions are largely unimpaired except for the presence of a single delusional belief, or a small family of beliefs. The best known of the monothematic delusions is the Capgras delusion—the belief that someone or something, often a loved one, has been duplicated, and the object or person with which the sufferer is in contact is the duplicate. If beliefs, construed as real psychological entities, constitute a web, then one would predict that it is impossible to adopt a new belief (especially one having as many ramifications as the Capgras delusion) without dramatic changes in much of the rest of the epistemic web. But this is not the case. Patients with the Capgras delusion do not seem to integrate their delusion into the rest of their beliefs. Of course, delusions are pathological beliefs (assuming they are beliefs at all), and their disconnection from the web of belief may be precisely part of what makes them pathological. Just how beliefs can come to be so disconnected is an epistemological question worth exploring.
Beliefs (or beliefs together with desires) typically motivate action. A second way in which delusions defy the standard picture of belief is that many delusions fail to motivate action. For example, it is not uncommon for individuals who suffer from the Capgras delusion to continue to live with the person they believe to be a duplicate of their spouse. They may make no attempt to locate the true spouse nor to find out what happened to them. This is not the pattern with all delusions, but it is common enough to require explanation. Again, one might hypothesize that one aspect of delusional pathology is exactly that delusions can fail to motivate. An exploration of delusion may thus help us to understand the processes of motivation and their relation to belief.
Finally, work on delusion has also addressed the question of the rationality of people with delusions. One theoretical account of rationality is procedural: On this view, rationality is purely a matter of satisfying the norms of reasoning, whatever the contents of one's beliefs and desires. In keeping with this Humean tradition, one of the central cognitive theories of delusion claims that people with delusions adopt beliefs on the basis of less evidence than do normal individuals, and indeed there is evidence that delusional individuals “jump to conclusions” (Garety & Hemsley 1994). It is possible, therefore, that delusional individuals accept hypotheses that others would reject because their standards of evidence are not sufficiently high. However, it is unclear whether this difference from nondelusional subjects should count as a reason for deeming people with delusions to be irrational. While their reasoning differs from that of control subjects, it actually approximates the Bayesian norms better than does the reasoning of controls (although this is not, in and of itself, sufficient for rationality). Moreover, the differences in reasoning between people with delusions and those without are rather subtle, and it is not entirely plausible that this explains the florid irrationality of their beliefs. One possibility is that people with delusions are irrational not because they violate procedural norms but because the contents of their beliefs are irrational (see Lewis 1986). If this is the case, then a philosophical theory of rationality is not exhausted by procedural norms alone.
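The “jumping to conclusions” evidence comes largely from the beads task: beads are drawn one at a time from one of two jars with complementary color ratios (standardly 85:15), and the subject says when she is sure which jar is the source. A minimal Bayesian observer for that task (the parameters below are the standard ones, but this is only an illustrative sketch) shows why deciding after one or two beads need not be a gross violation of Bayesian norms:

```python
import math

def posterior_jar_A(draws, p=0.85, prior=0.5):
    """Posterior probability that the beads come from jar A (85% color 'a'),
    given a sequence of observed draws such as ['a', 'b', 'a']."""
    like_A = math.prod(p if d == 'a' else 1 - p for d in draws)
    like_B = math.prod((1 - p) if d == 'a' else p for d in draws)
    return like_A * prior / (like_A * prior + like_B * (1 - prior))

# After a single 'a' bead the posterior already equals the jar's ratio (0.85),
# and after two 'a' beads it exceeds 0.95 -- so an early decision can sit
# close to the Bayesian norm rather than flagrantly violating it.
```

The point in the text follows directly: whether early deciders are irrational depends on what the norms actually license, and here the norms license quite rapid confidence.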
3.4. Metaphysics
The philosophy of neuroscience addresses the metaphysics of mind to the extent that it is concerned with the relation between psychology and neuroscience as sciences of the mind. If, for example, the eliminativism advocated by the Churchlands were vindicated, at least some versions of the mind‐body problem would be significantly affected. Further, as neuroscience progresses, we expect that we will learn something about the ontology of particular mental properties, states, and processes.
Neuroscience also makes contact with other metaphysical topics, including classical metaphysical issues, such as free will and mental causation (see, e.g., Libet 1985; Tibbetts 2004; de Vignemont & Fourneret 2004). The self is another topic of perennial interest. In one of the earliest papers in neurophilosophy, Nagel (1971) explored the phenomenon of brain bisection, or “split brain,” and its consequences for an account of personhood. By now, the idea of the split brain is familiar (see Gazzaniga 2005, for a review). Surgical severing of the corpus callosum and related structures that connect the two hemispheres of the brain was first carried out in 1940 in an effort to prevent epileptic seizures from spreading from one hemisphere to the other. Studies carried out on animals by Roger Sperry in the 1950s (see, e.g., Sperry 1961) and on humans by Michael Gazzaniga in the 1960s revealed that complete commissurotomy prevents information from moving from one hemisphere to the other and permits the study of the differing functions of each. In the majority of people, language comprehension and production is localized to the left hemisphere. Information presented to the right hemisphere alone, therefore, cannot be articulated verbally by the split‐brain patient.
Early experiments sometimes seemed to provide evidence that each hemisphere of the split‐brain patient had what Gazzaniga refers to as its own point of view. He (Gazzaniga 2005, 657) writes:
There were moments when one hemisphere seemed to be belligerent while the other was calm. There were times when the left hand (controlled by the right hemisphere) behaved playfully with an object that was held out of view while the left hemisphere seemed perplexed about why.
Nagel (1971) considers what the early split‐brain experiments say about selfhood, in both split‐brain and intact individuals. Given the appearance of at least partial independence of the hemispheres, he asks how many minds split‐brain patients have. He considers five possibilities: (1) Because the left hemisphere is the locus of language production and comprehension (in most people), these patients have one mind in the left hemisphere, and the right hemisphere is a sort of automaton; (2) they have one mind in the left hemisphere, but the right hemisphere occasionally exhibits conscious mentality that is disconnected from the mind; (3) they have two minds, one linguistic and one nonlinguistic; (4) they have one fragmented mind constituted of the contents of both hemispheres; and (5) they typically have one mind, but the mind can be split in the context of split‐brain experiments. Nagel argues that none of the five possibilities is defensible and concludes that there is no number of minds, or selves, that the split‐brain patient has. He further suggests that what the split‐brain experiments show is that our own sense of the unity of our minds is an illusion. The mind is in fact constituted of a number of functions that are better integrated in non‐split‐brain individuals than in split‐brain patients. In all cases, however, the experience of mental unity hides the diversity and disconnection that lies beneath it.
Other work on split‐brain patients has also contributed to the related question of the nature of the self. Turk, Heatherton, Kelley, Funnell, Gazzaniga, and Macrae (2002) investigated self‐recognition in a split‐brain patient. In this experiment, the patient, JW, was shown a series of pictures. One was of himself and a second was of a familiar person, MG. Nine other images were created by morphing these two images, each representing a 10% shift from JW to MG. The images were presented to each hemisphere separately, and the patient was asked to decide whether the picture was of himself or of MG. When the images were presented to JW's right hemisphere, he tended to identify them as MG, whereas he identified them as himself more often when they were presented to his left hemisphere. Although this experiment is only a first step toward investigating the question of self‐representation in the brain, it suggests that the left hemisphere is biased toward self‐recognition and the right toward the recognition of familiar others.
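The morph series itself is easy to picture: in the simplest scheme, each intermediate image is a weighted pixelwise blend of the two source photographs. The sketch below assumes plain linear cross-fading between two grayscale arrays; published face-morphing stimuli typically also warp facial geometry, so this is only the cartoon version, with stand-in arrays in place of the actual photographs:

```python
import numpy as np

def morph(img_self, img_other, alpha):
    """Pixelwise blend of two equal-shape grayscale images:
    alpha = 0.0 gives the self image, alpha = 1.0 gives the other person,
    and steps of 0.1 give the 10% shifts used in the study."""
    return (1.0 - alpha) * img_self + alpha * img_other

# Stand-ins for the two photographs.
jw = np.zeros((4, 4))
mg = np.ones((4, 4))

# Eleven images from 100% JW to 100% MG: the two originals plus nine morphs.
series = [morph(jw, mg, step / 10) for step in range(11)]
```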
3.5. Ethics
The area in which research and progress in neuroscience bear upon ethical questions has been termed “neuroethics” (Roskies 2002). At first blush, it might seem that neuroscience would have little to contribute to moral philosophy. However, some philosophers have begun to think that neuroscience can in fact teach us something about ethics. Neuroscience is already informing our ideas about moral psychology, and there is good evidence that understanding how moral cognition works will have a bearing on our philosophical conception of morality. Some arguments have already been made linking the neurobiology of moral cognition to issues in metaethics (Roskies 2003).
One of the earliest findings in the area, and the one that has sparked the greatest interest among philosophers, is the study of Greene, Sommerville, Nystrom, Darley, and Cohen (2001). A well‐known puzzle of moral philosophy, known as the trolley problem, is raised by the intuitions engendered by two imaginary moral dilemmas (see Thomson 1986). In the first, an out‐of‐control trolley is heading for five people walking on the track. You are in a position to pull a lever to move the trolley to another track, where it will hit and kill only one person. Do you pull the lever to save the five and kill the one? Most people say that they would. In the second dilemma, to save five people, you must push someone off a bridge. Do you do it? Most people say that they would not. Given that both scenarios involve killing one person to save five, one might expect people's intuitions to be the same, either for or against killing one to save many. But they are not. The puzzle is: Why not?
Greene and colleagues presented the runaway trolley and footbridge scenarios to subjects in a functional magnetic resonance imaging (fMRI) paradigm and found that the pattern of neural activity generated by making a moral decision was different in the two cases. In the first scenario, increased activity in brain regions associated with working memory was observed (dorsolateral prefrontal and parietal areas), whereas in the second, increased activity in brain regions associated with social and emotional cognition was found (medial frontal gyrus, posterior cingulate gyrus, and bilateral superior temporal sulcus). When nonmoral dilemmas were presented to subjects, the pattern of activation was the same as in the first moral scenario.
Greene and colleagues hypothesize that the neural data provide evidence for the view that the contrasting moral judgments elicited by the runaway trolley cases derive from what are, in fact, two different cognitive processes for coming to a moral judgment, each of which is subserved by different functional neuroanatomy. Moral scenarios like the footbridge case they categorize as “personal.” Personal dilemmas must satisfy three conditions: (1) The action must be represented as authored by an agent; (2) it must involve a likelihood of serious bodily harm; and (3) the action must be represented as having a person as victim. Dilemmas like these activate brain regions that are evolutionarily old and deal with social and emotional stimuli. This system produces fast and intuitive responses to moral problems.
In contrast, scenarios like the trolley case activate a neural system that appeared more recently in evolutionary time. Greene and colleagues hypothesize that it is the same system that is concerned with any problem that requires abstract reasoning, including “impersonal” moral dilemmas—those that do not satisfy the three conditions above. Because impersonal moral dilemmas are dealt with by an “all‐purpose” reasoning system, the pattern of neural activation associated with these dilemmas is the same as that evoked by nonmoral dilemmas. What distinguishes the runaway trolley case from the footbridge case is that, in the first case, the agent merely deflects an existing threat (by pulling a lever to divert the trolley) whereas, in the second, he is the author of the harm by pushing someone off the bridge. In the first case, therefore, condition 1 is not satisfied because the agent merely “edits,” but does not author, the action.
These tantalizing results seem to have significant implications for a number of traditional moral debates, not least the debate between Humeans and Kantians about the relative importance of reason and emotion in moral judgment (Greene et al. 2004). Considerable work remains to be done, however, in order adequately to elucidate the psychological and neural structure of moral decision making and its implications for philosophy.
In addition to the neuroscientific work that aspires to illuminate moral matters, a number of pressing ethical questions raised by the practice of neuroscience are being addressed. One of the central goals of neuroscience is control of, and intervention in, brain function (Craver 2007). In addition to their promise in curing disease and dysfunction, advances in neuroscience raise the possibilities of cognitive enhancement, and noninvasive neuroimaging techniques raise issues about privacy and coercion, among others. We expect to see a significant expansion of this area as the ethical problems of neuroscience quickly become pressing (see, e.g., Farah 2005; Illes & Raffin 2002).
3.6. Aesthetics
Surprisingly perhaps, neuroscience has also made inroads into aesthetics. Because aesthetic experience is, at least in part, perceptual experience, some effort has been made to better understand aesthetic experience by appealing to the psychology and neurobiology of perception. A 1999 double issue (vol. 6, nos. 6–7) of the Journal of Consciousness Studies was devoted to the topic of art and the brain. In that same year, the distinguished visual neurophysiologist Semir Zeki published Inner Vision (Zeki 1999), in which he argues that modern art is deeply affected by the way the visual system works.
A lovely example of the application of neuroscience to the perception of visual art is Margaret Livingstone's (2002) analysis of the Mona Lisa's smile. Livingstone cites Gombrich's (1998, 300) famous Story of Art to pose the problem:
What strikes us first is the amazing degree to which Lisa looks alive. She really seems to look at us and to have a mind of her own. Like a living being, she seems to change before our eyes and to look a little different every time we come back to her … . Sometimes she seems to mock at us, and then again we seem to catch something like sadness in her smile.
Livingstone's contention is that the mystery of the Mona Lisa—in any event, the mystery of her smile—is the result of the anatomy of the viewer's retina. The center of the retina—the fovea—is more densely packed with photoreceptors than is the periphery and thus has the greatest spatial resolution. For that reason, we foveate on parts of an object or scene in order to make out its details. As one moves away from the fovea, photoreceptor density declines and acuity decreases. Livingstone claims, however, that this does not mean that peripheral vision is poor, merely that it is specialized for other things, such as organizing a scene, seeing large objects, and alerting us to places in space where we should direct foveal vision.
Thus, if one looks at a picture with different parts of the retina, a finer or coarser representation will result. However—and this is the crucial claim—coarser representations are not necessarily poorer ones. Rather, coarser images provide different information from that provided by finer images. In order to see the effect of this on the Mona Lisa, Livingstone filtered the image three times to extract only its coarse, medium, or fine spatial components. In each of the three images, the expression on Lisa's face seems to be different—more cheerful in the coarse image (p. 370) and sadder in the finer one. As the eyes move over the picture, coarser or finer images of Lisa's face will be processed by the visual system and, with those images, different expressions will be detected—now cheerful, then mocking and sad. The mystery of the Mona Lisa smile, then, is that with a shifting gaze, our interpretation of the emotion on Lisa's face is altered and, as Gombrich says, “[l]ike a living being, she seems to change before our eyes and to look a little different every time we come back to her.”
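Livingstone's manipulation can be illustrated (though not reproduced exactly, since her filter parameters are not given here) with a standard spatial-frequency decomposition: a heavy Gaussian blur retains only the coarse components that peripheral vision resolves, while subtracting a lightly blurred copy from the original isolates the fine components that foveal vision resolves. The function names and sigma values below are illustrative assumptions, not Livingstone's method.

```python
import numpy as np

def _gaussian_kernel(sigma):
    """1-D normalized Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    """Separable Gaussian blur (rows then columns), edge-padded."""
    k = _gaussian_kernel(sigma)
    pad = len(k) // 2
    def conv1d(a):
        a = np.pad(a, pad, mode="edge")
        return np.convolve(a, k, mode="valid")
    img = np.apply_along_axis(conv1d, 1, img)
    return np.apply_along_axis(conv1d, 0, img)

def spatial_frequency_bands(image, coarse_sigma=8.0, fine_sigma=2.0):
    """Split a grayscale image into coarse, medium, and fine bands.

    Heavier blur keeps only low spatial frequencies (roughly what
    peripheral vision resolves); the residue after removing coarse and
    fine bands is the medium band. Sigmas are illustrative choices.
    """
    img = image.astype(float)
    coarse = _blur(img, coarse_sigma)        # low frequencies only
    fine = img - _blur(img, fine_sigma)      # high frequencies only
    medium = img - coarse - fine             # what remains in between
    return coarse, medium, fine
```

Because the medium band is defined as the residue, the three bands sum exactly back to the original image, so viewing one band in isolation discards no information overall, it merely redistributes it, much as shifting fixation redistributes which band the visual system emphasizes.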
4. Understanding the Nature of Science
A final reason to pursue the philosophy of neuroscience is to understand the project of natural science in general. Many of our views about science are informed by a philosophical treatment of particular paradigmatic sciences, usually physics. The philosophy of science has traditionally focused on physics and evolutionary biology for at least two reasons. First, if one is in the business of trying to understand the character and workings of science, it makes sense to focus on sciences that are well developed both in methodology and in the success of their theories. Second, many philosophers of science and scientists themselves believe that scientific explanation is at its best when it appeals to “fundamental” theories. Thus, physical phenomena would best be explained by physics, and living phenomena by evolutionary theory.
Against this motivation for focusing on physics and evolutionary biology, one might worry that some important features of scientific methodology could be obscured by attention to well‐developed theories. Science may function quite differently in the early stages of development, and this possibility justifies an investigation into a young science like neuroscience. Further, there is no reason a priori to believe that all sciences function in the same way, and this suggests that comparative studies, and studies that range across a diversity of sciences, will illuminate the structure of science. Finally, the fraught relationship that neuroscience has with psychology provides a reason for thinking that neuroscience may exhibit unique features. Whether philosophy of neuroscience provides novel views of science or shores up familiar views, the endeavor can enrich our picture of the nature and progress of science.
4.1. Reduction and levels of explanation
The fraught relation between psychology and neuroscience just mentioned derives, presumably, from the mind‐body problem. We do not yet know whether the (p. 371) relation between psychology and neuroscience is therefore sui generis or whether progress in understanding it might produce new models of intertheoretic relations generally; nor do we know whether a better understanding of that relation will illuminate the structure of other sciences.
We have already encountered the Churchlands' view that the way to resolve the tension between psychology and neuroscience is to eliminate psychology and to redescribe psychological phenomena in neural terms. A more traditional candidate relation, both in the sciences of the mind and in other sciences, is reduction, according to which (in the classical syntactic picture of Nagel 1961) a reducing theory, together with a set of bridge laws or definitions, can be used to derive the laws of the reduced theory. Reduction was the goal of the identity theory of the mind, which was, for better or worse, superseded by functionalism. Despite its having fallen out of favor in the philosophy of science, some notion of stepwise reduction between levels of neural organization remains an implicit goal of cognitive neuroscience. Moreover, functionalism requires that there be physical realizers of mental states for particular species or individuals. A narrow reduction of species‐ or individual‐specific mental states is recognized as a possible goal by some philosophers of mind (see, e.g., Kim 1992), even though this is not reduction in the classical sense. Neuroscience has not, to this point, produced a reduction of any reasonable fragment of psychological theory, although there are examples for which a case could be made. Kandel and colleagues' theory of elementary learning in Aplysia is one example (see Gold & Stoljar 1999 and references therein).
The current inauspicious state of the classical reductionist project may be the result of a number of factors. Neuroscience may not yet have the resources to take a run at any substantial psychological phenomenon; reduction may turn out to be an empirical impossibility; or the classical conception of reduction may prove to be inappropriate for the psychology‐neuroscience case. The latter possibility is of greatest interest to scientists and philosophers of science. Bickle (1998), for example, argues that classical reductionism failed precisely because it adopted Nagel's conception of the reduction relation. On Bickle's view, other conceptions of intertheoretic reduction are available, and they may offer the possibility of a “new wave” form of reductionism.
4.2. Explanation and mechanism
The issue of reduction is historically linked to that of explanation. According to the classical covering‐law model of scientific explanation, they were one and the same: To reduce a phenomenon is to explain it. More recent work, however, distinguishes explanation from reduction. Although some maintain the importance of reduction as a goal for neuroscience, others argue that the philosophical focus on reduction has been a wrong turn, largely attributable to the dominance of physics in the philosophy of science. According to Craver (2007), for instance, a (p. 372) close look at the history and practice of neuroscience shows that “reduction so mischaracterizes the unity of neuroscience that it cannot serve as a regulative ideal.” He claims that the idea that everything must be explainable at the fundamental level (of the neuron, synapse, molecule, etc.) is misguided and that there is no single neuroscientific level of explanation. According to Craver, neuroscience has two primary goals, explanation and intervention in brain function, and the appropriate form for scientific explanations is not reductive, unifying, or model based, but rather causal‐mechanistic. Following Salmon and others, Craver argues that explanation involves determining the causes of a phenomenon. Thus, neuroscientific explanations aim to describe mechanisms (including components, activities, and organizations) with explicit causal structure. Furthermore, he argues that a mosaic of explanations at different levels is appropriate for explaining these diverse phenomena and that the mosaic picture, and not reduction, best captures the unity of neuroscience.
4.3. Inference in neuroscience
Our understanding of brain function is made possible, but also shaped and limited, by the numerous methods available to neuroscientists for investigating brain structure, function, chemistry, and the like. Understanding these techniques, and the kind of information they provide, is another crucial task for philosophy of neuroscience.
The discussion of the receptive field in section 2.4 highlights a problem faced by all of empirical science concerning the way in which available techniques produce data that can bias theory. The difficulty of using natural images as stimuli in electrophysiological experiments led to a conception of the RF that may be an artifact of the artificial stimuli in use since the mid‐twentieth century. All neuroscientific techniques provide us with a similarly limited window onto the vastly complex picture of how our brains work, each enabling us to probe only some spatially and temporally constrained aspect of brain function. Understanding the limitations of each technique is therefore a central task for philosophy of neuroscience.
The limitations of available techniques constitute one of a set of scientific concerns having to do with making inferences from data to theory. A second question, which may be of special importance in neuroscience, is whether inferences about human cognition can be made from animal models. Although such inferences are made regularly in the biomedical sciences, it seems clear that, despite the genetic overlap across mammals, human cognition is qualitatively different, at least in some domains.
A third issue of potential interest to philosophers is the logic of making inferences from data to theory in different branches of neuroscience. The logic of inference is particularly well worked out in cognitive neuropsychology, so we use it as an illustration. Cognitive neuropsychology is a branch of cognitive psychology (p. 373) (not neuropsychology; see Coltheart 2001) in which behavioral data from individuals with brain damage are used to provide evidence for or against cognitive models. In such studies, inferences can be made about how the brain subserves behavior by characterizing both the lesion and its behavioral effects (see, e.g., Glymour 1994). This is one of the few ways that neuroscientists are able to investigate human brain function by making use of nature's own experiments.
Coltheart (2001) presents a useful overview of the evidence provided by cognitive deficits following brain damage. (For early discussions, see Bub & Bub 1988; Caramazza 1984; Shallice 1988.) The evidence falls into one of three categories: associations, dissociations, and double dissociations. One finds an association when a patient with brain damage is impaired on two tasks—say, understanding written words and understanding spoken words. A dissociation is present when a patient is impaired on one task, such as understanding written words, but not another, such as understanding spoken words. A double dissociation requires two patients, one of whom is impaired on one task but not a second (e.g., impaired on understanding written words but not spoken words), and a second patient who shows the reverse pattern (e.g., impaired on understanding spoken, but not written, words).
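The three categories amount to a simple decision rule over patients' task performance. A minimal sketch of that rule (the function, the data encoding, and the example task names are illustrative assumptions, not Coltheart's own formalism):

```python
def classify_evidence(patient_a, patient_b=None):
    """Classify neuropsychological evidence over two tasks.

    Each patient is a dict mapping task name -> bool (True = impaired).
    Returns 'association', 'dissociation', 'double dissociation', or
    None when the pattern provides no evidence.
    """
    t1, t2 = sorted(patient_a)
    a1, a2 = patient_a[t1], patient_a[t2]
    if patient_b is not None:
        b1, b2 = patient_b[t1], patient_b[t2]
        # Double dissociation: each patient is impaired on exactly one
        # task, and the two patients are impaired on opposite tasks.
        if a1 != a2 and b1 != b2 and a1 != b1:
            return "double dissociation"
    if a1 and a2:
        return "association"      # impaired on both tasks
    if a1 != a2:
        return "dissociation"     # impaired on one task only
    return None                   # unimpaired on both: no evidence

# Example: one patient impaired on written but not spoken comprehension,
# and a second patient showing the reverse pattern.
p1 = {"written": True, "spoken": False}
p2 = {"written": False, "spoken": True}
print(classify_evidence(p1))      # -> dissociation
print(classify_evidence(p1, p2))  # -> double dissociation
```

Note that the double-dissociation check requires both patients, which mirrors the point made below: the single-patient patterns are compatible with rival hypotheses that only the two-patient pattern rules out.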
Of the three kinds of evidence for a theory, double dissociation is the strongest. Suppose one finds an association between written‐word and spoken‐word comprehension. One reasonable hypothesis supported by this finding is that a single cognitive module subserves the comprehension of both written and spoken words. However, a second hypothesis is also plausible, namely, that written‐word comprehension and spoken‐word comprehension are subserved by separate cognitive modules implemented by adjacent brain areas. Damage to that region of the brain is then likely to impair both abilities. Association data are limited, then, because they cannot distinguish these two hypotheses.
The converse problem arises for dissociation. A patient who is impaired on written‐word comprehension but not on spoken‐word comprehension provides evidence for two distinct cognitive modules. However, a second hypothesis is also plausible here, namely, that there is only one module, which is damaged but not completely so. In such a case, the patient may be able to carry out the easier tasks subserved by this module but not the more difficult ones. If written‐word comprehension is more difficult than spoken‐word comprehension, that alone could explain the patient's pattern of deficits.
In double dissociation, however, competing hypotheses of the kind just discussed do not arise. (That is not to say that different cognitive theories of the dissociation are not possible.) If we have one patient who is impaired on written‐word comprehension but able to comprehend spoken words, and another patient who is impaired on spoken‐word comprehension but able to comprehend written words, the objection to the evidence of dissociation in a single patient cannot be made. Spoken‐word comprehension cannot be harder than written‐word comprehension, because the first patient comprehends spoken words despite being unable to comprehend written ones. Nor can written‐word comprehension be harder than spoken‐word comprehension, because (p. 374) the second patient comprehends written words despite being unable to comprehend spoken ones. Double dissociations thus provide strong evidence for the existence of two modules subserving the dissociated abilities.
Well‐developed accounts of inference from data to theory such as that in cognitive neuropsychology are extremely useful in establishing the strength of neuroscientific theories, and one role that philosophers of neuroscience can play is to propose such accounts. As we argued above, the case of functional brain imaging is a particularly pressing one (Bogen 2002).
A Science of the Brain: The Very Idea
Not everything that is doable should be done. It seems clear that there is a philosophy of neuroscience, but it is not clear why this should be the case. After all, there is not much in the way of philosophy of chemistry, geology, or physiology. More important, there is no philosophy of cardiology or of nephrology. What makes the study of this organ so special?
It seems there are at least four criteria for having a philosophy of x; if a discipline fails all four, we might be inclined to think that a philosophical treatment of it is unnecessary. The criteria are: (1) x is particularly important in our understanding of the world; (2) x presents puzzles of particular interest; (3) x has particular philosophical significance; and (4) x has special epistemological value.
Physics and evolutionary biology arguably satisfy all four. How does neuroscience fare? Certainly, it is important in our understanding of the world: The brain mediates both cognition and action, and these phenomena occupy central places in the conceptual landscape. Insofar as neuroscience can illuminate our understanding of cognition and action, it is a good candidate for a philosophical treatment. Neuroscience also presents interesting questions which may benefit from philosophical investigation, just as quantum mechanics has raised puzzles that require philosophical attention. These questions might include: How should the representational capacities of neurons be understood? Is computation a good model for what the brain does? How do neuroscientific methods enable us to access phenomena of interest, and when do they fail? How do we integrate neuroscientific insights with those from other disciplinary inquiries?
Neuroscience also satisfies the third criterion. Physics has particular philosophical importance because of its historical claim to be the fundamental level of science and to be that level to which all physical phenomena in the universe may ultimately reduce. Neuroscience plays a similar role with respect to cognition: As we have seen, it has been argued that all mental (p. 375) phenomena can ultimately be couched in the language of neuroscience (or alternatively, that all mental phenomena can be explained in terms of brain phenomena). Whether or in what way this is the case remains a matter of intense debate.
Finally, neuroscience also seems to satisfy the fourth criterion. It seems to have a unique epistemological value, in that it may help to provide us with a level of self‐understanding that many other sciences do not. It may also illuminate the very nature of understanding itself.
In general, there seems to be little doubt that the primary motivation for a philosophy of neuroscience is the attempt to understand the nature of the mind and how it arises out of the physical substrate of the brain. Whether there should be a philosophy of neuroscience, therefore, depends in large measure on how relevant understanding the brain is to understanding the mind, and this is largely an empirical question. Antireductionist arguments in the philosophy of mind have taught us that very little of scientific interest follows from the fact that the mind is constituted of the brain and its functions. Even if it is a metaphysical truth that the mind is identical to the brain, the science of the mind may or may not turn out to be a science of the brain. At the end of the day, the psychological sciences may produce our best theory of the mind with neuroscience providing what is usually referred to deprecatingly as “mere implementation.” In contrast, neuroscience may mature in such a way as to make direct contact with mental phenomena. Whether there ought to be a philosophy of neuroscience in the long run will depend on the closeness of the connection between neuroscience and the mind, although, as we have argued, even if antireductionism triumphs, there might still be sufficient motivation for having a philosophy of the brain sciences. At the moment, however, whether neuroscience is relevant to understanding the mind is one of the most important issues in philosophy. For that reason alone, if the philosophy of neuroscience did not exist, it would be necessary to invent it.
We are grateful to David Chalmers, Mazviita Chirimuuta, and Carl Craver for very helpful suggestions on an earlier draft of this chapter. We are also grateful to Catherine Carriere for research assistance and, particularly, to John O'Dea, who did a substantial literature search in the philosophy of neuroscience.
Akins, K. 1996. Of sensory systems and the “aboutness” of mental states. Journal of Philosophy 13:337–72.Find this resource:
(p. 376) Bair, W. 2005. Visual receptive field organization. Current Opinion in Neurobiology 15:459–64.Find this resource:
Barlow, H. 2001. Redundancy reduction revisited. Network: Computation in Neural Systems 12:241–53.Find this resource:
Baron‐Cohen, S., Leslie, A. M., & Frith, U. 1985. Does the autistic child have a “theory of mind”? Cognition 21:37–46.Find this resource:
Bechtel, W. 2001. Representations: From neural systems to cognitive systems. In W. Bechtel et al. (eds.), Philosophy and the Neurosciences: A Reader, 332–49. Oxford: Blackwell.Find this resource:
Bechtel, W., Mandik, P., Mundale, J., & Stufflebeam, R. S. 2001. Philosophy and the Neurosciences: A Reader. Oxford: Blackwell.Find this resource:
Bechtel, W., & Mundale, J. 1999. Multiple realizability revisited: Linking cognitive and neural states. Philosophy of Science 66:175–207.Find this resource:
Bickle, J. 1998. Psychoneural Reduction: The New Wave. Cambridge, Mass.: MIT Press.Find this resource:
Black, I. B. 1994. Information in the Brain. Cambridge, Mass.: MIT Press.Find this resource:
Block, N. 2005. Two neural correlates of consciousness. Trends in Cognitive Sciences 9(2):46–52.Find this resource:
Bogen, J. 2002. Epistemological custard pies from functional brain imaging. Philosophy of Science 69:S59–71.Find this resource:
Bub, J., & Bub, D. 1988. On the methodology of single‐case studies in cognitive neuropsychology. Cognitive Neuropsychology 5:565–82.Find this resource:
Buller, D. J., & Hardcastle, V. G. 2000. Evolutionary psychology, meet developmental neurobiology: Against promiscuous modularity. Brain and Mind 1:307–25.Find this resource:
Burge, T. 1979. Individualism and the mental. In P. A. French, T. E. Uehling, & H. K. Wettstein (eds.), Midwest Studies in Philosophy IV: Studies in Metaphysics. Minneapolis: University of Minnesota Press.Find this resource:
Caramazza, A. 1984. The logic of neuropsychological research and the problem of patient classification in aphasia. Brain and Language 21:9–20.Find this resource:
Chirimuuta, M., & Gold, I. J. forthcoming. The receptive field in transition. In J. Bickle (ed.), Handbook of Philosophy of Neuroscience. Oxford: Oxford University Press.Find this resource:
Churchland, P.M. 1981. Eliminative materialism and propositional attitudes. Journal of Philosophy 77, 67–90.Find this resource:
Churchland, P. S. 1986. Neurophilosophy. Cambridge, Mass.: MIT Press.Find this resource:
Churchland, P. S. 2002. Brain‐wise: Studies in Neurophilosophy. Cambridge, Mass.: MIT Press.Find this resource:
Churchland, P. S., & Sejnowski, T. J. 1992. The Computational Brain. Cambridge, Mass.: MIT Press.Find this resource:
Clark, A. 1993. Sensory Qualities. Oxford: Oxford University Press.Find this resource:
Clark, A. 2001. Some logical features of feature integration. In Werner Backhaus (ed.), Neuronal Coding of Perceptual Systems. New Jersey: World Scientific.Find this resource:
Coltheart, M. 2001. Assumptions and methods in cognitive neuropsychology. In B. Rapp (ed.), Handbook of Cognitive Neuropsychology. Philadelphia: Psychology Press.Find this resource:
Coltheart, M. 2004. Brain imaging, connectionism and cognitive neuropsychology. Cognitive Neuropsychology 2:21–25.Find this resource:
Couch, M. B. 2004. A defense of Bechtel and Mundale. Philosophy of Science 71:198–204.Find this resource:
Craver, C. F. 2005. Beyond reduction: Mechanisms, multifield integration and the unity of neuroscience. Studies in History and Philosophy of Biological and Biomedical Sciences 36C:373–95.Find this resource:
(p. 377) Craver, C. F. 2007. Explaining the Brain: What a Science of Mind Could Be. Oxford: Oxford University Press.Find this resource:
Crick, F., & Koch, C. 1998. Consciousness and neuroscience. Cerebral Cortex 8(2):97–107.Find this resource:
Crick, F., & Koch, C. 2003. A framework for consciousness. Nature Neuroscience 6(2):119–26.Find this resource:
David, S. V., William, E. V., & Gallant, J. L. 2004. Natural stimulus statistics alter the receptive field structure of V1 neurons. Journal of Neuroscience 24:6991–7006.Find this resource:
Davies, M., Coltheart, M., Langdon, R., & Breen, N. 2001. Monothematic delusions: Towards a two‐factor account. Philosophy, Psychiatry and Psychology 8:133–158.Find this resource:
Davies, M., & Stone, T. (eds.). 1995. Folk Psychology: The Theory of Mind Debate. Oxford: Blackwell.Find this resource:
de Vignemont, F., & Fourneret, P. 2004. The sense of agency: A philosophical and empirical review of the “who” system. Consciousness and Cognition 13:1–19.Find this resource:
Eggermont, J. J. 1998. Is there a neural code? Neuroscience and Biobehavioral Reviews 22:355–70.Find this resource:
Eliasmith, C. 2003. Moving beyond metaphors: Understanding the mind for what it is. Journal of Philosophy 100(10):493–520.Find this resource:
Evans, G. 1985. Molyneux's question. In Evans, Collected Papers. Oxford: Clarendon.Find this resource:
Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. 1995. Motor facilitation during action observation: A magnetic stimulation study. Journal of Neurophysiology 73:2608–11.Find this resource:
Farah, M. J. 2005. Neuroethics: The practical and the philosophical. Trends in Cognitive Science 9:34–40.Find this resource:
Fodor, J. 1983. The Modularity of Mind. Cambridge, Mass.: MIT Press.Find this resource:
Fodor, J. 1999. Let your brain alone. London Review of Books /www.lrb.co.uk/v21/n19/fodo01_.html.
Frith, C. 1992. The Cognitive Neuropsychology of Schizophrenia. Hillsdale, N.J.: Erlbaum.Find this resource:
Frith, C., Perry, R., & Lumer, E. 1999. The neural correlates of conscious experience: An experimental framework. Trends in Cognitive Sciences 3(3):105–14.Find this resource:
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. 1996. Action recognition in the premotor cortex. Brain 119:593–609.Find this resource:
Gallese, V., & Goldman, A. 1998. Mirror neurons and the simulation theory of mind‐reading. Trends in Cognitive Sciences 2:493–501.Find this resource:
Garety, P. A., & Hemsley, D. R. 1994. Delusions: Investigations into the Psychology of Delusional Reasoning. Oxford: Oxford University Press.Find this resource:
Garson, J. 2003. The introduction of information into neurobiology. Philosophy of Science 70:926–36.Find this resource:
Gauthier, I., Curby, K. M., & Epstein, R. in press. Activity of spatial frequency channels in the fusiform face‐selective area relates to expertise in car recognition. Cognitive and Affective Behavioral Neuroscience.Find this resource:
Gazzaniga, M. S. 2005. Forty‐five years of split‐brain research and still going strong. Nature Reviews: Neuroscience 6:653–59.Find this resource:
Gerrans, P. 2002. Multiple paths to delusion. Philosophy, Psychology and Psychiatry 9:66–72.Find this resource:
Gluck, M. A., Meeter, M., & Myers, C. E. 2003. Computational models of the hippocampal region: Linking incremental learning and episodic memory. Trends in Cognitive Science 7:269–76.Find this resource:
Glymour, C. 1994. On the methods of cognitive neuropsychology. British Journal for the Philosophy of Science 45:815–35.Find this resource:
Gold, I. J., & Stoljar, D. 1999. A neuron doctrine in the philosophy of neuroscience. Behavioral and Brain Sciences 22:809–30.Find this resource:
(p. 378) Gombrich, E. H. 1998. The Story of Art, 16th ed. New York: Prentice‐Hall.Find this resource:
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron 44:389–400.Find this resource:
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293:2105–8.Find this resource:
Hacker, P. M. S., & Bennett, M. R. 2003. Philosophical Foundations of Neuroscience. Malden, Mass.: Blackwell.Find this resource:
Hanson, S. J., Matsuka, T., & Haxby, J. V. 2004. Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: Is there a “face” area? Neuroimage 23(1):156–66.Find this resource:
Hardcastle, V. G. 1996. How we get there from here: Dissolution of the binding problem. Journal of Mind and Behavior 17:251–66.Find this resource:
Hardcastle, V. G. 1997. Consciousness and the neurobiology of perceptual binding. Seminars in Neurology 17:163–70.Find this resource:
Hardcastle, V. G. 1999a. The Myth of Pain. Cambridge, Mass.: MIT Press.Find this resource:
Hardcastle, V. G. 1999b. What we don't know about brains. Studies in History and Philosophy of Biological and Biomedical Sciences 5C:69–89.Find this resource:
Hardcastle, V. G. 1999c. It's o.k. to be complicated: The case of emotion. Journal of Consciousness Studies 6:237–349.Find this resource:
Hardcastle, V. G., & Stewart, C. M. 2002. What do brain data really show? Philosophy of Science 69:S72–82.Find this resource:
Hartline, H. K. 1938. The response of single optic nerve fibers of the vertebrate eye to illumination of the retina. American Journal of Physiology 121:400–415.Find this resource:
Hatfield, G. 2000. “The brain's ‘new’ science: Psychology, neurophysiology, and constraint,” Philosophy of Science 67:S388–403.Find this resource:
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. 2001. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425–30.Find this resource:
Henson, R. 2005. What can functional neuroimaging tell the experimental psychologist? Quarterly Journal of Experimental Psychology 58A:193–233.Find this resource:
Hohwy, J. 2004. Top‐down and bottom‐up in delusion formation. Philosophy, Psychiatry and Psychology 11:65–70.Find this resource:
Hubel, D. H., & Weisel, T. N. 1998. Early explorations of the visual cortex. Neuron 20:401–12.Find this resource:
Illes, J., & Raffin, T. 2002. Neuroethics: A new discipline is emerging in the study of brain and cognition. Brain and Cognition 50:341–44.Find this resource:
Jacobson, A. J. 2003. Mental representations: What philosophy leaves out and neuroscience puts in. Philosophical Psychology 16:189–203.Find this resource:
Kanwisher, N., & Yovel, G. in press. The fusiform face area: A cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society of London B.Find this resource:
Kayser, C., Salazaar, R. F., & König, P. 2003. Journal of Neurophysiology 90:1910–20.Find this resource:
Keeley, B. 2002. Making sense of the senses: Individuating modalities in humans and other animals. Journal of Philosophy 99:5–28.Find this resource:
Kim, J. 1992. Multiple realizability and the metaphysics of reduction. Philosophy and Phenomenological Research 52:1–26.Find this resource:
Kim, S. 2002. Testing multiple realizability: A discussion of Bechtel and Mundale. Philosophy of Science 69:606–10.Find this resource:
Laureys, S. (2005) The neural correlate of (un)awareness: Lessons from the vegetative state. Trends in Cognitive Sciences 9:556–559.Find this resource:
(p. 379) Lehky, S. R., & Sejnowski, T. J. 1988. Network model of shape‐from‐shading: Neural function arises from both receptive and projective fields. Nature 333:452–54.Find this resource:
Lewis, D. 1986. On the Plurality of Worlds. Oxford: Oxford University Press.Find this resource:
Libet, B. 1985. Unonscious cerebral initiative and the role of conscious will in voluntary action. Behavioural and Brain Science 8:529–66.Find this resource:
Livingstone, M. S. 2002. Vision and Art: The Biology of Seeing. New York: Abrams.Find this resource:
Lloyd, D. 2000. Terra cognita: From functional neuroimaging to the map of the mind. Brain and Mind 1:93–116.Find this resource:
Machamer, P. K., Grush, R., & McLaughlin, P. 2001. Theory and Method in the Neurosciences. Pittsburgh, Pa.: University of Pittsburgh Press.Find this resource:
Mandik, P. 2003. Varieties of representation in evolved and embodied neural networks. Biology and Philosophy 18:95–130.Find this resource:
Mandik, P. 2005. Action oriented representation. In A. Brook & K. Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement. Cambridge: Cambridge University Press.Find this resource:
Merleau‐Ponty, M. 1962. The Phenomenology of Perception. Translated by C. Smith. London: Routledge & Kegan Paul.Find this resource:
Milner, A. D., & Goodale, M. A. 1995. The Visual Brain in Action. Oxford: Oxford University Press.Find this resource:
Mundale, J. 2002. Concepts of localization: Balkanization in the brain. Brain and Mind 3:313–30.Find this resource:
Nagel, E. 1961. The Structure of Science. New York: Harcourt, Brace & World.Find this resource:
Nagel, T. 1971. Brain bisection and the unity of consciousness. Synthese 22:396–413.Find this resource:
Noë, A. 2004. Action in Perception. Cambridge, Mass.: MIT Press.Find this resource:
O'Keefe, J., & Nadel, L. 1978. The Hippocampus as a Cognitive Map. Oxford: Oxford University Press.Find this resource:
Pins, D., & Ffytche, D. 2003. The neural correlations of conscious vision. Cerebral Cortex 13:461–74.Find this resource:
Poeppel, D. 1996. A critical review of PET studies of phonological processing. Brain and Language 55:317–51.Find this resource:
Quartz, S. R. 2003. Innateness and the brain. Biology and Philosophy 18:13–40.Find this resource:
Quian Quiroga, R., Reddy, L., Kreiman, G., Koch, C. & Fried, I. 2005. Invariant visual representation by single neurons in the human brain. Nature 435:1102–7.Find this resource:
Rees, G., Kreiman, G., & Koch, C. 2002. Neural correlates of consciousness in humans. Nature Reviews Neuroscience 3(4):261–70.
Revonsuo, A. 1999. Binding and the phenomenal unity of consciousness. Consciousness and Cognition 8:173–85.
Ringach, D. L. 2004. Mapping receptive fields in primary visual cortex. Journal of Physiology 558:717–28.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. 1996. Premotor cortex and the recognition of motor actions. Cognitive Brain Research 3:131–41.
Robertson, A. D. J. 1965. Anesthesia and receptive fields. Nature 205:80–83.
Rolls, E. T. 2001. Representations in the brain. Synthese 129:153–71.
Roskies, A. L. 2002. Neuroethics for the new millennium. Neuron 35:21–23.
Roskies, A. L. 2003. Are ethical judgments intrinsically motivational? Lessons from “acquired sociopathy”. Philosophical Psychology 16:51–66.
Rust, N. C., & Movshon, J. A. 2005. In praise of artifice. Nature Neuroscience 8(12):1647–50.
Schouten, M. K. D., & de Jong, H. L. 1999. Reduction, elimination, and levels: The case of the LTP‐learning link. Philosophical Psychology 12(3):237–62.
Shallice, T. 1988. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press.
Sperry, R. W. 1961. Cerebral organization and behavior. Science 133:1749–57.
Stone, T., & Young, A. W. 1997. Delusions and brain injury: The philosophy and psychology of belief. Mind and Language 12:327–64.
Stufflebeam, R. S. 2001. Brain matters: A case against representations in the brain. In W. Bechtel et al. (eds.), Philosophy and the Neurosciences: A Reader, 395–413. Oxford: Blackwell.
Tarr, M. J., & Gauthier, I. 2000. FFA: A flexible fusiform area for subordinate‐level visual processing automatized by expertise. Nature Neuroscience 3(8):764–69.
Thomson, J. J. 1986. Rights, Restitution, and Risk: Essays in Moral Theory. Cambridge, Mass.: Harvard University Press.
Tibbetts, P. 2004. The concept of voluntary motor control in the recent neuroscientific literature. Synthese 141:247–76.
Turk, D. J., Heatherton, T. F., Kelley, W. M., Funnell, M. G., Gazzaniga, M. S., & Macrae, C. N. 2002. Mike or me? Self‐recognition in a split‐brain patient. Nature Neuroscience 5:841–42.
Ungerleider, L. G., & Mishkin, M. 1982. Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (eds.), Analysis of Visual Behavior, 549–86. Cambridge, Mass.: MIT Press.
Uttal, W. R. 2001. The New Phrenology: The Limits of Localizing Cognitive Processes. Cambridge, Mass.: MIT Press.
Van Orden, G. C., & Paap, K. R. 1997. Functional neuroimages fail to discover pieces of mind in the parts of the brain. Philosophy of Science 64:S85–94.
Wörgötter, F., & Eysel, U. T. 2000. Context, state and the receptive fields of striatal cortex cells. Trends in Neurosciences 23:497–503.
Young, R. M. 1990. Mind, Brain, and Adaptation in the Nineteenth Century: Cerebral Localization and Its Biological Context from Gall to Ferrier. New York: Oxford University Press.
Yovel, G., & Kanwisher, N. 2004. Face perception: Domain specific, not process specific. Neuron 44(5):889–98.
Zeki, S. 1999. Inner Vision: An Exploration of Art and the Brain. New York: Oxford University Press.