Philosophy of Neuroscience
Abstract and Keywords
The experimental study of the brain has exploded in the past several decades, providing rich material for both philosophers of science and philosophers of mind. In this chapter, the authors summarize some central research areas in philosophy of neuroscience. Some of these areas focus on the internal practice of neuroscience, that is, on the assumptions underlying experimental techniques, the accepted structures of explanations, the goals of integrating disciplines, and the possibility of a unified science of the mind-brain. Other areas focus outwards on the potential impact that neuroscience is having on our conception of the mind and its place in nature.
Neuroscience is a field of fields that hang together because they share the abstract goals of describing, understanding, and manipulating the nervous system. Different scientists direct their attention to different aspects of the nervous system, yet all of them seem to be concerned in one way or another with relating functions of the nervous system to structures among component parts and their activities. A neuroscientist might focus on large-scale structures, such as cortical systems, or on exceptionally tiny structures, such as the protein channels that allow neurons to conduct electrical and chemical signals. She might consider long time frames, such as the evolution of brain function, or exceptionally short time frames, such as the time required for the release of synaptic vesicles. And, at these different spatial and temporal scales, researchers might investigate, for example, anatomical structures, metabolism, information processing, physiological function, overt behavior, growth, disease, or recovery. These different topics lead scientists to approach the brain with a diverse array of tools: different theoretical background knowledge, different experimental techniques, different methodologies, different model organisms, and different accepted standards of practice.
Likewise, philosophers interested in neuroscience approach the topic considering many entirely distinct questions and problems. Some are puzzled by whether—and if so, how—diverse aspects of our mental lives, from our conscious experiences, to our capacity for understanding, to our ability to make decisions on the basis of reasons, derive from or are otherwise related to the biochemical and electrophysiological workings of our brains. Others are interested in neuroscience as a distinctive kind of science. Some see it as representative of multidisciplinary sciences that span multiple levels of brain organization. Some see it as distinctive in attempting to forge connections between mental phenomena and biological phenomena. Still others are interested in fundamental concepts in the neurosciences: they seek to clarify the sense in which a neuron carries information, whether a brain region may properly be said to represent the world, and what it means to claim that a function localizes to a particular brain region. And some philosophers move fluidly among these perspectives.
The diversity of neuroscience makes for a wealth of potential topics that might involve a philosopher, far too much to cover in a single chapter. Here, we map some of the more populated terrain. We begin by looking at philosophical problems internal to the practice of neuroscience: neuroscientific explanation and methods. Then we turn to questions in philosophy of science more generally that neuroscience can help inform, such as unification and reduction. Finally, we consider how neuroscience is (and is not) contributing to traditional philosophical discussions about the mind and its place in the natural world.
2. Looking Inward
2.1. Explanation in Neuroscience
The central aims of neuroscience are to predict, understand, and control how the brain works. These goals are clearly interconnected, but they are not the same. Here, we focus on what is required of an explanation in neuroscience. We discuss how explanation differs from prediction and why this difference matters for following through on the aim of control. Contemporary debates about explanation in the philosophy of neuroscience have become focused on the nature and limits of causal explanation in neuroscience (Bogen 2005, 2008; Chirimuuta 2014; Craver 2006, 2007, 2008; Kaplan and Craver 2011; Levy 2014; Levy and Bechtel 2013; Piccinini and Craver 2011; Weber 2004, 2008). Here, we present the background to that debate.
2.1.1. Explanations as Arguments: Predictivism
C. G. Hempel’s (1965) covering law (CL) model is a common backdrop for thinking about the norms of explanation in the neurosciences. The CL model is the clearest and most concise representative of a predictivist view of explanation. Its failings point the way to a more adequate model.
According to the CL view, scientific explanations are arguments. The conclusions of such arguments (the explanandum statements) are descriptions of events or generalizations to be explained, such as the generation of an action potential by a pyramidal cell in the cortex. The explanans, the explanation proper, consists of the premises of the argument. The premises in the explanans state the relevant laws of nature (perhaps the Nernst equation and Coulomb’s law) and the relevant antecedent and background conditions from which the explanandum follows. Explanations show that the explanandum was to be expected given what we know about the laws and conditions. The CL model is an epistemic conception of scientific explanation: the central norms of explanation are norms for evaluating inferences. The power of explanation inheres in its capacity to make the diverse phenomena of the world expectable.
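To illustrate (a schematic example of our own, not Hempel's): a CL explanation of a neuron's potassium equilibrium potential could take the Nernst equation as its covering law and typical ionic concentrations as antecedent conditions.

```latex
% Covering law (Nernst equation for ion $X$ with valence $z$):
\[ E_X = \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}} \]
% Antecedent conditions (illustrative textbook values, $T \approx 310\,\mathrm{K}$):
\[ [\mathrm{K}^+]_{\mathrm{out}} \approx 4\,\mathrm{mM}, \qquad
   [\mathrm{K}^+]_{\mathrm{in}} \approx 140\,\mathrm{mM}, \qquad z = +1 \]
% Explanandum statement, deduced from law plus conditions
% (RT/F \approx 26.7 mV at 310 K):
\[ E_{\mathrm{K}} \approx (26.7\,\mathrm{mV})\,\ln\frac{4}{140} \approx -95\,\mathrm{mV} \]
```

On the CL view, this deduction renders the measured potential expectable given the law and the conditions; whether such expectability amounts to explanation is precisely what is at issue in what follows.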
2.1.2. Enduring Challenges to the CL Model
The central problem with the CL model (and predictivism more generally) is that the norms for evaluating explanations are different from the norms for evaluating arguments. The normative demands of the predictivist CL model are neither sufficient nor necessary for an adequate explanation.
The account is not sufficient because, as Hempel recognized, the model counts every strong correlation as explanatory. This violates scientific practice: scientists searching for explanations design experiments to distinguish causes from correlations, and they recognize a distinction between predictive models and explanatory models. The difference is of extreme practical import: a model that reveals causes reveals the parameters that can be used to control how the system works.
The norms of the account are not necessary because one can explain an event without being able to predict it. In the biological sciences, many causal processes operate stochastically, and some of these stochastic causal processes yield their effects only infrequently. For example, only approximately 20% of action potentials cause neurons to release neurotransmitters. Action potentials explain neurotransmitter release, but they do not make release likely or expectable.
2.1.3. Causal Explanations
Wesley Salmon’s (1984) diagnosis and treatment for the failings of the CL model are equally pithy and compelling: explanation is not a matter of showing that the explanandum phenomenon was to be expected on the basis of laws of nature but is rather a matter of showing how the phenomenon is situated within the causal structure of the world. This is the causal mechanical (CM) model of explanation.
The difference between knowing correlations and knowing causes is that knowing causes reveals the buttons and levers by which a system might be controlled. Causes are detected most crisply in well-controlled experiments in which a change induced in the putative cause variable results in a change in the effect variable even when all the other causes of the change in the effect variable have been controlled for (see Woodward 2003). The term “cause” is operationalized through such experiments. Causal knowledge deserves the honorific title “explanation” because it is distinctively useful: it can be used to control what happens or change how something works. The norms of causal-mechanical explanation, in other words, are tailored not to the ideal of expectation but to the ideal of control.
Salmon identified two forms of CM explanation; both are common in neuroscience. An etiological explanation (or causal explanation) explains a phenomenon in terms of its antecedent causes. For example, death of dopaminergic neurons explains Parkinsonian symptoms; L-dopa explains their relief. A constitutive explanation explains a phenomenon in terms of the underlying components in the mechanism, their activities, and their organization. The organization of cells in the hippocampus, for example, is part of the explanation for how it forms spatial maps. Constitutive explanations are inherently interlevel: they explain the behavior of a whole in terms of the organized activities of its parts.
For one who embraces the CM model, not every model of a phenomenon explains the phenomenon. Models can be used to summarize data, infer quantities, and generate predictions without explaining anything. Some models fail as explanations because they are purely descriptive, phenomenal models. Other models fail because they are inaccurate. Still others suffer as explanations because they contain crucial gaps. Consider these kinds of failure in turn.
2.1.4. Phenomenal Versus Mechanistic Models
The theoretical neuroscientists Dayan and Abbott (2001) distinguish descriptive and mechanistic models of neural function. Descriptive models are phenomenal models. They summarize the explanandum. Mechanistic models, in contrast, describe component parts, their activities, and their organization.
Consider an example. Hodgkin and Huxley (1952) used the voltage clamp to experimentally investigate how membranes change conductance to the flow of ions at different voltages. They intervened to raise and lower the voltage across the membrane, measured the resulting ionic current, and used that to infer the membrane’s conductance at that voltage. They used these data to fit a curve describing membrane conductance for different ions as a function of voltage. Hodgkin and Huxley insisted, however, that these equations fail to explain the changes in membrane conductance. They do not refer to parts of the mechanism, and so are nothing more than summaries of the data obtained with the voltage clamp. They are descriptive, phenomenal models. This point merely echoes the above arguments concerning the limits of predictivism: phenomenal models provide expectation without explanation.
2.1.5. How-Possibly Models
Some models are representational and true (to an approximation): they are intended to describe how a mechanism works and (to an approximation) they accurately do so. Some models are intended to represent a mechanism but fail because they are false. Other models intentionally distort a system to get at something deep and true about it. Different parts of the same model might be best understood in any of these different ways.
Return again to Hodgkin and Huxley. The conductance equations just described as phenomenal models were components in a powerful, more comprehensive model that describes the total current moving across the cell membrane during an action potential. The conductance equations were prized for their accuracy and were tightly related to the experimental data collected with the voltage clamp. Other parts of the model are not inferred directly from observations but rather apply highly general laws about the flow of electricity through circuits. In order to apply these laws as simply as possible, Hodgkin and Huxley made a number of idealizing assumptions about the neuron. For example, they assume that the axon is a perfect cylinder and that ions are distributed evenly throughout the cytoplasm. Different parts of the model are intended to have different degrees of fidelity to the actual mechanism: some differences are thought to matter; others are not. And this difference of emphasis says something about how Hodgkin and Huxley intended to use the model. Specifically, they used the model as part of an argument that an ionic mechanism involving the passive flux of sodium and potassium could in fact account for diverse quantitative and qualitative features of the action potential under known conditions in the cell. This model was developed in 1952 to put a kind of nail in the coffin: not only do action potentials change with changing concentrations of ions (known long before this model), but one can plug actual values into an approximate electrical model and show that, under those conditions, one would see electrical behavior very much like what one sees during action potentials and graded potentials. 
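The architecture of that comprehensive model is visible in the total-current equation of the 1952 paper, which sums a capacitive current (imported from the general circuit laws) and the three ionic currents whose conductances were fit to the voltage-clamp data:

```latex
\[ I = C_M \frac{dV}{dt}
     + \bar{g}_{\mathrm{K}}\, n^{4}\,(V - V_{\mathrm{K}})
     + \bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - V_{\mathrm{Na}})
     + \bar{g}_{l}\,(V - V_{l}) \]
```

The gating variables $n$, $m$, and $h$ evolve according to first-order rate equations fit to the clamp data; these conductance terms are the phenomenal components described earlier, while the capacitive term and the overall circuit form embody the idealizing assumptions about the neuron as an electrical circuit.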
This left the opponents of the membrane hypothesis (e.g., those such as Erlanger and Gasser, who held that the action potential reflected a chain of chemical reactions) in the uncomfortable position of lacking a model that could be demonstrated with mathematical certainty to predict all the right things. The fact that the model plays this epistemic role in arguing for an ionic membrane hypothesis should not be confused with the claim that Hodgkin and Huxley intended the model first and foremost as an explanatory text. To understand what the model asserts about a mechanism, it must be supplemented with copious other background ideas about membranes and their environments, which are presumed in the application of the model to this system. For their epistemic purpose, roughly accurate values of ion concentrations and conductance changes were essential. The shape of the neuron was not. They were not aiming at high-fidelity description in this respect.
A purely how-possibly model might make accurate predictions for the wrong reasons. Hodgkin and Huxley, for example, temporarily posited the existence of “gating particles” in the membrane that move around and explain how the membrane changes its conductance, though they insisted that this model should not be taken seriously and should be used only for heuristic purposes. The explanatory content of the model is that content that fixes our attention on real and relevant parts, activities, causal interactions, and organizational features of the mechanism. One and the same model of a mechanism might vary internally in how much explanatory information it conveys about different aspects of the mechanism.
The goal of explanation in neuroscience is to describe mechanisms. This does not entail that a model must describe everything about a mechanism to count as explanatory. A complete model of a particular phenomenon would not generalize to even slightly different situations. And if incomplete models do not explain, then science has never explained anything. The Hodgkin and Huxley model, for example, rises above the gory details (e.g., about the locations of the ions or shapes of the cells) to capture causal patterns relevant to action potentials generally.
Mechanisms can be described from many perspectives and at many different grains; there is no requirement that all of this information be housed in a single model. Yet it is against this ideal of a complete description of the relevant components of a mechanism (an ideal never matched in science) that one can assess one’s progress in knowing how a mechanism or type of mechanism works. For some explanatory projects, very sketchy models suffice. For others, more detail is required.
Not everyone agrees that completeness and correctness are ideals of explanation. Some philosophers pursue the functionalist line that some explanations, for example in psychology, do not need to get into the details of how a mechanism works in order to provide an explanation. The model might be predictively adequate and fictional yet nonetheless be taken as explanatory (see Weiskopf 2011). Yet this account faces the challenge of articulating a view about explanation that doesn’t collapse into the failed predictivism of the CL model (see Kaplan and Craver 2011; Piccinini and Craver 2011; Povich forthcoming). Other philosophers argue that explanations are better to the extent that they leave out details about the underlying system (Batterman and Rice 2014); good explanations use idealization and abstraction to suppress mechanistic detail and isolate the core causal features of a system. However, it would seem that an advocate of a causal mechanical model would simply acknowledge that such models are useful for getting at core causal features while leaving out other causally and explanatorily relevant details that might well make a difference in a particular case (see Povich forthcoming). Finally, others have emphasized the role of optimality explanation in biology (as when one explains a coding scheme by appeal to its optimal efficiency of information transfer or a wiring scheme on the grounds that it minimizes wire length in the brain [Chirimuuta 2014; Rice 2013]). Mechanists tend to see such optimality explanations as shorthand for more detailed evolutionary and developmental explanations. The challenge in each case is to articulate a general view of explanation, free of the problems that plague the CL model, according to which such models count as explanatory.
2.2. Neuroscience Methods
One role for philosophy of neuroscience is to reconstruct and assess the inferences involved in designing experiments and drawing conclusions from different kinds of data. Neuroscientists operate with a number of assumptions about how to test hypotheses about the relationship between the mind and the brain; philosophy can help make those methodological assumptions explicit and encourage discussion of the merits of those methodological assumptions. Here, we discuss some neuroscientific methods that have attracted philosophical discussion.
2.2.1. General Approaches
2.2.1.1. Animals and Model Systems
Philosophers are often (but not always) primarily interested in human beings and their behavior, and thus in the way in which the human mind works. However, because typically only noninvasive or postmortem neuroscientific methods can be used with humans, there are significant limitations to the types of experiments that scientists can perform. For this reason, most of our general knowledge about how brains work comes from research on nonhuman animals, and we make inferences about how this information applies to humans. The reliance on animal models raises a number of theoretical and practical questions about how analogous these models are and the scope and limits of inferences we can draw from them (Shanks, Greek, and Greek 2009). Similarities between biological components and neural structures of human and nonhuman animals, as well as evolutionary relationships, ground the inferences we make in applying results of animal studies to our understanding of humans (see Bechtel 2009). The more basic the functions we study, and the more similar we are to the animals we study in relevant ways, the more warranted we are in making those inferences. However, it may be that some of the phenomena we most wish to understand depend on functions or capacities that animals do not share (Roskies 2014). Humans and other animals might use altogether distinct mechanisms to perform similar tasks. Nowhere are the epistemological questions more pressing than in the development of animal models of human neuropsychiatric disorders, where many of the symptoms, such as delusions, emotions, hallucinations, and the like, cannot be convincingly established in nonhumans (Nestler and Hyman 2010; Sufka, Weldon, and Allen 2009).
2.2.1.2. Tasks, Functional Decomposition, and Ontology
Any technique aimed at revealing the relation between cognitive performance and neural mechanisms relies on some way to measure cognitive performance. Many techniques rely on tasks specifically designed to engage capacities that we take to be real, unitary components in cognitive and/or neural mechanisms. The use of a task to operationalize a psychological construct relies on a theory of the task according to which the unitary component in question contributes to the performance of the task. These components are typically derived from elements of our intuitive cognitive psychology. However, an abiding worry is that our intuitive functional ontologies fail to reflect the true taxonomy of mental functions that actually describe how the brain solves these kinds of task (Anderson 2015). This kind of issue has been visited in philosophy both by philosophers of mind who defended folk psychology as an accurate picture of mind (Fodor 1980) and by those who advocated its abandonment (Churchland 1981). Sullivan has emphasized the multiplicity of task protocols and the considerable difficulty in seeing how findings obtained using one operationalization can be compared to findings obtained using a distinct operationalization (Sullivan 2009, 2010).
2.2.1.3. New Methods
Recent technological developments will greatly expand the range of epistemological questions that we will need to ask. Genomics will undoubtedly provide a much better understanding of the ways in which humans differ from other species at a molecular level and may provide tools for assessing the likelihood of translational success. Behavioral genetics and neurogenetics promise to illuminate the extent and nature of the dependence of neurobiological and behavioral traits on genes. Research has already established that these dependencies are often complex and multigenic and that few traits or diseases are determined by single alleles. In fact, when strong associations have been found, they often are highly modulated by environmental factors (Buckholtz and Meyer-Lindenberg 2008; Caspi et al. 2002). These facts should help dispel the erroneous idea of genetic determination, that our genes determine our futures in ways necessary and immutable. These techniques also raise a number of questions about the use of “big data” and the assumptions by which one searches for significant effects in a vast sea of comparisons (e.g., Ioannidis 2005; Storey and Tibshirani 2003). With advances in the understanding of such relationships comes an inevitable increase in our measure of control over such systems. Techniques such as CRISPR are providing unprecedented control in gene editing/engineering, opening the possibility for more direct control over the machinery of the brain. Optogenetics, which uses genetic techniques to make specific neural populations functionally responsive to illumination by light of specific frequencies, already allows relatively noninvasive and highly controlled manipulation of specific cell types and is poised to revolutionize our understanding of brain function and brain disorders, as well as our interventional capabilities. 
Some philosophers of neuroscience have seen the emergence of optogenetics as a case study for considering the dimensions of progress that one might make in the ability to intervene experimentally into a target system (Craver forthcoming). Taken together, such technologies also raise pressing questions about how neuroscientific research can and ought to be utilized.
Behavioral data from brain-lesioned organisms have long provided key evidence about the functional organization of the brain. Experimentally induced lesions in animals have been used to investigate relationships of dependence and independence among cognitive systems, the localization of function in the brain, and the causal structure of the neural systems, a central concern of neuroscientific explanation. Functional deficits produced by experimentally induced lesions are used to identify brain regions necessary for the performance of these particular tasks or functions.
The interpretation of lesion studies is complicated by many factors. Lesions are often imprecise and incompletely damage many structures rather than completely damaging any one structure. Lesions are often also highly variable across individuals. Even assuming that one can find perfectly isolated lesions, the inference from lesion to the localization of function is perilous. The brain recovers after the lesion, and redundant systems might take over for the function of the lesioned area; both of these would mask the lesioned area’s involvement. Furthermore, the effects of a lesion are not always confined to local disruption in the area of the lesion. Disruptions in the connectivity of a network might cause disruptions in network behavior that ramify throughout the brain. Vascular damage in one region of the brain can deprive downstream structures of glucose and oxygen. A region might contain both intrinsic connectivity and fibers of passage that connect functionally unrelated areas of the brain. Some of these problems are being addressed with improvements in techniques, such as optogenetics.
The central form of inference for relating behaviorally measured deficits to brain structure is the dissociation inference. In a single dissociation, it is shown that an individual or group with damage to a particular brain region succeeds on one or more tasks measuring cognitive capacity A while failing on tasks measuring cognitive capacity B. This shows that cognitive capacity B is not necessary for cognitive capacity A. Double dissociations demonstrate the mutual independence of A and B: A is not necessary for B, and B is not necessary for A. Double dissociations are commonly understood as powerful arguments for the existence of distinct modules in the brain (Coltheart and Davies 2003; Davies 2010).
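Schematically (our reconstruction, with $L$ standing for a lesion and $T_A$, $T_B$ for tasks operationalizing capacities A and B):

```latex
\begin{align*}
\textbf{Single dissociation:}\quad
  & L \;\Rightarrow\; \text{intact on } T_A \ \wedge\ \text{impaired on } T_B\\
  & \therefore\ B \text{ is not necessary for } A\\[4pt]
\textbf{Double dissociation:}\quad
  & L_1 \;\Rightarrow\; \text{intact } T_A,\ \text{impaired } T_B; \qquad
    L_2 \;\Rightarrow\; \text{impaired } T_A,\ \text{intact } T_B\\
  & \therefore\ \text{neither } A \text{ nor } B \text{ is necessary for the other}
\end{align*}
```
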
These inferences, however, face a number of persistent concerns that ought to temper any overly sanguine application of this methodology. First, behavioral patterns reflecting double dissociations have been shown to result from damage to single modules within connectionist architectures of the mind (Plaut 1995). Second, the appearance of double dissociations can be produced if one does not take into account the relative resource demands of the tasks measuring A and the tasks measuring B. If the tasks measuring A require more processing resources (more working memory, faster information processing, or greater representational capacity) than the tasks measuring B, then one might retain the ability to perform the less demanding task while losing the ability to perform the more demanding task even if one and the same system is responsible for both (Glymour 1994). Both defenders and detractors of this methodology should acknowledge that these inferences are not deductive, but abductive: we infer the best explanation for the observed changes in behavior (Davies 2010).
Dissociation studies provide evidence about whether a cognitive capacity is necessary for a certain kind of task performance. They do not allow one easily to infer what the lesioned structure does in an intact brain. The advent of noninvasive neuroimaging technologies, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), allows neuroscientists to address this question more directly by studying how blood flow, which correlates with neural activity, changes during the performance of a task. Data are collected from discrete volumes of brain tissue (represented as voxels), allowing levels of activity to be compared across task conditions. These methods have revolutionized medicine and cognitive neuroscience. They have also generated philosophical controversy (Hanson and Bunzl 2010). The main philosophical questions are epistemological, concerning what we can learn about brain function from imaging data, but ethical questions about how neuroimages are consumed have also been raised. Brain images seem misleadingly simple and compelling, concealing the complexity and discretion that underlie their generation (Weisberg 2008). However, although the “seductive allure of neuroscience” was widely predicted to bias nonexperts viewing neuroimages (Roskies 2008), several studies in forensic contexts suggest that neuroimages themselves are not biasing (Schweitzer et al. 2011; Schweitzer and Saks 2011). We discuss some of these questions in depth because of their complexity and in light of the dominance of the technique in human cognitive neuroscience.
2.2.1.4. What Does fMRI Measure?
The question of what the fMRI signal corresponds to physiologically and how it is related to brain function has been an important focus of neuroscientific research (for philosophical discussions of the assumptions of the method, see Bogen 2002). For instance, the blood oxygen level dependent (BOLD) signal in MRI measures a quantity with a complicated and indirect relationship to neural activity (Buxton 2009). Studies have corroborated the hypothesis that the fMRI BOLD signal correlates with neural activity and have suggested that the fMRI signal primarily reflects synaptic processing (local field potentials) rather than action potentials (Logothetis and Wandell 2004). However, the BOLD signal has many limitations: it does not distinguish between excitatory and inhibitory neurons, and it has relatively low spatial and temporal resolution. Moreover, because we still do not understand how neurons encode information, it is difficult to know which neural signals are the computationally relevant ones, which ones are irrelevant, and thus how to interpret the BOLD signal. Finally, neuroimaging at its best provides only a weak kind of causal information about brain function: namely, that performing a given kind of task causes an increase in activity in a given locus. This does not establish that the brain region is actually involved in the performance of the task or that its involvement is in any way specific to the task. In order to establish the causal relevance of brain areas to particular functions, neuroimaging data must ultimately be supplemented by other information, ideally from interventions (lesions, stimulation, transcranial magnetic stimulation, etc.).
2.2.1.5. What Can Brain Imaging Tell Us About Cognition?
No brain imager worth her salt denies the importance of cognitive psychology to productive imaging experiments. But many psychologists have denied the importance of imaging to understanding cognitive function. Philosophical arguments about multiple realizability and the autonomy of the special sciences have been taken to imply that neuroimaging cannot provide much insight into the nature of mind (Fodor 1974). Arguments from psychology have been similarly dismissive of neuroimaging. For example, it is often claimed that neuroimaging can only provide information on where mental processes occur, not how they occur, and that mapping (localizing) function is fundamentally uninteresting. Others have argued that brain imaging cannot aid in theory choice in psychology (Coltheart 2006), failing to recognize that reasonable, empirically testable commitments about structure–function relationships can serve as bridge principles between functional and anatomical data, thus allowing neuroimaging to bear on psychological theory choice (Roskies 2009). However, increasingly clever and powerful experimental and analytical techniques have been developed to allow neuroimaging data to more precisely mirror temporal and functional variables, and there are ample illustrations of neuroimaging’s relevance to psychology. Even the most vociferous deniers seem to accept that neuroimaging can shed light on psychological questions (Coltheart 2013). What is still unknown is how much neuroimaging can tell us: what are its scope and limits?
One factor in early skeptical views of neuroimaging was the variability of results across studies. Early meta-analyses of neuroimaging were generally critical, emphasizing divergence in results among studies of like phenomena (Shulman 2013). Over time, however, evidence has amassed for reliable overlap of results among many tasks with similar content or structure, and there has been a growing awareness of the existence of individual differences and the context-dependence and sensitivity of brain networks recruited for various tasks. In addition, novel methods of registration provide increasingly precise ways of compensating for functional-anatomical variability across subjects (Haxby et al. 2011). On the whole, the voices of those who proclaim that neuroimaging results are just noise or uninterpretable have faded into the past.
Neuroimaging nicely illustrates how our theoretical frameworks and analytical methods shape, and are shaped by, our interpretations of neuroscientific data. Early neuroimaging was influenced by the pervasive assumption of functional modularity advocated by philosophical and psychological theories (Kanwisher, McDermott, and Chun 1997). Studies sought to identify individual brain regions responsible for performing identifiable task-related functions using subtraction methods, in which differences in regional activation across tasks are ascribed to task differences. In early studies, attention focused on the region or regions that showed the highest signal change in relation to task manipulations. Some regions of cortex were consistently activated with particular types of tasks or tasks involving particular kinds of stimuli, consistent with the view that the brain/mind is a modular system composed of more or less isolable and independent processing units with proprietary functions (note that this view is distinct from the modularity of mind proposed by Fodor, which was designed to describe only peripheral systems and includes features that are inappropriate for internal processing modules, such as innateness, automaticity, and informational encapsulation in any strong sense).
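The core logic of subtraction can be made concrete in a small sketch (illustrative only; the voxel values, conditions, and threshold are invented, not drawn from any real study):

```python
# Minimal sketch of the subtraction method (all values invented).
# Each list holds one mean activation value per voxel for a condition.

def subtract(task, control):
    """Voxel-wise difference, ascribed to the process the task adds."""
    return [t - c for t, c in zip(task, control)]

# Hypothetical activations: reading words vs. viewing a fixation cross.
reading = [2.1, 0.4, 3.0, 0.5]
fixation = [1.9, 0.5, 1.0, 0.4]

contrast = subtract(reading, fixation)
# Voxels whose difference exceeds a threshold count as "activated."
activated = [i for i, d in enumerate(contrast) if d > 0.5]
print(activated)  # → [2]
```

Nothing in this logic assumes unidirectional effects or a single active region; it only confounds every process that differs between the two conditions, which is the genuine limitation discussed in the text.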
Criticisms have been leveled against neuroimaging because of its reliance on subtraction methods. Some rightly highlighted limitations inherent in the interpretation of single contrasts, underscoring the need for factorial designs (Price, Moore, and Friston 1997) and for what has been called “functional triangulation” (Roskies 2010b). Other criticisms misunderstood subtraction as wedded to certain assumptions, such as unidirectional effects, single regions of activation, or static models of brain function (Uttal 2003; van Orden and Paap 1997); however, subtraction involves no such assumptions (Roskies 2010b). Nonetheless, the focus on attributing function to areas with maximum levels of signal change perpetuated a modular picture of brain function that is likely false.
Over the past decade, the development of powerful new methods of analysis has led to a shift in this debate away from strict modularity toward a recognition of the importance of distributed processing across large cortical networks. Multivariate techniques allow tasks to be correlated with widespread patterns of brain activity, rather than with individual regions, thus demonstrating that the brain encodes information in a distributed fashion across large portions of the cortex (Norman, Polyn, Detre, and Haxby 2006). Using these methods, scientists have demonstrated that information in the fMRI signal in different areas of cortex is sufficient to distinguish stimulus identity at various levels of abstraction, task demands, or a planned action (Haxby et al. 2001). What these results do not demonstrate is whether or how this information is being used (though some of this can be inferred by looking at what information is represented in different parts of the processing hierarchy). Another area of active development is research on resting state connectivity in the human brain. In this application, researchers mine data about correlations of slow-wave oscillations in spatially disparate brain regions to form hypotheses about the system-level organization of brain networks (see, e.g., Biswal, Kylen, and Hyde 1997; Biswal, Zerrin Yetkin, Haughton, and Hyde 1995; Power, Fair, Schlaggar, and Petersen 2010; Power et al. 2011). These techniques change the way in which researchers conceive of cortical representation and functional specificity, and they vastly improve the predictive power of neuroimaging. The deeper understanding they engender takes seriously the notion that entire brain systems or networks, rather than individual regions, contribute to task performance. Even the most dogged supporters of functional modules now accept that stimulus-related information is widely distributed throughout cortex.
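A rough illustration of multivariate decoding, in the spirit of the correlation-based classifier of Haxby et al. (2001) though with invented patterns: a held-out activity pattern is classified by comparing it, across many voxels at once, to a template pattern for each category.

```python
# Toy multivariate pattern classifier (in the spirit of Haxby et al. 2001):
# assign a test pattern to the category whose template it best correlates
# with. All voxel patterns below are invented for illustration.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def classify(pattern, templates):
    """Return the category whose mean pattern best matches the test pattern."""
    return max(templates, key=lambda c: pearson(pattern, templates[c]))

templates = {
    "faces":  [1.0, 0.2, 0.9, 0.1],   # hypothetical mean pattern for faces
    "houses": [0.1, 0.8, 0.2, 1.0],   # hypothetical mean pattern for houses
}
test_pattern = [0.9, 0.3, 0.8, 0.2]   # a held-out trial
print(classify(test_pattern, templates))  # → faces
```

The point of the sketch is that no single voxel need be diagnostic; the classification exploits the whole distributed pattern.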
Novel analytical techniques continue to improve our ability to understand how neural representations or processes are encoded throughout cortex. For example, correlating patterns of neural activity with stimulus features or semantic models permits prediction of novel activation patterns and evaluation of the models’ success (Haxby, Connolly, and Guntupalli 2014; Mitchell et al. 2008). Techniques such as representational similarity analysis (RSA) can aid in determining the representational specificity of various brain regions (Kriegeskorte, Mur, and Bandettini 2008). These new methods improve our understanding of how representations are elaborated across cortical areas. It remains to be seen how much insight into neural representation and neural computation can be gleaned from these methods, but it is clear that we have transcended the time in which one could seriously suggest that brain data cannot aid in understanding cognition.
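The logic of RSA can be sketched as follows: each region's responses to a set of conditions are summarized as pairwise dissimilarities (a representational dissimilarity matrix, RDM), and regions or models are then compared at this second-order level, abstracting away from their particular voxel codes. The response patterns below are invented for illustration.

```python
# Sketch of representational similarity analysis (RSA): build an RDM per
# region, then correlate the RDMs. All patterns are invented.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def rdm(patterns):
    """Upper-triangle dissimilarities (1 - r) between condition patterns."""
    names = sorted(patterns)
    return [1 - pearson(patterns[a], patterns[b])
            for i, a in enumerate(names) for b in names[i + 1:]]

# Hypothetical responses of two regions to three stimulus conditions.
# Note the regions need not even have the same number of voxels.
region_A = {"cat": [1, 0, 1, 0], "dog": [1, 0, 0.8, 0.1], "chair": [0, 1, 0, 1]}
region_B = {"cat": [0.2, 0.9, 0.1], "dog": [0.3, 1.0, 0.2], "chair": [1.0, 0.1, 0.9]}

# A second-order correlation between RDMs asks whether the two regions
# carry a similar representational geometry, whatever their voxel codes.
similarity = pearson(rdm(region_A), rdm(region_B))
print(round(similarity, 2))
```

Here both hypothetical regions treat cat and dog as similar and chair as different, so their representational geometries agree even though their voxel-level patterns do not.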
The Logic of Brain Imaging
Neuroimaging is a highly indirect measurement technique. What is the validity of inferences one can make on the basis of data from functional imaging? People routinely use fMRI to infer the function of regions of activation based on psychological models. Such “forward inferences” are relatively unproblematic (or as unproblematic as the psychological models they depend on; see the section on Cognitive Ontology below). Reverse inference, however, has been widely criticized as erroneous. In reverse inference, the presence of a psychological function contributing to a complex task is inferred from the observation of activity in a brain area that has been previously associated with that function. As a deductive inference, the logic is clearly flawed: it assumes that brain regions have unique functions, whereas we know that brain regions (at least at the resolution of neuroimaging) often subserve multiple functions (Anderson 2010; Poldrack 2006). Therefore, one cannot deduce involvement of a particular function from evidence of regional activity. Many have thus concluded that reverse inference is always illegitimate. However, this view is too strong. Bayesian reasoning can be used to provide probabilistic epistemic warrant for reverse inferences (Poldrack 2011). Taking into account task- or network-related contextual factors can also increase the likelihood that a hypothesis about function based on reverse inference is correct (Klein 2012). In fact, both reverse and forward inferences can be useful, but neither is demonstrative. As was the lesson with neuropsychology, interpretation in neuroimaging involves making an inductively risky inference to the best explanation. There are no crucial experiments, even in neuroimaging.
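The Bayesian point can be illustrated with a toy calculation (all probabilities are invented): even if a region reliably activates when a given function is engaged, the posterior probability of that function given activation can remain modest when other processes also drive the region.

```python
# Toy Bayesian reverse inference (invented numbers).
# Hypothesis H: the task engages, say, fear processing.
# Evidence  E: amygdala activation is observed.

p_h = 0.5                  # prior that the task engages the function
p_e_given_h = 0.9          # region usually activates when function is engaged
p_e_given_not_h = 0.6      # ...but it often activates for other reasons too

# Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 2))  # → 0.6: activation raises the probability only modestly
```

The inference is thus probabilistic, not demonstrative: its strength depends on the base rate at which other functions recruit the same region, which is exactly what large-scale meta-analytic databases can help estimate.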
Cognitive Ontology
Philosophers have raised the disquieting possibility that our folk psychological understanding of the human mind is mistaken. Perhaps more troubling still is the possibility that neuroimaging is structured so that it can only support and never dislodge these mistaken cognitive ontologies. As Poldrack aptly notes, no matter what functional decomposition one posits, contrasts between tasks will cause something to “light up” (Poldrack 2010). Can neuroimaging (augmented by information from other fields) provide us the necessary tools to bootstrap ourselves out of a mistaken ontology? Or are we doomed to tinker with theories that can never converge on the truth? To address this issue, we might develop data-driven, theory-neutral methods for making sense of imaging data. Such work is in its infancy, but early attempts suggest that our intuitive ontologies are not the best explanations of brain data (Anderson 2015; Poldrack, Halchenko, and Hanson 2009). It remains to be seen whether the best interpretation of these and other results is that the brain’s ontology crosscuts our current cognitive ontology.
3. Looking Outward: What Can Philosophy of Neuroscience Tell Us About Science?
There is no greater villain in the history of philosophy, viewed from the point of view of contemporary neuroscience, than René Descartes. This is because Descartes was a dualist, holding that the mind is an altogether distinct substance from the body. Never mind that he held that everything in the non-mental world could be understood ultimately in terms of the lawful interactions of minute particles. Never mind that he constructed possible mechanistic explanations for everything from magnetism and the motion of the planets to sensory transduction and the circulation of the blood or that he thought everything in the behavior of nonhuman animals could be explained mechanistically. His villainy results from questioning whether this mechanistic perspective is sufficient to understand the working of the human mind and from his decision that it was not: in addition to physical mechanisms, minds require souls. Although Descartes is often treated as a villain, one might just as easily see him as expressing a thesis about the explanatory limits of a properly “mechanistic” conception of the world.
In this section, we consider two sets of questions related to this thesis. First, we ask about the unity of neuroscience: the integration of different scientific disciplines in the study of the mind and brain. Second, we ask about the ontology of the multilevel structures common in neuroscientific models and, in particular, whether higher-level things are nothing but lower-level things. In our final section, we turn to potential challenges to the reach of mechanistic explanation when it comes to the domain of the mind.
3.1. Integration and the Unity of Neuroscience
Unity is one of the prized ideals of science. There is, at the moment, no Newton or Darwin to unify our understanding of the nervous system. Is this problematic? Or is the ideal of unification perhaps inappropriate for the neurosciences?
What would a unified theory of the mind-brain even look like? According to a classic picture (Oppenheim and Putnam 1958), the world can be ordered roughly into levels of organization, from high-level phenomena (such as those described by economics) to low-level phenomena, such as those described by quantum mechanics. Things at higher levels are composed of things at lower levels, and theories about higher-level phenomena can be explained in terms of theories at lower, and ultimately fundamental, levels. Following the covering-law (CL) model of explanation, such explanations would take the form of derivations of higher-level theories from lower-level theories (Nagel 1961; Schaffner 1993). Because the theories about different levels are typically expressed using different vocabularies, such deductive unification requires bridge laws linking the two vocabularies by definition or some other way of relating terms. This classic view combines commitments to (1) a tidy correspondence between levels of ontology, levels of theories, and levels of fields, and (2) the idea that all of the levels can be reduced, step by step, to the laws of physics. This sketch of the classic view of unity is a caricature, but it provides a clear contrast to more contemporary accounts.
Bickle (2008), for example, argues that we could dispense entirely with the idea that the phenomena in neuroscience span multiple levels of organization as well as the idea that scientists ought to try to integrate across such levels. According to his “ruthlessly reductive” view, higher level descriptions of the world are, in fact, merely imprecise and vague descriptions of behavior that guide the search for cellular and molecular mechanisms. The unity of neuroscience is achieved not by a stepwise reduction to the lowest level, but by single explanatory links that connect decisions to dopamine or navigation to NMDA receptors. Bickle dispenses entirely with the idea of levels. His view fits well with work in many areas of cellular and molecular neuroscience in which scientists seem to bridge cognition and molecules in a single experimental bound. And, once levels are gone, there is no pressing question of identifying things at higher levels with things at lower levels. The pressing question is merely how the higher level sciences fix our attention on the right molecules.
Another alternative, the mechanistic view, is more a conservative elaboration and emendation than a radical rejection of the classic view. Designed to fit examples drawn from more integrative areas of neuroscience research, the mechanistic view stresses the need to elaborate the causal structure of a mechanism from the perspective of multiple different levels and multiple different techniques and theoretical vocabularies (Machamer, Darden, and Craver 2000). The appropriate metaphor is a mosaic in which individual sciences contribute tiles that together reveal the mechanistic structure of a brain system. Examples of multifield integration are easy to find in neuroscience, but one example concerning the multilevel organization of spatial learning and spatial memory has received particular attention (see Bickle 2008; Churchland and Sejnowski 1992; Craver 2007, 2014; Sullivan 2009, 2010).
For the mechanist, the idea that the nervous system has multiple levels is simply a commitment to the idea that the various activities of the nervous system (e.g., forming a spatial map) can be understood by revealing their underlying mechanisms, that the components and activities of these mechanisms can themselves be so decomposed, and so on. Levels thus understood are local, not monolithic. The spatial learning system is at a higher level than spatial map formation in the hippocampus, and spatial map formation is at a higher level than place cell firing. But this does not imply that all things that psychologists study are at the same level, nor that all capacities will decompose into the same sets of levels (different machines have different parts and contain altogether proprietary levels of organization), nor that there is a single theory that covers all phenomena such as learning and memory. Integration in neuroscience, on this view, is achieved capacity by capacity and not by a grand reduction of a “theory of psychology” to a “theory of neuroscience”: rather, local theories about specific capacities are related to the mechanistic parts for that capacity. Viewed from this vantage point, there is little prospect for, and little apparent need for, a unifying theory that encompasses all psychological phenomena under a concise set of basic laws (as Newton did for motion).
Mechanists have also tended to distance themselves from the classical idea that the unity of neuroscience, such as it is, is defined through stepwise reduction to the most fundamental level. Integration can move up and down across levels, as illustrated earlier, but integration also takes place between fields working on phenomena that are intuitively at the same level. One might combine Golgi staining and electrophysiological recording to understand how electrical signals propagate through the wiring diagram of the hippocampus. In sum, the mechanistic view differs from the classical view in presenting a more local view of levels and a more piecemeal and multidirectional view of how work in different fields, using different techniques and principles, can be integrated without reducing them all to the lowest physico-chemical level of the neuroscience hierarchy (Craver 2007, ch. 6).
To some, however, even this more limited vision of unity is overly optimistic. The ability to combine results from distant corners of neuroscience requires some way of ensuring that everyone is working with the same concepts and constructs. Sullivan (2009) argues that the long-term potentiation (LTP) research program is plagued by a multiplicity of experimental protocols and lab norms that make it difficult, if not impossible, to compare what is discovered in one lab with what is discovered in another. When it is discovered that a gene knockout disrupts LTP produced by one experimental protocol (e.g., high-frequency stimulation), one cannot simply assume that it will disrupt the LTP produced in another protocol (e.g., theta burst stimulation). If the same construct is operationalized differently in different labs, one cannot assume that results described in a similar vocabulary in fact are results about the same phenomenon. The drive for reliable, repeatable, and well-controlled protocols in the laboratory might intrinsically conflict with the goal of understanding how such phenomena occur “in the wild.” If so, unity might be a good ideal, but there is reason to doubt whether science as currently pursued will genuinely achieve it (see also Sullivan’s discussion of the Morris water maze and its role in this research program).
As we write, a novel and potentially unifying theory of brain function is under consideration. Predictive coding models of brain function postulate that error signals are encoded in feedforward neural activity and that feedback activity contains representational information from higher cortical levels (Friston and Kiebel 2009). Incorporating biological data and insights from Bayesianism and other computational approaches, predictive coding succeeds in unifying a range of psychological and neurobiological phenomena that occur at a number of levels (Hohwy 2014). Evidence is mounting in accord with predictive coding predictions, yet some psychological constructs still seem difficult to weave into predictive coding accounts (Clark 2013). If the promise of predictive coding is borne out, we may have to reconceptualize many of the data of neuroscience, interpret the functional importance of neuroimaging signals differently than we have in the past, and revisit the question of unification.
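The basic computational idea behind predictive coding can be sketched in a few lines (a deliberately simplified, single-node caricature with invented parameters, not the published models): feedback carries a prediction, feedforward activity carries the prediction error, and the internal estimate is adjusted to reduce that error.

```python
# One-node caricature of a predictive-coding update (illustrative only).
# Feedback = prediction; feedforward = error; the estimate is nudged by
# a fraction of the error until the error is minimized.

def settle(estimate, sensory_input, learning_rate=0.5, steps=10):
    for _ in range(steps):
        prediction = estimate                 # feedback signal
        error = sensory_input - prediction    # feedforward error signal
        estimate = estimate + learning_rate * error
    return estimate

# The estimate converges toward the input as prediction error shrinks.
print(round(settle(estimate=0.0, sensory_input=1.0), 3))  # → 0.999
```

Even this toy version makes the key interpretive point vivid: on such accounts, robust feedforward activity signals surprise rather than the stimulus itself, which is why predictive coding would force a reinterpretation of many imaging signals.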
3.2. Ontology
Grant that the capacities of the nervous system can be decomposed into capacities of parts of the nervous system, and so on. A second question concerns the ontology of such multilevel structures. Isn’t every higher-level capacity in some sense just the organized behavior of the most fundamental items? Most philosophers say yes, but they disagree with one another about what exactly they are assenting to.
There appears to be broad consensus that wholes can often do things that the individual parts cannot do. Cells generate action potentials. A single ion channel cannot generate an action potential. For some, these considerations alone warrant the claim that higher-level phenomena are “more than” their parts: they can do things the parts cannot do, and they have properties that are not simple aggregations of the properties of the parts; they are organized (Wimsatt 1997).
There is still considerable room for disagreement, however, when we ask: is there anything more to the behavior of the mechanism as a whole than the organized activities of its component parts (in context)? For some, the answer is no: the behaviors of a type of mechanism are identical to the types of organized entities and activities that underlie them (e.g., Polger 2004). This type-identity thesis explains why cognitive capacities are correlated with activations of physical mechanisms. It also fits with the reductionist component of the classical model of the unity of science: these type-identities are exactly the kind of thing expressed in the bridge laws linking the vocabularies for describing different levels. Finally, the thesis avoids the problem of top-down causation discussed later.
The type-identity thesis faces the challenge of multiple realizability: that higher level kinds do not generally map neatly onto the kinds identified from the perspective of lower level mechanisms. Behaviors or capacities are typically multiply realizable: they can typically be realized by an innumerably large number of mechanisms that differ from one another in innumerable ways (see, e.g., Marder and Bucher 2007). The mapping of higher-level to lower-level capacities is one to many.
Here, the metaphysical options proliferate. Some hold out for type-identity, claiming that genuine multiple realization is rare (Shapiro 2008) or reflects a failure to match grains of analysis (Bechtel and Mundale 1999; see also Aizawa 2009). None of this poses a principled problem to type-identity.
Others opt for a form of token identity, holding that each individual manifestation of a higher-level capacity is identical to a particular set of organized activities among lower-level components. Yet it is unclear that the idea of token identity can be expressed coherently without collapsing into the thesis of type identity (see Kim 1996). The discussion of identity and its formulation is well beyond the focus of this chapter (see Smart 2007).
Still others have opted for a third way: nonreductive physicalism. According to this view, higher-level properties or capacities are irreducible to (not type-identical with) lower-level properties and capacities; they have causal powers of their own, but they nonetheless are determined by (supervene upon) the most fundamental structure of the world.
Nonreductive physicalism has some virtues. First, it acknowledges the problem of multiple realization and, in fact, uses it as a premise in an argument for the autonomy of higher-level causes. Second, the view preserves the idea that higher-level behaviors are real, not dim images of a fundamentally atomic reality (as Bickle might have it). And third, it preserves a general commitment to physicalism: what goes on at higher levels is ultimately grounded in what happens at the physical level.
Despite the prima facie plausibility of this view to many, it faces a persistent challenge to explain how higher level phenomena can have causal powers and so have a legitimate claim to being real at all (Kim 2000). The challenge might be put as a dilemma: either higher level capacities have effects over and above the effects of their lower level realizers or they do not. If they do not, then their claim to exist is tenuous at best. If they do, then their effects would appear to be entirely redundant with the effects of the physical causes at play. If every physical effect has a complete physical cause, it is not clear what the higher level capacities have to add. The nonreductive physicalist would appear to be committed to the idea that most events are multiply overdetermined by a set of redundant causes. For some, such as Kim (2000), this is a most unattractive world picture. For others (e.g., Woodward 2003; and for neuroscience, Craver 2007, ch. 6), the sense that multiple overdetermination is unattractive rests on a mistaken view of causation.
What of Bickle’s idea that higher level capacity descriptions merely guide our attention to lower level mechanisms? Many of the details about the molecules, their precise locations, their momenta, and what have you don’t make any difference whatsoever to how things at higher levels go. The lower level mechanisms are lower level mechanisms in virtue of the fact that they contribute in some way to a higher level capacity that we take to be significant. The cell is a blooming buzzing confusion; mechanistic order is imposed selectively by focusing on just those lower level interactions and features that make a difference to the higher level capacities (such as spatial learning). If so, higher-level capacities are not pointers to those mechanisms; higher level capacities partly constitute the mechanisms as the mechanisms they are.
The issues in this section are quite general and have nothing whatsoever to do with the particular topic area of neuroscience. Hierarchical organization and nearly decomposable structure appear to be features of systems generally. One might just as easily raise these kinds of questions about the generation of action potentials or the opening of ion channels. From the perspective of physics, these structures are remote from, and multiply realized in, the fundamental structures of the world.
4. Looking Across: Neuroscientific Approaches to Philosophical Questions
Over the past hundred years neuroscience has had remarkable success in uncovering diverse mechanisms of the central nervous system at and across multiple levels of organization. Despite this, there are far fewer solved than unsolved problems. And among these unsolved problems are some that seem especially recalcitrant to a solution. These problems are recalcitrant not so much because we lack the relevant data but because we have no idea what sort of evidence might be relevant. They are recalcitrant not so much because their complexity defies our understanding but because we can’t seem to imagine any way that even an arbitrarily complex mechanism could possibly do what it must to explain the phenomenon. They are recalcitrant, finally, not so much because the relevant science has yet to be done but because something about the phenomenon appears to evade formulation within the language and explanatory building blocks that science offers. It is often with these problems that we find the closest nexus with philosophical puzzles.
4.1. Free Will/Agency
The philosophical problem of free will is generally thought to be tied to the metaphysical question of determinism, although, since Hume, it has been recognized that indeterminism poses an equally troubling background for our commonsense conception of freedom. Nonetheless, libertarian solutions to the problem of free will ground freedom in the existence of indeterministic events in the brain (e.g., Kane 1999). Although some have supposed that neuroscience will answer the question of determinism, it will not (Roskies 2006). At most, it will illustrate that neural mechanisms subserve behavior. These illustrations may fuel philosophical discussion, but, ultimately, this is a debate that has raged since at least the reigning days of atomism. The hard problems in this area involve working out the implications of the ever-encroaching mechanistic explanation of human behavior for our ordinary concepts of choice, decision-making, reasons, responsibility, and a whole host of related concepts intimately bound up with a folk conception of human freedom.
Although not specifically aimed at the philosophical problem of freedom, the neuroscience of volition has flourished (Haggard 2008; Roskies 2010a). Voluntary action involves frontoparietal networks, and endogenously generated actions involve the pre-supplementary motor area (preSMA). Progress is being made on other aspects of agency, such as decision-making (Gold and Shadlen 2007), task-switching (Dosenbach et al. 2006; Mante, Sussillo, Shenoy, and Newsome 2013), conflict-monitoring, and inhibitory processes (Ridderinkhof, van den Wildenberg, and Brass 2014). Ultimately, however, there is challenging philosophical work to be done to specify the relationship between these subpersonal mechanisms and personal-level decisions and responsibilities.
One thread of research that has generated a great deal of discussion in philosophy began midcentury and concerns the causal relevance of conscious decision-making to action. The research focuses on an electroencephalographic (EEG) signal, the readiness potential (RP), in the human brain. Libet argued that the RP, which reliably predicts that one is about to spontaneously initiate movement, appears well before subjects report being aware of having decided to move (Libet, Gleason, Wright, and Pearl 1983). He reasoned that the brain “decides” to move before we make the conscious decision to do so and concluded that our conscious decisions are inefficacious, so we lack free will. Although Libet’s basic findings characterizing the RP have been replicated numerous times, many studies have called into question his interpretations and the implications he draws from them. For example, Schlegel et al. have shown that the RP occurs in nonmotor tasks and in tasks that involve preparation but not action (Schlegel et al. 2013). Schurger has shown that averaging signals time-locked to decision in a drift-diffusion model results in a signal that looks like the RP (Schurger, Sitt, and Dehaene 2012). Thus, the interpretation of the RP as evidence of a subpersonal decision to move cannot be maintained. Libet’s method of measuring time of awareness has been criticized (Pockett 2002), as has the nature of the state that is indexed by subjects’ reports (Roskies 2011). Recordings of single-unit activity during a Libet task showed that both movement and time of willing can be decoded from neural data on a single-trial basis and prior to a subject’s judgment of having decided (Fried, Mukamel, and Kreiman 2011). This study demonstrates that information encoding impending action (and decision) is present in the brain prior to movement but is subject to the same criticisms as the original paradigm with respect to measurement of time of willing and the nature of the state measured (Roskies 2011).
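Schurger's point can be illustrated with a simple simulation (a caricature of a leaky stochastic accumulator with invented parameters, not the published model): when noisy trials are aligned to the moment they happen to cross threshold and then averaged, a gradual RP-like ramp appears even though no discrete decision signal was built in.

```python
# Caricature of the Schurger et al. (2012) point: averaging noisy
# accumulator trials time-locked to their threshold crossing produces a
# gradual, RP-like buildup. All parameters are invented.
import random

random.seed(0)

def trial(threshold=1.0, leak=0.05, noise=0.1, drift=0.02, max_t=5000):
    """One leaky stochastic accumulator run; returns activity up to the
    first threshold crossing (or None if it never crosses)."""
    x, history = 0.0, []
    for _ in range(max_t):
        x += drift - leak * x + random.gauss(0, noise)
        history.append(x)
        if x >= threshold:
            return history
    return None

# Collect 100 trials long enough to give a 200-sample pre-crossing epoch.
window = 200
epochs = []
while len(epochs) < 100:
    h = trial()
    if h is not None and len(h) >= window:
        epochs.append(h[-window:])

# Average across epochs, time-locked to the crossing (the last sample).
avg = [sum(e[t] for e in epochs) / len(epochs) for t in range(window)]
print(avg[0] < avg[-1])  # → True: the crossing-locked average ramps up
```

The ramp in the average is an artifact of selecting and aligning the trials at the moment noise happens to reach threshold, which is why such an average cannot by itself be read as a subpersonal "decision" signal.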
Neuroimaging studies showing the ability to predict subjects’ actions seconds prior to decision are offered in the same spirit as Libet but also fail to successfully demonstrate that we lack flexibility in our decisions or that “our brains decide, we don’t.” Neuroimaging studies that show that information is present well before decision show only slightly more predictive power than chance and can be interpreted as merely indicating that decisions we make are sensitive to information encoded in the brain with long temporal dynamics (Soon, Brass, Heinze, and Haynes 2008). Indeed, the finding that brain activity precedes action or even decision is not surprising. Any materialist view of mind would expect brain activity corresponding to any kind of mental process, and one that evolved in time would likely be measurable before its effects. Thus far, no neuroscientific results demonstrate that we lack (or have) free will (see also Baumeister, Mele, and Vohs 2010; Clark, Kiverstein, and Vierkant 2013).
4.2. Mental Causation
A long-standing philosophical question involves whether mental states can play a causal role in action or whether mind is epiphenomenal (Jackson 1982). The current debate is tightly tied to questions of reductionism (see the earlier discussion of Ontology). Neuroscientists have for the most part not grappled with the problem of mental causation in this formulation but have instead begun to work on problems about how executive brain areas may be able to control or affect the functioning of controlled (and so “lower-level”) areas, using “level” in a sense entirely distinct from the one that gives rise to the philosophical problem of mental causation. Tasks involving endogenous manipulation of attention and set switching are frequently used (Dosenbach et al. 2006; Greenberg, Esterman, Wilson, Serences, and Yantis 2010). Attention has been shown to affect lower-level processing, sharpening or potentiating the tuning curves of neurons involved in lower-level task functions (Reynolds and Chelazzi 2004; Reynolds and Heeger 2009). Other work has implicated synchronous neural activity in the modulation of lower-level function (Gregoriou, Gotts, Zhou, and Desimone 2009). The source of these “endogenous” signals is of particular interest to philosophers. Recent work suggests that the dynamical properties of a system may themselves govern cued set-switching (Mante et al. 2013). This interesting work raises intriguing questions about how such results fit into traditional views of mind and executive function.
4.3. Intentionality
The nature of intentionality has been a preoccupation of philosophers since Brentano coined the term. How do concepts, words, or mental representations mean anything? Although early thinkers despaired of a naturalistic solution to this problem, later thinkers pursued various naturalistic accounts (Dretske 1999; Fodor 1980; Millikan 1987). The problem of intentionality or meaning has not been solved, but as neuroscience and psychology continue to articulate the representational vehicles that the brain constructs on the basis of experience and the way in which those are shaped by expectations, the broad outlines of a possible solution to the problem might be taking shape (Clark 2013; Hohwy 2014). These accounts draw from the insights of causal theorists, as well as from those of functional role theorists. For example, reference to concrete objects seems best explained by a causal account of the genesis of the representations used and a counterfactual account to determine the scope of the representation. Data from brain science are largely consistent with theories of embodiment (Varela, Thompson, and Rosch 1992) because tokening a representation in the absence of its object seems to activate regions also active when interacting with the object. But many objects are not concrete and cannot be so represented. Here, evidence from brain science is broadly consistent with functional role theories (Huth, Nishimoto, Vu, and Gallant 2012). The language of thought theory, at least as conceived classically by Fodor, seems to make the least contact with brain science. But neuroscientists have made much more progress in articulating the nature of object representation and visual recognition than they have in illuminating the nature of propositional attitudes. Causal theories and embodiment theories also fit well with predictive coding accounts.
A particular difficulty should be flagged here. When we speak of a neural representation, we often describe a pattern of neural activity as “meaning” or “standing for” the stimulus with which it best correlates. This manner of speaking should not be taken to suggest that neurons or patterns of activity “understand” anything at all; rather, it is often nothing more than the expression of one or more causal or correlational relationships: that the pattern of activity is caused by its preferred stimulus, that it covaries with some behavioral output, or that it is correlated with some significant environmental factor in virtue of some less direct causal route (for reviews, see Haugeland 1998; Ramsey 2007). Some have suggested that Brentano’s challenge can be met with a purely causal theory of content (e.g., Fodor 1990). Others have appealed to natural selection to explain the normative content of representations (Garson 2012; Ryder 2004, 2009a, 2009b). Still other philosophers deny that representation in this neural sense, understood in causal or correlational terms, has the resources to capture the normative significance of mental content: thus, there remains a gap between theories of neural representation and theories of how thinkers have thoughts about things (McDowell 1994).
4.4. Moral Cognition and Responsibility
One could argue that imaging neuroscience began to make direct contact with philosophically motivated questions with the publication of Greene’s study of brain activity during judgments about moral dilemmas (Greene, Sommerville, Nystrom, Darley, and Cohen 2001). Although a simplistic interpretation of brain activity as mapping onto normative ethical positions has been roundly criticized, the results from imaging and other studies have led to a rapid growth in understanding of how the brain represents and processes information relevant to complex decision-making generally (not only moral decisions). Moral judgments do not involve special-purpose moral machinery but instead recruit areas involved in decision-making and cognition more generally. Insight into moral cognition can be gained by seeing how permanent or temporary disruptions to neural machinery influence judgments. For example, damage to ventromedial cortex leads to moral judgments that deviate from the norm in being more “utilitarian” (Young, Bechara, et al. 2010), whereas disruption of the temporoparietal junction (TPJ), which is involved in social cognition, leads to moral judgments that are less sensitive to the mental state of the offender (Young, Camprodon, et al. 2010). Neural data might also explain many aspects of our intuitions about punishment (Cushman 2008). Studies of individuals with episodic amnesia have shown that many cognitive capacities plausibly linked to human choice and judgment (such as an understanding of time, valuation of future rewards, executive control, self-knowledge, and the Greene effect of personal scenarios on moral judgments) remain intact despite global and even lifelong deficits in the ability to remember the experiences of one’s life and to project oneself vividly into the future (Craver 2012; Craver, Kwan, Steindam, and Rosenbaum 2014; Kwan et al. 2012, 2013).
Regional patterns of activation are not the only data relevant to understanding moral behavior. Insights from neurochemistry and cellular neurobiology also inform our understanding of morality, explaining how prosocial behavior depends on neural function and perhaps supplying a framework for naturalistic theorizing about the human development of morality (P. S. Churchland 2012). Despite some confusion on this issue, no descriptive data from neuroscience have provided a rationale for preferring one normative philosophical theory to another, and none can without the adoption of some kind of normative assumptions (Berker 2009; Kahane 2012). However, one can see neural foundations that are consistent with prominent elements of all the major philosophical contenders for substantive normative ethical theories: utilitarianism, deontology, and virtue ethics. That said, in probing the neural basis of prudential and moral reasoning, neuroscientists are seeking to reveal not only the mechanistic basis by which moral decisions are made but also the mechanistic basis of being a moral decision-maker. As such, they require a number of substantive assumptions about the kind of thing one must be capable of doing in order to count as a maker of decisions, let alone a moral agent, at all.
4.5. Consciousness and Phenomenality
Consciousness, once the province solely of philosophers, has become a respectable topic for research in neuroscience, starting with Crick and Koch’s publications on the topic in the 1990s. Their exhortation to find the neural correlates of consciousness has been heeded in several different ways. First, in medicine, novel neuroimaging techniques have been used to provide evidence indicating the presence of consciousness in brain-damaged and unresponsive patients. For instance, a few patients classified as being in a persistent vegetative state showed evidence of command-following in an imagery task (Owen et al. 2006). In further studies, the same group showed that behaviorally unresponsive patients were able to answer questions correctly using the imagery tasks as proxies (Naci, Cusack, Jia, and Owen 2013). Most recently, they contend that normal executive function can be assessed from brain activity during movie viewing and that the presence of this executive function is indicative of consciousness (Naci, Cusack, Anello, and Owen 2014). Their study showed that coherent narratives evoked more frontal activity than did scrambled movies and that this activity is related to executive function. The authors of these studies have claimed that the results show residual consciousness, yet this claim rests on the strong assumption that differential response to these commands is indicative of consciousness.
The approach raises a host of epistemological questions about how residual mindedness in the absence of behavior might be detected with brain imaging devices and ethical questions about what is owed to persons in this state of being.
A more direct approach contrasts brain activity during liminal and subliminal presentation, with the hope of identifying the differences in brain activity that accompany awareness (Rees, Kreiman, and Koch 2002). These attempts, although neuroscientifically interesting, are beset by a number of interpretational problems (Kouider and Dehaene 2007). Several other approaches have kinship with Baars’s global workspace theory (Baars 1993). One long-standing line of work links synchronous and oscillatory activity with binding and consciousness (Gray, Engel, König, and Singer 1990; Singer 1998). A number of studies have corroborated the idea that corticothalamic feedback is necessary for conscious awareness of stimuli. Tononi tries to quantify consciousness as a measure of information integration (Oizumi, Albantakis, and Tononi 2014). The available data do not (yet) suffice to distinguish between the various theoretical accounts of consciousness, all of which posit central roles for feedback, coherence, oscillations, and/or information-sharing. Moreover, these views may explain functional aspects of consciousness (what Block has called “access consciousness”), but they do not explain the more puzzling aspect of phenomenality.
The “hard” problem of phenomenal consciousness concerns the experiential aspect of human mental life (Chalmers 1995). The term “qualia” is often used for the “raw feels” one has in such experiences: the redness of the red, the cumin’s sweet spiciness, the piercing sound of a drill through an electric guitar. There is, as Thomas Nagel described it, something it is like to have these experiences; they feel a certain way “from the inside.” Why is this problem thought to be so difficult? Neuroscience has made tremendous progress in understanding the diverse mechanisms by which conscious experiences occur. Yet there appears to be nothing in the mechanisms that explains, or perhaps even could explain, the experiences that are correlated with their operation. Chalmers made this point vivid by asking us to consider the possibility of philosophical zombies, who have brains and behaviors just like ours yet lack any conscious experiences. If zombies are conceivable, then all the mechanisms and behaviors might be the same and conscious experience might be absent. If so, then conscious experience is not identical to any physical mechanism.
Many attempts have been made to respond to this hard problem of consciousness, and it would be impossible to canvass them all here. One could be an eliminativist and reject the hard problem on the grounds that it asks us to explain something that is not real. One could be an optimist and insist that the hard problem seems so hard only because we have not yet figured out how to crack it. For those committed to a form of physicalism, or even to the multilevel mechanistic picture, these problems are exciting challenges for the science of the mind-brain, terra incognita that will fuel interest in the brain for generations to come. How one thinks about the possibility of a science of consciousness is influenced by a host of deep metaphysical commitments about the structure of the world.
Philosophical engagement with the neurosciences is of relatively recent origin, perhaps beginning in earnest with the work of Paul M. Churchland (1989) and Patricia S. Churchland (1989), who predicted that philosophical issues would look different as philosophers began to acquaint themselves with the deliverances of the neurosciences. In the nearly three decades since, this prediction has been borne out. The neurosciences now have a home in the philosophy of science and the philosophy of mind, as well as in moral psychology, ethics, and aesthetics. It is a significant challenge for these philosophers to stay up to date with the latest work in a large and rapidly changing field while, at the same time, maintaining the critical and distanced perspective characteristic of philosophers. Yet this distance must be maintained if philosophy is to play a useful, constructive role in the way that neuroscience proceeds and the way that its findings are understood in light of our need to find a place for ourselves in the natural world. Precisely because philosophers are not primarily caught up in the practice of designing experiments or in the social structure of the sciences, and precisely because they bring the resources of conceptual analysis and a long tradition of thinking about the mind that are not well known to experimentalists, philosophers have much to contribute to this discussion.
ALR would like to thank Jonathan Kubert for research and technical assistance.

References
Aizawa, K. (2009). “Neuroscience and Multiple Realization: A Reply to Bechtel and Mundale.” Synthese 167(3): 493–510.
Anderson, M. L. (2010). “Neural Reuse: A Fundamental Organizational Principle of the Brain.” Behavioral and Brain Sciences 33(4): 245–266.
Anderson, M. L. (2015). “Mining the Brain for a New Taxonomy of the Mind.” Philosophy Compass 10(1): 68–77. doi: 10.1111/phc3.12155
Baars, B. J. (1993). A Cognitive Theory of Consciousness (Cambridge/New York: Cambridge University Press).
Batterman, R. W., and Rice, C. C. (2014). “Minimal Model Explanations.” Philosophy of Science 81: 349–376.
Baumeister, R., Mele, A., and Vohs, K., eds. (2010). Free Will and Consciousness: How Might They Work? 1st ed. (New York: Oxford University Press).
Bechtel, W. (2009). “Generalization and Discovery by Assuming Conserved Mechanisms: Cross-Species Research on Circadian Oscillators.” Philosophy of Science 76(5): 762–773.
Bechtel, W., and Mundale, J. (1999). “Multiple Realizability Revisited: Linking Cognitive and Neural States.” Philosophy of Science 66: 175–207.
Berker, S. (2009). “The Normative Insignificance of Neuroscience.” Philosophy & Public Affairs 37(4): 293–329.
Bickle, J. (2008). Psychoneural Reduction: The New Wave (Cambridge, MA: MIT Press).
Biswal, B., Zerrin Yetkin, F., Haughton, V. M., and Hyde, J. S. (1995). “Functional Connectivity in the Motor Cortex of Resting Human Brain Using Echo-Planar MRI.” Magnetic Resonance in Medicine 34(4): 537–541.
Biswal, B. B., Kylen, J. V., and Hyde, J. S. (1997). “Simultaneous Assessment of Flow and BOLD Signals in Resting-State Functional Connectivity Maps.” NMR in Biomedicine 10(4–5): 165–170.
Bogen, J. (2002). “Epistemological Custard Pies from Functional Brain Imaging.” Philosophy of Science 69(S3): S59–S71.
Bogen, J. (2005). “Regularities and Causality; Generalizations and Causal Explanations.” Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 36(2): 397–420.
Bogen, J. (2008). “The Hodgkin-Huxley Equations and the Concrete Model: Comments on Craver, Schaffner, and Weber.” Philosophy of Science 75(5): 1034–1046.
Buckholtz, J. W., and Meyer-Lindenberg, A. (2008). “MAOA and the Neurogenetic Architecture of Human Aggression.” Trends in Neurosciences 31(3): 120–129. doi: 10.1016/j.tins.2007.12.006
Buxton, R. B. (2009). Introduction to Functional Magnetic Resonance Imaging: Principles and Techniques, 2nd ed. (Cambridge/New York: Cambridge University Press).
Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., Taylor, A., and Poulton, R. (2002). “Role of Genotype in the Cycle of Violence in Maltreated Children.” Science 297(5582): 851–854. doi: 10.1126/science.1072290
Chalmers, D. J. (1995). “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies 2: 200–219.
Chirimuuta, M. (2014). “Minimal Models and Canonical Neural Computations: The Distinctness of Computational Explanation in Neuroscience.” Synthese 191(2): 127–153.
Churchland, P. M. (1981). “Eliminative Materialism and the Propositional Attitudes.” Journal of Philosophy 78(2): 67–90. doi: 10.2307/2025900
Churchland, P. M. (1989). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science (Cambridge, MA: MIT Press).
Churchland, P. S. (1989). Neurophilosophy: Toward a Unified Science of the Mind-Brain (Cambridge, MA: MIT Press).
Churchland, P. S. (2012). Braintrust: What Neuroscience Tells Us about Morality. Reprint ed. (Princeton, NJ: Princeton University Press).
Churchland, P. S., and Sejnowski, T. J. (1992). The Computational Brain (Cambridge, MA: MIT Press).
Clark, A. (2013). “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science.” Behavioral and Brain Sciences 36(3): 181–204. doi: 10.1017/S0140525X12000477
Clark, A., Kiverstein, J., and Vierkant, T., eds. (2013). Decomposing the Will (Oxford/New York: Oxford University Press).
Coltheart, M. (2006). “What Has Functional Neuroimaging Told Us About the Mind (So Far)?” (Position paper presented to the European Cognitive Neuropsychology Workshop, Bressanone, 2005). Cortex 42(3): 323–331.
Coltheart, M. (2013). “How Can Functional Neuroimaging Inform Cognitive Theories?” Perspectives on Psychological Science 8(1): 98–103. doi: 10.1177/1745691612469208
Coltheart, M., and Davies, M. (2003). “Inference and Explanation in Cognitive Neuropsychology.” Cortex 39(1): 188–191.
Craver, C. F. (2006). “When Mechanistic Models Explain.” Synthese 153(3): 355–376.
Craver, C. F. (2007). Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience (New York: Oxford University Press).
Craver, C. F. (2008). “Physical Law and Mechanistic Explanation in the Hodgkin and Huxley Model of the Action Potential.” Philosophy of Science 75(5): 1022–1033.
Craver, C. F. (2012). “A Preliminary Case for Amnesic Selves: Toward a Clinical Moral Psychology.” Social Cognition 30(4): 449–473.
Craver, C. F. (2014). “Levels.” Open MIND Project. http://open-mind.net/papers/levels
Craver, C. F. (forthcoming). “Thinking About Interventions: Optogenetics and Makers’ Knowledge.” In J. Woodward and K. Waters (eds.), Causation in Biology and Philosophy (Minnesota Studies in the Philosophy of Science).
Craver, C. F., Kwan, D., Steindam, C., and Rosenbaum, R. S. (2014). “Individuals with Episodic Amnesia Are Not Stuck in Time.” Neuropsychologia 57: 191–195.
Cushman, F. (2008). “Crime and Punishment: Distinguishing the Roles of Causal and Intentional Analyses in Moral Judgment.” Cognition 108(2): 353–380. doi: 10.1016/j.cognition.2008.03.006
Davies, M. (2010). “Double Dissociation: Understanding Its Role in Cognitive Neuropsychology.” Mind & Language 25(5): 500–540. doi: 10.1111/j.1468-0017.2010.01399.x
Dayan, P., and Abbott, L. F. (2001). Theoretical Neuroscience: Computational Modeling of Neural Systems (Cambridge, MA: MIT Press).
Dosenbach, N. U. F., Visscher, K. M., Palmer, E. D., Miezin, F. M., Wenger, K. K., Kang, H. C., et al. (2006). “A Core System for the Implementation of Task Sets.” Neuron 50(5): 799–812. doi: 10.1016/j.neuron.2006.04.031
Dretske, F. I. (1999). Knowledge and the Flow of Information (Stanford, CA: Center for the Study of Language and Information).
Fodor, J. A. (1974). “Special Sciences (Or: The Disunity of Science as a Working Hypothesis).” Synthese 28(2): 97–115.
Fodor, J. A. (1980). The Language of Thought, 1st ed. (Cambridge, MA: Harvard University Press).
Fodor, J. A. (1983). The Modularity of Mind: An Essay on Faculty Psychology (Cambridge, MA: Bradford/MIT Press).
Fodor, J. A. (1990). A Theory of Content and Other Essays (Cambridge, MA: MIT Press).
Fried, I., Mukamel, R., and Kreiman, G. (2011). “Internally Generated Preactivation of Single Neurons in Human Medial Frontal Cortex Predicts Volition.” Neuron 69(3): 548–562.
Friston, K., and Kiebel, S. (2009). “Predictive Coding under the Free-Energy Principle.” Philosophical Transactions of the Royal Society of London B: Biological Sciences 364(1521): 1211–1221.
Garson, J. (2012). “Function, Selection, and Construction in the Brain.” Synthese 189(3): 451–481.
Glymour, C. (1994). “On the Methods of Cognitive Neuropsychology.” British Journal for the Philosophy of Science 45(3): 815–835.
Gold, J. I., and Shadlen, M. N. (2007). “The Neural Basis of Decision Making.” Annual Review of Neuroscience 30(1): 535–574.
Gray, C. M., Engel, A. K., König, P., and Singer, W. (1990). “Stimulus-Dependent Neuronal Oscillations in Cat Visual Cortex: Receptive Field Properties and Feature Dependence.” European Journal of Neuroscience 2(7): 607–619.
Greenberg, A. S., Esterman, M., Wilson, D., Serences, J. T., and Yantis, S. (2010). “Control of Spatial and Feature-Based Attention in Frontoparietal Cortex.” Journal of Neuroscience 30(43): 14330–14339.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). “An fMRI Investigation of Emotional Engagement in Moral Judgment.” Science 293(5537): 2105–2108. doi: 10.1126/science.1062872
Gregoriou, G. G., Gotts, S. J., Zhou, H., and Desimone, R. (2009). “High-Frequency, Long-Range Coupling Between Prefrontal and Visual Cortex During Attention.” Science 324(5931): 1207–1210. doi: 10.1126/science.1171402
Haggard, P. (2008). “Human Volition: Towards a Neuroscience of Will.” Nature Reviews Neuroscience 9(12): 934–946. doi: 10.1038/nrn2497
Hanson, S. J., and Bunzl, M., eds. (2010). Foundational Issues in Human Brain Mapping. New ed. (Cambridge, MA: Bradford).
Haugeland, J. (1998). Having Thought: Essays in the Metaphysics of Mind (Cambridge: Cambridge University Press).
Haxby, J. V., Connolly, A. C., and Guntupalli, S. J. (2014). “Decoding Neural Representational Spaces Using Multivariate Pattern Analysis.” Annual Review of Neuroscience 37(1): 435–456.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., and Pietrini, P. (2001). “Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex.” Science 293(5539): 2425–2430.
Haxby, J. V., Guntupalli, S. J., Connolly, A. C., Halchenko, Y. O., Conroy, B. R., Gobbini, M. I., et al. (2011). “A Common, High-Dimensional Model of the Representational Space in Human Ventral Temporal Cortex.” Neuron 72(2): 404–416.
Hempel, C. (1965). “Aspects of Scientific Explanation.” In Aspects of Scientific Explanation and Other Essays (New York: The Free Press).
Hodgkin, A. L., and Huxley, A. F. (1952). “A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve.” Journal of Physiology 117(4): 500–544.
Hohwy, J. (2014). The Predictive Mind (Oxford/New York: Oxford University Press).
Huth, A. G., Nishimoto, S., Vu, A. T., and Gallant, J. L. (2012). “A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories Across the Human Brain.” Neuron 76(6): 1210–1224. doi: 10.1016/j.neuron.2012.10.014
Ioannidis, J. (2005). “Why Most Published Research Findings Are False.” PLoS Medicine 2(8): e124.
Jackson, F. (1982). “Epiphenomenal Qualia.” Philosophical Quarterly 32(127): 127–136. doi: 10.2307/2960077
Kahane, G. (2012). “On the Wrong Track: Process and Content in Moral Psychology.” Mind & Language 27(5): 519–545. doi: 10.1111/mila.12001
Kane, R. (1999). “Responsibility, Luck, and Chance: Reflections on Free Will and Indeterminism.” Journal of Philosophy 96(5): 217–240. doi: 10.2307/2564666
Kanwisher, N., McDermott, J., and Chun, M. M. (1997). “The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception.” Journal of Neuroscience 17(11): 4302–4311.
Kaplan, D. M., and Craver, C. F. (2011). “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective.” Philosophy of Science 78(4): 601–627.
Kim, J. (1996). “Philosophy of Mind.” http://philpapers.org/rec/KIMPOM-2
Kim, J. (2000). Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation (Cambridge, MA: MIT Press).
Klein, C. (2012). “Cognitive Ontology and Region- Versus Network-Oriented Analyses.” Philosophy of Science 79(5): 952–960.
Kouider, S., and Dehaene, S. (2007). “Levels of Processing During Non-Conscious Perception: A Critical Review of Visual Masking.” Philosophical Transactions of the Royal Society of London B: Biological Sciences 362(1481): 857–875. doi: 10.1098/rstb.2007.2093
Kriegeskorte, N., Mur, M., and Bandettini, P. (2008). “Representational Similarity Analysis—Connecting the Branches of Systems Neuroscience.” Frontiers in Systems Neuroscience 2(November). doi: 10.3389/neuro.06.004.2008
Kwan, D., Craver, C. F., Green, L., Myerson, J., Boyer, P., and Rosenbaum, R. S. (2012). “Future Decision-Making Without Episodic Mental Time Travel.” Hippocampus 22(6): 1215–1219.
Kwan, D., Craver, C. F., Green, L., Myerson, J., and Rosenbaum, R. S. (2013). “Dissociations in Future Thinking Following Hippocampal Damage: Evidence from Discounting and Time Perspective in Episodic Amnesia.” Journal of Experimental Psychology: General 142(4): 1355.
Levy, A. (2014). “What Was Hodgkin and Huxley’s Achievement?” British Journal for the Philosophy of Science 65(3): 469–492.
Levy, A., and Bechtel, W. (2013). “Abstraction and the Organization of Mechanisms.” Philosophy of Science 80(2): 241–261.
Libet, B., Gleason, C. A., Wright, E. W., and Pearl, D. K. (1983). “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential).” Brain 106(3): 623–642. doi: 10.1093/brain/106.3.623
Logothetis, N. K., and Wandell, B. A. (2004). “Interpreting the BOLD Signal.” Annual Review of Physiology 66(1): 735–769. doi: 10.1146/annurev.physiol.66.082602.092845
Machamer, P., Darden, L., and Craver, C. F. (2000). “Thinking About Mechanisms.” Philosophy of Science 67: 1–25.
Mante, V., Sussillo, D., Shenoy, K. V., and Newsome, W. T. (2013). “Context-Dependent Computation by Recurrent Dynamics in Prefrontal Cortex.” Nature 503(7474): 78–84. doi: 10.1038/nature12742
Marder, E., and Bucher, D. (2007). “Understanding Circuit Dynamics Using the Stomatogastric Nervous System of Lobsters and Crabs.” Annual Review of Physiology 69: 291–316.
McDowell, J. (1994). “The Content of Perceptual Experience.” Philosophical Quarterly 44: 190–205.
Millikan, R. G. (1987). Language, Thought, and Other Biological Categories: New Foundations for Realism (Cambridge, MA: MIT Press).
Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. L., Mason, R. A., and Just, M. A. (2008). “Predicting Human Brain Activity Associated with the Meanings of Nouns.” Science 320(5880): 1191–1195. doi: 10.1126/science.1152876
Naci, L., Cusack, R., Anello, M., and Owen, A. M. (2014). “A Common Neural Code for Similar Conscious Experiences in Different Individuals.” Proceedings of the National Academy of Sciences of the United States of America 111(39): 14277–14282. doi: 10.1073/pnas.1407007111
Naci, L., Cusack, R., Jia, V. Z., and Owen, A. M. (2013). “The Brain’s Silent Messenger: Using Selective Attention to Decode Human Thought for Brain-Based Communication.” Journal of Neuroscience 33(22): 9385–9393. doi: 10.1523/JNEUROSCI.5577-12.2013
Nagel, E. (1961). The Structure of Science: Problems in the Logic of Scientific Explanation (New York: Harcourt, Brace and World).
Nestler, E. J., and Hyman, S. E. (2010). “Animal Models of Neuropsychiatric Disorders.” Nature Neuroscience 13(10): 1161–1169. doi: 10.1038/nn.2647
Norman, K. A., Polyn, S. M., Detre, G. J., and Haxby, J. V. (2006). “Beyond Mind-Reading: Multi-Voxel Pattern Analysis of fMRI Data.” Trends in Cognitive Sciences 10(9): 424–430. doi: 10.1016/j.tics.2006.07.005
Oizumi, M., Albantakis, L., and Tononi, G. (2014). “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.” PLoS Computational Biology 10(5). doi: 10.1371/journal.pcbi.1003588
Oppenheim, P., and Putnam, H. (1958). “Unity of Science as a Working Hypothesis.” Minnesota Studies in the Philosophy of Science (Minneapolis: University of Minnesota Press).
Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., and Pickard, J. D. (2006). “Detecting Awareness in the Vegetative State.” Science 313(5792): 1402. doi: 10.1126/science.1130197
Piccinini, G., and Craver, C. (2011). “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches.” Synthese 183(3): 283–311.
Plaut, D. C. (1995). “Double Dissociation without Modularity: Evidence from Connectionist Neuropsychology.” Journal of Clinical and Experimental Neuropsychology 17(2): 291–321.
Pockett, S. (2002). “On Subjective Back-Referral and How Long It Takes to Become Conscious of a Stimulus: A Reinterpretation of Libet’s Data.” Consciousness and Cognition 11(2): 144–161. doi: 10.1006/ccog.2002.0549
Poldrack, R. A. (2006). “Can Cognitive Processes Be Inferred from Neuroimaging Data?” Trends in Cognitive Sciences 10(2): 59–63. doi: 10.1016/j.tics.2005.12.004
Poldrack, R. A. (2010). “Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?” Perspectives on Psychological Science 5(6): 753–761. doi: 10.1177/1745691610388777
Poldrack, R. A. (2011). “Inferring Mental States from Neuroimaging Data: From Reverse Inference to Large-Scale Decoding.” Neuron 72(5): 692–697. doi: 10.1016/j.neuron.2011.11.001
Poldrack, R. A., Halchenko, Y. O., and Hanson, S. J. (2009). “Decoding the Large-Scale Structure of Brain Function by Classifying Mental States Across Individuals.” Psychological Science 20(11): 1364–1372.
Polger, T. (2004). Natural Minds (Cambridge, MA: MIT Press).
Povich, M. (forthcoming). “Mechanisms and Model-Based MRI.” Philosophy of Science. Proceedings of the Philosophy of Science Association.
Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., et al. (2011). “Functional Network Organization of the Human Brain.” Neuron 72: 665–678.
Power, J. D., Fair, D. A., Schlaggar, B. L., and Petersen, S. E. (2010). “The Development of Human Functional Brain Networks.” Neuron 67(5): 735–748.
Price, C. J., Moore, C. J., and Friston, K. J. (1997). “Subtractions, Conjunctions, and Interactions in Experimental Design of Activation Studies.” Human Brain Mapping 5(4): 264–272. doi: 10.1002/(SICI)1097-0193(1997)5:4<264::AID-HBM11>3.0.CO;2-E
Ramsey, W. M. (2007). Representation Reconsidered (Cambridge: Cambridge University Press).
Rees, G., Kreiman, G., and Koch, C. (2002). “Neural Correlates of Consciousness in Humans.” Nature Reviews Neuroscience 3(4): 261–270. doi: 10.1038/nrn783
Reynolds, J. H., and Chelazzi, L. (2004). “Attentional Modulation of Visual Processing.” Annual Review of Neuroscience 27(1): 611–647. doi: 10.1146/annurev.neuro.26.041002.131039
Reynolds, J. H., and Heeger, D. J. (2009). “The Normalization Model of Attention.” Neuron 61(2): 168–185. doi: 10.1016/j.neuron.2009.01.002
Rice, C. (2013). “Moving Beyond Causes: Optimality Models and Scientific Explanation.” Noûs. http://onlinelibrary.wiley.com/doi/10.1111/nous.12042/full
Ridderinkhof, K. R., van den Wildenberg, W. P. M., and Brass, M. (2014). “‘Don’t’ versus ‘Won’t’: Principles, Mechanisms, and Intention in Action Inhibition.” Neuropsychologia 65(December): 255–262. doi: 10.1016/j.neuropsychologia.2014.09.005
Roskies, A. L. (2006). “Neuroscientific Challenges to Free Will and Responsibility.” Trends in Cognitive Sciences 10(9): 419–423. doi: 10.1016/j.tics.2006.07.011
Roskies, A. L. (2008). “Neuroimaging and Inferential Distance.” Neuroethics 1(1): 19–30.
Roskies, A. L. (2009). “Brain-Mind and Structure-Function Relationships: A Methodological Response to Coltheart.” Philosophy of Science 76(5): 927–939.
Roskies, A. L. (2010a). “How Does Neuroscience Affect Our Conception of Volition?” Annual Review of Neuroscience 33: 109–130.
Roskies, A. L. (2010b). “Saving Subtraction: A Reply to Van Orden and Paap.” British Journal for the Philosophy of Science 61(3): 635–665. doi: 10.1093/bjps/axp055
Roskies, A. L. (2011). “Why Libet’s Studies Don’t Pose a Threat to Free Will.” In W. Sinnott-Armstrong (ed.), Conscious Will and Responsibility (New York: Oxford University Press), 11–22.
Roskies, A. L. (2014). “Monkey Decision Making as a Model System for Human Decision Making.” In A. Mele (ed.), Surrounding Free Will (Oxford: Oxford University Press), 231–254.
Ryder, D. (2004). “SINBAD Neurosemantics: A Theory of Mental Representation.” Mind & Language 19(2): 211–240.
Ryder, D. (2009a). “Problems of Representation I: Nature and Role.” http://philpapers.org/rec/RYDPOR
Ryder, D. (2009b). “Problems of Representation II: Naturalizing Content.” http://philpapers.org/rec/RYDPOR-2
Salmon, W. (1984). Scientific Explanation and the Causal Structure of the World (Princeton, NJ: Princeton University Press).
Schaffner, K. F. (1993). Discovery and Explanation in Biology and Medicine (Chicago: University of Chicago Press).
Schlegel, A., Alexander, P., Sinnott-Armstrong, W., Roskies, A., Tse, P. U., and Wheatley, T. (2013). “Barking up the Wrong Free: Readiness Potentials Reflect Processes Independent of Conscious Will.” Experimental Brain Research 229(3): 329–335. doi: 10.1007/s00221-013-3479-3
Schurger, A., Sitt, J. D., and Dehaene, S. (2012). “An Accumulator Model for Spontaneous Neural Activity prior to Self-Initiated Movement.” Proceedings of the National Academy of Sciences 109(42): E2904–E2913. doi: 10.1073/pnas.1210467109
Schweitzer, N. J., and Saks, M. J. (2011). “Neuroimage Evidence and the Insanity Defense.” Behavioral Sciences & the Law 29(4): 592–607. doi: 10.1002/bsl.995
Schweitzer, N. J., Saks, M. J., Murphy, E. R., Roskies, A. L., Sinnott-Armstrong, W., and Gaudet, L. M. (2011). “Neuroimages as Evidence in a Mens Rea Defense: No Impact.” Psychology, Public Policy, and Law 17(3): 357–393. doi: 10.1037/a0023581
Shanks, N., Greek, R., and Greek, J. (2009). “Are Animal Models Predictive for Humans?” Philosophy, Ethics, and Humanities in Medicine 4(2): 1–20.
Shapiro, L. A. (2008). “How to Test for Multiple Realization.” Philosophy of Science 75(5): 514–525.
Shulman, R. G. (2013). Brain Imaging: What It Can (and Cannot) Tell Us About Consciousness (New York: Oxford University Press).
Singer, W. (1998). “Consciousness and the Structure of Neuronal Representations.” Philosophical Transactions of the Royal Society of London B: Biological Sciences 353(1377): 1829–1840. doi: 10.1098/rstb.1998.0335
Smart, J. J. C. (2007). “The Mind/Brain Identity Theory.” In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2014 Edition). <http://plato.stanford.edu/archives/win2014/entries/mind-identity/>
Soon, C. S., Brass, M., Heinze, H.-J., and Haynes, J.-D. (2008). “Unconscious Determinants of Free Decisions in the Human Brain.” Nature Neuroscience 11(5): 543–545. doi: 10.1038/nn.2112
Storey, J. D., and Tibshirani, R. (2003). “Statistical Significance for Genomewide Studies.” Proceedings of the National Academy of Sciences 100(16): 9440–9445.
Sufka, K., Weldon, M., and Allen, C. (2009). The Case for Animal Emotions: Modeling Neuropsychiatric Disorders (New York: Oxford University Press).
Sullivan, J. A. (2009). “The Multiplicity of Experimental Protocols: A Challenge to Reductionist and Non-Reductionist Models of the Unity of Neuroscience.” Synthese 167(3): 511–539.
Sullivan, J. A. (2010). “Reconsidering ‘Spatial Memory’ and the Morris Water Maze.” Synthese 177(2): 261–283.
Uttal, W. R. (2003). The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain (Cambridge, MA: Bradford).
van Orden, G. C., and Paap, K. R. (1997). “Functional Neuroimages Fail to Discover Pieces of Mind in the Parts of the Brain.” Philosophy of Science 64(December): S85–S94.
Varela, F. J., Thompson, E. T., and Rosch, E. (1992). The Embodied Mind: Cognitive Science and Human Experience. New edition (Cambridge, MA: MIT Press).
Weber, M. (2004). Philosophy of Experimental Biology (Cambridge: Cambridge University Press).
Weber, M. (2008). “Causes Without Mechanisms: Experimental Regularities, Physical Laws, and Neuroscientific Explanation.” Philosophy of Science 75(5): 995–1007.
Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., and Gray, J. R. (2008). “The Seductive Allure of Neuroscience Explanations.” Journal of Cognitive Neuroscience 20(3): 470–477. doi: 10.1162/jocn.2008.20040
Weiskopf, D. A. (2011). “Models and Mechanisms in Psychological Explanation.” Synthese 183(3): 313–338.
Wimsatt, W. C. (1997). “Aggregativity: Reductive Heuristics for Finding Emergence.” Philosophy of Science 64: S372–S384.
Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation (New York: Oxford University Press).
Young, L., Bechara, A., Tranel, D., Damasio, H., Hauser, M., and Damasio, A. (2010). “Damage to Ventromedial Prefrontal Cortex Impairs Judgment of Harmful Intent.” Neuron 65(6): 845–851. doi: 10.1016/j.neuron.2010.03.003
Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., and Saxe, R. (2010). “Disruption of the Right Temporoparietal Junction with Transcranial Magnetic Stimulation Reduces the Role of Beliefs in Moral Judgments.” Proceedings of the National Academy of Sciences of the United States of America 107(15): 6753–6758. doi: 10.1073/pnas.0914826107