
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).


Neuroethics and the Lure of Technology

Abstract and Keywords

Neuroethics, as a domain of inquiry, was made necessary by the much-documented interdisciplinary march of technology and the resulting synergism, which yielded neuroimaging, deep brain stimulation, and advanced neuropharmaceutics. Closing the loop from discovery of basic mechanisms of illness to knowledge of structure and function en route to restorative therapeutics is a long way from earlier efforts to use electrical stimulation to address human maladies. The most challenging aspect of neuroethics is that the technology used by neuroscientists must be understood before responsible neuroethical critique can be offered. The technocentricity of neuroscience makes it especially vulnerable to broader market forces and the sway of political economy, all of which might be exacerbated by the recent fiscal meltdown and recent trends in healthcare reform. These challenges are illustrated by investigational work exploring the use of deep brain stimulation in the minimally conscious state.

Keywords: neuroethics, neuropharmaceutics, challenges, technology, neuroscientists

Neuroethics as an ethics of technology

If there is a unifying theme to neuroethics, and this anthology, it is the predominance of technology. Neuroethics is both made necessary by technology and utterly dependent upon it. Without resort to hyperbole, it could be asserted that neuroethics is essentially an ethics of technology.

Indeed, if a derivative neuroethics can be distinguished from “conventional” medical ethics (Fins 2008), that differentiation would hinge upon neuroethics’ overwhelming preoccupation with, and reliance upon, technology. Simply stated, without the dramatic confluence of progress in the related realms of computer science, nuclear physics, electrical engineering, and pharmacology, neuroethics would never have emerged as a discipline. Neuroethics, as a domain of inquiry, was made necessary by this interdisciplinary march of technology and the resulting synergism, which yielded neuroimaging, deep brain stimulation, and advanced neuropharmaceutics. Each of these developments has been born of technological advance and has become the subject of neuroethical critique.

In just a couple of decades, technology has yielded tools that have altered how the mind interfaces with the brain. Through visual proxies, electrical connections, and pharmacologic manipulation, technology has enabled a connection with the brain and central nervous system heretofore unimagined, much less imaged. What had once been the realm of science fiction, an impenetrable black box, has now come into focus through techniques which provide both structural and functional knowledge of the living brain. These insights have, in turn, led to new ways to manipulate cognitive processes and develop mind–brain interfaces, and have prompted deeper reflection on questions of personhood and the self.

The pace of technological advance only adds to its significance. In the 25 years since I graduated from medical school, the resolution of brain images has gone from clunky, grainy, box-like black-and-white pixels to a degree of resolution on diffusion tensor imaging tractography—an advanced form of magnetic resonance imaging—that permits the visualization of individual fiber tracts (Filler 2009). This is an unprecedented advance when placed into historical context. Less than 100 years ago, the Hopkins neurosurgeon Walter Dandy developed the ventriculogram, a way to see inside the brain by injecting air into the spinal column and identifying air–fluid–tissue interfaces on a conventional x-ray (Fox 1984). As recently as 1921, Wilder Penfield, then a novice neurosurgeon, traveled urgently to Baltimore to learn this new technique in neuroimaging to determine whether to operate on a young child with a deep brain tumor (Penfield 1977; Fins 2008).

It is a long way from those shadowy images to modern tractography, although the motivations of investigators over the decades have remained the same: to discern the workings of the brain. The only difference now is the power, and complexity, of the tools at the disposal of modern investigators, which—as Susan Wolf suggests—erodes the simple dichotomy between research and clinical practice upon which so much of our normative and regulatory standards are founded (Wolf 2011). The pace is now so quickened that it is increasingly difficult to draw a neat line between investigational work and therapeutics.

If we consider modern neuroradiology, we will note that it has already enabled hypotheses about the mechanisms of major depression through the work of the neurologist Helen Mayberg (Mayberg et al. 2005). Mayberg has created a synthetic mechanistic model, based on an array of neuroimaging techniques, that has begun to localize the disorder to circuits converging in the subcallosal cingulate gyrus (SCG), including Brodmann area 25 (Mayberg 2003). Her structure–function correlations have been facilitated by translational neuroimaging (Mayberg 2009). More recently her hypotheses have been tested—some might say validated—by clinical trials of deep brain stimulation, with targets further refined through tractography, a newer imaging technique that can identify isolated fiber tracts (Gutman et al. 2009).

From Franklin to functional electrical stimulation

Closing the loop from discovery of basic mechanisms of illness to knowledge of structure and function en route to restorative therapeutics is a long way from earlier efforts to use electrical stimulation to address human maladies. We have only to recall Ben Franklin’s very plausible musings that paralysis might be treated with an electrical current (Goodman 1931). It was a good idea that wanted for an effective technology. In a letter to John Pringle dated 1757, he writes encouragingly of the “immediate greater sensible warmth in the lame limbs” and the “prickling sensation” felt by some the night after their treatments. He laments that he never saw a permanent change and that his temporary success might have been an 18th-century placebo effect. He wondered:

…And how far the apparent temporary advantage might arise from the exercise of the patients’ journey, and coming daily to my house, or from the spirits given by the hopes of success, enabling them to exert more strength in moving their limbs, I will not pretend to know. (Goodman 1931)

Franklin concludes by critiquing his methodology and wondering:

Perhaps some permanent advantage might have been obtained, if the electric shocks had been accompanied with proper medicine and regimen, under the direction of a skillful physician. It may be, too, that a few great strokes, as given in my method, may not be so proper as many small ones…. (Goodman 1931)

Franklin pursued a good hypothesis, an idea limited by available technology rather than by scientific creativity. Today, two and a half centuries later, we are on the cusp of realizing Franklin’s therapeutic vision for paralysis. Neuroprosthetic experts, the engineers Joseph Pancrazio and P. Hunter Peckham, report that the experimental use of electrical stimulation in paralysis has recently been achieved in an animal model, and they predict that proof of concept for functional electrical stimulation (FES) will be achieved within the next 5 years (Pancrazio and Peckham 2009).

As these remarkable examples of depression and paralysis illustrate, technology has given form to hypotheses and accomplishments which have long eluded humankind. It has made the impossible possible. It has deepened knowledge of complex biological systems and expanded diagnostic and therapeutic horizons. But as Helen Mayberg reminded me in a recent conversation, quoting the physicist David Goldstein in his review of Einstein’s Unfinished Symphony: Listening to the Sounds of Space-Time (Bartusiak 2000):

The cutting edge of science is not about the completely unknown. It is found where we understand just enough to ask the right question or build the right instrument. (Goldstein 2000)

Our progress has been through the pursuit of good questions using the right instruments. But if that progress has been made possible by new tools, it is, in an equally dramatic fashion, vulnerable to technology. Misunderstood, technology can become an object of desire to which we aspire, forgetting the pragmatic dictum of true instrumentality in which usefulness is the marker of a worthy tool or intervention. If we hope to realize the promise that technology might offer, we also have to be cognizant of its seductions, lest we miss opportunities that predict progress or ignore occasions which portend problems.

The promise and peril of technology

Perhaps the most challenging aspect of neuroethics is that one must understand the technology neuroscientists use in order to offer responsible neuroethical critique. Regrettably, many who comment on the normative implications of the field know too little about the capabilities and limits of the tools used by investigators and their relationship to the current state of scientific knowledge. Although this is not unique to neuroethics—the late bioethicist Marc Lappe commented on similar challenges with the advent of recombinant DNA and molecular biology in the 1970s (Martin 2005)—I would maintain that the challenges posed to neuroethics by technology are on a grander scale because so many modalities have converged to give birth to this investigative and clinical endeavor. The challenge is deepened by public ignorance of, or distrust in, science and its methods (Kitchner 2010). As Alan I. Leshner observes trenchantly in his essay on neuroscience and public engagement:

On the one hand, the purpose of science is to tell us about the nature of the natural world, whether we like the answer or not. On the other hand, only scientists are obliged to accept scientific explanations, again whether they like them or not. The rest of the public is free to disregard or, worse, to distort scientific findings at will, and with rather limited immediate consequences. Scientific understanding is only binding on scientists. (Leshner 2011)

This relative ignorance of technology can lead to normative distortions, even errors. The first sort of error imbues technology with more capability than it actually possesses or is likely to possess in the near future. Instead of describing the crudeness of initial prototypes, laden with margins of error, the forward-looking ethicist imagines all the possibilities that might result from the invention. This leads to hyperbolic, almost science fiction scenarios which, in turn, are either bright and optimistic or dark and glum (Fins 2005).

Over its short academic life neuroethics has been the tale of two disciplines. One iteration sees promise while the other envisions peril. It is an overly dichotomous view of the field which persists into this handbook and which, I must admit, threatens its longevity as a mature academic field.

Neuroimaging work—which Federico, Lombera, and Illes claim as a scholarly pillar for neuroethics (Federico et al. 2011)—has been particularly prone to such extreme characterizations, especially in publications which depict how neuroimaging is exploring the hinterlands of consciousness and disorders like the vegetative and minimally conscious states. Consider the response to Owen et al.’s 2006 Science paper (Owen et al. 2006) and to Monti et al.’s more recent New England Journal of Medicine paper (2010), which demonstrated command-following in some minimally conscious and vegetative patients and the ability of one vegetative patient to respond to simple yes/no questions using fMRI. That response is an example of worrisome hyperbole, exemplifying a pattern of journalism noted by Racine (2011), as well as by Zarzeczny and Caulfield in this volume (2011), building upon earlier work by Racine, Bar-Ilan, and Illes, and others (Racine et al. 2005). Although the technology is in its infancy, the immediate question posed by the media invested the method with far greater capabilities than it possessed: namely, whether patients could use this crude communication channel to express their wishes regarding life-sustaining therapies, whether they would want to live or die (Carey 2010).

To this commentator, it seemed a bit premature to generalize the findings (Fins and Schiff 2010a). After all, of the 54 subjects studied, only five demonstrated command-following, and all of these patients had traumatic rather than anoxic brain injury. This phenomenological study took no account of the variance in which patients were responding, a seemingly key part of the puzzle that should be solved before this technique is applied more broadly. Absent a mechanistic explication of responsiveness, the science tells us only that a positive response is dispositive of consciousness. Without that deeper scientific knowledge, a failure to respond to a query could stem from a methodological error and does not indicate that the patient is unconscious. A non-response could also result from a failure to ask the question in a proper fashion, from the patient’s inattention, or even from failing to wait long enough for a response. Normatively, even if the patient were to respond, would his binary answers satisfy a “sliding scale of competence,” in which the gravity of a patient’s choice is matched by proportionate evidence of understanding and explication (Drane 1984)? Doubtful, at best, at least for now (Fins and Schiff 2010a).

And yet, despite these highly significant scientific and normative limitations, some view neuroimaging as a powerful threat to our human nature. In these speculative scenarios the functional magnetic resonance imager has the power to decode mental states and read minds, alter relationships, detect criminality, or pose a threat to national security (Haynes 2011; Leshner 2011). While such speculations, generally about non-medical applications, are intellectually interesting and often elegantly Talmudic in their reasoning, the hyperbole often fails to take account of the technical limits of neuroscience’s ability to read minds, as Emily Murphy and Hank Greely wisely warn us. They advise a healthy dose of humility when predicting the future, especially when it comes to decoding “the most complicated thing in the universe” (Murphy and Greely 2011).

Untempered by prudence and caution, hyperbolic predictions can create fears that undermine legitimate uses of still nascent technology for populations in need and, as Eric Racine importantly observes, can impede credible knowledge dissemination in a manner inimical to an open and democratic society (Racine 2011). So while we should imagine future uses of still primitive devices and their possible implications, it is especially critical to distinguish the probable from the implausible and—as Hildt and Metzinger rightly suggest—to distinguish the needs of individuals from public policy and to draw a line between the medical and non-medical uses of emerging technologies, even if, at the level of the individual, the distinction between therapy and enhancement cannot easily be discerned (Hildt and Metzinger 2011).

Truth be told, one needs to be something of a scientific polyglot to make sense of the many developments now taking place. This need for specialized knowledge to undertake ethical analysis suggests that we will see areas of subspecialization within neuroethics much earlier than was the case for other areas of ethical reflection. Indeed, this volume’s many focused essays suggest that this process is already occurring so that responsible and informed critiques can take place.

Although this is a necessary trend, it is also regrettable because of the further fragmentation that will occur as commentators focus on new developments in self-imposed silos of splendid isolation. Many insights will be lost through this process of sequestration, and we must be careful to avoid too narrow a focus, balancing the need to be informed about relevant scientific details against the need to contextualize that knowledge within a larger backdrop. From personal experience, I can attest that my own work on the use of deep brain stimulation in disorders of consciousness was enriched by considering the history of psychosurgery and its relevance to modern neuromodulation and the application of deep brain stimulation to psychiatric disorders (Fins 2003b).

Hans Jonas and technoprudence

If we place our current tendency toward hyperbole into that earlier historical context, we see that we are not alone in being vulnerable to the lure of technology. Even Hans Jonas, a philosopher I admire, expressed a technophobia—or, better yet, a technoprudence—in work written during the psychosurgery era. In “Technology and Responsibility,” an essay published in 1974, he worries about the longer-term consequences of “novel” technology and questions whether our traditional framework of an age-old proximate ethics can accommodate technological forces that make man—and the species—vulnerable in an unparalleled manner:

To be sure, the old prescriptions of the “neighbor” ethics—of justice, charity, honesty, and so on—still hold in their intimate immediacy of the nearest, day by day sphere of human interaction. But this sphere is overshadowed by a growing realm of collective action where doer, deed, and effect are no longer the same as they were in the proximate sphere, and which by the enormity of its powers forces a new dimension of responsibility never dreamt of before. (Jonas 1980)

Although psychosurgery is not his exclusive concern, he does worry about behavior control and the rather imminent morphing of laudable medical goals into worrisome societal ones:

It is similar with all the other, quasi-utopian powers about to be made available by the advances of biomedical science as they are translated into technology. Of these, behavior control is much nearer to practical readiness than the still hypothetical prospect I have been discussing (the prospect of prolonged, even immortal life), and the ethical questions it raises are less profound but have a more direct bearing on the moral conception. Here again, the new kind of intervention exceeds the old ethical categories. They have not equipped us to rule, for example, on mental control by chemical means or by direct electrical action on the brain via implanted electrodes—undertaken, let us assume, for defensible, even laudable ends. The mixture of beneficial and dangerous potentials is obvious, but the lines are not easy to draw. Relief of mental patients from distressing and disabling symptoms seems unequivocally beneficial. But from the relief of the patient, a goal entirely in the tradition of the medical art, there is an easy passage to the relief of society…this opens up an indefinite field with grave potentials. (Jonas 1980)

From there Jonas worries about the effect mind control for “social management” would have on “human rights and dignity.” He shared the modern neuroethicist’s concerns about the loss of free will through “circumventing the appeal of autonomous motivation,” enhancement by inducing “learning attitudes in school children by mass administration of drugs” and “performance increase” at work, and the generation of “sensations of happiness or pleasure or at least contentment…independent, that is, of the objects of happiness, pleasure or content and their attainment in personal living and achieving.” (Jonas 1980).

It is a remarkable passage for its resonance with the preceding pages of this volume. But was Jonas a sage or a hysteric? Despite Harris’s claim for the social utility of pharmaceutical enhancements (Harris 2011), I am with Jonas and share his concern about children receiving pharmaceutical enhancement with drugs like Ritalin® (methylphenidate). Along with others, I endorse advocacy for non-pharmacological efforts at “enhancement” such as proper education and exercise (Morein-Zamir and Sahakian 2011).

Having said that, I also worry about the therapeutic index that might exist between the treatment of some neuropsychiatric disorders with deep brain stimulation and the induction of addiction—in some, but not all, putative targets (Synofzik et al. under review). While the use of deep brain stimulation in depression remains investigational, the concerns raised by Kringelbach and Berridge in their essay on happiness (2011) remind us that we need a systems approach to neurobiology to understand affect and reward, as Suhler and Churchland indicate (2011), as well as addiction, as Reske and Paulus suggest (2011), and that functional networks are interrelated, sometimes presenting a fine line between benefit and burden (Morein-Zamir and Sahakian 2011).

Yet, despite Jonas’ prescience on the aforementioned points, it must be said that most of his ruminations, written with such urgency, have yet to come to pass. In fact, contemporaneous allegations that mind control was being applied to vulnerable members of our citizenry via psychosurgery were debunked by scholarly reports from The Hastings Center and the National Commission just a few years after Jonas published his essay (Blatte 1974; The National Commission 1977; Fins 2003b).

Perhaps more to the point, in contrast to Jonas’ concerns about the circumvention of autonomy through manipulation of the brain, psychosurgery’s modern successor—deep brain stimulation—has actually helped to restore a degree of personal agency in a minimally conscious subject. My colleagues and I have recounted how a severely injured individual, whose highest level of interaction prior to stimulation was inconsistent command-following via eye movements, regained the ability to voice preferences at the level of assent (Schiff et al. 2007, 2009).

Similarly, in this volume, legal scholar Stacey Tovino argues that additional knowledge of neurobiological differences between the sexes constitutes not a threat to women’s rights but might instead afford additional protections in criminal law and civil procedure, although she cautions that there might be correlative implications that warrant concern and dictate prudence against precipitous application in law and society (Tovino 2011). Joshua Greene and Jonathan Cohen likewise indicate that while neuroscience will change the law, it will not do so by altering current legal assumptions (Greene and Cohen 2011).

These examples, taken together, suggest that Jonas’ pessimism may not have been warranted, even as we continue to heed his precautionary principle, a point made by Steve Hyman in his essay on the neurobiology of addiction and its implications for voluntary control of behavior. Hyman tempers the tendency to view addiction as beyond individual control. He urges that we revamp neither our legal nor our normative structures of responsibility and culpability on the basis of interim data, and he adds that a proportionate dose of moral outrage and punishment for drug-related activity is warranted, if it is a deterrent (Hyman 2011).

Prudence is further warranted because the outcome that we should fear may be the exact opposite of the one Jonas predicted. Jonas was concerned about the loss of free will and the circumvention of autonomy through enhancement efforts. The more likely scenario, as elegantly argued by Chneiweiss, is a hyperautonomous enhanced brain bound up in itself and disconnected from the constraints imposed by society (Chneiweiss 2011). Such an outcome—a sort of Civilization and its Malcontents, to recall Freud’s counter-example of the repressions imposed by society upon the individual (Freud 1961)—is something to be heeded and far better understood, lest we create a class of sociopaths who know only their own self-imposed limits on normative behavior (Stout 2005).

Chneiweiss’s argument is reinforced, in my view, by Wexler, who correctly observes that humans are social and historical creatures and that our capabilities, desires, and proclivities develop through a complex interaction between our neurobiology and the natural and built environment (Wexler 2011). To limit these interactions through enhancement of the self, at the expense of a self contextualized within community, would, as Chneiweiss warns us, lead to:

The risk is creating isolated super-brains lost within a self-centered, self-organized, virtual world wherein the absence of the eyes of the other blurs the fundamental meaning of “telling the truth” on oneself. (Chneiweiss 2011)

This is a critical point echoed as well by Haggard in his essay on the societal constraints placed on free will (Haggard 2011) and by Reiner in his essay on the limits of “neuroessentialism” (Reiner 2011). Taken together, these essays suggest that whatever our intrinsic neurobiology, our brains—if not our very selves—are social entities that must take account of societal and normative externalities. They also demonstrate that the landscape of speculative ethical commentary since Jonas has shifted from fears about mind control and the loss of autonomy or free will through manipulation of the brain to worries about an overly atomistic self disconnected, through technological or pharmacological intervention in the brain, from societal correctives, cues, and constraints.

While it is too early to know what will come to pass, the shifting debate over agency and free will in the decades since Jonas points out that wise commentators may in fact be wrong and that ill-informed prudence or excessive hype comes at a cost.

Neuroethics and political economy

The cost of these normative errors is amplified by the economics of the technologies which have led to neuroethical critique. If we misconstrue these technologies we might find that they are perceived as neither affordable nor worth the investment.

The technocentricity of neuroscience makes it especially vulnerable to broader market forces and the sway of political economy, all of which might be exacerbated by the recent fiscal meltdown and recent trends in healthcare reform. Because of endowment losses amongst universities and philanthropies (of over 20% for leading research universities) (Lewis 2010) and dollar-cost averaging in payouts, there is less philanthropic money in the system to support—or even sustain—research programs. Government support in developing countries is essentially flat. The passage of healthcare reform in the United States, which laudably enfranchises millions who have been uninsured, consciously does so through parsimonious entitlements which, for patients with historically marginalized and high-cost neuropsychiatric disorders, raise the question, “access to what?”

My concern is that the high costs of such interventions will become a new excuse to neglect another generation of patients with neuropsychiatric disorders (Fins 2003a) whose hope and vulnerability lie in the promise and expense of technologies upon which they may vitally depend.

These challenges are illustrated by our investigational work exploring the use of deep brain stimulation in the minimally conscious state (Schiff et al. 2007). Despite the preliminary nature of this study (Schiff et al. 2009), whenever I discuss it in public I am invariably asked about its cost, as if this emergent response to disorders of consciousness were somehow responsible for creating a market for new expenditures. Closer reflection on the putative cost–benefit profile of this intervention—again, should it be deemed therapeutic—might reveal that neuromodulation could actually cut into the fixed costs associated with the chronic sequelae of severe head injury and its attendant chronic care (Fins 2010a).

The problem here, it seems to me, might be one of unrealized expectations. Although the response to our work was laudatory, the hype with which it was greeted—despite our effort to be understated in our claims (Schiff et al. 2007)—invariably led to a critique which asked whether whatever good was achieved was good enough to warrant ongoing support. Like most innovations, our efforts were a step forward, not a leap across the finish line, notwithstanding how they might have been portrayed. Our intervention was, instead, what the late physician-scientist Lewis Thomas called a “half-way technology” (Thomas 1974; Schiff et al. 2009). Dr. Thomas’ description is apt because it is a preventive against hyperbole that mischaracterizes incremental progress as a final product, a distinction which needs to be carefully delineated when considering questions of distributive justice.

This question of access is perhaps most compelling when we consider the emergent use of neuroimaging methods as communication tools. This work’s potential is epitomized by Monti’s recent proof of principle using fMRI as a communication paradigm in patients with disorders of consciousness. Suppose this ability to query those with disorders of consciousness evolves beyond the binary capabilities of yes/no responses and becomes a link to those who have heretofore been beyond our shared community of communication. Imagine a tool that can pierce the isolation of those who are conscious but cannot speak, whose voices could now be heard through a prosthetic intermediary (Fins and Schiff 2010a). What price can be placed upon this capability, which might be viewed more as a basic civil right than as an entitlement (Fins 2011)?

And even as we worry about a lack of access to innovation and care, we need to be concerned about the influence of corporate interests and intellectual property law on the use of these technologies (Fins 2010b; Fins and Schiff 2010b). Such market forces can lead to the premature or inappropriate dissemination of new technologies as vetted diagnostic or therapeutic tools when they have yet to be fully evaluated for efficacy. An example of the former would be the use of investigational neuroimaging techniques in clinical practice at this juncture outside of a clinical research context prepared to interpret and generalize results (Fins et al. 2008).

An example of the latter is the corporate branding of investigational applications of devices as therapeutic when they have not been fully vetted in a clinical trial. An example of such behavior is the marketing of deep brain stimulation for the “treatment” of obsessive-compulsive disorder by one manufacturer when the actual FDA approval under which that advertising campaign is occurring is a Humanitarian Device Exemption (HDE) (US FDA 2009). This is in lieu of the more costly and extensive vetting that would occur through an Investigational Device Exemption under which a proper clinical trial would have been conducted to demonstrate safety and efficacy (Fins et al. under review).

Conclusion

The fundamental mind–brain question, raised here by Beauregard (2011) and Levy (2011)—and posthumously by Wilder Penfield (Fins 2008) in The Mystery of the Mind (Penfield 1978; Lipsman and Bernstein 2011)—reminds us to be cautious with claims about our current state of knowledge. Despite all our stunning progress in the past decade, we remain relatively ignorant. Although we have progressed, we must avoid hubris and remain humble about our mastery of neuroscience and our ability to predict the interplay of technology and society.

Future generations will view our prized technologies as crude and our hypotheses as naïve. They will likely view our neuroimaging efforts as reductionistic, post-phrenological—or, better yet, phenomenological—flares which distracted us from important questions in systems neurobiology. They will supply an explication of how deep brain stimulation actually works, a question that, as Lipsman and Bernstein remind us, remains unanswered (2011). And in answering these questions, they will generate many others of greater complexity and challenge.

Each generation will have its own questions to answer and will be tempted by the lure of its technology. The key for us and our successors is to be wary of technology’s sway and to recall C.P. Snow’s admonition of decades ago: “Technology, remember, is a queer thing; it brings you great gifts with one hand, and it stabs you in the back with the other” (Lewis 1971). What was prescient then remains timely now.

Acknowledgements

Dr. Fins gratefully acknowledges funding from an Investigator Award in Health Policy Research from the Robert Wood Johnson Foundation, The Buster Foundation, and additional support from the NIH Clinical & Translational Science Center UL1-RR024966 Weill Cornell Medical College Research Ethics Core.

Disclosures

IntElect Medical, Inc. provided partial support for the clinical trial of deep brain stimulation in the minimally conscious state described and considered in this paper; the author served as an unfunded coinvestigator.

References

Bartusiak, M. (2000). Einstein’s Unfinished Symphony: Listening to the Sounds of Space-Time. Washington, DC: Joseph Henry Press.

Beauregard, M. (2011). Neural foundations to conscious and volitional control of emotional behavior: a mentalistic perspective. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 83–100. Oxford: Oxford University Press.

Blatte, H. (1974). State prisons and the use of behavior control. The Hastings Center Report, 4, 11.

Carey, B. (2010). Trace of thought is found in “vegetative” patient. The New York Times. Available at: http://www.nytimes.com/2010/02/04/health/04brain.html (accessed 4 February 2010).

Chneiweiss, H. (2011). Does cognitive enhancement fit with the physiology of our cognition? In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 295–308. Oxford: Oxford University Press.

Drane, J. (1984). Competency to give an informed consent: a model for making clinical assessments. Journal of the American Medical Association, 252, 925–7.

Federico, C.A., Lombera, S., and Illes, J. (2011). Intersecting complexities in neuroimaging and neuroethics. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 377–388. Oxford: Oxford University Press.

Fins, J.J. (2003a). Constructing an ethical stereotaxy for severe brain injury: balancing risks, benefits and access. Nature Reviews Neuroscience, 4, 323–7.

Fins, J.J. (2003b). From psychosurgery to neuromodulation and palliation: history’s lessons for the ethical conduct and regulation of neuropsychiatric research. Neurosurgery Clinics of North America, 14, 303–19.

Fins, J.J. (2005). The Orwellian threat to emerging neurodiagnostic technologies. American Journal of Bioethics, 5, 56–8.

Fins, J.J. (2008). A leg to stand on: Sir William Osler and Wilder Penfield’s “neuroethics.” American Journal of Bioethics, 8, 37–46.

Fins, J.J. (2010a). Deep brain stimulation: calculating the true costs of surgical innovation. Virtual Mentor, 12, 114–18. Available at: http://virtualmentor.ama-assn.org/2010/02/msoc1-1002.html

Fins, J.J. (2010b). Deep brain stimulation, free markets and the scientific commons: is it time to revisit the Bayh-Dole Act of 1980? Neuromodulation: Technology at the Neural Interface, 13, 153–9.

Fins, J.J. (2011). Minds apart: severe brain injury, citizenship and civil rights. In M. Freeman (ed.) Law and Neuroscience – Current Legal Issues (Volume 13). Oxford: Oxford University Press.

Fins, J.J. and Schiff, N.D. (2010a). In the blink of the mind’s eye. The Hastings Center Report, 3, 21–3.

Fins, J.J. and Schiff, N.D. (2010b). Conflicts of interest in deep brain stimulation research and the ethics of transparency. Journal of Clinical Ethics, 2, 125–32.

Fins, J.J., Illes, J., Bernat, J.L., Hirsch, J., Laureys, S., Murphy, E., and Participants of the Working Meeting on Ethics. (2008). Neuroimaging and disorders of consciousness: envisioning an ethical research agenda. American Journal of Bioethics, 8, 3–12.

Fins, J.J., Mayberg, H.S., Nuttin, B., et al. (under review). Neuropsychiatric deep brain stimulation research and the misuse of the Humanitarian Device Exemption.

Filler, A. (2009). Magnetic resonance neurography and diffusion tensor imaging: origins, history, and clinical impact of the first 50,000 cases with an assessment of efficacy and utility in a prospective 5000-patient study group. Neurosurgery, 65, A29–43.

Fox, W.L. (1984). Dandy of Johns Hopkins. Philadelphia: Williams & Wilkins.

Freud, S. (1961). Civilization and its Discontents. New York: W.W. Norton.

Goldstein, D. (October 29, 2000). Sounds of gravity, an account of the project to detect and measure gravitational waves. The New York Times.

Goodman, N.G. (ed.) (1931). The Ingenious Dr. Franklin: Selected Scientific Letters of Benjamin Franklin. Philadelphia: University of Pennsylvania Press.

Greene, J. and Cohen, J. (2011). For the law, neuroscience changes nothing and everything. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 655–674. Oxford: Oxford University Press.

Gutman, D.A., Holtzheimer, P.E., Behrens, T.E., Johansen-Berg, H., and Mayberg, H.S. (2009). A tractography analysis of two deep brain stimulation white matter targets for depression. Biological Psychiatry, 65, 276–82.

Haggard, P. (2011). Neuroethics of free will. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 219–226. Oxford: Oxford University Press.

Harris, J. (2011). Chemical cognitive enhancement: is it unfair, unjust, discriminatory or cheating for healthy adults to use smart drugs? In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 265–284. Oxford: Oxford University Press.

Haynes, J.D. (2011). Brain reading: decoding mental states from brain activity in humans. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 3–14. Oxford: Oxford University Press.

Hildt, E. and Metzinger, T. (2011). Cognitive enhancement. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 245–264. Oxford: Oxford University Press.

Hyman, S.E. (2011). The neurobiology of addiction: implications for voluntary control of behaviour. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 203–218. Oxford: Oxford University Press.

Jonas, H. (1974/1980). Technology and responsibility. In Philosophical Essays: From Ancient Creed to Technological Man, pp. 3–20. Chicago, IL: The University of Chicago Press.

Kitcher, P. (2010). Two forms of blindness: on the need for both cultures. Technology in Society, 32, 40–8.

Kringelbach, M.L. and Berridge, K.C. (2011). The neurobiology of pleasure and happiness. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 15–32. Oxford: Oxford University Press.

Leshner, A.I. (2011). Bridging neuroscience and society: research, education and broad public engagement. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. v–xii. Oxford: Oxford University Press.

Lewis, A. (1971). Dear Scoop Jackson. The New York Times. Available at: http://select.nytimes.com/mem/archive/pdf?res=F30A11F73454127B93C7A81788D85F458785F9 (accessed 15 March 2010).

Lewis, T. (2010). Investment losses cause steep dip in university endowments, study finds. The New York Times. Available at: http://www.nytimes.com/2010/01/28/education/28endow.html (accessed 28 January 2010).

Levy, N. (2011). Neuroethics and the extended mind. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 285–294. Oxford: Oxford University Press.

Lipsman, N. and Bernstein, M. (2011). Ethical issues in functional neurosurgery: emerging applications and controversies. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 405–416. Oxford: Oxford University Press.

Martin, D. (2005). Marc Lappé, 62, dies; fought against chemical perils. The New York Times. Available at: http://www.nytimes.com/2005/05/21/national/21lappe.html (accessed 21 May 2010).

Mayberg, H.S. (2003). Positron emission tomography imaging in depression: a neural systems perspective. Neuroimaging Clinics of North America, 13, 805–15.

Mayberg, H.S. (2009). Targeted electrode-based modulation of neural circuits for depression. Journal of Clinical Investigation, 119, 717–25.

Mayberg, H.S., Lozano, A.M., Voon, V., et al. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45, 651–60.

Monti, M.M., Vanhaudenhuyse, A., Coleman, M.R., et al. (2010). Willful modulation of brain activity in disorders of consciousness. New England Journal of Medicine, 362, 579–89.

Morein-Zamir, S. and Sahakian, B.J. (2011). Pharmaceutical cognitive enhancement. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 229–244. Oxford: Oxford University Press.

Murphy, E.R. and Greely, H.T. (2011). What will be the limits of neuroscience-based mindreading in the law? In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 635–654. Oxford: Oxford University Press.

Owen, A.M., Coleman, M.R., Boly, M., Davis, M.H., Laureys, S., and Pickard, J.D. (2006). Detecting awareness in the vegetative state. Science, 313, 1402.

Pancrazio, J.J. and Peckham, P.H. (2009). Neuroprosthetic devices: how far are we from recovering movement in paralyzed patients? Expert Review of Neurotherapeutics, 4, 427–30.

Penfield, W. (1977). No Man Alone: A Neurosurgeon’s Life. Boston, MA: Little, Brown and Company.

Penfield, W. (1978). Mystery of the Mind. Princeton, NJ: Princeton University Press.

Racine, E. (2011). Neuroscience and the media: ethical challenges and opportunities. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 783–802. Oxford: Oxford University Press.

Racine, E., Bar-Ilan, O., and Illes, J. (2005). fMRI in the public eye. Nature Reviews Neuroscience, 6, 159–64.

Reiner, P.B. (2011). The rise of neuroessentialism. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 161–176. Oxford: Oxford University Press.

Reske, M. and Paulus, M.P. (2011). A neuroscientific approach to addiction: ethical concerns. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 177–202. Oxford: Oxford University Press.

Schiff, N.D., Giacino, J.T., Kalmar, K., et al. (2007). Behavioral improvements with thalamic stimulation after severe traumatic brain injury. Nature, 448, 600–3.

Schiff, N.D., Giacino, J.T., and Fins, J.J. (2009). Deep brain stimulation, neuroethics and the minimally conscious state: moving beyond proof of principle. Archives of Neurology, 66, 697–702.

Stout, M. (2005). The Sociopath Next Door. New York: Broadway Books.

Suhler, C. and Churchland, P. (2011). The neurobiological basis of morality. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 33–58. Oxford: Oxford University Press.

Synofzik, M., Schlaepfer, T.E., and Fins, J.J. (under review). How happy is happy enough? Euphoria, neuroethics and deep brain stimulation of the nucleus accumbens.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (May 23, 1977). Use of psychosurgery in practice and research: report and recommendations of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Federal Register, 23, 26318–32.

Thomas, L. (1974). The Lives of a Cell: Notes of a Biology Watcher. New York: The Viking Press.

Tovino, S.A. (2011). Women’s neuroethics. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 701–714. Oxford: Oxford University Press.

US FDA (2009). Approval Order H050003. Letter to Patrick L. Johnson, Medtronic Neuromodulation, from Donna-Bea Tillman, Ph.D., M.P.A., Director, Office of Device Evaluation, Center for Devices and Radiological Health, FDA.

Wexler, B.E. (2011). Neuroplasticity, culture and society. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 743–760. Oxford: Oxford University Press.

Wolf, S.M. (2011). Incidental findings in neuroscience research: a fundamental challenge to the structure of bioethics and health law. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 623–634. Oxford: Oxford University Press.

Zarzeczny, A. and Caulfield, T. (2011). Public representations of neurogenetics. In J. Illes and B.J. Sahakian (eds.) Oxford Handbook of Neuroethics, pp. 715–728. Oxford: Oxford University Press.