The Ethics Of Nano/Neuro Convergence
Abstract and Keywords
This article outlines a few representative areas of research in nano- and neuroscience and then considers the complex continuum of entangled research practices that results. The point of this review is to give a realistic sense of the distributed, opportunistic character of this research, and to show how such emergent practices challenge conventional assumptions about how ethics and science should be advanced. Ethicists should not simply identify some standard type of intervention and then evaluate the risk profile of research related to that type as if it designated some discrete project. It turns out that summary judgments dismissing any ethical novelty in nanoscience depend on implicit assumptions about the nature of ethical reflection, and these, in turn, depend on assumptions about the nature of pure and applied science. An ethic of nano/neuro convergence needs to explore how these new models might help us more appropriately to manage the complex possibility space associated with emerging research.
In this chapter, I outline a few representative areas of research in nano- and neuroscience, and then consider the complex continuum of entangled research practices that results. The point of this review is to give a realistic sense of the distributed, opportunistic character of this research, and to show how such emergent practices challenge conventional assumptions about how ethics and science should be advanced. When ethicists consider nano/neuro convergence, they should not just look for some standard type—for example, of a medical, nano-enabled brain–machine interface (BMI)—and then evaluate the risk profile of research related to that type as if it designated some discrete project. Instead, there are a host of entangled types in a complex possibility space: nano/neuro interfaces range from the use of quantum dots (QDs) as sensors for understanding neuron function in vitro all the way to specific military enhancement-related projects. Along this continuum, engineers look for opportunities to apply their new tools. Mission-driven agencies or clinicians also look for tools to solve existing problems. In their dialogue, both the nature of the tools and the framing of the problems are altered. Here research does not fit the conventional model of a discrete, applied science where top-down political control can be established. There are a host of distributed practices and visions at multiple scales of time and space, and there are complex sets of transactions between these distributed practices. Instead of a conventional ethical assessment of some discrete interventional type, we need an ethic that is continually in process, and that arises in the interstitial spaces where these transactions occur. Such an ethic of nano/neuro convergence will be opportunistic in the same way that the practices themselves are, and it will be oriented toward dispositions, capacities, and ends, rather than just oriented toward discrete actions. 
Such an ethic of nano/neuro convergence will be informed by, and inform, the new models of understanding and control integral to the emerging research. The contrast between the conventional type-based ethic and the needed interstitial ethic of nano/neuro convergence will be illustrated by critically reflecting on US and EU initiatives that consider the use of nano-enabled BMIs for enhancing human performance.
On The Novelty Of Nanoscience: Establishing A New Kind Of Interface With The Molecular Scale
Nanoscience involves the study and manipulation of material systems that have components 1–100 nanometers (nm) in size. On the bottom end of this scale, we approach the size of atoms; for example, a hydrogen atom is about one angstrom, which is a tenth of a nanometer. Quantum mechanical principles are needed to understand the behavior of matter at this scale—all is a weird blur, accounted for by a set of well-established formal tools, but not easily understood by standards of ordinary human experience. As we move to the upper end of the nanoscale, there is a kind of averaging effect that stabilizes phenomena, yielding bulk-level material properties. While there is a general understanding of how these bulk-level properties arise from the ground up, even modestly complex systems cannot be modeled in quantum mechanical terms. The nanoscale thus signifies a middle level or meso-range, where ab initio computational tools break down, and a complex, multiscale alignment of theoretical and experimental methods is needed. This is a frontier, where solid-state physics, supramolecular chemistry, and molecular biology converge with one another and with cutting-edge applications in fields as diverse as materials science, biotechnology, and medicine.
Since nearly everything of interest to humans has nanoscale components, the scale alone cannot be used to define nanoscience. By that criterion, most chemistry, biology, and a host of other science and engineering disciplines would all be nanoscience. In order to more narrowly circumscribe the field, the US National Nanotechnology Initiative (NNI) has advanced a definition that highlights the understanding and manipulation of meso-level principles “to create materials, devices, and systems with essentially new properties and functions because of their small structure” (Roco and NSTC 2004, p. 890, my emphasis; NSTC 2002, p. 11). Of course, much here depends on what is meant by such essential novelty of properties and functions. A careful analysis of several representative examples would show there is considerable variability in meaning—indicating a set of family resemblances, rather than necessary and sufficient conditions.
The manner in which novelty depends on size can be illustrated by QDs, which have several uses in neuroscientific research (Pathak et al. 2006). QDs are nanosized semiconducting materials (approximately 5–15nm); for example, they might involve a cadmium selenide (CdSe) core in a zinc sulfide (ZnS) shell (with shell size ranging up to 120nm). When excited, they fluoresce, with the color of fluorescence related to the size and shape of the QD. The color can thus be tuned by synthesizing dots of different size. The relation between color of the light emitted and size is understood according to quantum mechanical principles (e.g. the quantum confinement effect). If we consider an electron confined in a small box (less than about 20nm), the uncertainty relation tells us that as we decrease the box size we get an increase in the electron’s kinetic energy. If we consider a photon (a particle of light) exciting an electron from its valence band into the conduction band (the difference between these giving a band gap), then the smaller the box size, the greater the energy needed to excite the electron, and thus the greater the energy released when the electron relaxes back to its valence band. (A full account of the phenomenon depends on multiple factors, including a complex combination of two kinds of quantization, one related to size and the other related to capacitance of the semiconductor and the amount of charge within it (Reed 1993; Vanmaekelbergh and Liljeroth 2005).) If we consider the visible spectrum, greater energy light (shorter wavelength) is on the blue side, and lower energy light (longer wavelength) is on the red side of the spectrum. Thus as dot size decreases, the wavelength of the light shifts from red to blue. Here we can see how the novelty of the properties and functions (e.g. the size-tunable wavelength of QDs) depends on the characteristics of the meso-realm.
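The size dependence described here can be made explicit with a standard textbook approximation. The following equations are a simplified sketch (a Brus-type model) added for illustration, not drawn from this chapter; they omit the Coulomb and charging terms alluded to above (Reed 1993):

```latex
% Particle-in-a-box energy levels for a carrier of mass m confined to a box of width L:
E_n = \frac{n^2 h^2}{8 m L^2}, \qquad n = 1, 2, 3, \ldots
% Brus-type approximation for the effective band gap of a dot of size L,
% with effective electron and hole masses m_e^* and m_h^*:
E_g^{\mathrm{eff}}(L) \approx E_g^{\mathrm{bulk}} + \frac{h^2}{8 L^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right)
% Emission wavelength, which blue-shifts as L decreases:
\lambda(L) \approx \frac{h c}{E_g^{\mathrm{eff}}(L)}
```

Because the confinement term scales as $1/L^2$, halving the dot size roughly quadruples its contribution to the effective gap, which is why modest changes in synthesis conditions yield the red-to-blue tuning described above.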
In some ways, a QD is like an artificial atom, allowing for exploration of basic quantum mechanical principles (Michler et al. 2000). Interest in the nanoscale, however, does not just depend on this capacity to exploit quantum principles for the creation of new products that enable fundamental scientific investigation. Additionally, there is an interest in imaging and interfacing with complex systems at this fundamental scale, and exploitation of their properties for human ends (Ratner and Ratner 2003 provide an accessible overview by a leading research group). Once synthesized, QDs can be used in areas as diverse as renewable energy (photovoltaic devices), quantum computing, and sensing. In biomedical applications, the surface of QDs can be functionalized so they bind to structures of interest; for example, to proteins, DNA, or viruses (Sutherland 2002; Zhou and Ghosh 2006). Since QDs photobleach at a much slower rate than traditional fluorescent markers (they are as much as 100 times as stable), are brighter (as much as 20 times), and allow for tuning of wavelength according to size, they greatly expand the capacity of the biologist to track what is going on in cells (Chan and Nie 1998). QDs can be used to tag cells, track molecules across cell boundaries, or track single molecules within cells (Cai et al. 2007).
In these biomedical uses of the QD, the novelty of nano is more difficult to specify: it concerns the qualitative advance that arises from many quantitative, incremental developments that coalesce to provide a new level of resolution and control of the simplest elements of biological systems (Silva 2005, 2006). Here there is a kind of convergence of the specific, coordinated functions of elements in a hierarchically organized ensemble (associated with the logic of biological systems) and the kind of mass action integral to chemical systems. “Nano” as a general category thus connotes a new kind of interface with the molecular building blocks of living systems, and can thus be taken as the general science for enabling a long-awaited molecular medicine (Khushf 2008).
Many of the proposed clinical applications of QDs depend on their multi-functional character (Gomez et al. 2005; Azzazy et al. 2007). They can be used diagnostically to image pathological processes; for example, the surface of a QD can be functionalized so it binds to cancer cells, revealing their location when excited by light. But they can also be used to target specific cells; for example, when binding to or ingested by neoplastic cells, their potentially toxic effects might be exploited (a Trojan horse strategy). The same QD might thus perform both diagnostic and therapeutic functions, leading to so-called theranostics (Gao et al. 2004; Yezhelyev 2006).
In characterizing the development of nanoscience, Renn and Roco (2006) have identified four stages, ranging from simple particles (such as QDs) to multifunctional, hierarchically organized nanosystems. As research moves to the latter stages, QDs perform diverse clinical functions, depending on the environment or internal state of the nanosystem; for example, they might have optical properties that are modulated, have membrane-crossing equipment, and even be given an enzymatic function (Michalet et al. 2005). In these biomedical contexts, the meso-level considerations that account for the size-dependent optical properties of QDs are generally black-boxed, and emphasis is placed upon the diverse functions of QDs in the engineered interface. This leads to medical definitions of nanoscience that downplay the “intrinsic” novelty of nano-products (associated with meso-level quantum effects) in favor of “extrinsic” novelty associated with higher level organization of diverse nano-components. “In this functional definition of nanotechnology, it is implicit that this is not a new area of science per se, and that the interdisciplinary convergence of basic fields (such as chemistry, physics, mathematics, and biology) and applied fields (such as materials science and the various areas of engineering) contributes to the functional outcomes of the technology” (Silva 2006, p. 66). For Silva, who is a leading nano/neuro researcher, the novelty of nano-phenomena concerns the way systems components such as DNA or QDs can be adapted to new functions; for example, the way DNA that does not have the intrinsic capacity to function as a nanowire might be turned into such a wire as part of a new application (p. 65).
Finally, the novelty of nano might relate to the methods of synthesis, rather than to intrinsic or extrinsic properties of products or applications. It is common within nanoscience to distinguish between top-down and bottom-up methods of design (Ratner and Ratner 2003). These reflect general engineering strategies: Do you start with some plan—like the sculptor’s vision of the end product—and then inscribe this from the top-down, for example, by lithographic techniques? Or do you discover ways of evoking the desired structures so they self-organize from the bottom-up? Often within nanoscience, a combination of the two will be used. But researchers hope that more and more can be done by bottom-up self-assembly, because this will be needed for accomplishing some of the more ambitious research goals and also for developing products and services that are commercially viable (Zhang 2003).
Both top-down and bottom-up methods have been developed for the synthesis of QDs (Murray et al. 1995; Tersoff 1996). However, the questions of synthesis go beyond the formation of the QD itself. There are also top-down and bottom-up methods for integrating QDs among themselves and for engineering the interface between QDs and some system of interest. For example, Angela Belcher and her collaborators have re-engineered viruses (M13 bacteriophage) so they bind to ZnS at one end (Lee et al. 2002; Mao et al. 2003). For this, they generated a large phage library, and then selected for those variants that had the appropriate characteristics. M13 bacteriophage is about 880nm long, and 6.6nm in diameter. The variant that had a peptide on one end that binds to ZnS was termed A7 phage. When suspended in ZnS precursor solutions, an A7 phage-ZnS liquid nanocrystalline suspension was created. Belcher and colleagues used these methods to create highly organized thin films. They also used the A7 phage to generate nanowire arrays. In this way, “natural” systems and combinatorial/evolutionary design strategies guide a self-assembly process that is orchestrated to generate complex, highly ordered systems of QDs.
There is a fuzzy boundary between nano research and several other kinds of research in areas like solid state physics, synthetic biology, information technology, and neuroscience. Among these emerging research domains, there is a pervasive cross-traffic of tools and concepts, and advancements in one field often have complex implications for the way research in other fields might be advanced (Khushf 2009). It is thus a mistake to assume that “nano” involves some discrete set of tools that are then “applied” to “neuro,” as if the two domains were neatly distinguished. Instead, “nano/neuro convergence” should be taken as a designation for a broad continuum of emerging research practices where there is a complex entanglement of inter-related tools and concepts. The contrast class of interest is thus not between nano and related areas of emerging research in neuroscience, but between much of this emerging research and more conventional, disciplinary science and engineering, where it is possible to modularize the practices and products of research. Here nanoscience is of interest because of the way it is representative of these emerging technologies.
Nano/neuro research on quantum dots
In order to appreciate some of the challenges associated with characterizing nano/neuro convergence, we might ask how we could distinguish the “nano” from the “neuro” in the earlier-mentioned QD research. As a first approximation, we could assume that “nano” involves a kind of tool-oriented, applied science, and “nano/neuro convergence” involves application of those tools to neuroscience (Silva 2006). QDs have been used to track glycine receptors as they diffused into synaptic and extrasynaptic domains in cultured spinal neurons, showing that diffusion dynamics depended on whether receptors were synaptic, perisynaptic, or extrasynaptic (Dahan et al. 2003; a general review of QD tracking of the surface trafficking of neurotransmitter receptors can be found in Groc et al. 2007). They have been used to investigate how neuronal activity (and by implication, memory and learning) modifies diffusion properties of neurotransmitter receptors (Bannai et al. 2009). QDs can also control specific physiological or pharmacological responses; for example, to initiate downstream signaling of neurite growth (Vu et al. 2005). When combined with other methods (both within nanoscience and in more traditional areas of molecular biology/genetics), it is possible to directly control neural physiology at multiple systems levels, ranging from the constitutive electrical and molecular events up through the complex, functional organization of neural circuits (Lima and Miesenbock 2005; Deisseroth et al. 2006; Silva 2006; Andrews 2007; Elder et al. 2008). Nano/neuro convergence would then concern this cluster of enabling capacities. But this would give just one side of the convergence.
Mihail Roco brings into view the other side when he characterizes nano/bio convergence: “Nanotechnology provides the tools and technology platforms for the investigation and transformation of biological systems, and biology offers inspiration models and bio-assembled components to nanotechnology” (Roco 2003, p. 337). When speaking of how biology “inspires” nanoscience, Roco has in mind something like the QD research of Angela Belcher and colleagues (Lee et al. 2002): by genetically altering viruses, establishing selection mechanisms for properties of interest, and then using the selected viruses to assemble her QD nanosystems, Belcher is both imitating and exploiting biological systems for human purposes. Neural models for designing nanoelectronics would provide another example of a biologically inspired research agenda. Thus, when QDs are used to image or manipulate neural systems, we have one kind of nano/neuro convergence: we could call this nano-enabled neuroscience. But when the meso-level properties of QDs are studied, or when they are synthesized by chemists, or when they are self-assembled by viruses into novel materials, then we have nano, but not neuro. This would make all the QD research outlined in the previous section “nano,” but not “neuro” (although we would have nano/bio in the case of Belcher’s self-assembled systems). However, if we use “neuro” as an “inspiration” for “nano,” then we have another kind of nano/neuro convergence. According to Roco’s definition, nano-enabled neuroscience and neuro-inspired nanoscience constitute two distinct domains that are encompassed under the general category of nano/neuro convergence.
Roco’s account of nano/neuro convergence works as a first approximation, but it breaks down when we consider some of the more complex lines of influence, and when we view these over a longer historical timeline. For example, some look to neuroscience for the new architectures that are needed for nanoelectronics. Here we have neuro-inspired nano. The QD might be taken as a model of a neuron, and systems of QDs as neural systems (Toth et al. 1996). By understanding how neural systems handle uncertainty and error (e.g. by redundancy and adaptivity, which uses feedback from some functional interaction organized at higher systems levels to tune lower level circuits) or by exploring the integration of top-down and bottom-up pathways integral to neural interconnections, electrical engineers can develop new design strategies for circuits (van Roermund and Hoekstra 2000). However, once the electronic circuits are biomimetically designed, they can be taken as physical models of neural systems, and insights from these models can be used to ask new questions in neuroscience (Arenkiel and Ehlers 2009). Or the solutions to the electrical interconnection and control problems can inspire new ways of developing brain/machine interfaces (Miesenbock and Kevrekidis 2005; Aravanis et al. 2007; Schalk 2008). Here we get nano-inspired neuro. But when the inspiration maps back in a circle, the two distinct domains become entangled. There is thus a kind of iterative adjustment and alignment of disciplines, which leads to both convergences and divergences of disciplinary tools. In this cross-traffic, it becomes impossible to single out just one strand and consider it in isolation. This will have significant implications for ethics, since we will not be able to neatly specify “the ethics of X” and consider it in isolation from “the ethics of Y.”
If we consider nanoscience as the general science for understanding and interfacing with living systems at the meso-scale where the smallest functional units are constituted, then neuroscience can be viewed as a subcategory of nanoscience. While such an over-reaching definition of nano is problematic, it does capture at least one aspect of nano/neuro convergence. In one recent review for Science, neuroscience was understood in exactly this way: an integration of meso-scale models of micro-circuits with the architecture of higher scale processing was seen as the key to harmonizing the currently fragmented bits of neuroscientific knowledge (Arenkiel and Ehlers 2009).
NBIC Convergence for electrophysiological brain–machine interfaces
When considering nano/neuro convergence, the distinction between pure and applied science breaks down. Establishing an interface with neural systems is a task of both. Basic science is advanced by technological means that directly interface with the neural systems, and medical interfaces serve as platforms for advancing basic science. As a result, medical and other applied neuroscientific projects at the nano/neuro intersection will have the same characteristics that we identified when describing the novelty of nanoscience: they will represent a continuum of entangled practices, and move toward multi-functional capacities. To illustrate this, I turn now to research on nano-enabled brain–machine interfaces (BMIs) that was part of a US policy initiative cosponsored by the US Department of Commerce and the National Science Foundation. The stated goal of the initiative was to facilitate the convergence of nanoscience, biomedicine, information technology, and cognitive science (NBIC Convergence) and orient this toward advancing human performance (Roco and Bainbridge 2002a, b; Khushf 2007 provides an overview). Fairly radical enhancements of strength, lifespan, and cognitive function were proposed. Here I consider just two of the contributions related to nano-enabled BMIs.
The title of Miguel Nicolelis’ (2002) contribution to the first US Convergence conference nicely captures the goals of the initiative: “Human-machine interaction: potential impact of nanotechnology in the design of neuroprosthetic devices aimed at restoring or augmenting human performance.” Note how conventional medical treatment (= restoring) and enhancement (= augmenting) are used together. This reflects the general, multifunctional, capacity-oriented character of the US convergence initiative. Nicolelis’ experiments with monkeys have provided one of the most vivid demonstrations of the mid-term potential of BMIs. After implanting electrodes into their sensorimotor cortex, his team was able to develop a computer algorithm to interpret the neural signals associated with arm movement, and then use this to drive a robotic arm. Eventually, monkeys were trained to use the robotic arm without moving their own arm, indicating how the prosthetic limb was incorporated into the monkey’s body schema. These kinds of experiments have been taken as proof-of-principle that fairly radical BMI-based enhancements lie on our immediate horizon. Some of the most high profile research on BMIs—including that of Nicolelis—has been funded by the US military because of this potential for enhancement (Hoag 2003; Moreno 2006; Barr 2008).
In his Convergence contribution, Nicolelis (2002) highlights non-medical, general purpose reasons for pursuing this research. He argues that the “realization of the full potential of the ‘digital revolution’ has been hindered by its reliance on low-bandwidth and relatively slow user-machine interfaces (e.g. keyboard, mice, etc.).” For human operators, computers are just another “external tool.” However, “if such devices could be incorporated into ‘neural space’ as extensions of our muscles or senses, they could lead to unprecedented (and currently unattainable) augmentation in human sensory, motor, and cognitive performance.” Nicolelis then considers how electrophysiological methods of the kind he studies could be used to provide that kind of “seamless” human/machine interface (see also Cauller and Penz 2002).
For Nicolelis (2002), the rate-limiting step for advanced human BMIs depends on nanoscience and technology: they provide the tools for developing invasive interfaces with neural tissue (see also Moxon et al. 2004; Lebedev and Nicolelis 2006; Andrews 2007). In turn, such interfaces will enable us to further extend our understanding and control of meso-level phenomena, for example, by using them to create “a complete new generation of actuators, designed to operate in micro- and nanospaces” (Nicolelis 2002). Nicolelis presupposes the goals of systems neuroscience, and then regards research in nano as instrumental for realizing those goals. But there is also a more subtle and complex influence of nano on his research—an influence that can even be seen in the tone of his convergence contribution, with its blending of treatment and enhancement. When “nano” informs research on human/machine interfaces, researchers don’t just “use” it. Beyond that, they come to understand the task of establishing an interface in new ways; for example, in the multifunctional, opportunistic, capacity-oriented way exhibited in Belcher’s viral QD research (Stieglitz 2007; Schalk 2009). This understanding is not just due to the nanoscale research. It reflects a thought-style that is found in many areas of emerging research, including neuroscience. But that thought-style is reinforced by means of the nano/neuro convergence. This more subtle influence of nano is also well exhibited in another contribution to the US Convergence conference.
At the second US Convergence conference, Rodolfo Llinas quipped that his medical colleagues do not always respond positively when they hear about BMI research like that of Nicolelis. To get around this, Llinas and his colleagues developed a new, remote control strategy for establishing a BMI (Llinas and Makarov 2002; Llinas et al. 2005). For this, a catheter could be threaded through the vascular system to the place where blood bathes a region of the brain where an interface is sought. Nanowires small enough to cross the blood–brain barrier could then be released. These would not interfere with blood flow or the exchange of gases, or introduce any disruption of brain activity. A small number of electrodes with an amplifier converter could then be used to establish the interface. At the time of the second Convergence conference (2004), Llinas and colleagues had already demonstrated in vitro that the nanowire could receive and relay signals from a firing neuron and that it could be used to trigger such a firing. He also demonstrated in mice that the vascular system could be used to target brain regions of interest, and provided functional magnetic resonance images showing nanowires as they cross the blood–brain barrier and snuggle next to neurons in ways that are likely to be safe and allow for the desired interface. Since that time, he has been awarded a patent for this interface. In this patent, Llinas (2008) asserts that “the brain–machine bottleneck will ultimately be resolved through the application of nanotechnology.” (The patent actually uses much of the wording from his Convergence workshop presentation, later published in the Journal of Nanoparticle Research (Llinas et al. 2005).) In prominent reviews by Lebedev and Nicolelis (2006) and Leary et al. (2006), Llinas’ research is presented as an example of the kind of BMI that nanoscience makes possible.
Optogenetic brain–machine interfaces
Electrophysiological interfaces like those of Nicolelis and Llinas have some weaknesses. One of the most significant of these is referred to as the “specificity problem” (Miesenbock and Kevrekidis 2005). When electrodes are used, they indiscriminately activate all kinds of neurons and fibers of passage in the area. Even in the case of Llinas’ nanowires, primary lines are needed that would be used with a multiplex amplifier. This would indiscriminately activate or receive signals from all nanowires within the receiving range of the primary line. Current BMIs such as those used in deep brain stimulation (DBS) for Parkinson’s disease have side effects that are thought to arise from this non-specific action, which yields extensive “collateral damage” (Aravanis 2007). This problem cannot be addressed by current electrophysiological technologies, because these depend on relatively large scale, mass-activation of multiple neurons within the region of an electrode. Initially, one might expect to address the problem of specificity by means of the reduction of scale associated with nanoscience and technology. If one could use a large number of very small electrodes, then one could activate all and only those neurons that are important in some circuit of interest. However, this strategy assumes that one can move down scale to the smallest functional unit (as modular, and independent), and then independently interface with each unit that constitutes some circuit of interest. But even the simplest of functions is associated with a large number of such units, and these associated units (the ensemble that constitutes the circuit) are not themselves addressable as a single unit. Rather, they are distributed among a host of other cells. As a result of this non-local, distributed character, establishing such a fine-grained interface would require a process of individually discriminating and interfacing with each of the smallest, locally addressable units.
As one moves down scale, the complexity of information to be managed increases exponentially, and such a process of interface becomes practically insurmountable. This problem of managing complexity is why nanotechnology needs bottom-up methods of self-assembly.
To solve these problems of specificity and complexity, new optogenetic interfaces have been developed (Deisseroth et al. 2006; Henderson et al. 2009). These take advantage of biologically based mechanisms for interfacing with the circuits (Zemelman et al. 2003). Vectors are developed that introduce genes into the specific cell types of interest, and the “natural” machinery of these cells is used to make proteins that either generate light when cells are activated (sensors) or that excite (or inhibit) neurons when light is introduced (actuators). Fiber optic cables (microendoscopes) can then be inserted instead of electrodes, and light can be used to receive signals or target only those cells that have been genetically altered to interact with the light (Deisseroth et al. 2006). In addition to the new specificity of action, optogenetic interfaces avoid the biofouling/scarring problems that lead to degradation of electrophysiological interfaces (Moxon et al. 2004; Elder et al. 2008). In this way, “the organism itself generates the tools necessary for investigating its function; biology is revealed through biology” (Miesenbock 2009, p. 395).
In several proof-of-principle experiments, optogenetics was used to modulate behavior of flies (Lima and Miesenbock 2005), zebrafish (Arrenberg et al. 2009), mice (Hira et al. 2009; Tsai et al. 2009), and monkeys (Berdyyeva and Reynolds 2009). Preliminary research indicates the genetic interventions are safe, and this technology is already moving to human trials. This may be an area where some of the long-awaited, more radical developments associated with gene therapy are first realized in clinical practice. These genetic interventions have the interesting characteristic of introducing new properties to cells (allowing them to optically interface with artificial mechanisms of control), rather than addressing a defect associated with some pathology (as in many gene therapy protocols).
The relation between optogenetics and nano is similar to that between electrophysiology and nano: in both cases nano/neuro convergence denotes a broad set of enabling tools and connotes a thought-style oriented to new capacities for understanding and manipulating neural systems at the most basic level. To advance this convergence, the US National Institutes of Health has recently established a major Nanomedicine Center for the Optical Control of Biological Function (NIH 2009).
QDs thus provide just one development among many in nanoscience that might be used for establishing an optical interface with neural systems. In fact, there is a convergence between all three of the strands outlined in this essay (Arenkiel and Ehlers 2009). Although electrophysiological and optogenetic BMIs are sometimes presented as competing alternatives, they too should be regarded as complementary, converging technologies (Scanziani and Hausser 2009). Such convergence is clearly demonstrated in two sets of recent studies, which bring optogenetics closer to human application. In the research of Han et al. (2009; see also Berdyyeva and Reynolds 2009), published in Neuron, an optogenetic interface was used to modulate neural circuits in a monkey, and an electrophysiological interface in the (p. 476) same animal was used to identify complex cascades of excitation and inhibition. Here optogenetic and electrophysiological technologies are integrated in a hybrid BMI that enables both sensing and control of neural circuits. A different kind of convergence is demonstrated by two essays in Science on the mechanisms of action of DBS for Parkinson’s patients. In optogenetic research by a team in the lab of Karl Deisseroth (Gradinaru et al. 2009), genes encoding light-sensitive ion pumps were inserted into the subthalamic nucleus (STN) of mice, and optical signals were used to inhibit the activity of these neurons, thus testing the hypothesis that DBS works by a kind of mini-lobotomy of STN hyperactivity generated as a result of degenerating dopaminergic cells in the substantia nigra. The research team was also able to selectively excite cells in the STN, testing an opposing hypothesis. In both cases, they did not find the beneficial effects associated with DBS. However, when they activated cells in the primary motor cortex whose axons extend into the STN, they were able to generate the beneficial DBS effects.
They also found that different effects could be found when the same cells were driven at different frequencies, indicating the importance of temporal precision in optogenetic control. In another study published in the same issue of Science, Nicolelis’ team (Fuentes et al. 2009) showed that DBS effects on Parkinsonian mice could be generated by placing electrodes on the surface of the spinal cord. This involved indirect stimulation of the cortex that compensated for degenerating effects of Parkinson’s disease. Thus, in different ways, both of these investigations “point to the cortex as an important player in the therapeutic effect of DBS for Parkinson’s disease” (Miller 2009, p. 1555).
On The Entanglement Of Ethical And Scientific Concepts
In the previous sections I gave more detail about QDs and BMIs than might seem necessary for an encyclopedia article that looks at the ethics of nano/neuro convergence. But the detail is important. Too often, summary judgments are made about the ethics of nanoscience and nano/neuro convergence—that there is “nothing new” (Gordijn 2006; Allhoff 2007; Litton 2007; Alpert 2008)—but an insufficient account is provided of the research. The earlier overview exhibits a slice of this research, so such claims about ethical novelty (or its lack) can be assessed. But before I can provide this assessment, I need to take a brief detour away from the nano/neuro research. It turns out that summary judgments dismissing any ethical novelty in nanoscience depend on implicit assumptions about the nature of ethical reflection, and these, in turn, depend on assumptions about the nature of pure and applied science. By undermining the conventional assumptions about science, nano/neuro convergence undermines associated assumptions about the character of applied ethics. In order to appreciate the ramifications of nano/neuro convergence, I thus need to first consider the underlying linkage between conventional science and applied ethics.
Forms of practical rationality integral to ethics mirror assumptions about the nature of science and its rationality (a detailed review is provided in Khushf 2009). In many contexts, something like a linear model of scientific research and development is still assumed (Bush 1945). Science is divided into two kinds: pure and applied. The first involves the pursuit of understanding for its own sake, and yields descriptively accurate portrayals of the world. (p. 477) The second kind of science involves intervention in that world, and thus a modification of what is antecedently given. This pure/applied distinction depends on a related distinction between what is natural and artificial. The pre-interventional given is called “nature,” and the task of pure science is to understand it. In contrast, applied science alters what is given; as such, it is artificial and yields “artifacts.” In an applied science, ends and values from outside science orient the activity—for example, medicine seeks to promote health and eliminate disease among humans. Basic science and technology provide means that are “used” to advance the extra-scientific ends—thus the “linear” movement from the basic to the applied domains. (An outstanding account of this model of applied science and its associated technical rationality is found in Schon, 1983, part I.)
The linear model yields two kinds of ethical reflection roughly corresponding to the two kinds of science. Pure science is governed by internal norms—for example, honesty in reporting of data, fair attribution of credit, and so on. Here the free and open pursuit of truth is advanced, and ethical norms ensure that claims are sifted in the appropriate way and can be relied upon. The internal norms of science foster the flourishing of the science itself. (The classic account of these norms is provided in Merton 1979; a nuanced justification in terms of internal and external goods is found in MacIntyre 1984, ch. 14.) In applied domains, however, the ethic needs to go beyond the purely internal concerns of the practice and ensure that the change brought about in the world does not disrupt the flourishing of other activities external to that practice. The intervention is thus regarded as a modular unit that is rationally organized toward the advancement of some specified end. This whole unit is then taken as a perturbation in the world, and the ripple effects of the perturbation are assessed. Applied ethics ensures that the positive aspects of the perturbation outweigh the negative aspects. According to this model, applied ethics is like a meta-applied science that reasons backward: instead of moving from pre-given ends to sufficient means, ethics considers the complete applied intervention as the means and considers the ends/consequences that follow. The task of the applied ethic is then to regulate applied science so that negative consequences are blocked or mitigated, and positive consequences are optimized. (J.S. Mill provides the classic statement of this model of ethical reflection; the associated risk model is reviewed in Macnaughten et al. 2005; Wynne 2005.)
To simplify analysis of proposed applications of science, applied ethicists often categorize new interventions or products according to pre-existing ethical types. For example, a new drug for treatment of heart disease does not call for a new ethical type. Such a drug would provide an instance of a standard type—“new pharmaceutical”—and there are well-defined ethical/policy guidelines for managing such instances. For conventional applied ethicists contemplating some emergent research domain such as nano/neuro convergence, the question is then: “Is there anything new here?” (Gordijn 2006; Allhoff 2007; Litton 2007; Alpert 2008). By this they do not mean “is there new science or technology?” That is taken for granted. Instead, they ask whether the existing taxonomic structure of ethical types is sufficient to understand and socially manage the consequences of some proposed development. They are thus asking whether there is a novel ethical type that requires some modification of the background taxonomy. This approach thus assumes a neat division of labor between the work of the applied scientist and that of the applied ethicist. The ethics work comes downstream of the science, and presupposes some potential or actual intervention that can be discretely regarded. The ethicist then asks whether the antecedent ethical types are sufficient for managing the consequences of that intervention.
(p. 478) Despite extensive criticisms of the linear model, these background assumptions about applied ethics are pervasive. They directly inform the incessant worry about whether there is “anything new” in nano or nano/neuro convergence research. But this approach to ethics misses exactly what is most significant about much of the emerging research. Namely, it misses how research does not fit the conventional pure/applied distinction; how it resists categorization according to existing taxonomies; and how it concerns enabling capacities, rather than modular interventions. In the fuzzy world of emerging research, there is even a breakdown of the conditions for a neat division of labor between the work of the scientist or engineer, on one side, and the work of the ethicist or social scientist, on the other side (Gorokhov and Lenk 2009; I provide a more detailed account of this in Khushf 2009). This, in turn, undermines the conditions for post hoc ethical analysis. Instead, ethics needs to move upstream and be integrated as part of the research and development process. But that requires a deep change in research cultures of both the ethicists, who often know little about the science, and the scientists, who know little about ethics.
The European Union Convergence Initiative As A Type-Framed Ethic
In addressing the ethical issues integral to electrophysiological and optogenetic BMI research, we are faced with a fundamental framing problem. What exactly do we mean when we speak of BMIs? This can be interpreted as a question about the ethical type featured in the ethical analysis. Are we referring here to specific, already developed BMIs, such as those currently used in DBS for treatment of Parkinson’s patients? Do we consider the research projects of people like Deisseroth, Miesenbock, Nicolelis, and Llinas, and take what has been accomplished with animals as evidence of the human BMIs on the mid-term horizon? How is this BMI research related to the more speculative visions integral to NBIC Convergence and military-funded initiatives for enhancing human performance? Should we address all these as variants on a single type—that of BMIs—or see a set of distinct types, each raising different kinds of ethical concerns? And what role does nanoscience play in addressing any of these questions?
A traditional applied ethic will begin with such type-questions (usually implicit), and then, after clarifying the type at issue, move on to consider specific problems integral to that pre-given type. Generally, the framing associated with the specification of type is viewed as external to the ethical analysis. It provides the condition for the ethic. In the rare cases when it is made explicit, such framing is seen as a matter of proper description of some emergent phenomenon that calls for the ethical analysis. When ethics researchers ask “what’s new about BMIs associated with nano/neuro convergence?” they usually are asking such a type-question. They want specification of some feature or property of nano/neuro convergence that would require a different ethical analysis from the BMIs already considered as part of neuro-ethics. (Here novelty functions like “specific difference” in Aristotelian definitions and taxonomies.) Since the framing is taken as establishing the conditions for ethics, I will call such ethical analysis a “type-framed ethic.”
(p. 479) A European Union High Level Expert Group (HLEG 2004) was commissioned to develop an EU counterpart to the US NBIC Convergence initiative. This Group was highly critical of the US workshops, and they presupposed a type-framed ethic in their analysis. Here I focus on the arguments of Alfred Nordmann, who was the rapporteur for the HLEG. I’ll consider both the official EU report that he drafted and also some of Nordmann’s individually authored essays (2007a,b, 2009) where distinctions integral to that report are defended. Even beyond the specific topic of nano-enabled BMIs, the approach of the EU High Level Expert Group can be taken as representative of how applied ethical problems are generally addressed. By critically evaluating some of Nordmann’s claims, I thus show how nano/neuro convergence challenges a type-framed ethic.
The EU Group wants to make a sharp distinction between the development and use of BMIs for medical application—such as DBS for Parkinson’s patients—and BMIs for human enhancement. They think ethicists should not focus on enhancement projects, because they are unrealistic, and they divert our attention from the more pressing, real world challenges. For Nordmann, this means that calls for “upstream ethics” of such enhancement projects are misguided. “[E]thical concern is a scarce resource and must not be squandered on incredible futures, especially when they distract from on-going developments that demand our attention” (2007b, p. 34). They think the US NBIC Convergence initiative inappropriately framed the task of science, ethics, and policy. Radical human–machine interfaces are assumed to be on the immediate horizon—taken as given—and we are then supposed to ask about the ethics of this soon-to-be-realized development. But for Nordmann and the HLEG, this involves an inappropriate relation to history and technology. Instead, we should start with our current needs and challenges, and then ask: what research provides a solution to these challenges? Nordmann thinks that the US Convergence workshops undermine the very possibility of genuine ethical reflection when they presuppose a speculative and problematic technological development as if it were already realized. By taking some anticipated development as if it were given, they deprive ethics of its standpoint. “Rather than adopt a believing attitude towards the future, an ethics beholden to present capabilities, needs, problems, and proposed solutions will begin with vision assessment. Envisioned technologies are viewed as incursions on the present and will be judged as to their likelihood and merit.” (2007b, p. 41) When considering BMIs that might allow for thought-control of robotic arms or mind-mind communication, Nordmann thinks that there is no proof of principle. 
He sees nothing special about Nicolelis’ experiments: the monkeys only have a robotic arm that poorly mimics the motion of an actual arm. In humans, there are only the partial lobotomies of DBS or the terribly slow manipulation by ALS patients of a cursor to spell out words. In all of this, there is nothing radical. For Nordmann, the task is thus to make clear to scientists the difference between the medical and enhancement visions, and to reorient research and debate to the real world problems and prospects (Nordmann 2007b, 2009; HLEG 2004).
Summarizing, Nordmann and the EU Group introduce a sharp distinction between two types of BMI research. They dismiss the enhancement oriented work associated with the US Convergence initiative, and want to frame ethical deliberation in terms of discrete, therapeutically oriented projects like DBS for Parkinson’s disease or artificial hands for those who are disabled. For them, “ethics” must take human capacities and limits as given, and then, in a second step, consider how some technological intervention might best address needs that are already specified. This is what is already done in medical, therapeutically oriented (p. 480) research: researchers and clinicians respond to some pre-given need associated with some disease. Nordmann and his EU colleagues want an ethic that is therapeutic and problem-oriented in the same way: ethics becomes a kind of applied social science. They suggest that ethics of enhancement should not be discussed, and any enhancement projects should be blocked politically at the stage when they arise.
On The Deficiencies Of A Type-Framed Ethic
In substance, the EU High Level Expert Group wants to impose upon nano/neuro convergence the conventional view of an applied science (on this linkage, see Khushf 2007 commentary on Nordmann 2007a). What converges are technologies, not the basic sciences. The EU Group thus explicitly excludes “science” from the name: “NBIC Convergence” becomes “Converging Technologies” (CT). As befits the classical ideal of an applied science, they want specific, targeted projects that start with a clear delineation of the end/vision, and then, step by step organize disciplinary pursuits so that end is realized. They want to divert policy away from the kind of general, multifunctional, open-ended, capacity-oriented focus found in the US Convergence initiative. Their type-framed ethics simply presupposes this normative ideal, and then attempts to implement it. All of their recommendations could be taken as a political call to construct CT as an applied science, thus establishing a predictable order out of the otherwise chaotic ensemble of diverse technoscientific practices. But this simply amounts to a kind of nostalgia: they wish for a kind of transparent, predictable science that is no more (and probably never was). Consideration of the QD and BMI research outlined earlier makes clear why this ideal is problematic.
First, there is no clear type that captures common properties of interfaces associated with nano/neuro convergence. Instead, there are many kinds of brain–machine, human–machine, and neural–system interfaces. We should view the QD, electrophysiological, and optogenetic research as constituting a complex continuum, whose possibility space involves a range of variant, but overlapping types (this is apparent in scientific reviews like Miesenbock and Kevrekidis 2005; Deisseroth et al. 2006; Lebedev and Nicolelis 2006; and Arenkiel and Ehlers 2009). Some of these interfaces are developed primarily for purposes of basic research in neuroscience. Others are developed for medical or human enhancement purposes. And still others are developed for applications in electronics, cognitive science, systems management, and materials science. But most significantly, there is a blurring between all of these areas, and a kind of distributed, opportunistic adjustment of goals according to context, grant funding, and to the affordances provided by existing research capacity.
We can follow this entanglement of ideas and research projects over all scales. Consider, for example, the QD research outlined earlier. When a QD is functionalized so it binds to some neural structure of interest, then, when it is excited by light and fluoresces, an interface has been established between the human researchers and the neural system. Here we might define the interface in terms of the mediating agency of light. This would make optogenetic interfaces continuous with those established by QDs. Alternatively, we might define the interface in terms of the scale and the qualitative advance associated with the confluence of new capacities. In a central way, nanoscience is about establishing such interfaces. In all (p. 481) sorts of review essays, roadmaps, and grand challenges, the establishment of interfaces between macro-scale and nano-scale structures has been viewed as integral to the development of both nanotechnology and neuroscience. Only by this means are understanding and control of systems at the most basic level possible. Further, most scientists agree that establishing these interfaces will require novel strategies; we cannot simply scale down current micro-level interfaces. As a result, new ways for managing failure/error will be needed; new ways of communicating; new ways of working with natural, self-organizing processes (van Roermund and Hoekstra 2000; Zhang 2003). Some of the needed novel strategies are already seen in QD and optogenetic research (Miesenbock and Kevrekidis 2005; Sjulson and Miesenbock 2008). These, in turn, challenge current models for managing risk and understanding safety: for example, when a QD or some other nanoparticle is used in a multifunctional way—perhaps as a sensor that triggers some latent capacity to destroy a cancer cell when some additional environmental condition is met—what does this do to current models for testing, e.g.
to the time frame for assessing safety or to assumptions about what is needed in the preclinical stage of testing? (Kelty 2009 provides a nice account of how conventional views of the science and ethics/policy of toxicity are challenged by nanoscience.) Viewed in this way, all of the QD research was about a kind of BMI, or at least about a neural interface. This should not just be thought of in the trivial sense that all neuroscience is about establishing an interface. The key here is to notice how, by means of nano/neuro convergence, the research interface is at the same time a kind of functional interface that could be naturally extended into practice settings. In the US Convergence initiative there is an attempt to further cultivate these kinds of opportunistic extensions. The aim is to cultivate such transfers in the same way Belcher’s team (Lee et al. 2002) cultivates variants on some phage type in order to generate a library on which Belcher’s selection function can operate.
In the nano/neuro convergence continuum, there is no sharp line between basic and applied research, and thus no gate at which control can be imposed to block enhancement research. Nordmann views the whole nano/neuro convergence research program as if it were a giant applied science project. For him, vision comes first, before any large-scale project. “Envisioned technologies” are “incursions on the present” and are to be “judged as to their likelihood and merit” (2007b, p. 41). This assumes we can distinguish between these “envisioned technologies” and the fundamental research that would provide the basis for the technological interventions. But in practice, the basic research necessary for enhancement is the same basic research necessary for the medical BMIs. In both cases, basic research advances by establishing functional interfaces that enable both understanding and control of the neural systems that are studied.
Miesenbock did some of the path-breaking work establishing optogenetic interfaces. For him, the central goal was to advance neuroscience. But he seeks to advance this by breaking out of the purely descriptive, observational mode of pure science. “Mechanistic understanding requires intervention” (Miesenbock 2009, p. 398). Drawing on control theory, Miesenbock and colleagues suggest that “scientific fields shift emphasis from observation to control as they mature” (Miesenbock and Kevrekidis 2005, p. 534). Although he seeks “to do in order to understand,” he advances this understanding by establishing artificial interfaces that enable novel functions and control of the systems he studies. He thus blurs the distinction between “natural” and “engineered” interfaces. By using “natural” mechanisms (e.g. of genetics), he solves the specificity problem associated with cruder kinds of interfaces. (p. 482) Note again the similarity between this kind of approach and that used by Belcher when she re-engineers viral self-assembly so it becomes a tool for organizing complex structures. In the research of Miesenbock and Belcher, “natural” systems that are currently not fully understood and transparent are “used” to “solve” an engineering problem. This is not just “like” genetically modified organisms (one of the concerns raised in the early days of nanoethics). These are GMOs, but now the “genetic” component is just one element of a more elaborate design. For Miesenbock, once the optogenetic interface is established, the organism (e.g. fly) can be placed in a more “natural” setting, allowing for study of relations between neural circuits and behavior. The organism no longer needs to be immobilized in the manner necessary with earlier interfaces, and thus is brought closer to the unperturbed state. 
But at the same time, far more radical modulation of behavior becomes possible: “remote control of behavior through genetically targeted photostimulation of neurons” (Lima and Miesenbock 2005). In this research, the most “natural” behavior is achieved by the greatest artifice.
There is already a multifunctional character to the developing BMIs that makes them distinct from conventional medical therapies. According to the EU Convergence report, interfaces like those for ALS or Parkinson’s patients are always for very specific purposes, and they compensate for some lost function. These special-purpose devices are sharply contrasted with general-purpose interfaces that might be used for human enhancement. But this simple either/or is inappropriate. We don’t face a simple choice between fully specific, single-function interfaces, on one side, and completely nonspecific, general-purpose interfaces, on the other. Instead, BMIs initially developed for specific purposes such as treatment of Parkinson’s disease are found to serve other functions as well. This multifunctionality of the interface can arise from unplanned, opportunistic discoveries that are made after the interface was initially developed. Again, much of nano/neuro convergence is concerned with understanding and facilitating the conditions that foster such opportunistic discoveries. An ethic of nano/neuro convergence should consider the norms that might inform these processes.
The need for a third category—of multifunctional interfaces—can already be seen in the crude interfaces used for DBS. Nordmann simply takes these as a mini-lobotomy to address symptoms of late Parkinson’s disease (2007b, p. 44). But it is now clear that these interfaces function in more complex ways (Gradinaru et al. 2009; Miller 2009). Once the interfaces are established, they provide a window into more complex mental states and processes, and this provides an “opportunity” to ask more general questions about things like mood (Schneider et al. 2003), pain (Smith 2007), or about dopamine altering drugs and the importance of high frequency oscillations in sustaining voluntary control (Foffani et al. 2003). This almost natural extension of interface functionality should be more carefully studied as part of an ethics of nano/neuro convergence. Such extensions seem to be inherent to functional BMIs, and may have something to do with conditions needed for establishing a functional interface of this kind in the first place. Also, the extension of function arises naturally from mechanisms used to increase specificity. In the case of DBS for Parkinson’s disease, an electrode needed to be threaded into a tiny, cubic-millimeter region of the brain. To provide greater control in targeting the appropriate region, several contacts were placed along the electrode, allowing the clinician to experiment with the contacts. By this means, physicians could independently target different regions along the electrode, thereby homing in on the contact that provides the best symptom profile. When clinicians did this, they accidentally discovered that in some cases the mood of a patient could be significantly altered (Bejjani et al. 1999). This led to experimentation with mood, and immediately suggested alternate medical (p. 483) conditions that might be managed by DBS.
This control of mood by itself raises some disturbing possibilities: patients at times seem like puppets, manipulated by strings and wires, and it is easy to imagine how this technology might be misused (Nordmann 2007b). However, it is also not too hard to imagine how patients might be given greater control over their own emotional states by such means, although the ramifications of doing this are anything but clear.
Experimental extensions of functional interfaces arise naturally for at least three reasons: (1) because things like mood and voluntary control are already implicated in the disease process that is addressed by the BMI; (2) because the neural structures (e.g. associated with the STN) addressed by the interface are themselves complex and multifunctional; and (3) because the specificity needed for the interface is partly obtained by introducing redundancy and then selection (as with the multiple contacts on the electrodes), and once that redundancy is introduced, it allows for addressing additional structures that were not originally targeted by means of the interface. Consider, for example, the way Llinas uses the vascular system to introduce nanowires within a region, or the way Miesenbock genetically targets specific kinds of neurons to introduce the novel, light-related property (sensor or actuator). In both cases, complex structures and mechanisms of the organism are “used” to solve the problems of specificity. But this only provides a kind of coarse-grained solution. After that, additional control is gained by means of the central lines (electrical or fiber-optic) used to interface with the nanowires or with the novel proteins arising from the genetic intervention. Once this hardware is introduced, all sorts of extensions are possible in relatively non-invasive ways.
The EU Convergence Group assumes that general purpose BMI enhancements would need to be advanced as separate, large-scale projects, and that these can be politically blocked when they arise (Kjolberg et al. 2008). They make a sharp distinction between envisioned military uses for an artificial hand and therapeutic uses (HLEG 2004). But when we recognize how therapeutically oriented BMIs can be extended, it becomes clear that the first enhancements will arise as extensions of treatments. For example, DARPA, the Air Force, and the Army have been major sponsors of BMIs (Moreno 2006), and have supported some of the most high profile research, including the above-mentioned work of Nicolelis (Hoag 2003) and Llinas (2005, Acknowledgements on p. 125). In a Washington Post review (Barr 2008), the director of DARPA explains how neural–machine interfaces for a prosthetic arm “hold promise that disabled soldiers can stay in the military ‘and contribute as before’ rather than be discharged.” Obviously, DARPA’s interest in such a disabled soldier arises from the “opportunity” a treatment-oriented human/machine interface would provide for extension. By dismissing these possibilities as speculative, Nordmann and the EU Group divert ethical reflection away from visions and values that inform prominent strands of BMI research. Instead of openly discussing these visions, they want to politically legislate them away. This approach only makes the issues invisible (Khushf 2006).
The Ethics Of Nano/Neuro Convergence: Understanding The Task
Taken together, the earlier-mentioned aspects of nano/neuro convergence problematize a traditional type-framed ethic. But this does not completely undermine the standpoint of (p. 484) ethics, as Nordmann contends (2007b, p. 40). He and the EU High Level Expert Group work with too narrow a conception of ethics, and with too great a confidence in what might be accomplished by political means. Ethics is more than just the politics of vision/needs assessment, implementation, and regulation. To responsibly address the emergent capacities associated with nano/neuro convergence, a much richer kind of ethical reflection is required, one better suited to the practices integral to emerging research. I close by considering what is needed for this ethic.
1) An ethic of nano/neuro convergence needs to move upstream (Wilsdon and Willis 2004). It needs to be anticipatory and not just reactive, and needs to work with a mid-term time horizon (Khushf 2007). But moving upstream does not mean making ethics political. Too often, the metaphor of a stream and flow is still interpreted in terms of a variant of the older linear model. It is assumed that ethics means control of some discrete intervention, and control means a capacity to specify the ends that, in turn, govern how subcomponents of a research endeavor are coordinated and integrated (HLEG 2004). Moving ethics upstream then means: use political mechanisms to control the projects that are initiated. But this kind of top-down control is neither desirable nor possible. A more realistic model of science and engineering involves the recognition that “new” research always starts in the middle, when there are a host of other diverse research pathways already underway. These jump together in complex ways. In formal initiatives like the US NBIC Convergence workshops, there are attempts to facilitate the cross-talk (Khushf 2004a). But this does not follow older models of applied science.
When considering nano/neuro convergence, “ethicists” need to start with an appreciation of some of the diverse strands of research in nanoscience and neuroscience, and then consider how these strands come together. But this does not mean the strands are taken as fixed, fully determinate trajectories. In this essay, I considered QD and BMI work, and tried to give a sense of the opportunistic ways these strands link up with one another to specify an interface with neural systems. My attempt to sketch this work presupposed that I was in dialogue with those who are developing the research, and thus that the current state of this research is accurately reflected in my overview. But this overview is not mere description. It is already a work of ethics, since it seeks to re-present that research in a way that rightly discloses what is given and what is yet open for reconstruction. This task of framing needs to be seen as a component of ethical deliberation, and it needs to be explicitly put into play as part of ethical discourse. What I offer here—and what anyone offers—can only be a first draft, open to correction and revision in the same way any other research contribution is open for revision. To move ethics upstream thus means that ethics needs to be located at the place where the research first arises, and it needs to uncover the possibilities inherent within that research. It also needs to be adaptive in the same ways any first efforts in research are adaptive.
2) In place of top-down control, ethics should be oriented toward management of processes that are already underway (Khushf 2007). Such management might initiate new eddies and currents. Here a decentralized management among distributed actors is needed (Guston and Sarewitz 2002). For this, the ethics of nano/neuro convergence needs to be worked out in the context of genuine, collaborative dialogue with the researchers involved (Fisher et al. 2006). And beyond this, the very character of communication between scientists and ethics/policy researchers needs to be altered. The pure/applied science distinction works against this dialogue. Traditionally, it was assumed that scientists are masters of fact; engineers, masters of artifacts; and ethics and policy researchers, masters of visions and values. The splits between research cultures reflect these presumed differences between realms of expertise, and each field guards against encroachment upon its own domain. Since there was no presumed overlap of jurisdictions, exchange between scientists and ethics/policy researchers was too often viewed in terms of a turf war. Ethics and policy researchers tend to view scientists as naïve when they reflect on visions and values. Thus, Nordmann (2009) discounts statements by scientists about the enhancement prospects of their own research, seeing such statements as evidence of a kind of “ignorance at the heart of science.” He wants something like a reverse deficit model, where those in the humanities educate scientists about the naivety of their visions. On the other hand, scientists generally resent new requirements and expectations that they explicitly reflect upon the ethics of their own research. They see these as inappropriate, bureaucratic encroachment on their freedom to pursue science where it leads. In both cases, there is insufficient appreciation of the entanglement between the work of even the purest science and the work of ethics and policy.
Neither ethicists nor scientists want to expend their “scarce resources” worrying about unneeded details integral to the other’s domain. They do not want to expend the effort to learn what the other knows, and they do not see the value of dialogue with those who have that knowledge.
For conventional applied ethicists, the incessant worry about “what’s new in nano” reflects this attempt to guard against diversion. But how can ethicists or scientists ever know whether there is anything ethically novel if neither works at the intersection of both the emerging science (which by definition is scientifically novel) and the ethics/policy arenas, where there is a nuanced appreciation of the scope and limits of existing ethical taxonomies? Without an active, ongoing dialogue between the “two cultures,” there is no social capacity for appropriately taking stock of the realistic possibilities and challenges inherent to that emerging research. Researchers and regulators then miss opportunities for modulating such research trajectories so potential downstream disruptions are mitigated at the outset, as part of the research and development process that realizes the promise inherent in the novel science. Establishing genuine dialogue thus stands as one of the central challenges for an ethic of nano/neuro convergence, and this, in turn, requires a significant change in the cultures of research in both the scientific and the ethics/policy communities (Stieglitz 2007).
3) Instead of focusing on standard types in some pre-given taxonomy of ethical problems, an ethic of nano/neuro convergence should consider regions in a possibility space. Older taxonomies are still helpful in organizing this space, but a much freer relation is needed between background assumptions and types and the space of emerging research practices. For a type-framed ethic, there is some discrete intervention, which is taken as an instance of some problem with specific, well-defined ramifications. The goal of the ethic is to gate and control the discrete intervention, so negative ramifications are minimized. This is how Nordmann frames the task of an ethics of nano/neuro convergence. “Ethics” reflects a kind of heightened socio-political control that arises at the nexus of specific kinds of discrete action. Instead of this act-oriented ethic, we need something closer to a virtue theory (MacIntyre 1984).
The goal of an ethic should be to cultivate responsible research practices that are proactively responsive to broader ramifications of the practices. For scientists, this means more reflective practices: they must reflect upon what they do and how this might foster or undermine the flourishing of what others are doing (Schon 1983). Here the “internal” ethic of science needs to be extended so it encompasses “external” concerns. When this arises, then scientists learn to gate their own practices. They learn to discern where social expertise is insufficient for understanding or managing potentially disruptive aspects of research that is underway, and they learn to draw into their practice settings those who might explore with them how best to configure the next stage of the research. Even beyond this, the goal is to have scientists who appreciate that others may see things in their own work that they do not recognize, and thus that a transparency and openness to critical reflection is needed. They already appreciate how such openness to their scientific peers is essential to science. Now this needs to be extended to a broader set of peers. But we will never get this if the people in the humanities and social sciences dismiss or patronize the researchers, or if they want to come in and politically control things as soon as something seems risky. As long as ethics is viewed in terms of politics, scientists will legitimately guard their secrets. They will seek to downplay ethical novelty and forestall disclosure of more radical possibilities of disruption until later stages. But that delay undermines vital opportunities for midstream adjustments that could mitigate such problems. In place of top-down, large-scale, political control at nodes where research transitions from “basic” to “applied,” we should seek to cultivate a host of partial, small-scale, distributed adjustments that permeate the possibility space of research (Guston and Sarewitz 2002).
Only when this occurs will we have the general capacity for strategic political action that is carefully targeted to address a narrow subset of those ethical issues that are not best addressed in a more free, decentralized manner.
4) Like the science, the ethics of nano/neuro convergence must be more reflexive (Rabinow and Bennett 2009 provide an outstanding overview of the challenge emerging research poses for both conventional mode 1 and even mode 2 approaches to ethics). An ethic should continually explore how the very character and content of ethical deliberation and action is (and ought to be) informed by developing notions of understanding and control integral to the emerging science. In practice, this means that the science, the ethics, and the philosophical and social scientific study of these will be deeply entangled, and will co-evolve in mutual dialogue (Khushf 2009). Scientists will play a greater role in framing and addressing the ethical issues, and ethicists will play a greater role in framing and even advancing the science. Here it is important to appreciate how traditional precautionary and cost/benefit models of ethical deliberation depend upon assumptions about gated control that are not applicable to many emerging research practices. Humans do not and cannot have that kind of control, and ethics should not work with such control as an ideal. An ideal of transparent or “see-through science” (Wilsdon and Willis 2004) should thus be abandoned. For the alternative, ethics can learn from the science. In research areas associated with embodied cognition, control theory, human factors engineering, nanoscience, complexity theory, and a host of other areas, we find strategies of understanding and control that do not involve complete transparency of the systems being manipulated. This is, of course, true for neuroscience and for all medical BMIs, as well. When Miesenbock utilizes an organism’s own mechanisms to solve the specificity problem, he doesn’t fully understand the system he is utilizing. In fact, the alterations precede the understanding, and become a vehicle toward stabilizing the system so it can be transparent to understanding.
But the neural system that is stabilized is the one that now has the novel capacities that he introduced for controlling it. Here the capacity for control arises through an anticipation of a not-yet-realized stability that depends on the scientist’s innovative practice. Similarly, for Angela Belcher’s nanoscience, design of the complex array of QDs involves generation of large libraries of viral variants together with artificial selection mechanisms that isolate specific variants that have properties of interest. In both the research of Miesenbock and Belcher, there is a complex loop between a smaller scale, partially blind action and larger scale architectures that structure those small scale experiments so specific kinds of outcomes are rapidly identified. An ethic of nano/neuro convergence needs to explore how these new models might help us more appropriately manage the complex possibility space associated with emerging research.
Allhoff, F. (2007). On the autonomy and justification of nanoethics. NanoEthics, 1, 185–210.
Alpert, S. (2008). Neuroethics and nanoethics: do we risk ethical myopia? Neuroethics, 1, 55–68.
Andrews, R. (2007). Neuroprotection at the nanolevel – part I, Introduction to nanoneurosurgery. Annals of the New York Academy of Sciences, 1122, 169–84.
Arenkiel, B. and Ehlers, M. (2009). Molecular genetics and imaging technologies for circuit-based neuroanatomy. Nature, 461, 900–7.
Arrenberg, A., Bene, F.D., and Baier, H. (2009). Optical control of zebrafish behavior with halorhodopsin. PNAS, 106, 17968–73.
Azzazy, H., Mansour, M., and Kazmierczak, S. (2007). From diagnostics to therapy: prospects of quantum dots. Clinical Biochemistry, 40, 917–27.
Bannai, H., Levi, S., Schweizer, C., et al. (2009). Activity-dependent tuning of inhibitory neurotransmission based on GABAAR diffusion dynamics. Neuron, 62, 670–82.
Barr, S. (2008). The idea factory that spawned the Internet turns 50. Washington Post, April 7, p. D01.
Bejjani, B-P., Damier, P., Arnulf, I., et al. (1999). Transient acute depression induced by high-frequency deep-brain stimulation. The New England Journal of Medicine, 340, 1476–80.
Berdyyeva, T. and Reynolds, J. (2009). The dawning of primate optogenetics. Neuron, 62, 159–60.
Bush, V. (1945). Science The Endless Frontier. Washington, DC: United States Government Printing Office. Available at http://www.nsf.gov/about/history/vbush1945.htm.
Cai, W., Hsu, A., Li, Z-B., and Chen, X. (2007). Are quantum dots ready for in vivo imaging in human subjects? Nanoscale Research Letters, 2, 265–81.
Cauller, L. and Penz, A. (2002). Artificial intelligence and natural intelligence. In Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (NSF/DOC-sponsored report), pp. 227–33. Arlington, VA: World Technology Evaluation Center (WTEC).
Chan, W. and Nie, S. (1998). Quantum dot bioconjugates for ultrasensitive nonisotopic detection. Science, 281, 2016–18.
Dahan, M., Levi, S., Luccardini, C., Rostaing, P., Riveau, B., and Triller, A. (2003). Diffusion dynamics of glycine receptors revealed by single-quantum dot tracking. Science, 302, 442–5.
Deisseroth, K., Feng, G., Majewska, A.K., Miesenbock, G., Ting, A., and Schnitzer, M. (2006). Next-generation optical technologies for illuminating genetically targeted brain circuits. Journal of Neuroscience, 26, 10380–6.
Elder, J., Liu, C., and Apuzzo, M. (2008). Neurosurgery in the Realm of 10-9, Part 2: Applications of nanotechnology to neurosurgery – present to future. Neurosurgery, 62, 269–84.
Fisher, E., Mahajan, R., and Mitcham, C. (2006). Midstream modulation of technology: governance from within. Bulletin of Science, Technology, and Society, 26, 485–96.
Foffani, G., Priori, A., Egidi, M., et al. (2003). 300-Hz subthalamic oscillations in Parkinson’s disease. Brain, 126, 2153–63.
Fuentes, R., Petersson, P., Siesser, W., Caron, M., and Nicolelis, M. (2009). Spinal cord stimulation restores locomotion in animal models of Parkinson’s disease. Science, 323, 1578–82.
Gao, X., Cui, Y., Levenson, R., Chung, L., and Nie, S. (2004). In vivo cancer targeting and imaging with semiconductor quantum dots. Nature Biotechnology, 22, 969–76.
Gomez, N., Winter, J., Shieh, F., Saunders, A., Korgel, B., and Schmidt, C. (2005). Challenges in quantum dot-neuron active interfacing. Talanta, 67, 462–71.
Gordijn, B. (2006). Converging NBIC technologies for improving human performance: a critical assessment of the novelty and the prospects of the project. Journal of Law, Medicine and Ethics, 34, 2–8.
Gorokhov, V. and Lenk, H. (2009). Nanotechnoscience as a cluster of the different natural and engineering theories and nanoethics. In M. Yuri, K. Sergey, and V. Ashok (eds.) Silicon Versus Carbon: Fundamental Nanoprocesses, Nanobiotechnology and Risk Assessment, pp. 199–222. NATO Science for Peace and Security Series B: Physics and Biophysics. Netherlands: Springer.
Gradinaru, V., Mogri, M., Thompson, K., Henderson, J., and Deisseroth, K. (2009). Optical deconstruction of Parkinsonian neural circuitry. Science, 324, 354–9.
Groc, L., Lafourcade, M., Heine, M., et al. (2007). Surface trafficking of neurotransmitter receptor: comparison between single-molecule/quantum dot strategies. Journal of Neuroscience, 27, 12433–7.
Guston, D. and Sarewitz, D. (2002). Real-time technology assessment. Technology in Society, 24, 93–109.
Han, X., Qian, X., Bernstein, J.G., et al. (2009). Millisecond-timescale optical control of neural dynamics in a nonhuman-primate brain. Neuron, 62, 191–8.
Henderson, J., Federici, T., and Boulis, N. (2009). Optogenetic neuromodulation. Neurosurgery, 64, 796–804.
Hira, R., Honkura, N., Noguchi, J., et al. (2009). Transcranial optogenetic stimulation for functional mapping of the motor cortex. Journal of Neuroscience Methods, 179, 258–63.
HLEG (High Level Expert Group). (2004). Foresighting the New Technology Wave. Converging Technologies: Shaping the Future of European Societies. Luxemburg: Office for Official Publications of the European Communities.
Hoag, H. (2003). Remote control. Nature, 423, 796–9.
Kelty, C.M. (2009). Beyond implications and applications: the story of ‘safety by design’. NanoEthics, 3, 79–96.
Khushf, G. (2004a). Systems theory and the ethics of human enhancement: a framework for NBIC Convergence. Annals of the New York Academy of Sciences, 1013, 124–49.
Khushf, G. (2004b). The ethics of nanotechnology: vision and values for a new generation of science and engineering. In National Academy of Engineering, Emerging Technologies and Ethical Issues in Engineering, pp. 255–78. Washington, DC: National Academies Press.
Khushf, G. (2006). An ethic for enhancing human performance through integrative technologies. In W.S. Bainbridge and M. Roco (eds.) Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society, pp. 255–78.
Khushf, G. (2007). Importance of a midterm time horizon for addressing ethical issues integral to nanobiotechnology. Journal of Long-term Effects of Medical Implants, 17, 185–91.
Khushf, G. (2008). Health as intra-systemic integrity: rethinking the foundations of systems biology and nanomedicine. Perspectives in Biology and Medicine, 51, 432–49.
Khushf, G. (2009). Open evolution and human agency: the pragmatics of upstream ethics in the design of artificial life. In M. Bedau and E. Parke (eds.) The Ethics of Protocells: Moral and Social Implications of Creating Life in the Laboratory, pp. 223–62. Cambridge, MA: MIT Press.
Kjolberg, K., Delgado-Ramos, G.C., Wickson, F., and Strand, R. (2008). Models of governance for converging technologies. Technology Analysis and Strategic Management, 20, 83–97.
Leary, S., Liu, C., and Apuzzo, M. (2006). Toward the emergence of nanoneurosurgery: Part III – Nanomedicine: targeted nanotherapy, nanosurgery, and progress toward the realization of nanoneurosurgery. Neurosurgery, 58, 1009–25.
Lebedev, M. and Nicolelis, M. (2006). Brain-machine interfaces: past, present and future. Trends in Neurosciences, 29, 536–46.
Lee, S-W, Mai, C., Flynn, C., and Belcher, A. (2002). Ordering of quantum dots using genetically engineered viruses. Science, 296, 892–5.
Lima, S. and Miesenbock, G. (2005). Remote control of behavior through genetically targeted photostimulation of neurons. Cell, 121, 141–52.
Litton, P. (2007). Nanoethics? What’s new? Hastings Center Report, 37, 22–5.
Llinas, R. and Makarov, V. (2002). Brain-machine interface via a neurovascular approach. In M. Roco and W.S. Bainbridge (eds.) Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (NSF/DOC-sponsored report), pp. 216–22. Arlington, VA: World Technology Evaluation Center (WTEC).
Llinas, R., Walton, K., Nakao, M., Hunter, I., and Anquetil, P. (2005). Neuro-vascular central nervous recording/stimulating system: using nanotechnology probes. Journal of Nanoparticle Research, 7, 111–27.
Llinas, R. (2008). Brain-Machine Interface Systems and Methods. United States Patent 2008/0015459 A1.
(p. 490) Macnaughten, P., Kearnes, M., and Wynne, B. (2005). Nanotechnology, governance, and public deliberation: what role for the social sciences? Science Communication, 27, 268–91.Find this resource:
Merton, R. (1979). The Sociology of Science: Theoretical and Empirical Investigations Chicago, IL: University of Chicago Press.Find this resource:
Michalet, X., Pinaud, F., Bentolila, L.A., et al. (2005). Quantum dots for live cells, in vivo imaging, and diagnostics. Science, 307, 538–44.Find this resource:
Michler, P., Imamoglu, A., Mason, M.D., Carson, P.J., Strouse, G.F., and Buratto, S.K. (2000). Quantum correlation among photons from a single quantum dot at room temperature. Nature, 406, 968–70.Find this resource:
Miesenbock, G. (2009). The optogenetic catechism. Science, 326, 395–9.Find this resource:
Miesenbock, G. and Kevrekidis, I. (2005). Optical imaging and control of genetically designated neurons in functioning circuits. Annual Review of Neuroscience, 28, 533–63.Find this resource:
Miller, G. (2009). Rewiring faulty circuits in the brain. Science, 323, 1554–6.Find this resource:
Moreno, J. (2006). Mind Wars: Brain Research and the National Defense. New York: Dana Press.Find this resource:
Moxon, K., Kalkhoran, N., Markert, M., et al. (2004). Nanostructured surface modification of ceramic-based microelectrodes to enhance biocompatibility for a direct brain-machine interface. IEEE Transactions on Biomedical Engineering, 51, 881–9.Find this resource:
Murray, C.B., Kagan, C.R., and Bawendi, M.G. (1995). Self-organization of CdSe Nanocrystallites into three-dimensional quantum dot superlattices. Science, 270, 1335–8.Find this resource:
National Institutes of Health (NIH), Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI) web page on Nanomedicine Center: NDC for the optical control of biological function, http://nihroadmap.nih.gov/nanomedicine/devcenters/ progressreports/Isacoff_ExecSumm2009.asp (accessed December 10, 2009)
National Science and Technology Council (NSTC), Committee on Technology, Subcommittee on Nanoscale Science, Engineering and Technology (2002). Nanotechnology Initiative: The Initiation and its Implementation Plan. Washington, DC: Office of Science and Technology Policy.Find this resource:
Nicolelis, M. (2002). Human-machine interaction: potential impact of nanotechnology in the design of neuroprosthetic devices aimed at restoring or augmenting human performance. In M. Roco and W.S. Bainbridge (eds.) Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (NSF/DOC-sponsored report), pp. 223–6. Arlington, VA: World Technology Evaluation Center (WTEC).Find this resource:
Nordmann, A. (2007a). Knots and strands: an argument for productive disillusionment. Journal of Medicine and Philosophy, 32, 217–36.Find this resource:
Nordmann, A. (2007b). If and then: a critique of speculative nanoethics. Nanoethics, 1, 31–46.Find this resource:
Nordmann, A. (2009). Ignorance at the heart of science? Incredible narratives on brain-machine interfaces. In J. Ach and B. Luttenberg (eds.) Nanobiotechnology, Nanomedicine and Human Enhancement. Berlin: Munsteraner Bioethik-Studien.Find this resource:
Pancrazio, J. (2008). Neural interfaces at the nanoscale. Nanomedicine, 3, 823–30.Find this resource:
Pathak, S., Cao, E., Davidson, M., Jin, S., and Silva, G. (2006). Quantum dot applications to neuroscience: new tools for probing neurons and glia. The Journal of Neuroscience, 26, 1893–5.Find this resource:
Rabinow, P. and Bennett, G. (2009). Human practices: interfacing three modes of collaboration, in Bedau, M, Parke, E, The ethics of protocells: moral and social implications of creating life in the laboratory, pp .263–90. Cambridge, MA: MIT Press.Find this resource:
Ratner, M. and Ratner, D. (2003). Nanotechnology: A Gentle Introduction to the Next Big Idea. Upper Saddle River, NJ: Prentice Hall.
Renn, O. and Roco, M. (2006). Nanotechnology and the need for risk governance. Journal of Nanoparticle Research, 8, 153–91.
Reed, M. (1993). Quantum dots. Scientific American, January, 118–23.
Roco, M. (2003). Nanotechnology: convergence with modern biology and medicine. Current Opinion in Biotechnology, 14, 337–46.
Roco, M. and Bainbridge, W.S. (2002a). Converging technologies for improving human performance: integrating from the nanoscale. Journal of Nanoparticle Research, 4, 281–95.
Roco, M. and Bainbridge, W.S. (eds.) (2002b). Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (NSF/DOC-sponsored report). Arlington, VA: World Technology Evaluation Center (WTEC).
Roco, M., and National Science, Engineering and Technology (NSET) Subcommittee, U.S. National Science and Technology Council (NSTC) (2004). Nanoscale science and engineering: unifying and transforming tools. American Institute of Chemical Engineers Journal, 50, 890–7.
Roco, M. and Montemagno, C. (eds.) (2004). Integrative technology for the twenty-first century. Annals of the New York Academy of Sciences, 1013.
Scanziani, M. and Hausser, M. (2009). Electrophysiology in the age of light. Nature, 461, 930–9.
Schon, D. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
Schneider, F., Habel, U., Volkmann, J., et al. (2003). Deep brain stimulation of the subthalamic nucleus enhances emotional processing in Parkinson disease. Archives of General Psychiatry, 50, 296–302.
Scientific American Editors. (2002). Understanding Nanotechnology. New York: Warner Books.
Schalk, G. (2008). Brain-computer symbiosis. Journal of Neural Engineering, 5, 1–15.
Silva, G. (2005). Small neuroscience: the nanostructure of the central nervous system and emerging nanotechnology applications. Current Nanoscience, 1, 225–36.
Silva, G. (2006). Neuroscience nanotechnology: progress, opportunities, challenges. Nature Reviews, 7, 65–74.
Smith, K. (2007). Brain waves reveal intensity of pain. Nature, 450, 329.
Stieglitz, T. (2007). Restoration of neurological functions by neuroprosthetic technologies: future prospects and trends towards micro-, nano-, and biohybrid systems. Acta Neurochir Suppl, 97, 435–42.
Sutherland, A. (2002). Quantum dots as luminescent probes in biological systems. Current Opinion in Solid State and Materials Science, 6, 365–70.
Tersoff, J. (1996). Self-organization in growth of quantum dot superlattices. Physical Review Letters, 76, 1675–8.
Toth, G., Lent, C.S., Tougaw, P.D., et al. (1996). Quantum cellular neural networks. Superlattices and Microstructures, 20, 4.
Tsai, H-C, Zhang, F., Adamantidis, A., et al. (2009). Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning. Science, 324, 1080–4.
Van Roermund, A. and Hoekstra, J. (2000). Design philosophy for nanoelectronic systems from SETs to neural nets. International Journal of Circuit Theory and Applications, 28, 563–84.
Vanmaekelbergh, D. and Liljeroth, P. (2005). Electron-conducting quantum dot solids: novel materials based on colloidal semiconductor nanocrystals. Chemical Society Reviews, 34, 299–312.
Vu, T., Maddipati, R., Blute, T., Nehilla, B., Nusblat, L., and Desai, T. (2005). Peptide-conjugated quantum dots activate neuronal receptors and initiate downstream signaling of neurite growth. Nano Letters, 5, 603–7.
Wilsdon, J. and Willis, R. (2004). See-through Science: Why Public Engagement Needs to Move Upstream. London: Demos.
Wynne, B. (2005). Risk as globalizing ‘democratic’ discourse? Framing subjects as citizens. In M. Leach and B. Wynne (eds.) Science and Citizens: Globalization and the Challenge of Engagement, pp. 66–82. London: Zed Books.
Zemelman, B., Nesnas, N., Lee, G., and Miesenbock, G. (2003). Photochemical gating of heterologous ion channels: remote control over genetically designated populations of neurons. PNAS, 100, 1352–7.
Zhang, S. (2003). Fabrication of novel biomaterials through molecular self-assembly. Nature Biotechnology, 21, 1171–8.
Zhou, M. and Ghosh, I. (2006). Quantum dots and peptides: a bright future together. Peptide Science, 88, 325–9.