Scripta Varia

Neurotechnology for Human Benefit and the Impact of AI

John P. Donoghue*

Neuroprosthetics is an emerging field that is beginning to provide a technological approach to restore lost sensory functions, restore movement for those with paralysis, or repair cognitive deficits produced by disordered brain circuits (Donoghue, 2015). Neural prosthetics are technologies (i.e., systems of devices) that can be placed in or on the body to partially recover lost sight, hearing, or movement, or to repair brain circuits that affect mood, memory, or movement. Such systems are either already available commercially or in human clinical trials, and there is a growing pipeline of new neurotechnologies emerging from research laboratories. It is possible to use technology to repair and restore function both because of an impressive (but still very incomplete) body of neuroscience knowledge and because of the transformational technology and information processing achievements of the last decades.

Our sense organs provide electrical patterns of information about the state of the world. Neural machinery spread across the central nervous system uses those patterns to compute new representational patterns that, when processed properly in the brain, nearly always make sense to us. (How this remarkable process occurs is the driving force for a large fraction of neuroscientists.) On the output side of the nervous system, we are capable of an enormous repertoire of dexterous movements, like piano playing or ballet dancing. To generate skilled voluntary movement, the brain plans actions by assembling sensory signals and internally stored information, and outputs electrical patterns that drive coordinated muscle activity. Brain networks, in ways still quite unclear, also capture, store, organize, and generate memory, behavioral plans, and other cognitive functions. All of this appears to include computing new information from internally generated activity patterns.

Fundamental knowledge about how the nervous system codes and computes information is now sufficient, and computing hardware and software good enough, to create neural prosthetic systems that can write in or read out neural codes to reproduce aspects of sensory and motor functions, and to correct abnormal cognitive brain networks, when these functions are lost or disrupted by disease or injury. However, current neural prosthetics, which aim to emulate the function of neural circuits, still perform well below their biological counterparts, largely because of two broad limitations. First, knowledge of brain function, particularly as an integrated information processing system, remains inadequate. Second, the technology needed to replace neural function cannot adequately capture signal patterns and then copy the computations that occur in most real neural networks (a software problem), and the devices capable of these computations are bulky, power-hungry, and slow compared to their biological counterparts, and difficult to integrate into the body (a hardware problem). Nevertheless, neuroscience knowledge and technology are already sufficient to create useful neural prosthetics, though with great room for improvement. AI is one area that may contribute to a major advance in the processing capabilities of neural prosthetics. Here, I will provide a high-level overview of the current state of neural prosthetics through four examples of clinically motivated devices and discuss the limitations they now face. In the spirit of this volume, I will comment on how AI could be a valuable approach to improve sensory, motor, and brain circuit neural prosthetics (Fig. 1).
Here, AI is used specifically to refer to the deep learning approach (Hinton et al., 2006; LeCun et al., 2015) because, as will be illustrated below, neural prosthetics face a common challenge: detecting often incomplete activity patterns in raw, complex, and poorly characterized signals and transforming them into a new representation. The ability of deep learning (DL) nets to generate reliable outputs from complex data is well suited to this class of problem, perhaps not surprisingly, because DL nets are an attempt to copy the very processes neural prosthetics are trying to replicate. Figure 1 provides more detail and a schematic of this problem in the context of each of the examples described next.
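To make this pattern-transformation framing concrete, the following is a toy sketch (plain numpy, no DL library; all sizes, noise levels, and training settings are invented for illustration) of a small two-layer network learning to recover a clean activity pattern from a noisy, incomplete observation of it:

```python
import numpy as np

# Toy illustration of the transformation problem described above: a tiny
# two-layer network learns to map noisy, partially missing activity
# patterns back to clean target patterns. Entirely synthetic data.
rng = np.random.default_rng(0)

n_channels, n_obs, hidden = 16, 500, 32
clean = rng.standard_normal((n_obs, n_channels))      # "true" patterns
mask = rng.random((n_obs, n_channels)) > 0.3          # ~30% of entries masked
noisy = clean * mask + 0.2 * rng.standard_normal((n_obs, n_channels))

W1 = 0.1 * rng.standard_normal((n_channels, hidden))
W2 = 0.1 * rng.standard_normal((hidden, n_channels))

losses, lr = [], 0.05
for _ in range(800):
    h = np.maximum(0.0, noisy @ W1)                   # ReLU hidden layer
    err = h @ W2 - clean
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / n_obs                           # backprop, output layer
    gh = (err @ W2.T) * (h > 0)                       # backprop through ReLU
    gW1 = noisy.T @ gh / n_obs                        # backprop, input layer
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"reconstruction loss: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

The reconstruction error falls substantially but never to zero: information in fully masked channels is unrecoverable, mirroring the incomplete-sampling problem each prosthetic below must cope with.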

Sensory, Circuit and Motor Prosthetics

Four devices illustrate the current state of neurotechnology in helping humans: sensory neuroprosthetics for hearing and for vision restoration, deep brain stimulation (DBS) to modulate dysfunctional brain circuits, and brain computer interfaces (BCIs) to restore movement. These neurotechnologies exemplify the forms, the spectrum of developmental stages, and the state of our ability to restore brain function with technology. Each could be enhanced through the methods available from the field of AI.

Sensory Neurotechnology

Each sense organ is an exquisite structure that provides the brain with spatially distributed and temporally changing patterns of electrical impulses emanating from large sets of neurons. These spatiotemporal neural activity patterns are a filtered version of various forms of energy in the world: sound, light, chemicals, or mechanical forces. When activated through a specialized sense organ, with signals traveling initially through selective pathways, the brain interprets these patterns, for example, as sounds (air pressure changes transduced from the eardrum through the cochlea) or vision (electromagnetic radiation from 390 to 700 nm wavelength processed through the retina). Neural patterns are transformed and spread widely in brain networks, continuously ‘computing’ activity patterns (at least in the conscious state) that lead to the perception of an object or the understanding of a spoken word (Fig. 1A). Thus, put in a shamefully over-simplistic way, a large part of brain information processing appears to be the transformation of one pattern into another. Damage to a sense organ disconnects the brain from that perceptual system (and more), limiting the use of that input to understand, remember, or interact with the world. Most often, sensory receptor degeneration (e.g. from inherited genetic disorders or mechanical damage) is the reason a sensory capability is lost, but the computing neural hardware remains, deprived of the patterned input it needs to compute (Mysore et al., 2015). Available neural prosthetics can provide these lost patterns for both hearing and sight.


The cochlear implant is the benchmark neurotechnology achievement for a human disorder. More than 250,000 devices have been implanted, allowing, for example, deaf children to attend standard educational programs. Nevertheless, the understanding of sounds in the world that it provides is not at the level achieved by the intact biological interface between our acoustic world and the brain. Even so, this still-crude neural prosthetic device has a profound personal and social impact (Bond et al., 2009).

The cochlea is a snail-shell-shaped structure at the end of the middle ear where the mechanical motions of sound are converted into electrical signals by receptive hair cells lying in a thin sheet along the length of the cochlea. Mechanical motion of the hairs atop these receptor cells results in electrical activity patterns in the auditory nerve fibers that connect to each hair cell. Sound thus generates patterns of activity across the auditory nerve. Many forms of deafness are the result of hair cell death, but the auditory fibers typically remain. A cochlear implant bypasses missing hair cells by delivering an artificial spatiotemporal electrical impulse pattern directly to the auditory nerve fibers in the cochlea. These electrical impulses, which are nonsense signals to the user when first supplied, over time become recognizable in brain auditory networks as meaningful, although not natural, sound. Remarkably, comprehensible speech emerges when fewer than a dozen electrodes in the cochlea are used to replace thousands of lost hair cells. Thus, what is ordinarily a very rich spatiotemporal pattern of natural sound can be replaced by an impoverished pattern of electrical stimulation that the brain can still meaningfully use.

In the cochlear implant device, sounds are captured by an external microphone and processed using electronics housed in a small package worn behind the ear. The impulses are transmitted wirelessly through the skin to an implanted receiver-stimulator connected to a flexible, pencil-lead-thin electrode that is threaded into the cochlea. In the intact ear, hairs atop different cells wiggle to different sound frequencies – different spots for different frequencies – which is, in a simple sense, a place code. Thus, the cochlea has a tonotopic map of sound, in that frequency response is arranged spatially along its length, although the actual transduction involves complex actions across the cochlea. In the healthy cochlea, hair cells chemically communicate their activation to the auditory nerve fibers below them. The cochlear implant bypasses missing hair cells by directly activating auditory fibers, albeit with electrodes that probably activate hundreds of fibers at each stimulation site. Despite the impossibility of the current implant recreating natural spatiotemporal auditory nerve fiber activity patterns, it is nevertheless quite successful in providing a useful signal to the brain. In essence, the cochlear implant transforms sound in the world into a spatiotemporal pattern; this transformation is an attempt to copy the computation that the sound should have produced in the auditory nerve. A useful video of the system can be found at:
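The place-coding idea behind the implant's sound processor can be sketched very simply (this is a deliberately simplified illustration, not a real processing strategy; the sample rate, band edges, and electrode count are invented): the microphone signal is split into a handful of frequency bands, and each band's energy sets the stimulation level of one electrode along the cochlea.

```python
import numpy as np

# Schematic place-code processor: one stimulation channel per frequency
# band. Real processors add compression, envelope extraction over time,
# and carefully timed pulse trains; none of that is modeled here.
fs = 16000                       # sample rate (Hz), assumed
n_electrodes = 8                 # one electrode per band, assumed
edges = np.logspace(np.log10(200), np.log10(7000), n_electrodes + 1)

def band_energies(signal):
    """Map a short sound frame to normalized per-electrode levels."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    levels = np.empty(n_electrodes)
    for i in range(n_electrodes):
        in_band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        levels[i] = spectrum[in_band].sum()
    return levels / (levels.sum() + 1e-12)

# A pure 1 kHz tone should mainly drive the electrode whose band spans 1 kHz.
t = np.arange(0, 0.02, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
levels = band_energies(tone)
print("most active electrode:", int(np.argmax(levels)))
```

The logarithmic band spacing loosely mimics the cochlea's tonotopic layout, in which equal distances along the cochlea correspond roughly to equal frequency ratios rather than equal frequency differences.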

Cochlear implants do not produce natural sound in part because the technology cannot produce the correct spatiotemporal activity patterns in the auditory nerve; we lack an understanding of how to compute natural neural patterns from sound. This inadequate transformation probably accounts for problems such as the difficulty users have understanding speech in noisy environments. Here is where AI strategies could improve function. Deep learning might provide an effective way not only to learn the optimal spatiotemporal pattern of stimulation for computing percepts from sound, but also to learn and then generate missing components in the neural signals needed to correct for changing environments. Current sound processors rely on static, hand-tuned algorithms, but deep learning approaches that identify relationships between input and output patterns to solve problems like filtering speech in crowds are already beginning to show promise (Healy et al., 2015). As will be repeated in the subsequent examples, these biological, computational, and technological shortcomings of cochlear implants are shared by all current neurotechnologies, and AI may be able to help improve pattern transformation across all of them.

Sensory Neurotechnology – vision

The retinal implant, another neural prosthetic device that has recently been approved, can restore a level of vision for people with blindness from retinal degenerative disorders. Several hundred have already been implanted. Like hearing, vision involves the transduction of a complex pattern of light falling on the retina into a neural activity pattern that is further transformed in brain networks and interpreted as form, structure, or meaning. Photoreceptors and the ensuing circuitry of the tissue-paper-thin retina at the back of the eye produce spatiotemporal activity patterns. This pattern is transmitted to the brain via ganglion cells, which project their axons from the retina through the optic nerve to multiple sites in the brain. Vision, especially in the service of behavior, engages networks across a vast extent of the nervous system, again in a continual re-computing of one pattern into another.

Vision loss is commonly the result of photoreceptor degeneration (e.g. macular degeneration (Mysore et al., 2015)), which stops light from engaging the first step of the retinal circuit that leads to ganglion cell activation, the obligatory path from eye to brain. Typically, the ganglion cells and their brain connections remain, a parallel to auditory receptor (hair cell) degeneration with the auditory nerve remaining. Retinal implants, first approved in the US in 2013, have followed a design similar to the cochlear implant: a set of stimulating electrodes replaces lost photoreceptors, activating intact visual projections via the ganglion cells to deliver a spatiotemporal pattern of information to the brain (Lin et al., 2015). For a retinal implant, a two-dimensional sheet of stimulating electrodes is laid at the back of the eye (above or below the retina). Patterns of light detected by a camera (worn outside the body) are transformed into a spatiotemporal electrical stimulation pattern on the array, which activates the ganglion cells. The activated ganglion cells carry this artificial pattern from the eye, through the optic nerve, to the brain. The user perceives this patterned electrical stimulation of the retina not as a typical visual scene; instead the image is reportedly somewhat like a pattern of light flashes on a movie marquee made of many light bulbs. The number of artificial visual channels is low: dozens of electrodes tap into the roughly one million channels that go from the human eye to the brain. Importantly, retinal stimulation bypasses the complex intraretinal ‘computational’ neural machinery that transforms the light pattern falling on the photoreceptor sheet into a new pattern in the ganglion cells. ‘Vision’ provided by neural prosthetics can require significant time for users to interpret, presumably as brain mechanisms learn to make sense of this unusual pattern of activation.
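The core camera-to-array transformation described above can be sketched in a few lines (the camera resolution, grid size, and block-averaging scheme are assumptions for illustration; real devices add gain control, charge balancing, and safety limits):

```python
import numpy as np

# Rough sketch of a retinal implant's front end: an image with many
# pixels is reduced to one stimulation amplitude per electrode of a
# small grid, here by simple block averaging.
cam_h, cam_w = 120, 160          # camera resolution, assumed
grid = 6                         # 6 x 6 electrode array ("dozens" of channels)

def image_to_stimulation(image):
    """Average-pool the image down to one amplitude per electrode."""
    bh, bw = cam_h // grid, cam_w // grid
    pooled = image[: grid * bh, : grid * bw].reshape(grid, bh, grid, bw)
    levels = pooled.mean(axis=(1, 3))
    return np.clip(levels, 0.0, 1.0)     # amplitudes bounded for safety

rng = np.random.default_rng(1)
frame = rng.random((cam_h, cam_w))       # stand-in camera frame
stim = image_to_stimulation(frame)
print(stim.shape)                        # one value per electrode
```

The severity of the information bottleneck is visible in the shapes alone: 19,200 camera pixels collapse to 36 stimulation values, against roughly one million ganglion cell channels in the healthy eye.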
An example showing the use of a retinal implant is available at:  

The fact that vision is possible with a neural prosthesis is remarkable, and of great impact for those seeking to see again. However, restoration of natural vision will require much better hardware (to couple more channels across the full extent of the retina) and better processing, including mapping the computation that occurs in the retina itself. For vision, AI could help in learning and then computing a more effective representation of the signals produced after light is processed by the eye and retinal circuitry, which could produce more natural vision and help correct for real-world complexities, like changing illumination, that the brain ‘expects’ from the eye. AI based on deep learning of natural scenes, which can perform many human-like perceptual functions, appears not to have been implemented in retinal prosthetics yet, but should be able to help the still small number of channels activated by the neural interface create more natural activity patterns, leading to more natural vision.

Brain Circuits – Neuromodulation prosthetics

Stimulating sensory neurons activates pathways that are eventually interpreted by brain circuits. These circuits store information in memory, immediately invoke action, or delay it for later actions (planning). Highly interconnected brain networks quickly and flexibly combine information from any input and various ‘internal circuits’ to engage almost any output, in ways still poorly understood. Disorders that emerge from imbalanced activity of certain brain networks appear to lead to perceptual, cognitive (including affective), and movement disorders. Neural prosthetics to correct malfunctioning circuits through targeted electrical stimulation, termed neuromodulation, are already in use, although they currently engage ‘brute force’ tactics that inject electrical impulses into a complex circuit without full understanding of how this injected ‘information’ modulates the computations produced by the network. The most remarkable and widely used example of neuromodulation success is deep brain stimulation (DBS) in Parkinson’s disease (PD), where the tremor and rigidity of the disorder are relieved by stimulating a particular point in a cortical-basal ganglia circuit at about 100 times per second. Parkinson’s disease is the result of the loss of the neurotransmitter dopamine, which is essential for the normal operation of the cortical-thalamic-basal ganglia networks that control movement planning and performance, as well as cognitive functions. Exactly how these circuits work, or depend on dopamine, is not fully understood, but remarkably, DBS at one node in this circuit overcomes the deficit induced by dopamine loss, readjusting the circuit so that it operates more normally as long as stimulation continues. With DBS, which has been applied in more than 150,000 people, symptoms are often substantially diminished (but not cured).

Deep brain stimulation is the delivery of a pattern of electrical stimulation through an electrode surgically inserted into a select region of a basal ganglia thalamo-cortical loop (i.e. the subthalamic nucleus, STN) in order to alter the functional activity level of a part of that circuit (Fig. 1B). Typical DBS electrodes consist of a spaghetti-noodle-sized probe with four (or more) millimeter-sized metal contacts near its end. The probe is inserted into the STN, a collection of neurons about the size of a lentil. Repeated electrical stimulation in the STN – driven by an electronic pulse generator placed under the skin of the chest – modulates brain circuit function, presumably readjusting the circuit so that it computes properly. Typically one electrode is used, but multiple electrodes are being evaluated as a way to create more complex or more accurately localized spatiotemporal patterns. The result in PD is impressive (one of many videos of the effect at:

The safety profile for DBS implantation is quite good (DiLorenzo et al., 2014). But, not surprisingly, DBS can have side effects, including effects on cognition (Wu et al., 2014), perhaps due to the large electrodes (>1 mm), the difficulty of meeting very exact placement requirements, the proximity of other circuits to the stimulation site, or the type of stimulation pattern employed; it nevertheless remains impressive that crude stimulation works so well. Tuning stimulation parameters is complex. DBS is currently open loop, in that it does not use information about activity patterns in the circuit to shape the best stimulation pattern. Closing the loop is hindered in part by the difficulty of monitoring the state of the circuit, although at least basic macro-scale recording is now beginning to be possible. When readout is available, AI might be useful in finding intelligent ways to adaptively learn the best timing or intensity of stimulation to optimize the effect, which is now a barrier to effective DBS therapy (Arlotti et al., 2016). Effectively, this plan would produce a bioelectronic hybrid brain circuit.
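The contrast between open-loop and closed-loop operation can be sketched as a simple feedback rule. This is purely conceptual: the biomarker here is an abstract "symptom level", and the target, gain, and amplitude limits are invented rather than clinical values.

```python
# Conceptual closed-loop stimulation: instead of a fixed setting, the
# stimulation amplitude is adapted from a recorded biomarker with a
# simple proportional controller. All numbers are illustrative only.
def adapt_stimulation(amplitude, biomarker, target=1.0, gain=0.1,
                      amp_min=0.0, amp_max=3.0):
    """Raise stimulation when the biomarker exceeds the target,
    lower it when the biomarker falls below, within safe bounds."""
    amplitude += gain * (biomarker - target)
    return min(max(amplitude, amp_min), amp_max)

# As the measured symptom level falls, the controller backs stimulation off.
amp = 1.5
for biomarker in [2.0, 1.6, 1.2, 0.9, 0.8]:
    amp = adapt_stimulation(amp, biomarker)
print(round(amp, 3))   # -> 1.65
```

A learned (e.g. DL-based) controller would replace the fixed gain and target with a mapping inferred from recorded circuit activity, but the feedback structure, read out, adjust, stimulate, is the same.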

DBS is also being investigated as a circuit neuroprosthetic for a broad range of other disorders, including affective disorders like depression (Choi et al., 2015), obsessive-compulsive disorder (repetitive habits) (Fayad et al., 2016), and memory loss in Alzheimer’s disease (Mirzadeh et al., 2016), involving various other frontal circuits. The potential for DBS success in affective or cognitive circuits still requires considerable further inquiry. Advances are currently limited by our poor understanding of the computations occurring in these complex, dynamic brain circuits, by difficulties in knowing how or where to intervene in these networks, and by the availability of technology to precisely deliver correct spatial and temporal patterns. DL approaches could potentially help repair these circuit disorders by learning from abnormal brain patterns (reading out patterns of activity) and converting them into meaningful activity patterns for that circuit (writing in by stimulation of the correct circuit sites).

Changing brain circuits raises ethical issues. DBS provides a tool to shape virtually any brain circuit, including those affecting personality or behavior, and therefore must be judiciously monitored for ethical application. The concept of altering brain circuits, and potentially behavior, with electrical stimulation (or other forms of energy) is more immediately concerning as simpler, but much less precise, brain stimulation devices come into use. It is now possible to influence circuits with technology applied outside the head, which has growing adoption among the public. Most notably, transcranial direct current stimulation (tDCS) is possible with everyday technology (batteries and saltwater-soaked sponges on the head) and is very cheap and easy to assemble. tDCS has a popular following and is being used for every imaginable issue, often with no valid scientific backing (Wexler, 2017), raising ethical, legal, and social concerns (Kuersten and Hamilton, 2014).

Movement restoration – Brain computer interfaces

Voluntary movement also emerges from brain circuits and, not surprisingly, is elaborated by a vast network of central nervous system structures. However, the corticospinal pathway, a bundle of axons connecting neurons in cerebral motor areas to the spinal cord, is one critical path that provides patterned input to the spinal cord (and many other structures) to generate skilled movement, particularly of the fingers and hand. Paralysis results from a number of disorders, including stroke, spinal cord injury, and traumatic brain injury. When any of these disorders disrupts the corticospinal pathway anywhere along its route, paralysis of useful, skilled actions, including hand motion, walking, or speech, may occur. A brain computer interface (BCI) is a system designed to bypass damaged brain structures and restore brain-controlled movement by using brain activity patterns as a source of movement commands. BCIs recreate action commands from limited samples of neural activity in brain areas whose activity patterns relate to movement intentions. These patterns can be read out and decoded into commands able to operate devices like a computer or a robot, or even the paralyzed muscles themselves. A BCI can be considered the converse of the devices discussed so far, in that a BCI is intended to read out brain activity (i.e. record activity) so that intentions can become actions, rather than to write signals into the brain or nerves. BCIs have been used in investigational studies in fewer than 20 people with severe paralysis to restore their ability to move or interact with the world (for more comprehensive reviews, see Donoghue et al., 2007; Hatsopoulos and Donoghue, 2009; Homer et al., 2013).

Movement intentions arise from a spatiotemporal pattern of activity in cortical neurons across a network of cerebral motor areas. The primary motor cortex (MI) is a major origin of the corticospinal pathway and a key node in a much larger cerebral motor network. Using an aspirin-sized bed of 100 hair-thin probes inserted just into the surface of the MI arm region, it is possible to record patterns of neural activity from each of many individual neurons that reflect the coordinated motions of the arm, say to reach and grasp. Current, typically static, algorithms make it possible to convert that small sample of neural activity into control signals that allow a person who is fully paralyzed to control a multijoint robotic arm well enough to pick up a container of coffee, drink from it, and put it back on the table (for video see: ). Other groups have extended this work and demonstrated even more dexterous control (Collinger et al., 2013). However, actions of computer cursors or robotic arms using BCIs are slow and far less dexterous than what we effortlessly accomplish all the time. The computational power of AI, by potentially learning better mappings between limited, complex patterns of cortical neural activity and the requisite command structure, could provide much richer, faster, and more complex control (Fig. 1C). As depicted in Fig. 1 for motor systems, current sensors sample only incompletely – they have access to a tiny fraction of the activity ongoing to coordinate even the simplest reach-and-grasp action. Optimized DL algorithms (and hardware) might be able to make up for the small sample and missing information to achieve the speed and dexterity achieved by able-bodied people. DL could also benefit from information about real-world actions, which helps constrain the problem (Howard et al., 2009), but this has not yet been tried.
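The decoding step described above can be illustrated with a minimal sketch: a linear mapping, fit by least squares, from a small sample of unit firing rates to intended two-dimensional velocity. The data here are synthetic (randomly tuned units with invented noise levels); real BCIs use recorded spiking activity and more sophisticated decoders, so this shows only the shape of the problem.

```python
import numpy as np

# Minimal linear BCI decoder on synthetic data: firing rates of a small
# population of directionally tuned units are mapped back to (vx, vy).
rng = np.random.default_rng(2)

n_units, n_samples = 40, 1000
velocity = rng.standard_normal((n_samples, 2))       # intended (vx, vy)
tuning = rng.standard_normal((2, n_units))           # per-unit direction tuning
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_units))

# Fit the decoder on the first half, evaluate on held-out data.
half = n_samples // 2
W, *_ = np.linalg.lstsq(rates[:half], velocity[:half], rcond=None)
pred = rates[half:] @ W
r = np.corrcoef(pred[:, 0], velocity[half:, 0])[0, 1]
print(f"held-out correlation (vx): {r:.2f}")
```

With 40 well-tuned units the decode is accurate; the practical difficulty discussed in the text is that real recordings provide a small, noisy, drifting sample of a vastly larger population, which is exactly where learned nonlinear decoders may help.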


Neural prosthetics are remarkable, but mainly early-stage, attempts to replace missing neural structures; they are not able to fully replicate the brain structures they are intended to replace. Their shortcomings emerge from technology limitations: the inability of current electrode interfaces to address (write in) or sample (read out) the full spectrum of channels, the size of computational technologies, which can limit their processing power or portability, and their compatibility with the body. Importantly, successful neural prosthetics are limited by the inability to reproduce the computations of biological circuits, which can be simplistically reduced to the problem of computing one spatiotemporal pattern from another. The problem is exacerbated by the incomplete and noisy information inherent to neural data. Deep learning appears to be a framework very well suited to mimic this computation more faithfully than current approaches, because it is particularly good at finding high-level abstractions (i.e. complex patterns) in large-scale data.

AI, ethics and the limitations and potential of Neuroprosthetics

The promise of neurotechnology to improve human health is substantial and not limited to the examples given above. There are many other neuroprosthetic technologies where advanced, intelligent computing can help improve the lives of those with neurological disorders or injury, such as creating a brain-controlled artificial limb for people with limb loss. In my view, these technologies are likely to be realized, although it is very difficult to predict the timing and pace of success when fundamental research questions remain unresolved. AI, neuroscience, and engineering advances will all play a role in realizing more effective technology to restore vision, hearing, cognition, or movement to those disabled by nervous system disorders. AI offers neuroprosthetics a means to learn and then implement computations like those achieved by neural circuits, without fully understanding how these computations are achieved. As such systems are integrated into neural functions, they may provide a framework to create more and more brain-like computing that could replicate or even exceed those capabilities. We should be aware of the impact of such advances on society and the individual.

Accelerating successes in AI and in neuroprosthetics indicate that we are at an inflection point where the ability to augment human function with these bio-machine hybrids, though still a long way off, can be realized. Full restoration of function for humans who have lost critical capacities of their nervous system would be an outstanding success for neuroengineering; the extension of this to augmenting abilities in able-bodied individuals is easy to imagine as medical applications expand. One might envision retinal implants enabling night or infrared vision, or, more fancifully, memory circuit stimulation to double memory capacity. These speculations raise ethical and social challenges that need to be evaluated now by the scientific and legal communities so that we are prepared as these capabilities emerge. Lastly, it is important to recall that there are other big challenges to achieving the bionic human, whether to overcome disability or to augment function. High-resolution communication with the nervous system will, for the foreseeable future, require surgical interventions; this will slow adoption, due to cost and real or perceived risk, and will surely influence social views on using this type of neurotechnology.


* Wyss Center for Bio and Neuroengineering, Campus Biotech, Chemin des Mines 9, 1202 Geneva, Switzerland.



Arlotti M, Rosa M, Marceglia S, Barbieri S, Priori A (2016) The adaptive deep brain stimulation challenge. Parkinsonism Relat Disord 28:12-17.

Bond M, Mealing S, Anderson R, Elston J, Weiner G, Taylor R, Hoyle M, Liu Z, Price A, Stein K (2009) The effectiveness and cost-effectiveness of cochlear implants for severe to profound deafness in children and adults: a systematic review and economic model. Health Technol Assess 13:1-330.

Choi KS, Riva-Posse P, Gross RE, Mayberg HS (2015) Mapping the “Depression Switch” during intraoperative testing of subcallosal cingulate deep brain stimulation. JAMA Neurol 72:1252-1260.

Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, … Schwartz AB (2013) High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381:557-564.

DiLorenzo DJ, Jankovic J, Simpson RK, Takei H, Powell SZ (2014) Neurohistopathological findings at the electrode-tissue interface in long-term deep brain stimulation: systematic literature review, case report, and assessment of stimulation threshold safety. Neuromodulation 17:405-418.

Donoghue JP (2015) Neurotechnology. In: The future of the brain: essays by the world’s leading neuroscientists (Marcus GF (Gary F, Freeman J, eds), pp 219-233. Princeton University Press.

Donoghue JP, Nurmikko A, Black M, Hochberg LR (2007) Assistive technology and robotic control using motor cortex ensemble-based neural interface systems in humans with tetraplegia. J Physiol 579:603-611.

Fayad SM, Guzick AG, Reid AM, Mason DM, Bertone A, Foote KD, Okun MS, Goodman WK, Ward HE (2016) Six-nine year follow-up of deep brain stimulation for obsessive-compulsive disorder. PLoS One 11:e0167875.

Hatsopoulos NG, Donoghue JP (2009) The science of neural interface systems. Annu Rev Neurosci 32:249-266.

Healy EW, Yoho SE, Chen J, Wang Y, Wang D (2015) An algorithm to increase speech intelligibility for hearing-impaired listeners in novel segments of the same noise type. J Acoust Soc Am 138:1660-1669.

Hinton GE, Osindero S, Teh Y-W (2006) A fast learning algorithm for deep belief nets. Neural Comput 18:1527-1554.

Homer ML, Nurmikko AV, Donoghue JP, Hochberg LR (2013) Sensors and decoding for intracortical brain computer interfaces. Annu Rev Biomed Eng 15:383-405.

Howard IS, Ingram JN, Körding KP, Wolpert DM (2009) Statistics of natural movements are reflected in motor errors. J Neurophysiol 102:1902-1910.

Kuersten A, Hamilton RH (2014) The brain, cognitive enhancement devices, and European regulation. J Law Biosci 1:340-347.

LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436-444.

Lin T-C, Chang H-M, Hsu C-C, Hung K-H, Chen Y-T, Chen S-Y, Chen S-J (2015) Retinal prostheses in degenerative retinal diseases. J Chin Med Assoc 78:501-505.

Mirzadeh Z, Bari A, Lozano AM (2016) The rationale for deep brain stimulation in Alzheimer’s disease. J Neural Transm 123:775-783.

Mysore N, Koenekoop J, Li S, Ren H, Keser V, Lopez-Solache I, Koenekoop RK (2015) A review of secondary photoreceptor degenerations in systemic disease. Cold Spring Harb Perspect Med 5:a025825.

Wexler A (2017) Recurrent themes in the history of the home use of electrical stimulation: transcranial direct current stimulation (tDCS) and the medical battery (1870-1920). Brain Stimul 10:187-195.

Wu B, Han L, Sun B-M, Hu X-W, Wang X-P (2014) Influence of deep brain stimulation of the subthalamic nucleus on cognitive function in patients with Parkinson’s disease. Neurosci Bull 30:153-161.



Power and Limits of Artificial Intelligence

Proceedings of the Workshop Power and Limits of Artificial Intelligence, 30 November-1 December...