Patricia Churchland | UCSD

Motivations and Drives Are Computationally Messy

Deep learning strategies have achieved successes that have surprised those who favor a traditional write-a-program approach to artificial intelligence. The dramatic success of AlphaGo in defeating Lee Sedol, one of the very best Go players in the world, in 4 games out of 5, was a very public vindication for those who took advantage of increased computer power and steadfastly improved the performance of Artificial Neural Nets (ANNs).

Although insights relevant to neuroscience may emerge from how ANNs work, so far the accomplishments of deep learning are essentially confined to pattern recognition problems, and indeed, to pattern recognition in a single domain per machine. Striking as these pattern recognition feats are, any animal whose capacity was confined to one category of pattern recognition, however brilliant within that category, would be an evolutionary casualty.

The success of ANNs notwithstanding, it must be acknowledged that their behavior is a far cry from what a rat or a human can do as they live out their lives on the planet. But could we not just scale up pattern recognition so that it could equal the accomplishments of a rat or a human? My judgment is that this is a lot harder than trumpeting the words “scale up pattern recognition” and confidently waving your hands.

All animals have the capacity to maintain homeostasis. Their inner milieu must stay within a restricted temperature range, and their circuitry must organize the animal’s movements so that it has sufficient energy, water, and oxygen. Homeostasis is anything but a simple business, and as endotherms emerged, a much narrower temperature range had to be maintained on pain of death. Maintaining homeostasis often involves competing values and competing opportunities, as well as trade-offs and priorities. While enthusiastic ANN designers might bet serious money that simple extensions to learning algorithms could easily handle these jobs, I would bet that a wholly new wrinkle is needed. To mimic what evolution discovered over many hundreds of millions of years may be much more difficult than scaling up pattern recognition in a really big ANN with a cobbled-up trick or two.
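
To give a flavor of why even a toy version is not simple, here is a minimal sketch in Python of a greedy homeostat. Everything in it is invented for illustration: the variables, set-points, decay rates, and the one-action-per-step rule are my assumptions, not a model of any real circuit. With several regulated variables and only one body, every time step forces a trade-off, because each restorative action costs something on the other variables.

    # A toy homeostat with competing regulated variables and one body.
    # All numbers (set-points, decay rates, costs) are invented for illustration.

    SETPOINT = {"temp": 37.0, "water": 1.0, "energy": 1.0}
    DECAY = {"temp": -0.2, "water": -0.05, "energy": -0.03}   # loss per time step

    ACTIONS = {   # each action restores one variable but costs the others
        "shiver": {"temp": +0.5, "energy": -0.10},
        "drink":  {"water": +0.3, "energy": -0.02},
        "forage": {"energy": +0.2, "water": -0.05, "temp": -0.1},
    }
    BEST_FOR = {"temp": "shiver", "water": "drink", "energy": "forage"}

    def greedy(state):
        # Service whichever variable has the largest shortfall below its set-point.
        worst = max(state, key=lambda v: SETPOINT[v] - state[v])
        return BEST_FOR[worst]

    def step(state, action):
        for var, d in DECAY.items():
            state[var] += d
        for var, d in ACTIONS[action].items():
            state[var] += d
        return state

    state = dict(SETPOINT)
    for t in range(20):
        state = step(state, greedy(state))
    print(state)

Even this caricature must arbitrate among competing deficits at every step, and the greedy rule has no notion of priority, prediction, or context: nothing distinguishes a mild deficit from a looming lethal one.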

All vertebrate species are able to detect threats, and to behave appropriately in response to motivations to survive, thrive, and reproduce. In this domain, as in maintaining homeostatic functions, there are typically competing values and competing opportunities: should I mate or hide from a predator, should I eat or mate, should I fight or flee or hide, should I back down in this fight or soldier on, should I find something to drink or sleep, and so forth. The underlying neural circuitry for essentially all of these decisions is understood, if at all, only in the barest outline. And these decisions do involve some sense of “self”, which is a sort of brain construction, not a feat of pattern recognition. Biological evolution favors those whose values and decisions allow them to survive long enough to pass on their genes.[1] But the dynamics of the neural business are poorly understood.

Not everything in the world is of equal interest across species. Dung beetles are highly motivated to seek dung; squirrels are not. On the other hand, squirrels are keen to find nuts and to distinguish between fresh and stale nuts, but dung beetles care not. Dogs are typically motivated to sniff the behinds of other dogs, humans are not. And so forth. Goals and plans to achieve them are internal to the animal. Often the stimuli are essentially neutral except in relation to the animal’s goal.[2] So there are internal settings, some acquired but some not, that manipulate such pattern recognition functions as these animals deploy.

Mammals, at least, appear to build causal models of the world. Since causality is a stronger relation than correlation, the standard real-brain tactic for upgrading to causality involves intervention and manipulation. This may be easier for an animal that can move than for a stationary, if deep, learning machine. The capacity for movement, especially if you have limbs, is anything but simple; that much we do know.
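
To make the contrast concrete, here is a minimal sketch in Python of a made-up toy world; the variables and probabilities are illustrative assumptions, nothing from real neuroscience. A hidden common cause makes two variables correlate even though one does not drive the other, and only by setting the variable directly does an agent discover that.

    import random

    random.seed(0)

    def world(set_light=None):
        # Hidden 'daytime' causes both the light level and foraging success;
        # the light itself has no causal effect on success.
        daytime = random.random() < 0.5
        light = daytime if set_light is None else set_light
        success = daytime and random.random() < 0.9
        return light, success

    # Passive observation: light and success look strongly associated.
    obs = [world() for _ in range(10000)]
    lit = [s for l, s in obs if l]
    print("P(success | observed light on) ~", sum(lit) / len(lit))                   # ~0.9

    # Intervention: force the light on, which severs the hidden link.
    ints = [world(set_light=True) for _ in range(10000)]
    print("P(success | light set on)      ~", sum(s for _, s in ints) / len(ints))   # ~0.45

A passive learner conditioning on its observations would conclude that light predicts success; only the manipulation reveals that it causes nothing.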

So far as I can tell, no one has a genuinely workable plan concerning how to capture motivation and drives and motor control within a plausible pattern recognition regime.[3] The problem is not straightforward because motivations come in different packages – hunger is different from thirst, which is different from lust or fear or curiosity or joy. Temperament comes in different “colors on a spectrum” – introvert or less so, risk averse or less so, energetic or less so, and so forth. These factors change with age, with time of day, with sleep, with mood changes, and with diseases. These functions might be understood as the drivers of pattern recognition jobs in real animals, not as pattern recognition themselves.[4] The hypothalamus and brainstem, which are crucial in the nervous systems of real animals for the management of these functions, are not yet well understood in neuroscience, to put it mildly. The circuitry is ancient, and extremely complicated. It does not look like it is just doing pattern recognition, whatever that might mean in this context.

Why not just assign different numbers to different motivational forms, and add a plus or a minus to mark strength? Ditto for temperament, ditto for levels of arousal? One problem is that the “just assign numbers” idea blurs the differences between kinds of motivational states, and the relevance of such differences to decision-making.[5] Fear and lust both involve arousal, but they are different and have different trajectories in brain space. In any case, the idea needs to be fleshed out to show how such a system makes decisions about behavior that are comparable in suitability to those of a fruit fly or a rat.
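
To see the worry in miniature, here is a deliberately naive sketch in Python of the “just assign numbers” proposal; the drive names, values, and the winner-take-all arbitration rule are all my own illustrative assumptions.

    # Every motivational state is a signed scalar; the arbiter compares magnitudes.
    drives = {
        "hunger": +0.6,
        "thirst": +0.4,
        "fear":   -0.7,   # the minus sign marks aversion, per the proposal
        "lust":   +0.7,
    }

    def decide(drives):
        # Act on whichever drive is strongest in absolute terms.
        name, _ = max(drives.items(), key=lambda kv: abs(kv[1]))
        return name

    print(decide(drives))   # prints 'fear', only because it precedes 'lust'

On this scheme, fear and lust are literally the same number with different signs, so the arbiter breaks the tie by accident of ordering. Yet the behaviors the two states call for (fleeing versus courting), and their trajectories through time, are utterly different; the numbers have erased exactly what matters for the decision.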

There is a precise pattern of causality between kinds of functions that so far is not captured by the “pattern recognition” paradigm. Until we understand much more about the nervous systems of animals, we cannot specify with any precision the nature of the causal relationships managed by the hypothalamus, brainstem and basal ganglia, or how adequately to model what is going on.

What really are depression or exuberance or patience or tenacity or resilience? Not just pattern recognition, almost certainly. How do these phenomena interact with motivation, drive and desires? What is the role of neuromodulators in these and other phenomena such as curiosity or wanderlust or sociality or aggression? Neuroscientists are indeed exploring these phenomena, one and all, but their neurobiological bases are not easy to plumb. That they are simply, at bottom, forms of pattern recognition seems unlikely at this point.

Neuromodulation more generally seems to affect what is learned, when something is learned, and how it is learned, yet so far, ANN modelers give it no role whatsoever, as though neuromodulation is a “mere biological by-product” – existing in us because we are the products of guess-and-by-golly evolution, but definitely not a crucial bit of engineering magic.

Perhaps they are right. But consider: because so much is unknown about the relevant neurobiology, what are waved off as activities incidental to intelligence may turn out to be essential features that “scaffold” real intelligence. The analogy here is with early brain researchers who thought that all the cognitive action was in the ventricles, not in the brain itself.[6] The thing is, apart from biological intelligence, we have no understanding of what to count as real intelligence – we have no other criteria. For example, a person who is a great mathematician may be a dud in practical matters of health, finance, sex, and food. Mathematicians may say she is intelligent, but financiers or fighter pilots will not.

Go ahead and market something as “intelligent”, but if it is brittle, lacks flexibility and “common sense”,[7] and has nothing approximating motivation or drive or emotions or moods, it may be difficult to persuade the rest of us that it is intelligent in the way that biological entities can be. Redefine “intelligence” you may, but the redefinition per se will not make the machine intelligent in any generally recognizable sense.

At least some of the dystopian predictions concerning the eventual threat to humans of intelligent machines depend on the tendentious assumption that engineers have now cracked the problem of intelligence in a machine. However dramatic such predictions may be, they are not tethered by a biological understanding of what makes for intelligence, and they certainly are not grounded in a biological understanding of the nature of motivation and goals. Although it is always ticklish to downplay dystopian predictions lest one seem indifferent, it is nevertheless worth balancing them by noting that our realistic time horizon is only about five to ten years out. Machines that care and desire control are unlikely within that time horizon.[8]

End Notes

[1] Unless, for example, they are honeybees, whose decisions are geared to passing on the genes of the queen bee.

[2] Gadagkar, V., et al. (Dec 8, 2016), “Dopamine neurons encode performance error in singing birds,” Science.

[3] Though I should mention that Yann LeCun has some ideas about internal motivation, on a “happy or not happy” dimension. This could be a fruitful start.

[4] Sejnowski, T.J., Poizner, H., Lynch, G., Gepshtein, S., and Greenspan, R. (2014), “Prospective Optimization,” Proceedings of the IEEE 102: 799-811.

[5] Raposo, D., Kaufman, M.T., and Churchland, A.K. (2014), “A category-free neural population supports evolving demands during decision-making,” Nature Neuroscience 17(12): 1784-92.

[6] See, for example, Hieronymus Brunschwig (1497; second edition 1525), The Noble Experyence of the Vertuous Handy Werke of Surgerie. Descartes seems to have thought along similar lines.

[7] See also Yann LeCun on this point.

[8] Thanks to Terry Sejnowski for comments on an earlier draft.