Max Planck Institute for Brain Research (MPIB); Ernst Struengmann Institute for Neuroscience (ESI) in Cooperation with Max Planck Society; Frankfurt Institute for Advanced Studies (FIAS).
Biological evolution has brought forth systems of ever increasing complexity. This trend culminated in the emergence of highly structured societies, also referred to as superorganisms, whose complexity exceeds that of their constituent components and which reached a maximum in human societies. Why did evolution take this direction? The most likely reason is that increasing complexity enhances fitness. One important fitness factor is the set of cognitive and executive abilities of organisms, functions that depend to a large extent on the information processing capacity of neuronal networks. These functions permit organisms to generate detailed models of the world and to design well-adapted coping strategies. Enhancing cognitive functions obviously requires investment in the complexity of sense organs and nervous systems.
In simple organisms the components of the nervous system, the neurons, interact mainly serially, relaying information sequentially from sensory organs to effector systems. Consequently, the emerging dynamics have low dimensionality and complexity, even though the nervous systems may consist of a large number of components. While such feed-forward architectures are conserved in the brains of virtually all species, they prevail in simple organisms such as molluscs and insects. As a consequence, the ability of these organisms to process complex constellations of sensory signals is limited to solutions of specific problems. Similar principles of information processing are realized in artificial systems, known as deep learning networks, that have revolutionized the field of artificial intelligence and outperform human cognitive abilities in selected domains. These systems capitalize on the power and speed of modern computers and on the availability of immense databases for the training of artificial networks. However, in numerous domains they fall short of the performance of biological systems because they lack certain architectural features that characterize more evolved nervous systems.
The decisive difference is recurrence and re-entry. In the more evolved systems the components are coupled reciprocally rather than only serially. The flow of information is no longer unidirectional from low to high processing levels, from sensory organs to effectors. Rather, components interact with one another within each processing level, and these levels are in addition coupled via feedback loops, allowing for a bidirectional flow of information. Systems with such features develop extremely complex, non-linear dynamics and are capable of self-organization. They can generate a very large number of distinct spatially and temporally structured patterns of activity: their non-linear dynamics allow them to explore a very high dimensional state space and to assume a virtually infinite number of different states. Evolution discovered the power of these principles early on, and selection pressure led to a massive expansion of processing architectures that allow for recurrence and self-organization. Examples are the extremely complex interaction networks that have emerged in higher organisms at all levels of organization: even genes interact with one another through extremely complex molecular signalling systems and mutually influence each other's expression. It is mainly the architecture of this interaction network that accounts for the fact that a relatively small set of similar genes can instruct the development of very different organisms. These gene-gene interactions orchestrate the species-specific expression pattern of the genes. Drosophila, the fruit fly, has more or less the same set of coding genes as we do, but the interaction network controlling the expression of these genes differs. Very similar principles govern the interactions in metabolic networks, in the immune system and, above all, in the neuronal networks of highly evolved organisms.
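The contrast between a purely serial relay and recurrent, re-entrant coupling can be made concrete with a minimal numerical sketch. The following Python fragment is purely illustrative: the network size, the coupling scale and the tanh non-linearity are arbitrary assumptions, not a model of any particular biological circuit. It merely shows that the same set of components, once coupled reciprocally and updated non-linearly, produces a temporally structured trajectory that explores many dimensions of state space, whereas a single feed-forward pass yields one fixed transformation.

# Minimal, purely illustrative sketch: feed-forward relay vs. recurrent coupling.
# All parameters (N, coupling scale, tanh non-linearity) are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 200

# Feed-forward: activity is passed through the connection matrix exactly once.
W_ff = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
x = rng.normal(size=N)
feedforward_output = np.tanh(W_ff @ x)           # one fixed transformation

# Recurrent: every unit feeds back onto the others at every time step.
W_rec = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # gain > 1 yields rich dynamics
states = np.zeros((T, N))
r = rng.normal(size=N)
for t in range(T):
    r = np.tanh(W_rec @ r)                       # re-entrant, non-linear update
    states[t] = r

# The recurrent trajectory spans many dimensions of the network's state space.
dims = (np.linalg.svd(states, compute_uv=False) > 1e-3).sum()
print("dimensions explored by the recurrent trajectory:", dims)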
In the case of neuronal systems, simple feed-forward interactions are complemented more and more by reciprocal interactions between the nodes of the networks. Neurons interact reciprocally with one another both within and across processing levels. This connectivity motif is particularly pronounced in the cerebral cortex and the hippocampus, structures that first appeared in rudimentary form in reptiles. Once these structures had developed, further increases in the brain's complexity consisted essentially in a massive expansion of the cerebral cortex. This suggests that the functions associated with the evolution of structures capable of supporting recurrent processing are scalable and very effective in enhancing biological fitness.
We are still far from fully understanding the functions realized on the basis of the extremely complex, non-linear and high dimensional dynamics that evolve in the recurrent networks of the cerebral cortex. However, we are beginning to see that the astounding cognitive and executive capacities of highly evolved brains are due to computations that take place in the cerebral cortex and capitalize on the dynamics and self-organizing properties of this structure: the ability to store an immense quantity of information in a high dimensional dynamic space and to retrieve it ultrafast; the generation of associations between experiences gathered in different sensory modalities, a prerequisite for abstraction; the generation of complex patterns for the control of executive functions; the encoding of contents in an abstract symbolic format; and finally, in the case of humans, the generation of a highly differentiated, rule-based communication system, our grammar- and syntax-based language. It goes without saying that these functions have substantial survival value. As the acquisition of these highly differentiated cognitive functions is a direct consequence of the increasing complexity of the underlying brain processes, it is plausible that selection pressure favoured the evolution of ever more complex brains. Moreover, in order to fully exploit the enhanced information processing capacities of highly evolved brains, coevolution may have led to more versatile and hence more complex sensory organs and effector systems.
In conclusion, evidence suggests that increasing complexity endows organisms with improved cognitive and executive functions that in turn allow them to cope more effectively with the challenges of survival in a rapidly changing, dangerous and competitive environment.
However, at first sight this interpretation seems at odds with the intuition that increasing complexity is also likely to enhance the vulnerability of systems and thereby to reduce fitness. The more components a system has, the higher the probability that one of them will fail. The denser and more complex the interactions between components, the higher the probability that local failures lead to a catastrophic collapse of the whole system, in particular when the system has highly non-linear dynamics. In this case, small local disturbances can undergo massive amplification and propagate throughout the entire interaction network, with deleterious consequences for the functions of the whole system.
Interestingly, however, the intuition that complexity and vulnerability are positively correlated may not be true. There is evidence that increasing complexity could, in itself, be a fitness factor that enhances the resilience and robustness of systems. In principle, there are two strategies to augment the resilience of systems. One consists in increasing redundancy by multiplying critical components. Until recently this has been the most common strategy applied in technical systems. Another, and at first sight counter-intuitive, strategy is to increase the complexity of systems. As mentioned above, truly complex systems consist of reciprocally interacting components and exhibit non-linear dynamics. Such systems are capable of self-organization and self-healing. They are able to compensate for the dropout of individual elements by reorganizing. As a consequence, they often undergo "graceful degradation" of their function if some of their components fail, but they rarely lose all functions. Simple systems with unidirectional flow of information within hierarchical architectures and mainly linear dynamics, by contrast, cannot self-organize. Here the failure of a component of the processing stream leads to a complete loss of the corresponding function. The resilience of such systems can only be enhanced by increasing redundancy, a strategy that requires massive investment in hardware. Thus, increasing complexity may actually have served fitness in two ways. It allowed the realization of sophisticated cognitive and executive functions and, at the same time, it enhanced the robustness of organisms by endowing them with the capacity to self-organize. And interestingly, this strategy of enhancing resilience by increasing the complexity of network interactions through recurrence and re-entry is not confined to the evolution of individual organisms. As far as biological evolution is concerned, it is probably also the reason for the emergence of superorganisms such as colonies of bees and ants, in which the agents interact reciprocally through highly structured communication networks. And last but not least, the social and economic networks formed by social animals, which reached a maximum of complexity in human societies once cultural evolution complemented biological evolution, are also likely to have emerged for the same reason: increasing resilience through cooperation and self-organization.
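A toy simulation can illustrate the difference between the two failure modes described above. The Python sketch below compares a strictly serial chain with a densely and reciprocally coupled network of the same size and asks whether a signal can still travel from an input node to an output node after a few randomly chosen nodes have failed; the network size, connection density and failure counts are arbitrary assumptions chosen only for demonstration. In the chain a single failure severs the path completely, whereas the reciprocally coupled network usually retains alternative routes and degrades gracefully.

# Illustrative sketch of "graceful degradation" (toy example; all parameters
# are arbitrary assumptions, not derived from any real system).
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
N = 50

def reachable(adj, alive, src=0, dst=N - 1):
    """Breadth-first search: can a signal still travel from src to dst?"""
    if not (alive[src] and alive[dst]):
        return False
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return True
        for v in np.flatnonzero(adj[u]):
            if alive[v] and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Serial chain: strictly feed-forward, information relayed node by node.
chain = np.zeros((N, N), dtype=bool)
for i in range(N - 1):
    chain[i, i + 1] = True

# Recurrent net: the same nodes, but densely and reciprocally coupled.
recurrent = rng.random((N, N)) < 0.15
recurrent = recurrent | recurrent.T          # make couplings reciprocal
np.fill_diagonal(recurrent, False)

for k_failures in (0, 1, 5, 10):
    survive = {"chain": 0, "recurrent": 0}
    for _ in range(200):                     # average over random failure patterns
        alive = np.ones(N, dtype=bool)
        dead = rng.choice(np.arange(1, N - 1), size=k_failures, replace=False)
        alive[dead] = False                  # knock out k interior nodes
        survive["chain"] += reachable(chain, alive)
        survive["recurrent"] += reachable(recurrent, alive)
    print(k_failures, "failures ->", {k: v / 200 for k, v in survive.items()})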
However, there is also a downside to complex systems with highly non-linear dynamics, in particular when their future trajectories need to be controlled. The very same mechanisms that support their self-organizing capacity and their resilience to disturbances, and that make them robust to the failure of components and to adverse influences from outside, also make it very difficult to control their collective dynamics.
Reciprocal coupling, recurrence and re-entry are also distinctive features of our social, political, economic and financial systems. Accordingly, these systems also exhibit highly non-linear dynamics and self-organizing capacities – and as globalisation proceeds and digital communication networks proliferate, their complexity increases with unprecedented speed. The problem with such complex systems is that their future development cannot be reliably predicted. One popular example is the dynamics of stock markets. For the same reason it is very hard to control the long-term behaviour of these systems. Manipulating the activity of individual nodes or the coupling strength of interactions can lead to entirely unforeseen responses of the systems because of their non-linear dynamics. Thus, top-down control and dirigisme are ineffective governance strategies when one is dealing with complex systems endowed with strong self-organizing capacities. One way out is to destroy their capacity for self-organization. This can be achieved by abolishing interactions between the components and aligning them in hierarchical structures in which information flow is unidirectional. In this case the activity of individual nodes can be reliably controlled in a strictly top-down regime – a strategy opted for by totalitarian systems. This, however, sacrifices the resilience and robustness inherent in self-organizing systems and is likely to be effective only for the governance of relatively simple systems whose dynamics are essentially linear. Historical evidence about the fate of totalitarian systems suggests that dirigisme is an inappropriate strategy once systems reach a critical threshold of complexity and defy analytical approaches.
These notions about the properties of complex systems have far-reaching consequences for our attempts to take responsible action and to design effective governance regimes. In essence we are trapped in a tragic circle. We cannot stop acting. Thus, we continue to permanently interfere with the systems in which we are embedded and to contribute to their future trajectories. At the same time we know that we cannot really control the systems that we have created – a dilemma to which there is no simple solution.
Even before the theory of complex systems provided explanations for the difficulty of controlling the dynamics of non-linear systems, humans experienced that they are embedded in a world whose evolution they can neither understand nor effectively control. This collective intuition of being thrown into an uncontrollable world whose properties cannot be deduced from the properties of its components – another important characteristic of complex, self-organizing systems – is probably one reason for postulating the existence of transcendental forces endowed with the meta-intelligence and power required to stabilize the world and to determine the fate of individuals.
In the following I shall examine to what extent it is possible to derive some rules of conduct from the humbling notion that we are embedded in complex systems whose stability depends on the principles of self-organization and whose future trajectories are not determinable. Such rationally deduced rules could perhaps serve as building blocks for the development of a secular ethics.
First and foremost, we need to internalize the insight that we are actors in a system whose trajectory is not readily predictable, and that this uncertainty is not simply due to incomplete knowledge of initial conditions but is constitutive of complex systems with highly non-linear dynamics. This insight calls for humility and should nurture scepticism vis-à-vis affirmative promises and simple recipes. The problem of unpredictability can be alleviated to some extent by simulating possible outcomes with computational models – as is currently done to derive prognoses on climate change – but in principle this approach, too, cannot provide more than probabilities. One option to cope with these uncertainties and the resulting feeling of helplessness is to foster confidence in the stability and resilience of complex, self-organizing systems. This confidence is warranted, since we owe our existence to the robustness of such systems. However, in order to justify this confidence, we need to ensure that the systems in which we evolve preserve their self-stabilizing property.
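A deliberately simple numerical example can make this limitation tangible. In the Python sketch below, an arbitrary toy system, the logistic map, stands in for any non-linear model; it has nothing to do with actual climate models, and the parameter value and ensemble settings are assumptions chosen only for demonstration. An ensemble of trajectories is started from almost identical initial conditions; after a few dozen steps the individual trajectories have diverged beyond recognition, and all the ensemble can still deliver are statistics, i.e. probabilities.

# Toy illustration: even a one-line non-linear system limits prediction to
# probabilities.  r = 3.9 and the ensemble settings are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)
r, steps = 3.9, 50

# An ensemble of trajectories started from almost identical initial conditions.
x = 0.5 + rng.normal(0, 1e-6, size=1000)     # tiny "measurement" uncertainty
for _ in range(steps):
    x = r * x * (1 - x)                      # non-linear update (logistic map)

# Individual trajectories have diverged completely; only statistics remain.
print("spread of the ensemble after", steps, "steps:", x.std().round(3))
print("probability that the state exceeds 0.8:", (x > 0.8).mean().round(2))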
Fortunately, some of the mechanisms supporting self-organization are known, and this allows us to deduce by reasoning which prerequisites should be fulfilled to facilitate self-organizing processes. One of these prerequisites, already alluded to above, is dense and reliable communication among the nodes of the network. Here it is imperative to ensure that the flow of information is not distorted or compromised, as this would jeopardize the self-organizing capacity of the system. Nodes of the network must be reliable; they must not lie. If forager bees lied to the hive about the location of resources, the whole system would rapidly break down. Furthermore, we are well advised not to interfere with existing systems too much and to induce only incremental changes, in order to avoid abrupt bifurcations and catastrophic instabilities. This calls for a conservative attitude towards change, diligent use of resources and preservation of interaction architectures that have proven efficient in the past. And most importantly, monitoring systems have to be implemented that evaluate the state of the system and keep track of its evolution. Ideally, such evaluation systems should take into account both the wellbeing of the individual nodes and the stability of the whole system, in order to allow adjustments of the system's interaction architecture. Unlike the rulers of hierarchical systems, these evaluation systems do not have to get involved in the micromanagement of the system. They only have to provide continuous feedback about the state of the system and ensure that the consequences of spontaneously occurring or induced changes in the system's architecture are tracked at short intervals. It would then suffice to reinforce and consolidate those interactions that have enhanced wellbeing and stability and to weaken those that had adverse consequences. In a sense this would mimic the evolutionary mechanisms that have been so successful in bringing about stable and highly complex systems. Our brains are organized in this way. They possess evaluation systems that monitor the internal state of the brain and the success or failure of actions. These evaluation systems, also referred to as reward systems, supervise the experience-dependent changes in the system's functional architecture, i.e. the changes in coupling strength among nodes that are the basis of learning processes. Features of the functional architecture that favour positive behavioural outcomes are reinforced and those having adverse effects are suppressed. The similarity of this self-organizing optimization process to evolutionary mechanisms is obvious.
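The "evaluate, then reinforce or weaken" loop just described can be caricatured in a few lines of code. The following Python sketch is not a model of the brain's reward system: the toy network, the evaluation function (a made-up mixture of mean activity standing in for "wellbeing" and a penalty on runaway dynamics standing in for "stability") and all constants are assumptions invented for illustration. It only shows the logic: small random changes of the interaction architecture are consolidated when the monitored score improves and discarded when it does not, which is essentially an evolutionary optimization.

# Schematic sketch of an evaluate-and-consolidate loop (all details invented).
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 20
W = rng.normal(0, 0.1, (n_nodes, n_nodes))   # coupling strengths between nodes

def evaluate(W):
    """Stand-in for the monitoring system: scores "wellbeing" plus "stability"."""
    activity = np.tanh(W @ np.ones(n_nodes))
    wellbeing = activity.mean()                      # mean activity of the nodes
    stability = -np.abs(np.linalg.eigvals(W)).max()  # penalize runaway dynamics
    return wellbeing + stability

score = evaluate(W)
for step in range(500):
    # try a small, random change of the interaction architecture
    perturbation = rng.normal(0, 0.01, W.shape)
    new_score = evaluate(W + perturbation)
    if new_score > score:          # consequence was favourable:
        W += perturbation          # ...consolidate the change
        score = new_score          # (unfavourable changes are simply dropped)
print("evaluation after adaptation:", round(score, 3))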
The challenge for societies opting for such self-organized governance structures is, of course, to define the evaluation criteria and the variables to be monitored. Furthermore, mechanisms have to be implemented that allow for the continuous adjustment of the system's interaction architecture. As a first step it would be interesting to examine to what extent the various forms of democracy and economic systems already rely on such evolutionary mechanisms of self-organization and optimization. Changes of interaction architectures are constantly induced by legislation, economic decisions and spontaneous fluctuations, and the consequences of these changes for the state of the system are monitored by ballots and elections, leading to the stabilization of architectures perceived as advantageous.
However, there is a decisive difference between the self-organizing mechanisms that supported biological evolution and those that have governed our fate since the beginning of cultural evolution. As far as we know, biological evolution had no goal, and the agents promoting it had no insight into their destiny. Human agents, by contrast, are aware of being able to interact intentionally with their environment, to articulate goals and to act accordingly. This endows them with responsibility for the evolutionary trajectory of the systems that they generate and keep in motion. Thus, even though we are still just components of an immensely complex self-organizing system, we want this process to be goal-directed. And this imposes upon us the obligation to acquire as much knowledge as possible about the system in which we evolve and to use this knowledge for the design of interaction architectures that serve both the wellbeing of the components and the stability of the whole system. In addition, because we are intentional agents endowed with insight, we need to identify the rules of conduct that assure the stability of our systems, and we need to comply with them. We need to behave as moral agents, and the rules that we identify should become the pillars of a secular ethics. Some of these rules are easy to identify, and it comes as no surprise that they are virtually identical to the principal commandments found in the world religions. Apparently, collective experience about stability-generating rules of conduct has converged across different cultures. However, these rules of conduct often reflect the particular conditions and needs of human communities that evolved within rather different biotopes. Thus, not all of these rules are readily generalizable, and some of them are perceived by members of other cultures as consequences of false beliefs. As globalisation proceeds, these cultural idiosyncrasies become a major source of conflict. Therefore, attempts have to be intensified to arrive at a consensus on the most important rules of conduct. It is my conviction that such a process can only converge if arguments are based on evidence and reasoning rather than beliefs – and this is where cooperation between the natural sciences and the humanities is urgently needed.