Is Neuroscience Limited by Tools or Ideas?

Intricate, symmetric patterns, in tiles and stucco, cover the walls and ceilings of the Alhambra, the red fort, the dreamlike castle of the medieval Moorish kings of Andalusia. Seemingly endless in variety, these two-dimensionally periodic patterns are nevertheless governed by the mathematical principles of group theory and can be classified into a finite number of types: precisely seventeen, as shown by the Russian crystallographer Evgraf Fedorov. The artists of medieval Andalusia are unlikely to have been aware of the mathematics of wallpaper groups, and Fedorov was unaware of the art of the Alhambra. The two worlds met in the 1943 PhD thesis of the Swiss astronomer Edith Alice Müller, who counted eleven of the seventeen planar groups in the adornments of the palace (more have been counted since). All seventeen wallpaper groups can also be found in the periodic patterns of Japanese wallpaper.

Without conscious intent or explicit knowledge, the creations of artists across cultures and times nevertheless had to conform to the constraints of periodicity in two-dimensional Euclidean space, and were thus subject to a mathematically precise theory. Does the same apply to the "endless forms most beautiful" created by the biological evolutionary process? Are there theoretical principles, ideally ones that may be formulated in mathematical terms, underlying the bewildering complexity of biological phenomena? Without the guidance of such principles, we are only generating ever larger digital butterfly collections with ever better tools. In a recent article, Krakauer and colleagues argue that by marginalizing ethology, the study of the adaptive behaviors of animals in their natural settings, modern neuroscience has lost a key theoretical framework. The conceptual framework of ethology contains in it the seeds of a future mathematical theory that might unify neurobiological complexity as Fedorov's theory of wallpaper groups unified the patterns of the Alhambra.

The contemporary lack of ethological analysis is part of a larger deficit. Darwin's theory of natural selection, arguably the most important theoretical framework in biology, is conspicuous by its absence in modern neuroscience. Darwin's theory has two main tenets: the unguided generation of heritable variation, and the selection of such variation by an environmental niche to produce adaptive traits in organisms. The general role of the animal brain is to enable adaptive behaviors. It is reasonable to argue that a study of these adaptive behaviors (natural behaviors) should guide the experimental study of brain circuits. Indeed, this was the premise of the field of ethology developed by Tinbergen, Lorenz and von Frisch in the mid-twentieth century. The observational field studies of core natural behaviors such as mating, aggression and critical-period learning by ethologists enabled the subsequent elucidation of the underlying neural circuitry by neuroethologists.

In contrast to this empirical method of observing a freely behaving animal in its adaptive niche (natural settings) stands the controlled experimental approach: the conditioning paradigms developed by Pavlov and Skinner, and the psychophysical tests developed by experimental psychologists to characterize perception. This approach draws inspiration from physics, with its emphasis on isolating a system from external influences. The animal is placed in a controlled environment, subjected to simple stimuli and highly constrained in its behavior (e.g., forced to choose between two alternatives). The majority of contemporary neuroscientific studies use the controlled experimental approach to behavior, with ethological analysis taking a back seat. Krakauer et al. argue that all of the emphasis on tool building and gathering large neural data sets, while neglecting ethological grounding, has led the field astray.

The rationale of the current approach is that detailed recordings of neural activity (neural correlates) associated with behavior, and interventions in the behavior by suitable circuit manipulations, go beyond mere description of behavior and therefore provide greater explanatory power. Krakauer et al. challenge this school of thought and argue that neither method is fruitful without first understanding natural behaviors in their own right, to set a theoretical context and guide experimental design. The tools to record and manipulate neural activity cannot substitute for ethological analysis, and may even impede progress by providing a false narrative of causal-interventionist explanation.

Misplaced concreteness in recording and manipulating neural activity can lead to the mereological fallacy, which incorrectly attributes to a part of a system a property of the whole system. The authors point to the much-discussed mirror neurons as an example. Mirror neurons show similar activity when a primate performs a task and when it observes another actor performing the same task. However, this partial match between neural activities does not by itself imply any similarity of psychological state between the observer and the actor. It would therefore be a conceptual error to use the activity of the mirror neurons as an interchangeable proxy for the psychological state. Krakauer et al. hold that such an error is prevalent in the literature.

Generally, it is impossible to obtain a complete system-wide measurement of neural activity. Even the best current efforts to measure the activity of thousands of neurons fall far short of recording the electrical activity of entire nervous systems, including all of the axons, dendrites and chemical messages. There is no escape from the need to generalize from partial neural observations. These generalizations are fragile and may not provide any insight into the adaptive behaviors unless the experiments are carefully designed, taking those behaviors explicitly into account. Ignoring Darwin is not a good recipe for success in gaining biological understanding. Conversely, the authors draw upon studies of bradykinesia in Parkinson's disease, sound localization in barn owls, navigation in electric fish and motor learning to show that ethologically informed experimental design, coupled with neural activity measurements and perturbations, can lead to better insight.

The call to re-focus on natural behaviors is timely but not really controversial. However, Krakauer et al. proceed to make stronger claims regarding behaviors as emergent phenomena that cannot even in principle be explained in neural terms. Here they are on shakier ground. Quasi-mystical claims regarding emergence in biology are endemic in the literature and uncomfortably echo discarded notions of Cartesian dualism and Bergsonian vitalism. In support of their argument Krakauer et al. refer to the collective behavior of flocks of birds, which exhibit large-scale spatiotemporal patterns (murmurations) not obvious from the behavior of one or a few birds. The fallacy of the argument is starkly evident in an amplifying commentary in The Atlantic on Krakauer's article, where it is noted that the patterns can be reproduced in simple models of flocking, with elementary rules dictating the flight behavior of individual birds in the context of their neighbors (a minimal sketch of such a model appears below). This is in keeping with innumerable studies throughout the twentieth century: it has been repeatedly observed that seemingly complex patterns can be explained by simple, local rules.
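By way of illustration, here is a minimal flocking sketch in the spirit of Craig Reynolds' classic "boids" rules. The commentary does not specify which model it has in mind, so the neighbor radii, rule weights and polarization readout below are illustrative assumptions of mine, not a reproduction of any cited model.

```python
# A minimal boids-style flocking sketch: three purely local rules
# (separation, alignment, cohesion), no global choreographer.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, DT = 200, 500, 0.1
NEIGHBOR_RADIUS, SEPARATION_RADIUS = 5.0, 1.0
W_SEP, W_ALI, W_COH, SPEED = 1.5, 1.0, 0.5, 1.0   # illustrative weights

pos = rng.uniform(0, 50, size=(N, 2))             # birds in a 50x50 region
vel = rng.normal(size=(N, 2))
vel *= SPEED / np.linalg.norm(vel, axis=1, keepdims=True)

for _ in range(STEPS):
    diff = pos[:, None, :] - pos[None, :, :]      # diff[i, j] points from j to i
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                # a bird is not its own neighbor
    near = dist < NEIGHBOR_RADIUS

    steer = np.zeros_like(vel)
    for i in range(N):
        nbrs = near[i]
        if not nbrs.any():
            continue
        # Rule 1: separation -- steer away from neighbors that are too close.
        close = nbrs & (dist[i] < SEPARATION_RADIUS)
        if close.any():
            steer[i] += W_SEP * diff[i, close].mean(axis=0)
        # Rule 2: alignment -- match the average heading of neighbors.
        steer[i] += W_ALI * (vel[nbrs].mean(axis=0) - vel[i])
        # Rule 3: cohesion -- drift toward the local center of mass.
        steer[i] += W_COH * (pos[nbrs].mean(axis=0) - pos[i])

    vel += DT * steer
    vel *= SPEED / np.linalg.norm(vel, axis=1, keepdims=True)  # constant speed
    pos += DT * vel

# Polarization: ~1.0 for a coherently moving flock, ~0 for disordered motion.
print("polarization:", np.linalg.norm(vel.mean(axis=0)) / SPEED)
```

With these (illustrative) parameters the polarization typically climbs toward one: three local rules, applied by each bird to its neighbors alone, generate the kind of coherent collective motion seen in murmurations.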

These exercises demonstrate that complex collective behavior of systems can indeed be explained by simple rules of interaction between the elements of the system. Snatching defeat from the jaws of victory, the Atlantic commentary concludes that you would never have been able to predict the latter (i.e. the flocking patterns) from the former (the simple rules). But this was precisely what was done by the computer models cited: the flocking patterns were predicted by the simple rules! Perhaps what is implied is that the outcome of the model is not obvious in a subjective sense, i.e. we may not be able to do the math in our heads to connect the dots between the interaction rules and the collective behaviors (though even this can be disputed: one can indeed build the relevant intuition using appropriate theoretical, paper-and-pencil calculations of a pre-computer-age, nineteenth-century variety). However, that is a statement about our subjective feelings about the topic and has no bearing on the in-principle question of whether simple interaction rules lead to complex macroscopic behaviors. We now understand that they can. Working out the connections between the microscopic details and macroscopic behaviors may be practically challenging, but much theoretical progress has been made on this topic, and no in-principle explanatory gap exists between the microscopic and the macroscopic.

Leaving aside the canard of emergence, Krakauer et al. have hit upon a central issue that bears amplification. The problem with the mechanistic-reductionist explanation of nervous systems is not that there is an in-principle gap between microscopic neuronal details and macroscopic behaviors (emergence), but that this style of explanation is largely divorced from Darwin's theory of natural selection. This is particularly evident in the lack of niche-adaptive behaviors driving experimental design, as pointed out by Krakauer et al. As is customary in the neuroscience literature, in contrasting the how (mechanistic) style of explanation with the why (adaptive) style of explanation of behavior, Krakauer et al. invoke David Marr's computational level of analysis and Tinbergen's ultimate causes. Marr defines three levels of analysis: computation (the problem to be solved), algorithm (the rules) and implementation (the physical substrate). Tinbergen's analysis of behavior is separated into proximate or mechanistic explanations and ultimate or adaptive explanations. However, one might as well go directly to Darwin, since the context is broader than that of computational explanations or ethology, and originates in a fundamental tension between the biological and physical sciences.
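To briefly substantiate the paper-and-pencil claim above, consider the closely related Vicsek model, my choice of illustration (it is cited by neither Krakauer et al. nor the commentary). Each bird adopts the mean heading of its neighbors within a radius r, plus noise, and a single macroscopic number measures the degree of collective order:

```latex
% Vicsek-style update: average the headings of neighbors within radius r,
% then perturb by a noise term \eta_i and step forward at constant speed v_0.
\theta_i(t+\Delta t) = \big\langle \theta_j(t) \big\rangle_{|x_j - x_i| < r} + \eta_i(t),
\qquad
x_i(t+\Delta t) = x_i(t) + v_0 \big(\cos\theta_i(t),\, \sin\theta_i(t)\big)\,\Delta t

% Global polarization: \phi \approx 1 for a coherent flock,
% \phi \approx 0 for disordered motion.
\phi = \frac{1}{N v_0}\,\Bigl|\,\sum_{i=1}^{N} v_i\,\Bigr|
```

Mean-field treatments of precisely this update predict an order-disorder transition as the noise amplitude decreases, a calculation in the style of nineteenth-century statistical physics that connects microscopic rules to macroscopic order without any computer at all.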

Questions regarding function (in the English dictionary sense of purpose) belong exclusively to the biological domain. The exorcism of teleological considerations was central to modern physics; an explanation such as "the purpose of the sun is to give light" has no place in a physics textbook. Yet a statement with the same epistemological status, that the function of hemoglobin is to transport oxygen, would be completely uncontroversial in a biology textbook. This cognitive dissonance between the status of teleological explanations in the two sciences has historical roots. Aristotle's biological teleology stood in contrast with Democritus' physics-style atomism. The teleology-atomism contrast in understanding nature is not special to classical Greek philosophy and occurs, for example, in classical Hindu philosophy. The role of function in the dictionary sense of purpose continues to be debated in the contemporary philosophy of biology. The working neuroscientist may regard these philosophical discussions as a waste of time (or worse, as crypto-vitalism). However, as the recent controversy over defining DNA function in the ENCODE project shows, lack of agreement about function has practical consequences for the scientific community.

A more satisfactory treatment of function could dispel much of the theoretical confusion in understanding brain complexity. Coherent conceptual accounts already exist. Card-carrying biologists like Ernst Mayr have distinguished between cosmic teleology, corresponding to an inherent purposefulness of Nature that has no place in modern science, and teleonomy, or apparent purposefulness instantiated in genetic programs evolved through natural selection. Animal behavior within the lifetime of an individual is highly purposeful, executing programmed behaviors adapted to ecological niches. The program of instructions, the genetic code itself, of course changes over evolutionary time scales. Developmental and adult plasticity of the nervous system does not fundamentally negate the existence of species-specific adaptive behaviors; indeed, plasticity itself is an evolved, species-specific mechanism (as is illustrated by the convergent evolution of vocal learning in multiple taxa, including humans and songbirds).

Fragments of a theory of design that deals squarely with teleonomic issues exist, including the ethological considerations and computationalist accounts referred to in Krakauer's article. However, without a more robust, mathematically sound and conceptually coherent theoretical enterprise, one with better explanatory power that provides guidance to experimental design, we are likely to be staring for a long time at the intricate patterns of neurobiological wallpaper without uncovering the underlying simplicities.

What is the way forward? Fedorov discovered the mathematics of wallpaper groups governing the patterns of the Alhambra by studying crystals rather than by visiting the palace. It is possible that the underlying mathematical principles governing apparently purposeful biological systems have their own intrinsic logic and may be discovered independently in a different domain. This is indeed the hope of researchers in the field of modern Machine Learning, who aim to discover the abstract principles of intelligence in a technological context largely removed from neuroscience. Human engineers, in trying to solve problems that often resemble those that animal nervous systems encountered in their adaptive niches, have come up with mathematically principled theoretical frameworks. These engineering theories classically include the three Cs (Communications, Computation and Control), to which one should add Statistics or Machine Learning. These theories are taught in different departments in universities, but the modern context of interconnected systems and distributed networks has brought the disciplines together into a mix that is ripe for connecting to neuroscience.

Among engineering metaphors in neuroscience, the computer has dominated, as can be seen from the discussions in Krakauer's article. This may be a mistake: while no doubt the most popular textbook metaphor for brains, the theory of computation, as substantiated by Turing's model or by von Neumann's computer architecture separating processors from memory, has been singularly unsuccessful in providing biological insight into brain function or experimental guidance to the practicing neuroscientist. This may also provide a simple explanation for the negative results obtained in the recent study by Jonas and Kording, in which standard analysis methods used by neuroscientists were unsuccessful in shedding light on the architecture of a computer programmed to play a video game.

This study has led to much self-flagellation, but the neuroscience data analysis methods have actually been quite successful in exploring a different engineering metaphor for nervous systems, namely signal and image processing, usually studied in the context of communications or control. Paradigmatic of this success is our understanding of the primate visual system, understanding that has now borne fruit in a multi-billion-dollar Machine Vision industry (a minimal sketch of this style of analysis appears below). If the neuroscience data analysis methods fail to make sense of a von Neumann computer architecture, with its separation of processor from memory and its use of precise elements, it's not such a big deal, since no one expects the brain to conform to that architecture anyway. It is telling that the modern advances in Machine Learning have come from an abandonment of the digital, rule-based AI methods of traditional computer science, and an adoption of the analog, linear-algebra and probability-theory based methods more in the domain of statisticians, physicists and control theorists. Calling for interdisciplinary research is a cliché, but the theoretical framework we need for neuroscience is unlikely to be based in an existing academic department.
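As one illustration of the signal-processing style, here is a minimal sketch of spike-triggered averaging, a classic technique for estimating a visual neuron's linear receptive field. The simulated "neuron" (a Gabor-like filter, an exponential nonlinearity and Poisson spiking) is an illustrative assumption of mine, not a model from the articles under discussion.

```python
# Spike-triggered averaging (STA): estimate a linear receptive field
# from white-noise stimuli and the spikes they evoke.
import numpy as np

rng = np.random.default_rng(1)
T, D = 50_000, 16                   # stimulus frames, pixels per frame

# Ground-truth receptive field: a 1-D Gabor-like profile (an assumption).
x = np.linspace(-2, 2, D)
rf = np.exp(-x**2) * np.cos(3 * x)

stimulus = rng.normal(size=(T, D))  # Gaussian white-noise frames
drive = stimulus @ rf               # linear filter output per frame
rate = np.exp(0.5 * drive - 1.0)    # exponential nonlinearity -> firing rate
spikes = rng.poisson(rate)          # Poisson spike counts per frame

# STA: average the stimulus frames, weighted by the spikes they evoked.
sta = (spikes @ stimulus) / spikes.sum()

# For white-noise stimuli the STA is proportional to the linear filter.
corr = np.corrcoef(sta, rf)[0, 1]
print(f"correlation between STA and true receptive field: {corr:.3f}")
```

For Gaussian white-noise stimuli the spike-triggered average recovers the underlying linear filter up to scale, the kind of guarantee that underwrote the successes of this style of analysis in early vision.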

Modern neuroscience needs pluralism not only in the epistemological levels of analysis, as Krakauer et al. call for, but also in the diversity of species studied. The biological counterpart to engineering theorizing is the comparative method, which looks at a broad range of species across taxa to find cross-cutting principles. The comparative method has been in decline for decades, under pressure from the expansion of studies of a few model organisms, particularly those suited for translational medical research. The tool-building drive has forced this decline further: we now study the visual system of the mouse not because vision is a primary niche adaptation for this species (choosing the organism best suited to the question is an ethological dictum known as Krogh's principle), but simply because elaborate genetic tools are available.

We cannot brute-force our way through the complexities of nervous systems. There is no doubt that we need better tools, but the best tool that we have for the problem perhaps resides in our own crania. If there are no deep theoretical principles to be found in the study of animal nervous systems, then we are doomed to cataloguing the great variety of detail that is characteristic of biology, and tools will dominate. The hope is that underlying the endless and beautiful forms produced by the struggle for existence are mathematically quantifiable simplicities, "fearful symmetries" as it were. Then ideas will win the day.
