“So, what did you do in your thesis?” people ask me. Many things, this and that. But if you look hard (and benevolently) enough, an overarching story does in fact emerge - or maybe two. In this post, I have tried to summarize this story in ~1000 words with as little jargon as I can.

Every waking moment, countless nerve impulses reach the brain from diverse sensors spread throughout the entire body. How the brain manages to integrate all these particular bits of information into a coherent whole - and moreover in such a fast, robust and energy-efficient way - has fascinated scientists for a long time. But in recent years, this question has also become increasingly relevant for computer science, since the same principles that underlie information processing in the brain might hold the key to a novel form of highly efficient computer chips, which the industry is desperately waiting for. In my dissertation, I therefore investigated how individual neurons in the brain process and transmit information, hoping to learn something for the development of biologically inspired computers along the way. Two properties of the brain are of particular interest to me: adaptivity and parallelization.

Adaptivity means, on the one hand, that the brain is remarkably good at adjusting to changing circumstances. For example, we quickly get used to the volume of background noise at a loud party or inside a silent room, where we could literally hear a pin drop. The mechanism behind this kind of adaptivity is so-called homeostasis. In abstract terms, this refers to active self-regulation that stabilizes vital parameters of the system or organism within acceptable bounds by counteracting any changes. Concretely, an auditory nerve cell could compensate for a change in volume by becoming correspondingly more or less sensitive.
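To make this a bit more concrete, here is a minimal sketch in Python of how such homeostatic self-regulation could look - my own toy illustration, not the actual model from the thesis; the function name, the target activity and the adaptation speed `tau` are all made up for this example. The neuron scales its sensitivity up or down so that its average response drifts back toward a fixed target, no matter how loud the surroundings are.

```python
import numpy as np

def homeostatic_gain_control(inputs, target_rate=1.0, tau=0.01):
    """Adjust a neuron's gain so its average output stays near target_rate.

    inputs      : array of input intensities over time (e.g. sound volume)
    target_rate : desired average output activity
    tau         : adaptation speed (small = slow, gentle adaptation)
    """
    gain = 1.0
    outputs = []
    for x in inputs:
        y = gain * x                      # the neuron's response
        gain += tau * (target_rate - y)   # sensitize if too quiet, desensitize if too loud
        outputs.append(y)
    return np.array(outputs)

# A sudden jump in background volume: the response first jumps, then adapts back.
quiet, loud = np.full(500, 0.5), np.full(500, 5.0)
response = homeostatic_gain_control(np.concatenate([quiet, loud]))
```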

On the other hand, adaptivity also means that the brain itself can grow and keep learning over the entire lifespan, which necessarily leads to structural changes. Perhaps the most important mechanism for this is so-called synaptic plasticity, which forms connections between neurons that show similar behavior and dissolves existing connections where the behavior is too different. Relevant structures and patterns in signals can thus be discovered and consolidated through synaptic connections. The scientist who first described this phenomenon, Donald Hebb (or rather his intellectual descendants), put it succinctly: “What fires together, wires together.” But this process can easily get out of hand, because if precisely those connections between neurons that already behave similarly are amplified further, then they will assimilate even more in a positive feedback loop.
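As a toy illustration of why this positive feedback loop is dangerous - a sketch of the general principle, not the specific model analyzed in the thesis, with made-up numbers throughout: if a connection grows whenever the two neurons it links are active together, and a stronger connection in turn makes the downstream neuron more active, the weight grows without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.1                          # initial synaptic weight
eta = 0.01                       # learning rate
weights = []

for _ in range(1000):
    pre = rng.normal(1.0, 0.1)   # presynaptic activity
    post = w * pre               # postsynaptic activity depends on the weight
    w += eta * pre * post        # Hebbian rule: "what fires together, wires together"
    weights.append(w)

# The weight grows roughly exponentially: the positive feedback loop in action.
print(weights[::200])
```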

This raises the question of how these two important, but fundamentally opposed, processes interact with each other: the stabilizing homeostatic plasticity on the one hand, and the destabilizing synaptic plasticity on the other. To describe this interaction mathematically, I developed a model that incorporates both forms of plasticity. By analyzing and simulating this model, I was able to show that both mechanisms can, in fact, complement each other in an optimal way: while synaptic plasticity discovers and refines relevant structures in the incoming signals, it is restrained by homeostatic self-regulation. The result is a stable learning mechanism that is robust to sudden changes in the environment.
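A minimal sketch of how such a combination can behave, assuming a single synapse, a Hebbian growth term and a multiplicative homeostatic scaling toward a target activity - the numbers and the specific form of the rules are illustrative, not the equations from the thesis. The Hebbian term alone would let the weight explode as in the previous snippet; the homeostatic term pulls the neuron's activity back toward its target, even when the input strength suddenly changes.

```python
import numpy as np

rng = np.random.default_rng(1)
w, eta = 0.1, 0.002             # synaptic weight and Hebbian learning rate
tau, target = 0.05, 1.0         # homeostatic speed and target activity
avg_post = 0.0

for step in range(4000):
    strength = 1.0 if step < 2000 else 0.3     # sudden change in the "environment"
    pre = rng.normal(strength, 0.1)
    post = w * pre
    avg_post += tau * (post - avg_post)        # slow estimate of recent activity
    w += eta * pre * post                      # Hebbian growth (destabilizing)
    w *= 1.0 + tau * (target - avg_post)       # homeostatic scaling (stabilizing);
                                               # without this line, w blows up as above
    if step % 500 == 0:
        print(f"step {step:4d}  weight {w:5.2f}  activity {avg_post:4.2f}")
```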

However, nerve cells don’t operate alone; in fact, countless neurons need to work together to process a sensory stimulus. Unlike a classical computer, the brain also doesn’t have a central processor and storage; instead, both memory and computation are distributed throughout. The benefit of this parallel processing is obvious, namely improved speed, which is why modern computers also increasingly employ parallel processors. But parallelization requires that each neuron has all the information it needs available at exactly the right point in time, which is difficult to guarantee in the absence of centralized control. To work truly in parallel, each individual neuron must therefore be able to somehow integrate and retain relevant information in memory until it is ultimately needed. My colleague Pascal Nieters and I investigated two different concepts of how this might work without central storage and a central clock.

One option would be for nature to utilize an effect that might at first glance appear to be a weakness: information transmission is not instantaneous, so a nerve impulse takes correspondingly longer to travel a longer distance. A neuron could hence “store” information for a brief time interval by sending a nerve impulse on a “round trip”, from which it only returns after some delay. If the same signal is transmitted not just by one, but by many such paths with different delays, then old and new bits of information meet at the target neuron and mix in complex ways. We identified the conditions that might allow a neuron to use this phenomenon to detect temporal patterns.
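A toy version of this idea, with made-up delays, a made-up threshold and a made-up function name: the same spike train reaches a read-out neuron via three paths with different delays, and only an input pattern whose timing mirrors the delay differences makes all the copies arrive at the same moment and pushes the read-out over its threshold.

```python
import numpy as np

delays = [0, 10, 25]   # transmission delays (ms) of three paths to the read-out neuron
threshold = 3          # read-out fires only if enough delayed copies coincide

def readout_response(spike_times, horizon=100):
    """Count, for every millisecond, how many delayed copies of the input arrive at once."""
    arrivals = np.zeros(horizon, dtype=int)
    for t in spike_times:
        for d in delays:
            if t + d < horizon:
                arrivals[t + d] += 1
    return np.flatnonzero(arrivals >= threshold)   # times at which the read-out fires

# A pattern whose intervals mirror the delay differences: all copies meet at t = 25 ms.
matching = [0, 15, 25]
other    = [0, 5, 25]         # same spikes, slightly different timing -> no coincidence

print(readout_response(matching))   # [25]
print(readout_response(other))      # []
```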

Unfortunately, this process operates on a very short timescale of a few tens of milliseconds, which is much too short for many tasks such as speech recognition. Furthermore, not everyone speaks at the same pace, so the ability to detect only rigid temporal patterns would be insufficient for this task in any case. Instead, the ability to detect specific sequences of signals independently of their precise timing seems to be crucial. For example, to understand the word “Panama”, it is irrelevant precisely how much time passes between the “Pa”, “na” and “ma”, provided that the order of the three syllables is correct and they follow in reasonably quick succession. The same problem, namely the need to detect temporally ordered sequences with some degree of variability, appears in various forms in neuroscience, from olfaction to spatial localization.

Since we believe this function to be absolutely fundamental, we looked for a biological mechanism that might solve this problem within the single neuron. We focused on recent measurements from neurobiology that show how individual parts of a complex, branched neuron can suddenly become electrically active. This excited state lasts for a rather long period of time (often more than 100 milliseconds), after which it switches itself off again. Because this effect often remains locally confined, we expected it to act as a form of distributed internal memory, which allows individual parts of a neuron to store a single bit of information for some period of time. Based on this insight, we developed a model that shows how a neuron might use this internally stored information to detect long and even highly variable sequences of signals. If true, this would imply that the individual neuron is substantially more capable than previously assumed - in other words, there’s a computer within each neuron! It remains to be seen how well this hypothesis ultimately holds up, but it already guides our search for novel electronic circuits for machine learning and has led to a patent filing in this context.
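A much-simplified sketch of this idea (the syllable events, the 100 ms plateau duration and the chaining rule are illustrative assumptions, not the full model): each dendritic branch stores one bit by staying active for a while after “its” syllable arrives, and a branch can only switch on while its predecessor is still active. The neuron as a whole therefore fires exactly when the syllables arrive in the right order - with plenty of slack in their exact timing.

```python
# Sequence detection with local "plateau" memories: each branch stores one bit
# for up to `plateau` milliseconds, and a branch can only switch on while its
# predecessor is still active. The neuron fires when the final branch turns on.

PLATEAU = 100  # how long a branch stays active (ms); an assumed round number

def detect_sequence(events, sequence=("Pa", "na", "ma"), plateau=PLATEAU):
    """events: list of (time_ms, syllable). Returns the time of detection or None."""
    expires = [None] * len(sequence)          # when each branch's plateau ends
    for t, syllable in sorted(events):
        for i, target in enumerate(sequence):
            if syllable != target:
                continue
            predecessor_active = (i == 0) or (expires[i - 1] is not None and t <= expires[i - 1])
            if predecessor_active:
                expires[i] = t + plateau      # switch this branch on (start its plateau)
                if i == len(sequence) - 1:
                    return t                  # last branch on: the whole word was heard
    return None

print(detect_sequence([(0, "Pa"), (60, "na"), (140, "ma")]))   # 140: variable timing is fine
print(detect_sequence([(0, "ma"), (60, "na"), (140, "Pa")]))   # wrong order -> None
print(detect_sequence([(0, "Pa"), (250, "na"), (300, "ma")]))  # too slow, plateau expired -> None
```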

All of these were just tiny steps in a wide field that is likely to grow substantially in the next few years. The more we integrate artificial intelligence and machine learning into our daily lives, the more important the search for robust and resource-efficient solutions will become. And what better role model could we choose than nature itself?