The brain is, at least to me, an enigma wrapped in a mystery. People far smarter than me have spent entire careers trying to figure out how it works.
We know in detail how individual neurons function. Neurotransmitters, synaptic firing, excitation, and inhibition are all textbook knowledge. In fact, we have abstracted these ideas into black-box algorithms that complete real-world tasks for us (and, occasionally, ruin people's lives).
We also understand the brain at a higher, more structural level: we know which bits of the brain are involved in processing different tasks. The visual system, for example, has been mapped in exquisite detail. Yet the intermediate level between these two remains disappointingly unclear. We know that a given set of neurons may be involved in identifying vertical lines in our field of view, but we don't really understand how that recognition actually works.
Memory is solid
Similarly, we know that the brain can store memories; we can even create and delete memories in a mouse. But the details of how a memory is encoded remain unclear. The central assumption is that a memory is something that persists over time: memories vary a little with each recall, but they stay relatively constant. That means something permanent in the brain must store them. Yet the brain is incredibly dynamic, and very little about it stays constant.
That's where the latest research comes in: it proposes an abstract constant that could retain memories.
So what constant did the researchers find? Consider a group of six neurons connected to each other through synapses. The firing of each individual neuron is essentially unpredictable, and so is its effect on its neighbors' activity. So no single synapse or neuron encodes the memory.
But hidden in all this unpredictability is enough predictability that we can model a neural network with a relatively simple set of equations. These equations reproduce the firing statistics of the network very well (if they didn't, artificial neural networks would be unlikely to work).
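To give a feel for what "a relatively simple set of equations" means here, below is a minimal sketch of a generic firing-rate network, my own illustration rather than the paper's actual model: each neuron's rate relaxes toward a weighted, saturating sum of the others' rates. The size (6 neurons), time constant, and weight scaling are all assumptions chosen for the example.

```python
import numpy as np

# A minimal rate-model sketch (illustrative, not the paper's model):
# tau * dr/dt = -r + W @ phi(r), integrated with forward Euler.
rng = np.random.default_rng(0)
N = 6                        # a small group of neurons, as in the example
tau, dt = 10.0, 0.1          # time constant and step size (arbitrary units)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random synaptic weights
phi = np.tanh                # saturating firing-rate nonlinearity

r = rng.normal(0.0, 0.1, N)  # initial firing rates
for _ in range(5000):
    r = r + (dt / tau) * (-r + W @ phi(r))

print("final rates:", np.round(r, 3))
```

Individual trajectories depend on the random weights, but the statistics of the activity are reproducible, which is the sense in which such equations "model" a network whose single-neuron behavior is unpredictable.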
A critical part of those equations is the weights: the influence each synaptic input has on a particular neuron. Each weight varies over time, but it can be strengthened or weakened by learning and recall. To study this, the researchers examined the dynamic behavior of a network by focusing on so-called fixed points (or set points).
Technically, you need to understand complex numbers to understand fixed points, but I'll take a shortcut. The world of dynamics divides into things that are stable (like planets orbiting the Sun), things that are unstable (like a rock balanced on a point), and things that are wildly unpredictable.
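The stable/unstable distinction can be made concrete with a standard textbook check, shown in this hedged sketch (my own illustration, not taken from the paper): linearize the dynamics around a fixed point and look at the eigenvalues of the resulting matrix. This is also where the complex numbers sneak in: the real parts of the eigenvalues say whether perturbations decay or grow, and the imaginary parts describe oscillation.

```python
import numpy as np

def classify_fixed_point(J):
    """Classify a fixed point from the Jacobian J of the linearized dynamics.

    All eigenvalues with negative real parts -> perturbations die out (stable).
    Any eigenvalue with a positive real part -> perturbations grow (unstable).
    Nonzero imaginary parts mean the state spirals/oscillates as it goes.
    """
    eig = np.linalg.eigvals(J)
    return "stable" if np.all(eig.real < 0) else "unstable"

# A planet-like damped spiral: settles back toward its fixed point.
print(classify_fixed_point(np.array([[-0.1, 1.0], [-1.0, -0.1]])))  # stable
# A rock balanced on a point: any nudge grows.
print(classify_fixed_point(np.array([[0.5, 0.0], [0.0, -1.0]])))    # unstable
```

The names and example matrices here are mine; the point is just that "stable" and "unstable" have a precise mathematical meaning that the article's planet and balanced-rock images are standing in for.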
Memory is plastic
A neural network is a strange combination of stable and unpredictable. Neurons have firing rates and patterns that stay within certain limits, but you can never know exactly when a particular neuron will fire. The researchers showed that the feature that keeps the network stable does not store information for long. The feature that drives its unpredictability, however, does store information, and it looks like it can do so indefinitely.
The researchers demonstrated this by exposing their model to an input stimulus, which they found changed the network's fluctuations. Moreover, the longer the model was exposed to the stimulus, the greater its influence.
The individual firing patterns were still unpredictable, and the memory of the stimulus could not be seen in any single neuron or its firing behavior. Yet it was there, hidden in the global behavior of the network.
Further analysis showed that, in terms of dynamics, there is a big difference between this way of encoding memory and previous models. In previous models, a memory is a fixed point that corresponds to a specific pattern of neural firing. In this model, a memory is a shape. It can be a two-dimensional shape in a plane, as the researchers found in their model. But the dimensionality of the shape can be much higher, allowing many complex memories to be encoded.
In the 2D case, the firing behavior of the neurons follows a limit cycle, meaning the network cycles continuously through a sequence of states that eventually repeats, although this only becomes apparent during recall.
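A limit cycle is easy to show with a toy system. The sketch below uses the textbook Hopf normal form, my own stand-in rather than the paper's network: every starting point (except the exact center) gets pulled onto a closed loop, and the state then travels around that same loop of states forever; it keeps moving, yet it repeats.

```python
import math

# Hopf normal form in polar coordinates (a classic limit-cycle example):
#   dr/dt = r * (1 - r^2)   -> the radius is attracted to r = 1
#   dtheta/dt = 1           -> the state circulates at constant speed
# The circle r = 1 is the limit cycle: a repeating loop of states.
dt = 0.01
r, theta = 0.1, 0.0          # start well inside the loop
for _ in range(5000):
    r += dt * r * (1 - r**2)
    theta += dt

# After the transient, the radius has settled onto the cycle.
print(f"radius after transient: {r:.3f}")
```

The "memory" analogy is that what persists is not any single state of the system but the shape of the loop itself, which is only traced out, and hence only visible, while the dynamics are running.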
Another interesting aspect of the model is that recall affects the memory: recalling a memory via a stimulus weakens it in some cases and strengthens it in others.
Where to from here?
The researchers go on to suggest how evidence for their model might be found in biological systems. It should be possible to find these invariant shapes in neural connectivity, although I imagine that is not an easy search. A simpler test: during learning, there should be an asymmetry in the strength of the connections between pairs of neurons, and that asymmetry should change between learning and rest.
So, yes, in principle the model is testable. But those tests look very difficult, and we may be waiting a long time for results one way or the other.
Nature Communications, 2019. DOI: 10.1038/s41467-019-12306-2 (About DOIs).