🧠 Backpropagation: The Algorithm Shared by Brain and AI
Backpropagation is the foundational algorithm behind every modern deep learning system, from ChatGPT to self-driving cars. Its principle is simple: when a neural network makes an error, it sends error signals "backwards", from output to input, adjusting the weight of each connection to improve the next prediction.
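The forward-then-backward flow described above can be sketched in a few lines of NumPy. This is a minimal illustrative example (a tiny made-up network and target, not any model from the studies discussed), showing an error signal computed at the output and propagated back to update the weights:

```python
import numpy as np

# Minimal backpropagation sketch: a 2-layer network learns to map a
# fixed input to a fixed target by gradient descent on squared error.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2)) * 0.5   # input -> hidden weights
W2 = rng.normal(size=(1, 3)) * 0.5   # hidden -> output weights

x = np.array([[0.5], [-0.2]])        # input (arbitrary example values)
target = np.array([[0.3]])           # desired output

for step in range(200):
    # Forward pass: signals flow input -> output.
    h = np.tanh(W1 @ x)              # hidden activations
    y = W2 @ h                       # network prediction
    error = y - target               # error measured at the output

    # Backward pass: the error signal travels output -> input,
    # telling each weight how to change to reduce the error.
    grad_W2 = error @ h.T
    grad_h = W2.T @ error * (1 - h**2)   # tanh derivative
    grad_W1 = grad_h @ x.T

    W1 -= 0.1 * grad_W1              # adjust weights against the gradient
    W2 -= 0.1 * grad_W2

print(float(error[0, 0]))            # error shrinks toward 0 over training
```

Each repetition of this forward/backward cycle is one "prediction, error, adjustment" step, which is exactly the loop the article describes.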
For decades, neuroscientists considered it impossible for something like this to occur in the biological brain. Neurons send signals in only one direction, so how could error signals travel "backwards"? The answer came from a series of studies revealing that the brain uses analogous, though not identical, mechanisms.
Measurements in mouse neurons showed that error signals do indeed travel in reverse through neural circuits. The brain does not use exactly the same algorithm as artificial networks, but the result is remarkably similar: synapses are adjusted based on error signals, in a way that gradually optimizes system performance.
🔬 Dendrites: The Brain's Hidden Computers
One of the most groundbreaking discoveries came in 2022 from Bar-Ilan University in Israel. Professor Ido Kanter's team published findings in Scientific Reports that shake a 70-year-old assumption: that learning in the brain occurs exclusively through modification of synaptic strength (the connections between neurons).
Instead, the research showed that learning occurs primarily in dendritic trees, the long, branching "arms" of each neuron. The trunk and branches of the tree modify their strength, not just the "leaves" (the synapses). This means that a single neuron can implement deep learning algorithms that previously required thousands of artificial neurons.
💡 Why This Is So Important
For 70 years, neuroscience was based on the assumption that learning happens at synapses, the connections between neurons. Now it appears that each neuron is far more complex than we believed: it functions like an entire neural network on its own. This represents "a step toward the biologically plausible implementation of backpropagation algorithms," according to the researchers.
This discovery highlights a deep irony: artificial neural networks were designed as abstract simplifications of the brain, yet the brain ultimately uses mechanisms closer to AI algorithms than their creators ever realized.
📖 Read more: Photons Mimic Brain Function: Quantum Neuromorphic Memory
🔮 Predictive Coding: The Brain as a Prediction Machine
A second striking similarity concerns how the brain processes sensory information. According to the theory of predictive coding, the brain does not passively wait to receive information from the eyes or ears. Instead, it continuously generates predictions about incoming signals and processes only the prediction errors: the difference between what it expected and what it actually received.
This is remarkably similar to how modern AI language models work. Models like GPT are trained in exactly this way: they predict the next word and learn from their mistakes. The brain does something analogous across every sensory function, from vision to hearing, continuously minimizing "prediction error."
Analyses of brain activity during speech comprehension revealed that neuron activation patterns strikingly resemble the internal representations of large language models. The brain processes language in hierarchical layers (from sounds, to words, to meaning), much like multilayered neural networks.
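The core predictive-coding loop, updating an internal estimate only from prediction error, can be sketched in a toy example. Everything here (the signal value, the noise level, the learning rate) is made up for illustration; it is not a model from the research described:

```python
import numpy as np

# Toy predictive-coding sketch: an internal estimate is revised using only
# the prediction error, the gap between expectation and actual input.
rng = np.random.default_rng(1)
true_signal = 2.0        # the real sensory cause out in the world
estimate = 0.0           # the system's current prediction
learning_rate = 0.1

for _ in range(300):
    observation = true_signal + rng.normal(scale=0.1)  # noisy sensory input
    prediction_error = observation - estimate          # only this is processed
    estimate += learning_rate * prediction_error       # minimize the error

print(round(estimate, 2))  # converges near the true signal, 2.0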
📖 Read more: Plants That Touch Each Other Are More Resilient to Stress
📐 Convexity: The Geometry That Unites Mind and Machine
A 2025 study from the Technical University of Denmark (DTU), published in Nature Communications, added another piece to the puzzle. Researchers discovered that convexity, a geometric property, is a common feature of both the human mind and artificial neural networks.
What does this mean in practice? When we learn what a "cat" is, we don't store a single image but build a flexible, convex region in our mental space that encompasses every possible cat: large, small, black, white. Lenka Tetkova's team showed that artificial neural networks develop the same structure: AI models trained on images, text, audio, and medical data form convex concept regions in much the same way as the human mind.
"We discovered that the same geometric principle that helps humans form and share concepts โ convexity โ also shapes the way machines learn, generalize, and align with us."
If convexity proves to be a reliable performance indicator, it could lead to AI models that learn faster and more efficiently โ especially in cases where available training data is scarce.
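The defining property of a convex concept region is easy to state: any point on the straight line between two members of the concept is also a member. A minimal sketch, with entirely made-up "embeddings" and a simple ball-shaped region standing in for a learned one:

```python
import numpy as np

# Convexity sketch: if the "cat" region is convex, every point between
# two cat representations should also fall inside the region.
cat_a = np.array([1.0, 2.0, 0.5])       # embedding of one cat (made up)
cat_b = np.array([1.4, 1.6, 0.9])       # embedding of another cat (made up)
cat_center = np.array([1.2, 1.8, 0.7])  # prototype of the "cat" region
radius = 1.0                            # region modeled as a ball (balls are convex)

def in_cat_region(x):
    return np.linalg.norm(x - cat_center) <= radius

# Walk the straight line between the two cats; convexity predicts
# every intermediate point is still classified as a cat.
for t in np.linspace(0.0, 1.0, 11):
    point = (1 - t) * cat_a + t * cat_b
    assert in_cat_region(point)
print("all points between the two cats stay in the cat region")
```

The DTU study measured this kind of "between-ness" in real network representations; the ball here is just the simplest convex region to demonstrate the idea.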
📖 Read more: Robots Share Resources Like an Insect Colony
⚡ Super-Turing AI: When Technology Learns from Biology
These discoveries are not just theoretical. In March 2025, engineers at Texas A&M published in Science Advances the creation of a "Super-Turing AI" that reproduces fundamental characteristics of the biological brain. Unlike current AI systems, where training and memory are separated across different hardware components, this system integrates them, just as the brain does.
"AI data centers consume energy in gigawatts, while our brain consumes just 20 watts," explained Dr. Suin Yi of Texas A&M. His team leveraged principles of Hebbian learning โ โcells that fire together, wire togetherโ โ instead of traditional backpropagation. In testing, a drone equipped with this system navigated a complex environment without prior training, learning and adapting in real time โ faster and with less energy than conventional AI.
💡 Why This Changes Everything
These findings are not just about fundamental science. They have practical implications in two directions. First, in neuroscience: understanding that the brain uses mechanisms analogous to deep learning could help treat neurological disorders, from Alzheimer's disease to schizophrenia, as these conditions likely disrupt precisely these learning mechanisms.
Second, in artificial intelligence: if we can replicate the brain's efficiency in artificial systems, we could create AI that consumes a fraction of today's energy. This is critical at a time when AI data centers pose an increasingly significant environmental challenge.
The biggest surprise, perhaps, is philosophical. For decades, the question was whether AI can "think like us." Now it seems that, to some degree, it always did, because it was inspired by the very mechanisms the brain actually uses. The convergence between biological and artificial intelligence is no coincidence: it's evidence that there are fundamental learning principles that hold regardless of the substrate, whether neurons or transistors.
📚 Sources
- Scientific Reports – Efficient dendritic learning as an alternative to synaptic plasticity (Hodassman, Kanter et al., 2022)
- Nature Communications – On convex decision regions in deep network representations (Tetkova et al., 2025)
- Science Advances – HfZrO-based synaptic resistor circuit for a Super-Turing intelligent system (Lee, Yi et al., 2025)
- ScienceDaily – New brain learning mechanism calls for revision of long-held neuroscience hypothesis (2022)
- ScienceDaily – Artificial intelligence uses less energy by mimicking the human brain (2025)
- Phys.org – A geometric link: Convexity may bridge human and machine intelligence (2025)
