[Image: 3D rendering of blue translucent neurons with flashes of electricity traveling between them]

Artificial Neurons Organize Themselves

A research team constructs a network of self-learning infomorphic neurons

Written by the Max Planck Institute for Dynamics and Self-Organization

Novel artificial neurons learn independently and are more strongly modeled on their biological counterparts. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them. The special feature is that the individual artificial neurons learn in a self-organized way and draw the necessary information from their immediate environment in the network. The results were published in PNAS.

Both the human brain and modern artificial neural networks are extremely powerful.


At the lowest level, the neurons work together as rather simple computing units. An artificial neural network typically consists of several layers composed of individual neurons. An input signal passes through these layers and is processed by the artificial neurons in order to extract relevant information. However, conventional artificial neurons differ significantly from their biological models in the way they learn.


While most artificial neural networks depend on overarching coordination from outside the network in order to learn, biological neurons receive and process signals only from other neurons in their immediate vicinity. Biological neural networks remain far superior to artificial ones in terms of both flexibility and energy efficiency.

The new artificial neurons, known as infomorphic neurons, are capable of learning independently, organizing themselves using only signals from their neighboring neurons. This means that the smallest unit in the network no longer has to be controlled from the outside; instead, it decides for itself which input is relevant and which is not.

In developing the infomorphic neurons, the team was inspired by the way the brain works, especially by the pyramidal cells in the cerebral cortex.

These also process stimuli from different sources in their immediate environment and use them to adapt and learn.

The new artificial neurons pursue very general, easy-to-understand learning goals: "We now directly understand what is happening inside the network and how the individual artificial neurons learn independently," emphasizes Marcel Graetz from CIDBN.

By defining the learning objectives, the researchers enabled the neurons to find their specific learning rules themselves. The team focused on the learning process of each individual neuron. They applied a novel information-theoretic measure to precisely adjust whether a neuron should seek more redundancy with its neighbors, collaborate synergistically, or try to specialize in its own part of the network's information. "By specializing in certain aspects of the input and coordinating with their neighbors, our infomorphic neurons learn how to contribute to the overall task of the network," explains Valentin Neuhaus from MPI-DS. With the infomorphic neurons, the team is not only developing a novel method for machine learning, but is also contributing to a better understanding of learning in the brain.
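To make the idea of purely local learning concrete, the toy sketch below trains a single artificial neuron using only information available at the neuron itself (its own inputs and output), with no global error signal. It uses Oja's Hebbian rule as a simple stand-in; the actual infomorphic neurons optimize information-theoretic goal functions balancing redundancy, synergy, and specialization, which are not reproduced here.

```python
import numpy as np

# A local update rule: the weight change depends only on this neuron's
# own input x and output y, not on any network-wide error signal.
# (Oja's rule is an illustrative stand-in, not the paper's method.)
def oja_step(w, x, lr=0.01):
    y = w @ x
    return w + lr * y * (x - y * w)  # Hebbian growth with local normalization

rng = np.random.default_rng(0)
w = rng.normal(size=3)

# Feed inputs whose first component carries most of the variance.
for _ in range(5000):
    x = np.array([rng.normal(scale=3.0), rng.normal(), rng.normal()])
    w = oja_step(w, x)

# Without outside supervision, the neuron "specializes": its weights
# align with the dominant input direction, and their norm stabilizes.
```

The point of the sketch is the locality: each update uses only quantities the neuron itself can observe, which is the property the infomorphic neurons share, even though their learning objective is information-theoretic rather than Hebbian.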

Note: This news release was originally published by the Max Planck Institute for Dynamics and Self-Organization. As it has been republished, it may deviate from our style guide.

Journal References:

  1. Makkeh A, Graetz M, Schneider AC, Ehrlich DA, Priesemann V, Wibral M. A general framework for interpretable neural learning based on local information-theoretic goal functions. Proceedings of the National Academy of Sciences, 2025; 122 (10). DOI: 10.1073/pnas.2408125122
  2. Schneider AC, Neuhaus V, Ehrlich DA, Makkeh A, Ecker AS, Priesemann V, Wibral M. What should a neuron aim for? Designing local objective functions based on information theory. The Thirteenth International Conference on Learning Representations (ICLR), 2025.