What is it about?

This study shows that artificial neural networks are not just collections of code: during training they behave like physical systems operating near a critical state. Much like the release of energy in an earthquake or the coordinated firing of neurons in a human brain, neural networks display signs of "self-organized criticality" as they learn. When a network learns, its parameters (the connection strengths between nodes) change, and the magnitudes of these changes follow a heavy-tailed, power-law-like distribution: most updates are small, but occasional large ones occur far more often than light-tailed statistics would predict. This behavior emerged consistently regardless of batch size, learning rate, or network depth, suggesting a fundamental principle rather than an artifact of any particular implementation. The authors develop a theoretical framework, grounded in nonequilibrium statistical physics, showing that the pattern arises from balancing two forces: maximum entropy, which promotes random exploration, and a mutual-information constraint, which ensures task relevance. This trade-off naturally pushes the network into a critical regime with power-law signatures, where it is flexible enough to learn yet disciplined enough to succeed.
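
As a rough illustration of the kind of measurement the study describes, the sketch below trains a tiny network with plain SGD, records the magnitude of each parameter update, and applies a crude Hill-style estimate of the tail exponent. Everything here is an assumption made for demonstration: the toy model, the synthetic regression task, and the estimator are not the authors' setup or code.

```python
# Minimal sketch (not the authors' code): log per-step parameter-update
# magnitudes during SGD training, then probe the tail of their distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

update_mags = []
for step in range(3000):
    x = torch.randn(32, 20)             # synthetic batch (illustrative task)
    y = x.sum(dim=1, keepdim=True)      # simple regression target
    before = [p.detach().clone() for p in model.parameters()]
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    # Norm of the full parameter change produced by this single update
    delta = torch.cat([(p.detach() - b).flatten()
                       for p, b in zip(model.parameters(), before)])
    update_mags.append(delta.norm().item())

# Crude Hill estimator over the k largest updates: a heavy (power-law-like)
# tail shows up as a small exponent that stays stable as k varies.
mags = torch.tensor(sorted(update_mags, reverse=True))
k = 200
hill = 1.0 / torch.log(mags[:k] / mags[k]).mean()
print(f"Hill tail-exponent estimate: {hill.item():.2f}")
```

On a realistic workload one would repeat such a measurement across batch sizes, learning rates, and depths, since the study's point is precisely that the heavy tail persists across those choices.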

Why is it important?

By framing neural network learning as a nonequilibrium process governed by a fundamental trade-off between randomness and relevance, this work bridges AI and physics, opening the door to using tools from statistical mechanics to analyze and optimize machine learning systems.
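
To make that trade-off concrete, here is one schematic way to write it as a constrained maximum-entropy problem. This is an illustrative form under assumed notation (p for the update distribution, Δθ for parameter updates, T for the task variable, λ and C for the Lagrange multiplier and information budget), not the paper's exact objective:

```latex
% Schematic only: notation is assumed, not taken from the paper.
\max_{p(\Delta\theta)} \; H\!\left[p(\Delta\theta)\right]
\quad \text{subject to} \quad I(\Delta\theta;\, T) \,\ge\, C,
% or, in Lagrangian form,
\mathcal{L}[p] \;=\; H\!\left[p(\Delta\theta)\right]
  \;+\; \lambda \left( I(\Delta\theta;\, T) - C \right).
```

Here the entropy term rewards broad, exploratory update statistics, while the mutual-information term ties updates to the task; the study's claim is that the balance point of this competition sits in a critical regime with power-law signatures.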

Perspectives

This paper elegantly connects nonequilibrium statistical physics and machine learning by showing that neural networks exhibit self-organized criticality, a phenomenon seen in sandpiles, earthquakes, and forest fires. What’s compelling is the move from descriptive observations to mechanistic explanation grounded in first principles.

Dr. Xin-Ya Zhang
Westlake University

Read the Original

This page is a summary of: Heavy-tailed update distributions arise from information-driven self-organization in nonequilibrium learning, Proceedings of the National Academy of Sciences, December 2025.
DOI: 10.1073/pnas.2523012122.
You can read the full text via the DOI above.
