Lab Manager

Physics Can Assist with Key Challenges in Artificial Intelligence

Physical mechanism reveals, in theory, how many examples deep learning requires to achieve a desired test accuracy

by Bar-Ilan University

Current research and applications in artificial intelligence (AI) face several key challenges. These include: (a) a priori estimation of the dataset size required to achieve a desired test accuracy. For example, how many handwritten digits must a machine learn before it can predict a new one with a success rate of 99 percent? Similarly, how many specific types of situations must an autonomous vehicle learn before its reactions no longer cause accidents? (b) Reliable decision making from a limited number of examples, where each example can be trained only once, i.e., observed only for a short period. Such fast, online decision making is characteristic of many aspects of human activity, robotic control, and network optimization.

In an article published today (Nov. 12) in the journal Scientific Reports, researchers show how these two challenges are solved by adopting a physical concept that was introduced a century ago to describe the formation of a magnet during a process of iron bulk cooling.

Using a careful optimization procedure and exhaustive simulations, a group of scientists from Bar-Ilan University has demonstrated the usefulness of the physical concept of power-law scaling to deep learning. This central concept in physics, which arises from diverse phenomena, including the timing and magnitude of earthquakes, Internet topology and social networks, stock price fluctuations, word frequencies in linguistics, and signal amplitudes in brain activity, has also been found to be applicable in the ever-growing field of AI, and especially deep learning.
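Power-law scaling implies that test error falls as a power of the training-set size, which is a straight line in log-log space. The following sketch, with purely illustrative numbers (not data from the study), shows how such a fit could in principle answer challenge (a): extrapolating how many examples are needed for a target accuracy.

```python
import math

# Hypothetical observations: test error for increasing training-set sizes.
# Power-law scaling assumes error(N) ~ a * N**(-alpha), a straight line
# in log-log coordinates. All numbers are illustrative only.
sizes = [1_000, 2_000, 4_000, 8_000]
errors = [0.080, 0.056, 0.040, 0.028]  # test-error fractions

# Least-squares fit of log(error) = log(a) + slope * log(N), slope = -alpha
xs = [math.log(n) for n in sizes]
ys = [math.log(e) for e in errors]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
alpha = -slope          # fitted power-law exponent
log_a = my - slope * mx  # fitted intercept

# Extrapolate the training-set size needed for 1% test error (99% accuracy)
target_error = 0.01
required_n = math.exp((log_a - math.log(target_error)) / alpha)
print(f"fitted exponent alpha = {alpha:.2f}")
print(f"estimated examples for 1% test error = {required_n:,.0f}")
```

With these made-up numbers the fit yields an exponent near 0.5 and an extrapolated dataset size in the tens of thousands; the actual exponents and scales reported in the paper depend on the task and network architecture.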


Rapid decision making: a deep learning neural network in which each handwritten digit is presented only once during training. (Image credit: Professor Ido Kanter, Bar-Ilan University)

"Test errors with online learning, where each example is trained only once, are in close agreement with state-of-the-art algorithms consisting of a very large number of epochs, where each example is trained many times. This result has an important implication for rapid decision making, such as robotic control," said Professor Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldshmied) Multidisciplinary Brain Research Center, who led the research. "The power-law scaling, governing different dynamical rules and network architectures, enables classification and hierarchy creation among the different examined classification or decision problems," he added.

"One of the important ingredients of the advanced deep learning algorithm is the recent new bridge between experimental neuroscience and advanced artificial intelligence learning algorithms," said PhD student Shira Sardi, a co-author of the study. "Our new type of experiments on neuronal cultures indicates that an increase in the training frequency enables us to significantly accelerate the neuronal adaptation process." "This accelerated brain-inspired mechanism enables building advanced deep learning algorithms which outperform existing ones," said PhD student Yuval Meir, another co-author.

This reconstructed bridge from physics and experimental neuroscience to machine learning is expected to advance artificial intelligence, in particular ultrafast decision making under limited training examples, and to contribute to the formation of a theoretical framework for the field of deep learning.

- This press release was originally published on the Bar-Ilan University website