A study recently published in PLOS Computational Biology is the latest addition to the literature exploring how to make artificial neural networks (ANNs) more efficient by designing them to mimic human neurological patterns. In this new study, the authors found that building ANNs as “spiking networks”, that is, networks whose nodes transmit information in discrete bursts rather than in a continuous stream, helps prevent the ANNs from “forgetting” previously learned data.
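To make the "spurts of data" idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the standard textbook building block of a spiking network. This is an illustration, not the study's actual model; the function name and parameters are invented for the example. Input charges a membrane potential that leaks away over time, and the neuron emits a discrete spike only when the potential crosses a threshold:

```python
# Minimal leaky integrate-and-fire neuron (illustrative, not the paper's model).
# Instead of passing a continuous value downstream at every step, the unit
# emits a 0/1 spike train: information travels in bursts.

def run_lif(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)   # fire a spike...
            potential = 0.0    # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(run_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

A steady sub-threshold input thus produces periodic spikes rather than a constant output, which is the sense in which spiking networks communicate in spurts.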
A problem with many ANNs is that they “catastrophically forget”: when trained on data sequentially, new learning overwrites old. The research team from the University of California San Diego, led by neuroscience graduate student Ryan Golden and sleep researcher Maxim Bazhenov, PhD, along with colleagues, built an ANN that mimics how the human brain prunes learned information and consolidates memories during sleep. This was accomplished with spiking networks. According to the study, “Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting.”
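A toy example can show both halves of that sentence: sequential training overwriting an earlier task, and interleaved “replay” of the old task preserving it. This sketch is only loosely analogous to the study's sleep phases, and everything in it (the two-weight model, the task definitions) is invented for illustration. Each “task” is a linear relation the weights must satisfy:

```python
# Toy demonstration of catastrophic forgetting (illustrative, not the
# authors' spiking model). A two-weight linear unit learns task A, then
# either trains only on task B (sequential) or alternates task B steps
# with replayed task A steps (interleaved).

def sgd_step(w, a, b, lr=0.1):
    """One gradient-descent step on the squared error (a[0]*w0 + a[1]*w1 - b)**2."""
    err = a[0] * w[0] + a[1] * w[1] - b
    return [w[0] - 2 * lr * err * a[0], w[1] - 2 * lr * err * a[1]]

def loss(w, a, b):
    return (a[0] * w[0] + a[1] * w[1] - b) ** 2

TASK_A = ([1.0, 2.0], 3.0)   # hypothetical task A: w0 + 2*w1 = 3
TASK_B = ([1.0, -1.0], 0.0)  # hypothetical task B: w0 - w1 = 0

# Sequential training: learn task A fully, then train only on task B.
w = [0.0, 0.0]
for _ in range(200):
    w = sgd_step(w, *TASK_A)
for _ in range(200):
    w = sgd_step(w, *TASK_B)
seq_loss = loss(w, *TASK_A)   # task A has been partly overwritten

# Interleaved training: every task B step is paired with a replayed task A step.
w = [0.0, 0.0]
for _ in range(200):
    w = sgd_step(w, *TASK_A)
    w = sgd_step(w, *TASK_B)
inter_loss = loss(w, *TASK_A)  # task A is retained

print(f"task A loss, sequential:  {seq_loss:.3f}")   # clearly nonzero
print(f"task A loss, interleaved: {inter_loss:.3f}") # near zero
```

Because both tasks have a common solution, alternating their gradient steps settles on weights that satisfy both, while sequential training drifts away from task A. The study's insight is that sleep-like offline reactivation can supply this interleaving without storing the old training data itself.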
This study is not the first to explore the topic. In 2020, researchers from the Los Alamos National Laboratory in New Mexico used neuromorphic hardware to implement spiking neural networks. The researchers found that when the network learned continuously without being given time to sleep, its neurons fired continuously regardless of what data was being fed, essentially “hallucinating” nonsense derived from learned data. Only after “sleeping”, that is, being fed signals comparable to the brain waves humans produce when sleeping, did the network become stable again.
Besides reinforcing the importance of proper sleep in biological organisms, this research may prove influential in the future development of artificial neural networks.