
How Spiraling Circuits Could Mean More Efficient AI

Researchers from Japan have developed a new method that could make artificial intelligence more energy efficient

Written by Institute of Industrial Science, The University of Tokyo
| 2 min read

TOKYO, JAPAN — Researchers from the Institute of Industrial Science at the University of Tokyo designed and built specialized computer hardware consisting of stacks of memory modules arranged in a three-dimensional (3D) spiral for artificial intelligence (AI) applications. This research may pave the way for the next generation of energy-efficient AI devices.

Machine learning is a type of AI that allows computers to be trained on example data so they can make predictions about new instances. For example, a smart speaker assistant such as Alexa can learn to recognize your voice commands, so it can respond even when you ask for something for the first time. However, training AI tends to require a great deal of electrical energy, which raises concerns about its contribution to climate change.
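To make the "training by example" idea concrete, here is a deliberately tiny Python sketch, unrelated to the hardware in this story: a classifier that labels a new instance by finding the most similar training example. All data, labels, and names are invented for illustration.

```python
import numpy as np

# Toy "learn from examples, predict on a new instance" sketch using
# 1-nearest-neighbor classification; the data and labels are invented.

examples = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
labels = np.array(["off", "on", "off", "on"])

def predict(new_instance):
    # Label a new, unseen instance by copying the label of the closest
    # training example.
    distances = np.linalg.norm(examples - new_instance, axis=1)
    return labels[distances.argmin()]

print(predict(np.array([0.85, 0.95])))  # -> "on"
```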


Now, scientists from the Institute of Industrial Science at The University of Tokyo have developed a novel design that stacks resistive random-access memory (RRAM) modules, each paired with an oxide semiconductor (IGZO) access transistor, in a 3D spiral. Placing on-chip nonvolatile memory close to the processors makes the machine learning training process much faster and more energy efficient, because electrical signals have a much shorter distance to travel than in conventional computer hardware. Stacking multiple layers of circuits is a natural step, since training the algorithm often requires many operations to run in parallel.
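To see why putting memory next to compute saves energy, it helps to picture how a resistive memory array can evaluate a matrix-vector product in place: the weights sit in the array as conductances and never travel over a data bus. The following NumPy sketch is a conceptual illustration of that principle only; it is not the authors' hardware, and the array sizes and variable names are hypothetical.

```python
import numpy as np

# Conceptual sketch of in-memory computing on a resistive (RRAM) crossbar.
# Weights are stored in the array as conductances G and never move; applying
# an input voltage vector V to the rows produces the output currents
# I = G.T @ V on the column wires in one analog step (Ohm's and Kirchhoff's laws).

rng = np.random.default_rng(0)

n_rows, n_cols = 8, 4                          # hypothetical crossbar dimensions
G = rng.uniform(0.0, 1.0, (n_rows, n_cols))    # conductances encoding weights
V = rng.uniform(0.0, 0.5, n_rows)              # input voltages on the rows

# Each column wire sums the currents of its cells, so one multiply-accumulate
# per cell happens inside the memory, with no weight traffic over a data bus.
I = G.T @ V
print(I)
```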


Image: Researchers from the University of Tokyo created a new integrated 3D-circuit architecture for AI applications with spiraling stacks of memory modules, which may help lead to specialized machine-learning hardware that uses much less electricity. Credit: Institute of Industrial Science, The University of Tokyo

"For these applications, each layer's output is typically connected to the next layer's input. Our architecture greatly reduces the need for interconnecting wiring," says first author Jixuan Wu.

The team made the device even more energy efficient by implementing a binarized neural network. Instead of allowing the network's parameters to take any value, they are restricted to either +1 or -1. This both greatly simplifies the hardware and compresses the amount of data that must be stored. The researchers tested the device on a common AI task, interpreting a database of handwritten digits, and showed that increasing the size of each circuit layer could enhance the accuracy of the algorithm, up to a maximum of around 90 percent.
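The press release does not give the network's details, but the binarization idea itself is easy to sketch: real-valued parameters are replaced by their signs, so each stored weight is a single bit and multiplications reduce to sign comparisons. Below is a minimal NumPy illustration under those assumptions, with hypothetical layer sizes.

```python
import numpy as np

def binarize(w):
    """Restrict values to +1 or -1 by taking their sign (0 maps to +1)."""
    return np.where(w >= 0, 1.0, -1.0)

rng = np.random.default_rng(1)

# Hypothetical two-layer classifier for 28x28 handwritten-digit images.
W1 = binarize(rng.standard_normal((784, 128)))  # each stored weight is one bit
W2 = binarize(rng.standard_normal((128, 10)))

def forward(x):
    # With binary weights (and binarized activations between layers), each
    # multiply reduces to a sign agreement check, which maps naturally onto
    # simple in-memory hardware.
    h = binarize(x @ W1)
    return h @ W2                               # raw scores for the 10 digit classes

x = binarize(rng.standard_normal(784))          # a made-up binarized input image
print(forward(x).argmax())                      # index of the predicted digit class
```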

"In order to keep energy consumption low as AI becomes increasingly integrated into daily life, we need more specialized hardware to handle these tasks efficiently," explains senior author Masaharu Kobayashi.

This work is an important step towards the "internet of things," in which many small AI-enabled appliances communicate as part of an integrated "smart home."

The paper, "A Monolithic 3D Integration of RRAM Array with Oxide Semiconductor FET for In-memory Computing in Quantized Neural Network AI Applications," was presented at the VLSI Technology Symposium 2020.

- This press release was originally published on the University of Tokyo's Institute of Industrial Science website

