New Reprogrammable Chip Lets AI Learn Continuously—Just Like the Brain


One of the reasons for the brain’s incredible power is its ability to rewire itself as it learns. Now researchers have created electronic circuits that can do the same.

Efforts to mimic the brain in silicon, a field known as neuromorphic computing, have a long pedigree and have drawn significant investment from computing powerhouses like Intel and IBM. So far, most research has focused on replicating the functionality and connectivity of biological neurons and synapses in the hope of matching the brain's remarkable learning efficiency.

One feature of neurons that has received less attention is the way they reorganize themselves in response to experience. This powerful capability allows the brain to change both its structure and function as it learns, adapting its underlying hardware to new challenges on the fly.

Now, though, a team led by engineers from Purdue University has demonstrated new circuit components whose functions can be reconfigured with simple electrical pulses. This allows them to switch seamlessly between acting as resistors, memory capacitors, artificial neurons, and artificial synapses. The breakthrough opens the door to dynamic neural networks in hardware that can rewire themselves as they learn, just like the brain.

The new devices, reported last week in Science, are made from a material called perovskite nickelate, whose electrical properties can be altered by inserting hydrogen ions at particular points in its crystal lattice. The researchers found that certain configurations of these ions create patterns of conductivity that mimic a variety of different electronic components.

More importantly, they also found that they could shuffle around the locations of these hydrogen ions by applying electrical pulses at different voltages. This makes it possible to shift from one configuration to another on demand, allowing the same device to take on the attributes of a wide range of electronic building blocks.
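To give a flavor of what this reconfigurability might look like from a programmer's point of view, here is a purely illustrative Python sketch of a device whose function is selected by a voltage pulse. The four modes mirror the components named above, but the class, the voltage thresholds, and the mapping from pulse to mode are invented for illustration and are not taken from the paper.

```python
from enum import Enum

class Mode(Enum):
    RESISTOR = "resistor"
    MEMCAPACITOR = "memory capacitor"
    NEURON = "artificial neuron"
    SYNAPSE = "artificial synapse"

class ReconfigurableDevice:
    """Toy model of a reconfigurable element: a voltage pulse 'moves'
    the hydrogen ions and thereby selects the device's function.
    The voltage thresholds below are invented for illustration."""

    def __init__(self):
        self.mode = Mode.RESISTOR  # assume some default configuration

    def apply_pulse(self, voltage: float) -> Mode:
        # Hypothetical mapping from pulse amplitude to ion configuration.
        if voltage < 1.0:
            self.mode = Mode.RESISTOR
        elif voltage < 2.0:
            self.mode = Mode.MEMCAPACITOR
        elif voltage < 3.0:
            self.mode = Mode.NEURON
        else:
            self.mode = Mode.SYNAPSE
        return self.mode

device = ReconfigurableDevice()
print(device.apply_pulse(2.5).value)  # "artificial neuron" in this toy mapping
```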

The devices are also very stable. The research showed that the hydrogen ions stayed put for at least six months with no loss in resistance, and that the switching behavior still worked reliably after millions of cycles. On top of that, the devices can be manufactured using conventional chip fabrication technology.

After testing the performance of individual devices, the researchers used their data to simulate large networks of them. They used the simulations to implement a form of machine learning called reservoir computing, in which input signals are fed through a large, fixed network of randomly connected units and only a simple output layer is trained. They showed that these networks outperformed other theoretical and experimental models on both digit recognition and heartbeat classification tasks.
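The study's networks were simulations of the nickelate devices themselves, but the general idea behind reservoir computing can be sketched in a few lines. The following Python example, a minimal echo state network run on made-up toy data rather than anything from the paper, shows the defining trait: the recurrent "reservoir" weights stay fixed and random, and only the linear readout is trained.

```python
import numpy as np

# Echo state network: the reservoir weights stay fixed and random;
# only the linear readout is trained (here with ridge regression).
rng = np.random.default_rng(0)
n_in, n_res, n_out = 8, 200, 3          # toy dimensions

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with a sequence of input vectors and
    collect its states, which act as nonlinear features."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy training data: a random input sequence and random targets.
U = rng.normal(size=(500, n_in))
Y = rng.normal(size=(500, n_out))

X = run_reservoir(U)
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

predictions = run_reservoir(U) @ W_out   # only W_out was learned
```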

They also used these networks to implement what is known as a "grow when required" (GWR) neural network, which creates and prunes neurons and connections depending on the task it is given. They compared these networks to a similar kind of self-organizing network with a fixed number of neurons.

When they tested the networks on an incremental learning task, in which the number of classes the model had to distinguish increased over time, they found the dynamic network was more than 200 percent more accurate than a static one given the same number of neurons the GWR network reached at its peak. They also showed that GWR networks could grow and shrink depending on the size of the problem, optimizing their efficiency in a way the static network could not.
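The grow-when-required approach comes from earlier work on self-organizing networks. The simplified Python sketch below captures the grow-and-prune behavior described here, adding a prototype node when no existing one matches an input well enough and removing nodes that go unused; the thresholds and update rules are illustrative choices, not those used in the paper.

```python
import numpy as np

class GrowWhenRequiredNet:
    """Simplified grow-when-required network: prototype nodes are added
    when no existing node matches an input well enough, nudged toward
    inputs they do match, and pruned when they go unused. Thresholds
    here are illustrative, not taken from the paper."""

    def __init__(self, dim, activity_threshold=0.8, lr=0.1, max_idle=50):
        self.nodes = np.empty((0, dim))
        self.idle = np.empty(0, dtype=int)
        self.activity_threshold = activity_threshold
        self.lr = lr
        self.max_idle = max_idle

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) == 0:
            self.nodes = x[None, :]
            self.idle = np.zeros(1, dtype=int)
            return
        dists = np.linalg.norm(self.nodes - x, axis=1)
        best = int(np.argmin(dists))
        activity = np.exp(-dists[best])          # 1 = perfect match
        self.idle += 1
        if activity < self.activity_threshold:
            # Grow: insert a new node between the input and the best match.
            new_node = (x + self.nodes[best]) / 2
            self.nodes = np.vstack([self.nodes, new_node])
            self.idle = np.append(self.idle, 0)
        else:
            # Adapt: move the winning node toward the input.
            self.nodes[best] += self.lr * (x - self.nodes[best])
            self.idle[best] = 0
        # Prune nodes that have gone unused for too long.
        keep = self.idle < self.max_idle
        self.nodes, self.idle = self.nodes[keep], self.idle[keep]

net = GrowWhenRequiredNet(dim=2)
for point in np.random.default_rng(1).normal(size=(200, 2)):
    net.update(point)
print(len(net.nodes), "nodes grown")
```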

As impressive as these capabilities are, the technology still faces significant hurdles. In an accompanying perspective article, Rohit Abraham John of ETH Zurich points out that working out how to rearrange connections between these devices as they switch between functions is an outstanding challenge.

However, the technology could also have applications beyond brain-inspired computing. John notes that being able to create a wide variety of electrical components from the same material could significantly simplify current chipmaking practices.

And while it may still be early days, the researchers say they are now investigating how to combine these devices to create large-scale chips. A silicon brain that can rewire itself just like ours might not be so far away.

Image Credit: Purdue University/Rebecca McElhoe



* This article was originally published at Singularity Hub
