Intel launches its next-generation neuromorphic processor—so, what’s that again?

Mike Davies, director of Intel’s Neuromorphic Computing Lab, explains the company’s efforts in this area. And with the launch of a new neuromorphic chip this week, he talked Ars through the updates.

Despite their name, neural networks are only distantly related to the sorts of things you’d find in a brain. While their organization and the way they transfer data through layers of processing may share some rough similarities to networks of actual neurons, the data and the computations performed on it would look very familiar to a standard CPU.

But neural networks aren’t the only way that people have tried to take lessons from the nervous system. There’s a separate discipline called neuromorphic computing that’s based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by lots of small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others.

On Thursday, Intel released the newest iteration of its neuromorphic hardware, called Loihi. The new release comes with the sorts of things you’d expect from Intel: a better processor and some basic computational enhancements. But it also comes with some fundamental hardware changes that will allow it to run entirely new classes of algorithms. And while Loihi remains a research-focused product for now, Intel is also releasing a compiler that it hopes will drive wider adoption.

To make sense out of Loihi and what’s new in this version, let’s back up and start by looking at a bit of neurobiology, then build up from there.

From neurons to computation

The foundation of the nervous system is a cell type called the neuron. All neurons share a few common functional features. At one end of the cell are structures called dendrites, which you can think of as receivers. This is where the neuron receives inputs from other cells. Nerve cells also have an axon, which acts as a transmitter, connecting with other cells to pass along signals.

The signals take the form of what are called “spikes,” which are brief changes in the voltage across the neuron’s cell membrane. Spikes travel down axons until they reach the junctions with other cells (called synapses), at which point they’re converted to a chemical signal that travels to the nearby dendrite. This chemical signal opens up channels that allow ions to flow into the cell, starting a new spike on the receiving cell.

The receiving cell integrates a variety of information—how many spikes it has seen, whether any neurons are signaling that it should be quiet, how active it was in the past, etc.—and uses that to determine its own activity state. Once a threshold is crossed, it'll trigger a spike down its own axon and potentially trigger activity in other cells.

Typically, this results in sporadic, randomly spaced spikes of activity when the neuron isn’t receiving much input. Once it starts receiving signals, however, it’ll switch to an active state and fire off a bunch of spikes in rapid succession.
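The integrate-and-fire behavior described above can be captured in a few lines of code. Here's a minimal leaky integrate-and-fire sketch; the threshold, leak, and reset values are illustrative, not parameters of any real neuron or chip:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    `inputs` is a list of per-step input currents. Returns the list of
    time steps at which the neuron spiked.
    """
    potential = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:              # threshold crossed: fire
            spike_times.append(t)
            potential = reset                   # reset after each spike
    return spike_times

# Weak input never accumulates past the threshold; strong input
# produces a rapid train of spikes.
quiet = simulate_lif([0.05] * 20)
busy = simulate_lif([0.6] * 20)
```

With the weak input, the leak wins and the neuron stays silent; with the strong input, it fires on every other time step, mirroring the quiet-versus-active behavior described above.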

A neuron, with the dendrites (spiky protrusions at top) and part of the axon (long extension at bottom right) visible.

How does this process encode and manipulate information? That’s an interesting and important question, and one we’re only just starting to answer.

One of the ways we’ve gone about answering it is through what’s called theoretical neurobiology (or computational neurobiology). This involves building mathematical models that reflect the behavior of nervous systems and neurons, in the hope that doing so will let us identify some underlying principles. Neural networks, which focus on the organizational principles of the nervous system, are one of the efforts that came out of this field. Spiking neural networks, which attempt to build up from the behavior of individual neurons, are another.

Spiking neural networks can be implemented in software on traditional processors. But it’s also possible to implement them through hardware, as Intel is doing with Loihi. The result is a processor very much unlike anything you’re likely to be familiar with.

Spiking in silicon

The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of those cores has a large number of individual “neurons,” or execution units. Each of these neurons can receive input in the form of spikes from any other neuron: a neighbor in the same core, a unit in a different core on the same chip, or a unit on another chip entirely. The neuron integrates the spikes it receives over time and, based on the behavior it’s programmed with, uses that to determine when to send spikes of its own to whatever neurons it’s connected with.
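The spike-routing scheme can be sketched in software. This toy version flattens the core/chip hierarchy into one pool of neurons and delivers spikes through a simple event queue; all names and numbers are illustrative, not Loihi's actual mechanism:

```python
from collections import deque

def run_network(thresholds, fanout, weights, seed_spikes):
    """Propagate spikes through a toy spiking network until activity dies out.

    thresholds[i]  -- firing threshold of neuron i
    fanout[i]      -- ids of the neurons that neuron i sends spikes to
    weights[(i,j)] -- synaptic weight applied when i spikes to j
    seed_spikes    -- ids of neurons given an initial spike
    Returns the set of neuron ids that fired.
    """
    potential = [0.0] * len(thresholds)
    fired = set(seed_spikes)
    queue = deque(seed_spikes)          # pending spike deliveries
    while queue:
        src = queue.popleft()
        for dst in fanout[src]:
            if dst in fired:            # each neuron fires at most once here
                continue
            potential[dst] += weights[(src, dst)]
            if potential[dst] >= thresholds[dst]:
                fired.add(dst)
                queue.append(dst)
    return fired

# Neuron 0 seeds a chain 0 -> 1 -> 2; neuron 3 also hears from 0, but its
# input is too weak to cross threshold, so it stays quiet.
fanout = {0: [1, 3], 1: [2], 2: [], 3: []}
weights = {(0, 1): 1.0, (0, 3): 0.2, (1, 2): 1.0}
fired = run_network([0.5, 0.8, 0.8, 0.8], fanout, weights, [0])
```

The key structural point survives the simplification: each neuron only knows its own fan-out list, so activity spreads as discrete spike events rather than as a global matrix computation.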

All of the spike signaling happens asynchronously. At set time intervals, embedded x86 cores on the same chip force a synchronization. At that point, the neuron will update the weights of its various connections—essentially, how much attention to pay to all the individual neurons that send signals to it.
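The re-weighting at each synchronization point can be illustrated with a trace-based Hebbian-style rule. This is not Intel's actual learning rule (Loihi's plasticity is programmable), just a minimal sketch of the kind of update described:

```python
def update_weights(weights, pre_traces, post_trace, lr=0.1, decay=0.01):
    """Illustrative Hebbian-style update run at a synchronization point.

    pre_traces[i] -- decaying record of recent spikes from input neuron i
    post_trace    -- decaying record of this neuron's own recent firing
    Inputs whose activity coincided with the neuron's own firing are
    strengthened; every weight also decays slightly toward zero.
    """
    return [w + lr * pre * post_trace - decay * w
            for w, pre in zip(weights, pre_traces)]

# Input 0 was active while the neuron fired; input 1 was silent.
new_w = update_weights([0.5, 0.5], pre_traces=[1.0, 0.0], post_trace=1.0)
```

After the update, the coincidentally active input carries more weight and the silent one slightly less—the "how much attention to pay" adjustment happening locally, per neuron.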

Put in terms of an actual neuron, part of the execution unit on the chip acts as a dendrite, processing incoming signals from the communication network based in part on the weights derived from past behavior. A mathematical formula then determines when activity has crossed a critical threshold, triggering spikes of its own when it does. The “axon” of the execution unit then looks up which other execution units it communicates with and sends a spike to each.

In the earlier iteration of Loihi, a spike simply carried a single bit of information. A neuron only registered when it received one.

Unlike a normal processor, Loihi has no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons it sends spikes to.
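That per-neuron memory might be organized along these lines. This is a hedged sketch with illustrative field names, not Loihi's actual on-chip storage format:

```python
from dataclasses import dataclass, field

@dataclass
class NeuronState:
    """Hypothetical per-neuron local memory; the layout is illustrative,
    not a description of Loihi's real hardware."""
    weights: dict                 # input neuron id -> synaptic weight
    potential: float = 0.0        # integrated activity so far
    trace: float = 0.0            # decaying record of recent activity
    fanout: list = field(default_factory=list)  # neurons to send spikes to

# A neuron that weights inputs from neurons 3 and 8 and spikes to 7 and 9.
n = NeuronState(weights={3: 0.4, 8: 0.7}, fanout=[7, 9])
```

The point of the design is locality: everything a neuron needs to integrate inputs, decide whether to fire, and route its spikes lives next to the execution unit, with no trips to shared external memory.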

One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors. Davies said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. “We’re routinely finding 100 times [less energy] for SLAM and other robotic workloads,” he added.

