At full concentration, the human brain uses around 20 W; a computer would use megawatts to simulate the brain processing the same tasks. One of the great efficiencies of biological neural networks stems from the way synapses combine signal transmission with learning functions, so that the synaptic weight – how readily a signal is transmitted – is set by the signal history. There have been several designs for devices that emulate synaptic activity, but their implementation in full-scale neuromorphic circuits can lead to issues around signal bottlenecks and current loads.

Most setups that emulate synaptic activity implement a "vanilla" form of spike-timing-dependent plasticity (STDP), in which the synaptic weight is increased or decreased depending on whether the presynaptic pulse precedes or follows the postsynaptic pulse. However, developments in neuroscience have identified mechanisms for setting the synaptic weight based on other factors, such as the concentration of neurotransmitters and ions.
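
To make the timing rule concrete, a textbook pair-based STDP update can be written as an exponentially decaying function of the interval between the two pulses. The Python sketch below is a generic illustration of this "vanilla" rule, not the Zurich circuits; the learning-rate and time-constant values are arbitrary placeholders.

```python
import math

def stdp_update(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Textbook pair-based STDP: change in synaptic weight for one
    pre/post spike pair (times in ms). The synapse is strengthened if the
    presynaptic pulse arrives first, weakened otherwise."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    else:           # post before (or with) pre -> depression
        return -a_minus * math.exp(dt / tau)

# A presynaptic pulse 5 ms before the postsynaptic one strengthens the synapse
print(stdp_update(t_pre=0.0, t_post=5.0))   # positive update
print(stdp_update(t_pre=5.0, t_post=0.0))   # negative update
```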

"Looking into these new mechanisms makes things easier," says Giacomo Indiveri, who has been working on neuromorphic computing since the mid 1990s. "If the learning circuit has to implement STDP it needs to manage overlapping pulses – it’s easier to charge a capacitor and let it leak slowly, simulating the release of neurotransmitters. And this leads to more compact and elegant circuit designs too."

Alongside his colleagues Manu V Nair and Lorenz K Muller at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, Indiveri has also been tackling the issue of current loads. A large current is needed to set the memristive devices, and if this current also flows in the neuron circuits it can lead to significant power dissipation.

How?

Resistive memories based on HfO2 are generic elements that researchers use to mimic synaptic behaviour, and a variety of configurations have been proposed. The circuit design proposed by the Zurich researchers hinges on a "differential memristive approach", whereby the synaptic weight is set by the difference between the values of two memristors. The learning circuits used with the differential memristive synapses emulate the ion and neurotransmitter concentration effects found in biological synapses rather than relying purely on pulse timing, thereby avoiding the need to handle overlapping pulses. A happy side effect of the differential approach is that it also reduces the impact of device variability.
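
In terms of the read-out arithmetic, the differential scheme amounts to taking the synaptic weight as the difference between two device conductances, so that drifts common to both devices cancel. The snippet below sketches only that arithmetic, with assumed, illustrative conductance values; it is not the published circuit.

```python
def differential_weight(g_plus, g_minus, scale=1.0):
    """Effective weight of a differential memristive synapse: the difference
    between the conductances of two devices. Reading the difference cancels
    shifts common to both devices, which tames device variability."""
    return scale * (g_plus - g_minus)

# Potentiation nudges the 'positive' device, depression the 'negative' one
g_plus, g_minus = 60e-6, 55e-6                # conductances in siemens (illustrative)
print(differential_weight(g_plus, g_minus))   # net positive weight
```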

While the proposed circuit design tackles a number of issues, Indiveri describes the team's "Eureka moment" as the discovery that they could use a "Gilbert normalizer". This circuit allows them to use large currents during the write phase while keeping the current in the artificial neuron small during the read phase. The approach alleviates power consumption and dissipation issues, and ensures that the neurons are not disturbed by the write phase.
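
The idea can be illustrated with the ideal transfer function of a current normalizer: the outputs preserve the ratios of the inputs while their sum is pinned to a small bias current, so the large programming currents need never flow through the neurons. The sketch below assumes this idealised behaviour and uses illustrative current values; the real circuit is an analogue transistor implementation.

```python
def normalize_currents(input_currents, i_bias):
    """Idealised current normalizer: outputs keep the ratios of the inputs
    but their sum is pinned to the bias current i_bias, so tens of microamps
    on the write side can be read out as a nanoamp-scale budget."""
    total = sum(input_currents)
    return [i_bias * i / total for i in input_currents]

# Illustrative numbers: tens of microamps in, a 100 nA budget out
write_side = [40e-6, 25e-6, 15e-6]
print(normalize_currents(write_side, i_bias=100e-9))
```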

What has been gained?

"Most other publications try to highlight the feature of high density – densely packed synapses – we gave up on the density," says Indiveri. Instead his team have focused on improving the functionality of the synapses. While there is some cost in terms of device size – Indiveri estimates synapse blocks measuring anywhere between 5 μm2 to 20 μm2, depending on the technology node used and on the specifics of the memristive devices used – there are several advantages of the neuromorphic circuits proposed to justify this footprint. In addition, they could make the main circuit smaller by placing the Gilbert normalizer outside it.

As for what these circuits could achieve that regular computing can’t, the big win is the non-volatility and dynamic features of the memristive devices. Algorithms such as those used by Google are making great advances in image recognition – distinguishing a static picture of a dog from one of a cat, for instance – and here the differential memristive approach with the Gilbert normalizer is not really competitive. However, Indiveri points out that there is a need for small, lithium-battery-powered devices that extract information from time-varying sensory signals such as body temperature and heartbeat, and here the proposed neuromorphic circuit could be in its element. Mobile phones also need low-power computational resources, as each new release includes five to six additional sensors. Cameras for face recognition and any other sensors that must be permanently enabled can drain the battery unless their power consumption is reduced.

The current work is based on simulations, but the researchers are working with partners such as CEA-LETI, which is implementing these concepts in its resistive-memory technology demonstrators. The INI is also a partner in the NeuRAM3 EU project, along with seven other European institutions, which aims to combine innovative bio-inspired approaches with advanced technologies to realise the next generation of embedded neuromorphic circuits.

Full details are reported in Nano Futures.