Standard computing architectures are based on the von Neumann paradigm, in which the processing and memory units are physically separate. This naturally limits machine performance in both time (the von Neumann bottleneck) and energy, because of the large amount and rate of data that must be moved between the two units. The possibility of computing and storing data on the same physical unit is therefore an appealing alternative.

Why memcomputing?

This alternative computing paradigm has its roots in a novel, idealized machine proposed as an alternative to the Turing machine: the Universal Memcomputing Machine. Roughly speaking, this is a brain-inspired architecture composed of interacting memory cells controlled by external signals. Such machines not only overcome the von Neumann bottleneck, but also offer other appealing features, namely "intrinsic parallelism", "functional polymorphism" and "information overhead".

The first feature means that a group of connected memory cells operates simultaneously and collectively during the computation. The second is the ability to compute different functions without modifying the topology of the machine network, simply by applying the appropriate input signals; a toy sketch of this idea is given below. The last property, information overhead, refers to the fact that memory cells interconnected by a physical coupling can store more information than the same cells could store in isolation.
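To make functional polymorphism concrete, here is a minimal, purely illustrative Python sketch (not taken from the paper): a fixed pair of coupled "memory cells" evaluates different Boolean functions depending only on an external control signal, and the result is stored back into one of the same cells. All names and the choice of functions are assumptions for illustration.

    # Toy illustration: the cell topology never changes; only the external
    # control signal selects which function the same "hardware" computes.
    def memcell_network(cell_a, cell_b, control):
        if control == "OR":
            result = cell_a | cell_b
        elif control == "AND":
            result = cell_a & cell_b
        elif control == "XOR":
            result = cell_a ^ cell_b
        else:
            raise ValueError("unknown control signal: " + control)
        # The output overwrites a cell that held an input: the same elements
        # both store and process the data (computing in memory).
        return result, cell_b

    for control in ("OR", "AND", "XOR"):
        print(control, memcell_network(1, 0, control))

The point of the sketch is only that the "program" lives in the control signals, not in rewiring: the same two-cell network yields OR, AND or XOR depending on the pulse it receives.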

DCRAM: a possible realization of memcomputing machines employing memcapacitors

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy suggest a realization of memcomputing based on nanoscale memcapacitors. They propose and analyse a major step forward: using memcapacitive systems (capacitors with memory) as the central elements of Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) in which the capacitors have been replaced with solid-state memcapacitive systems, they show that WRITE, READ and polymorphic logic operations can all be performed simply by applying modulated voltage pulses to the memory cells. Because it is based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
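The sketch below is a highly simplified toy model of this pulse-driven operation, not the authors' circuit or device physics: each cell is treated as a memcapacitor whose capacitance depends on an internal state bit, and the same pulse interface is used to WRITE, READ and combine cells. The class, thresholds and pulse amplitudes are all invented for illustration, and the logic step decides the write pulse in ordinary Python rather than through the real inter-cell coupling.

    # Assumed, illustrative parameters (arbitrary units).
    C_LOW, C_HIGH = 1.0, 2.0       # capacitance for state 0 / state 1
    WRITE_THRESHOLD = 1.0          # |pulse| at or above this switches the state
    READ_AMPLITUDE = 0.2           # small pulse that probes without switching

    class MemcapacitorCell:
        def __init__(self, state=0):
            self.state = state     # internal memory bit

        def apply_pulse(self, amplitude):
            """Apply a voltage pulse: large pulses WRITE, small pulses probe."""
            if amplitude >= WRITE_THRESHOLD:
                self.state = 1     # strong positive pulse writes a 1
            elif amplitude <= -WRITE_THRESHOLD:
                self.state = 0     # strong negative pulse writes a 0
            capacitance = C_HIGH if self.state else C_LOW
            return capacitance * amplitude   # charge response seen on the bit line

    def read(cell):
        """Non-destructive READ: infer the state from the charge response."""
        charge = cell.apply_pulse(READ_AMPLITUDE)
        return 1 if charge > READ_AMPLITUDE * (C_LOW + C_HIGH) / 2 else 0

    def logic_or(cell_a, cell_b):
        """Toy polymorphic logic step: the result is written back into cell_a,
        so the memory that stored an operand now stores the output. Here the OR
        is evaluated classically just to pick the write pulse; in the device the
        physical coupling between cells would perform the operation."""
        wants_one = read(cell_a) or read(cell_b)
        cell_a.apply_pulse(WRITE_THRESHOLD if wants_one else -WRITE_THRESHOLD)
        return read(cell_a)

    a, b = MemcapacitorCell(0), MemcapacitorCell(1)
    a.apply_pulse(1.5)             # WRITE a 1 into cell a
    print(read(a), read(b), logic_or(a, b))

What the toy preserves from the idea is the workflow: a single pulse-based interface to the cells covers storage (WRITE), retrieval (READ) and logic whose output lands back in memory, so no data shuttles to a separate processor.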

The researchers presented their work in the journal Nanotechnology 25 285201.

Further reading

Nanotechnology special issue: Synaptic Electronics (Sep 2013)
Analog memory paves the way for efficient information processing (Jan 2012)
Kubo response theory applied to memristive, memcapacitive and meminductive systems (July 2013)
How to build a memcomputer (Dec 2013)