For those working in the field of advanced artificial intelligence, getting a computer to simulate brain activity is a gargantuan task, but it may be easier to manage if the hardware is designed more like brain hardware to start with.
This emerging field is called neuromorphic computing. And now engineers at MIT may have overcome a significant hurdle - the design of a chip with artificial synapses.
For now, human brains are much more powerful than any computer - they contain around 80 billion neurons, and over 100 trillion synapses connecting them and controlling the passage of signals.
Computer chips currently transmit signals in a language called binary. Every piece of information is encoded in 1s and 0s, or on/off signals.
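As a rough illustration (not part of the research itself), here is what that binary representation looks like in a few lines of Python - every character ultimately becomes a fixed pattern of on/off bits:

```python
# Each character a conventional chip stores is just a pattern of bits.
def to_bits(text):
    """Return the 8-bit binary pattern for each character."""
    return [format(ord(ch), "08b") for ch in text]

print(to_bits("Hi"))  # ['01001000', '01101001']
```

The point of the comparison below is that the brain does not work in these crisp on/off terms.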
To get an idea of how this compares to a brain, consider this: in 2013, one of the world's most powerful supercomputers attempted a simulation of brain activity - and managed only a tiny fraction of it.
Riken's K Computer used 82,944 processors and a petabyte of main memory - the equivalent of around 250,000 desktop computers at the time.
It took 40 minutes to simulate one second of the activity of 1.73 billion neurons connected by 10.4 trillion synapses. That may sound like a lot, but it's really equivalent to just one percent of the human brain.
But if a chip used synapse-like connections, the signals used by a computer could be much more varied, enabling synapse-like learning. Synapses mediate the signals transmitted through the brain, and neurons activate depending on the number and type of ions flowing across the synapse. This helps the brain recognise patterns, remember facts, and carry out tasks.
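This graded, synapse-like behaviour is the basis of artificial neural networks. A minimal sketch (an illustration of the general principle, not the MIT team's design) shows how a neuron's response depends on the strength of its incoming synapses rather than on a simple on/off signal:

```python
import math

def neuron_output(inputs, weights):
    """A neuron's activation depends on the weighted sum of its inputs;
    the weights play the role of synaptic strength."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the sum into (0, 1)

# Strong synapses amplify the same input signal; weak ones suppress it.
strong = neuron_output([1.0, 1.0], [2.0, 2.0])
weak = neuron_output([1.0, 1.0], [0.1, 0.1])
print(strong > weak)  # True
```

In a neuromorphic chip, the current flowing through each artificial synapse would play the role of these weights in hardware rather than software.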
Replicating this has proven difficult to date - but researchers at MIT have now designed a chip with artificial synapses made of silicon germanium that allow the precise control of the strength of electrical current flowing along them, just like the ion flow between neurons.
In a simulation, it was used to recognise handwriting samples with 95 percent accuracy.
Previous designs for neuromorphic chips used two conductive layers separated by an amorphous "switching medium" to act like the synapses. When switched on, ions flow through the medium to create conductive filaments to mimic synaptic weight, or the strength or weakness of a signal between two neurons.
The problem with this approach is that, without defined structures to travel along, the signals have an infinite number of paths - and this can make the chips' performance inconsistent and unpredictable.
"Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way," said lead researcher Jeehwan Kim.
"But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it's hard to control. That's the biggest problem - nonuniformity of the artificial synapse."
With this in mind, the team created lattices of silicon germanium, with one-dimensional channels through which ions can flow. This ensures the exact same path is used every time.
These lattices were then used to build a neuromorphic chip; when voltage was applied, all synapses on the chip showed the same current, with a variation of just 4 percent.
A single synapse was also tested, with voltage applied 700 times; its current varied by just 1 percent - the most uniform performance yet seen in such a device.
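One simple way to express that kind of figure (the article does not specify the exact metric the team used, so this is an assumption) is the spread of repeated current readings relative to their mean:

```python
def percent_variation(readings):
    """Spread of repeated measurements, as a percentage of their mean.

    This is a hypothetical illustration of a 'variation' figure,
    not the metric reported in the paper.
    """
    mean = sum(readings) / len(readings)
    return 100 * (max(readings) - min(readings)) / mean

# 700 nearly identical current readings would score close to zero.
print(percent_variation([1.00, 1.01, 0.99, 1.00]))  # ~2 percent
```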
The team tested the chip on an actual task by simulating its characteristics and running them against the MNIST database of handwriting samples, which is commonly used for training image processing software.
Their simulated artificial neural network, consisting of three neural sheets separated by two layers of artificial synapses, was able to recognise tens of thousands of handwritten numerals with 95 percent accuracy, compared to the 97 percent accuracy of existing software.
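Structurally, "three neural sheets separated by two layers of artificial synapses" is an input layer, a hidden layer and an output layer joined by two weight matrices. The sketch below shows only that shape - the hidden-layer size of 100 is an assumption (the article gives none), the weights are random, and this is in no way the team's trained model:

```python
import random

random.seed(0)

# MNIST images are 28x28 = 784 pixels; the output is one score per digit.
# The hidden size of 100 is a placeholder - the article does not specify it.
SIZES = [784, 100, 10]

def make_weights(n_in, n_out):
    """One weight matrix = one layer of artificial synapses."""
    return [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(x, layers):
    """Pass a signal through each synapse layer with a ReLU activation."""
    for w in layers:
        x = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w]
    return x

layers = [make_weights(SIZES[0], SIZES[1]),
          make_weights(SIZES[1], SIZES[2])]
pixels = [random.random() for _ in range(784)]  # stand-in for one image
scores = forward(pixels, layers)
print(len(scores))  # 10 - one score per digit class
```

On the real chip, each entry in those two weight matrices would be the conductance of one silicon-germanium synapse.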
The next step is to actually build a chip that is capable of carrying out the handwriting recognition task, with the end goal of creating portable neural network devices.
"Ultimately we want a chip as big as a fingernail to replace one big supercomputer," Kim said. "This [research] opens a stepping stone to produce real artificial [intelligence] hardware."
The research has been published in the journal Nature Materials.