By Dario Borghino, Gizmag
November 19, 2012
Cognitive computing

The human brain, arguably the most complex object in the known universe, is a truly remarkable power-saver: it can simultaneously gather thousands of sensory inputs, interpret them in real time as a whole and react appropriately – abstracting, learning, planning and inventing – all on a strict power budget of about 20 W. A computer of comparable complexity built with current technology, according to IBM's own estimates, would drain about 100 MW of power.
Clearly, such power consumption would be highly impractical. The problem, then, calls for an entirely new approach. IBM's answer is cognitive computing, a newly coined discipline that combines the latest discoveries in the fields of neuroscience, nanotechnology and supercomputing.
Neuroscience has taught us that the brain consumes little power mainly because it is "event-driven." In simple terms, this means that individual neurons, synapses and axons consume power only when they are activated – e.g. by an external sensory input or by other neurons – and consume no power otherwise. This, however, is not the case with today's computers, which are, by comparison, huge power wasters.
The IBM engineers have leveraged this knowledge to build a novel computer architecture, and then used it to simulate a number of neurons and synapses comparable to what would be found in a typical human brain. The result is not a biologically or functionally accurate simulation of the human brain – it cannot sense, conceptualize, or "think" in any traditional sense of the word – but it is still a crucial step toward the creation of a machine that, one day, might do just that.
How it works

The team started from CoCoMac, a comprehensive but incomplete database detailing the wiring of a macaque's brain. After four years of painstaking work patching the database, the team members were able to obtain a workable dataset, which they used to inspire the layout of their artificial brain.
Inside the system, the two main components are neurons and synapses.
Neurons are the computing centers: each neuron can receive input signals from up to ten thousand neighboring neurons, process the data, and then fire an output signal. Approximately 80 percent of the neurons are excitatory – meaning that, when they fire a signal, they tend to excite neighboring neurons. The remaining 20 percent are inhibitory – when they fire a signal, they tend to inhibit neighboring neurons.
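The neuron behavior described above – integrating incoming signals and firing once a threshold is crossed, with an 80/20 excitatory/inhibitory split – can be sketched as a minimal point-neuron model. This is purely illustrative (the class name, threshold value and reset rule are assumptions, not IBM's design):

```python
import random

class Neuron:
    """Illustrative point neuron: accumulates weighted input, fires on threshold."""

    def __init__(self, excitatory: bool, threshold: float = 1.0):
        # Excitatory neurons push their neighbors toward firing (+1);
        # inhibitory neurons pull them away from it (-1).
        self.sign = 1.0 if excitatory else -1.0
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, weighted_input: float) -> None:
        """Accumulate input from a neighboring neuron."""
        self.potential += weighted_input

    def step(self) -> bool:
        """Fire (and reset) if the accumulated potential crosses the threshold."""
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# A population with roughly the 80/20 excitatory/inhibitory split described above.
population = [Neuron(excitatory=(random.random() < 0.8)) for _ in range(1000)]
```

In a real spiking-network simulator the membrane dynamics are far richer (leakage, refractory periods), but the threshold-and-fire core is the same idea.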
Synapses link up different neurons, and it is here that memory and learning actually take place. Each synapse has an associated "weight value" that changes based on the number of neuron-fired signals that travel across it. When a large number of signals travel through the same synapse, its weight value increases and the virtual brain begins to learn by association.
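The weight-strengthening rule described above is essentially Hebbian-style learning: the more traffic a synapse carries, the stronger it gets. A minimal sketch (the class, initial weight and learning rate are illustrative assumptions):

```python
class Synapse:
    """Illustrative synapse: its weight grows with each signal it carries."""

    def __init__(self, weight: float = 0.1, learning_rate: float = 0.01):
        self.weight = weight
        self.learning_rate = learning_rate

    def transmit(self) -> float:
        # Each signal that crosses the synapse strengthens it slightly --
        # repeated co-activation is how "learning by association" emerges.
        self.weight += self.learning_rate
        return self.weight
```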
The algorithm periodically checks whether each neuron is firing a signal: if it is, the adjacent synapses are notified, update their weight values, and interact with the connected neurons accordingly. The crucial aspect is that the algorithm only expends CPU time on the very small fraction of neurons and synapses that are actually active at any given moment, rather than on all of them – saving massive amounts of time and energy.
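The event-driven update described above can be sketched as a single simulation tick that only touches the synapses of neurons that actually fired. Everything here – the dict-based connectivity, the fixed threshold, the weight increment – is an illustrative sketch of the principle, not IBM's actual algorithm:

```python
def tick(potentials, outgoing, weights, threshold=1.0):
    """Advance the network one step, doing work only for firing neurons.

    potentials: dict neuron_id -> accumulated membrane potential
    outgoing:   dict neuron_id -> list of downstream neuron ids
    weights:    dict (src, dst) -> synaptic weight
    Returns the list of neuron ids that fired this tick.
    """
    # Event-driven core: find the (typically small) set of firing neurons...
    fired = [n for n, v in potentials.items() if v >= threshold]
    # ...and spend CPU time only on their synapses, ignoring the silent rest.
    for n in fired:
        potentials[n] = 0.0                     # reset the neuron after firing
        for target in outgoing.get(n, []):
            w = weights[(n, target)]
            weights[(n, target)] = w + 0.01     # strengthen the used synapse
            potentials[target] += w             # deliver the signal downstream
    return fired
```

In each tick the cost scales with the number of active neurons rather than with the total network size, which is precisely why an event-driven design saves so much power relative to a conventional architecture that clocks every component on every cycle.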
The beauty of this new computer architecture is that – just like an organic brain – it is event-driven, distributed, highly power-conscious, and bypasses some of the well-known limitations intrinsic to the way standard computers are designed.
IBM's end goal is to eventually build a machine with human-brain complexity in a comparably small package, with power consumption approaching 1 kW. For the time being, however, the simulation runs on the decidedly less portable (and far less power-conscious) Blue Gene/Q Sequoia supercomputer, using 1,572,864 processor cores, 1.5 PB (1.5 million GB) of memory, and 6,291,456 threads.
Making up each core are "neurons," "synapses" and "axons." Despite their names, the design of these components wasn't biologically inspired, but was rather highly optimized for the sake of minimizing manufacturing costs and maximizing performance.
The experiment allowed IBM to better understand the limitations of the standard computer architecture, including the trade-offs between memory, computation and communication at very large scale. Looking forward, it also yielded know-how that will inform the design of even better low-power, massively parallel chips with improved performance.
Future applications could include dramatically improved weather forecasts, stock market predictions, intelligent patient monitoring systems that can perform diagnoses in real time, and optical character recognition (OCR) and speech recognition software matching human performance, to name just a few.
As for recreating the actual behavior of a human brain, we're still many, many years away by all accounts. But at least, it seems, progress is being made.
The video below is a short introduction to the cognitive computing paradigm by IBM's Dharmendra Modha.
Sources: IBM (PDF), Dharmendra S Modha, Design Automation Conference