Looks like we are baby-stepping towards the singularity:
An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain’s ability to learn more closely than any other machine.
Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS.
I think this is very cool, for a couple of reasons. As the article points out, this project is hardly unique in attempting to simulate fine-grained structural details of the brain, but it is ahead of software-based approaches (like the Blue Brain project) since it actually runs in parallel and at speeds as fast as (or faster than) actual brains. Moreover, it simulates, entirely in silicon, not only the complex dynamics of neurons’ cell bodies but also the interactions between neurons at synapses.
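To give a flavor of the kind of dynamics being baked into the hardware, here is a minimal software sketch of a leaky integrate-and-fire neuron, one of the standard simplified models of membrane dynamics that neuromorphic chips implement in analog circuitry. This is purely illustrative; the parameter values and function name are my own, and the FACETS chip's actual circuit model is surely more elaborate.

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    Integrates dV/dt = (-(V - v_rest) + R*I) / tau and emits a spike
    whenever V crosses threshold, then resets the membrane potential.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau
        v += dv
        if v >= v_thresh:
            spike_times.append(step * dt)  # record when the spike occurred
            v = v_reset                    # reset after spiking
    return spike_times

# A constant suprathreshold input current drives regular spiking.
spikes = simulate_lif([2e-9] * 1000)
```

The point of the hardware approach is that the chip does this in continuous time for hundreds of thousands of such units at once, rather than stepping through a loop like the sketch above.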
In spite of all this, the most exciting thing for me about this kind of technology is that it means neuroscience—the really juicy, gritty kind—is finally getting to a place where it can contribute to cognitive science in concrete ways. Cognitive neuroscience is fraught with interpretive difficulties, and what contributions squishy neuroscience has made to cognitive science have generally trickled down. But if we can pick apart brains, cell-by-cell, figure out how they are organized, and then use conventional chip-building technology to create hunks of silicon that let us quickly test hypotheses about what we get out of certain architectures, we can strongly constrain our more abstract cognitive models. This is, after all, the whole point of cognitive science: to get all kinds of people who are studying the mind and brain to talk to each other.
I’m probably a little over-enthusiastic about this kind of neuroscience, which I might attribute to my frustration with the interpretive difficulties faced by more explicitly cognitive-flavored neuroscience, and maybe to the fact that my thesis consists almost entirely of continuous-time neural network modeling boosterism. In the end, it’s extremely difficult to draw firm conclusions in one domain from results in another. The ultimate usefulness of this chip might be as just that: a chip. Parallel computing is all the rage, and in a couple of decades the chip in your laptop (or whatever we’ll have then) might look a lot like this one, especially with the advent of programming paradigms that take advantage of the multi-core chips that are already standard.
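For the curious, the continuous-time recurrent networks I mentioned are simple to write down: each unit obeys tau * dy/dt = -y + W·sigma(y) + I. Here is a toy two-unit example (all parameters invented for illustration) in which mutual inhibition produces a winner-take-all outcome:

```python
import math

def step_ctrnn(y, weights, inputs, tau=1.0, dt=0.01):
    """One Euler step of tau * dy/dt = -y + W @ sigmoid(y) + I."""
    sig = [1.0 / (1.0 + math.exp(-v)) for v in y]
    return [yi + (dt / tau) * (-yi + sum(w * s for w, s in zip(row, sig)) + ii)
            for yi, row, ii in zip(y, weights, inputs)]

y = [0.0, 0.0]
W = [[0.0, -5.0],   # unit 0 is inhibited by unit 1
     [-5.0, 0.0]]   # and vice versa
I = [1.0, 0.5]      # unit 0 gets slightly stronger input
for _ in range(2000):
    y = step_ctrnn(y, W, I)
# After settling, unit 0 wins: y[0] ends up positive, y[1] strongly negative.
```

Networks like this are just differential equations, which is exactly why an analog chip is such a natural substrate for them: the silicon integrates the dynamics directly instead of discretizing time.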
(Link: MIT Technology Review, via boing boing)