The Singularity

the moon is our backup drive

My two-year-old son loves the Moon. He sings about it all day long. He can't wait for nightfall…

Brain Scanners That Read Your Mind

What are you thinking about? Which memory are you reliving right now as you read this? You may believe that only you can answer, but by combining brain scans with pattern-detection software, neuroscientists are prying open a window into the human mind.

In the last few years, patterns in brain activity have been used to successfully predict what pictures people are looking at, their location in a virtual environment or a decision they are poised to make. The most recent results show that researchers can now recreate moving images that volunteers are viewing - and even make educated guesses at which event they are remembering.

Last week at the Society for Neuroscience meeting in Chicago, Jack Gallant, a leading "neural decoder" at the University of California, Berkeley, presented one of the field's most impressive results yet. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity. Others at the same meeting claimed that such neural decoding could be used to read memories and future plans - and even to diagnose eating disorders.

Understandably, such developments are raising concerns about "mind reading" technologies, which might be exploited by advertisers or oppressive governments. Yet despite - or perhaps because of - the recent progress in the field, most researchers are wary of calling their work mind-reading. Emphasising its limitations, they call it neural decoding.


They are quick to add that it may lead to powerful benefits, however. These include gaining a better understanding of the brain and improved communication with people who can't speak or write, such as stroke victims or people with neurodegenerative diseases. There is also excitement over the possibility of being able to visualise something highly graphical that someone healthy, perhaps an artist, is thinking.

So how does neural decoding work? Gallant's team drew international attention last year by showing that brain imaging could predict which of a group of pictures someone was looking at, based on activity in their visual cortex. But simply decoding still images alone won't do, says Nishimoto. "Our natural visual experience is more like movies."

Nishimoto and Gallant started their most recent experiment by showing two lab members 2 hours of video clips culled from DVD trailers, while scanning their brains. A computer program then mapped different patterns of activity in the visual cortex to different visual aspects of the movies such as shape, colour and movement. The program was then fed over 200 days' worth of YouTube clips, and used the mappings it had gathered from the DVD trailers to predict the brain activity that each YouTube clip would produce in the viewers.

Finally, the same two lab members watched a third, fresh set of clips which were never seen by the computer program, while their brains were scanned. The computer program compared these newly captured brain scans with the patterns of predicted brain activity it had produced from the YouTube clips. For each second of brain scan, it chose the 100 YouTube clips it considered would produce the most similar brain activity - and then merged them. The result was continuous, very blurry footage, corresponding to a crude "brain read-out" of the clip that the person was watching.
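To give a feel for how that matching-and-merging step might work, here is a minimal Python sketch. Everything in it - the array sizes, the variable names, the use of simple correlation - is my own illustration of the idea described above, not the Berkeley team's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 500 visual-cortex voxels and a library of 10,000 clips
# for which a trained model has already predicted the brain activity.
n_voxels, n_library_clips = 500, 10_000
predicted_activity = rng.standard_normal((n_library_clips, n_voxels))
library_frames = rng.random((n_library_clips, 16, 16))  # toy 16x16 frames

def reconstruct_second(observed, predicted, frames, k=100):
    """Pick the k library clips whose predicted activity best matches the
    observed scan (by correlation) and average their frames together."""
    obs = (observed - observed.mean()) / observed.std()
    pred = (predicted - predicted.mean(axis=1, keepdims=True)) \
           / predicted.std(axis=1, keepdims=True)
    similarity = pred @ obs / obs.size        # correlation with each clip
    top_k = np.argsort(similarity)[-k:]       # the 100 best matches
    return frames[top_k].mean(axis=0)         # the blurry merged read-out

observed_scan = rng.standard_normal(n_voxels)  # one second of brain data
blurry_frame = reconstruct_second(observed_scan, predicted_activity,
                                  library_frames)
print(blurry_frame.shape)  # (16, 16)
```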

In some cases, this was more successful than others. When one lab member was watching a clip of the actor Steve Martin in a white shirt, the computer program produced a clip that looked like a moving, human-shaped smudge, with a white "torso", but the blob bears little resemblance to Martin, with nothing corresponding to the moustache he was sporting.

Another clip revealed a quirk of Gallant and Nishimoto's approach: a reconstruction of an aircraft flying directly towards the camera - and so barely seeming to move - with a city skyline in the background omitted the plane but produced something akin to a skyline. That's because the algorithm is more adept at reading off brain patterns evoked by watching movement than those produced by watching apparently stationary objects.

"It's going to get a lot better," says Gallant. The pair plan to improve the reconstruction of movies by providing the program with additional information about the content of the videos.

Team member Thomas Naselaris demonstrated the power of this approach on still images at the conference. For every pixel in a set of images shown to a viewer and used to train the program, researchers indicated whether it was part of a human, an animal, an artificial object or a natural one. The software could then predict where in a new set of images these classes of objects were located, based on brain scans of the picture viewers.

Movies and pictures aren't the only things that can be discerned from brain activity, however. A team led by Eleanor Maguire and Martin Chadwick at University College London presented results at the Chicago meeting showing that our memory isn't beyond the reach of brain scanners.


A brain structure called the hippocampus is critical for forming memories, so Maguire's team focused its scanner on this area while 10 volunteers recalled videos they had watched of different women performing three banal tasks, such as throwing away a cup of coffee or posting a letter. When Maguire's team got the volunteers to recall one of these three memories, the researchers could tell which the volunteer was recalling with an accuracy of about 50 per cent.

That's well above chance - with three memories to choose from, random guessing would be right only about a third of the time - but, says Maguire, it is not mind reading because the program can't decode memories that it hasn't already been trained on. "You can't stick somebody in a scanner and know what they're thinking." Rather, she sees neural decoding as a way to understand how the hippocampus and other brain regions form and recall a memory.

Maguire could tackle this by varying key aspects of the clips - the location or the identity of the protagonist, for instance - and seeing how those changes affect her team's ability to decode the memory. She is also keen to determine how memory encoding changes over the weeks, months or years after memories are first formed.
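For readers curious what this kind of three-way decoding looks like in code, here is a toy sketch. The "scans" below are simulated noise with a small memory-specific signal, and the nearest-centroid decoder is about the simplest pattern detector there is; none of it comes from Maguire's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 300, 30                          # toy dimensions
memory_patterns = rng.standard_normal((3, n_voxels))  # one per film clip

def simulate_scans(n_per_memory):
    """Fake hippocampal scans: a weak memory-specific pattern plus noise."""
    X, y = [], []
    for memory in range(3):
        X.append(memory_patterns[memory]
                 + rng.standard_normal((n_per_memory, n_voxels)) * 4.0)
        y.extend([memory] * n_per_memory)
    return np.vstack(X), np.array(y)

X_train, y_train = simulate_scans(n_trials)
X_test, y_test = simulate_scans(n_trials)

# Nearest-centroid decoder: label each test scan with the memory whose
# average training pattern it most resembles. The decoder only works for
# memories it was trained on - it cannot read an arbitrary thought.
centroids = np.stack([X_train[y_train == m].mean(axis=0) for m in range(3)])
distances = ((X_test[:, None, :] - centroids) ** 2).sum(axis=-1)
predictions = distances.argmin(axis=1)

print("accuracy:", (predictions == y_test).mean())  # chance is about 0.33
```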

Meanwhile, decoding how people plan for the future is the hot topic for John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin, Germany. In work presented at the conference, he and colleague Ida Momennejad found they could use brain scans to predict intentions in subjects planning and performing simple tasks. What's more, by showing people, including some with eating disorders, images of food, Haynes's team could determine which suffered from anorexia or bulimia via brain activity in one of the brain's "reward centres".

Another focus of neural decoding is language. Marcel Just at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his colleague Tom Mitchell reported last year that they could predict which of two nouns - such as "celery" and "airplane" - a subject is thinking of, at rates well above chance. They are now working on two-word phrases.

Their ultimate goal of turning brain scans into short sentences is distant, perhaps impossible. But as with the other decoding work, it's an idea that's as tantalising as it is creepy.

What do you think? Heh...

Moving Towards An Open Singularity

Recently, I had a dialogue with some colleagues (Tina and RJ) about technology and the future. The focus of our discussion was the Metaverse and the Singularity, although my colleagues were unfamiliar with those exact terms. I believe the dialogue is important enough to share some thoughts about that discussion and the Singularity ahead of the Singularity Summit (which is happening in NYC on October 3-4), and I encourage anyone reading this to attend.

 

Yes, this post is long, but worthwhile, if for no other reason than to share the ideas of The Singularity and the Metaverse, as well as some new thoughts I had on those subjects.

 

So, the conversation with my colleagues went like this (paraphrasing):

 

- "What happens when.. virtual worlds meet geospatial maps of the planet?"

- "When simulations get real and life and business go virtual?"

- "When you use a virtual Earth to navigate the physical Earth, and your avatar becomes your online agent?"

-- "What happens then," I said, "is called the Metaverse."

I recall an observation made by polio vaccine pioneer Dr. Jonas Salk. He said that the most important question we can ask of ourselves is, "are we being good ancestors?"

 

This is a particularly relevant question for those of us that will be attending the Singularity Summit this year. In our work, in our policies, in our choices, in the alternatives that we open and those that we close, are we being good ancestors? Our actions, our lives have consequences, and we must realize that it is incumbent upon us to ask if the consequences we're bringing about are desirable.

 

This question was a big part of the conversation with my colleagues, although it is not an easy question to answer, in part because it can be an uncomfortable examination. But this question becomes especially challenging when we recognize that even small choices matter. It's not just the multi-billion dollar projects and unmistakably world-altering ideas that will change the lives of our descendants. Sometimes, perhaps most of the time, profound consequences can arise from the most prosaic of topics.

 

Which is why I'm going to write a bit here about video games.

 

Well, not just video games, but video games and camera phones (which, as many of my readers know, I happen to know quite a bit about), and Google Earth and the myriad day-to-day technologies that, individually, may attract momentary notice, but in combination, may actually offer us a new way of grappling with the world. And just might, along the way, help to shape the potential for a safe Singularity.

 

In the Metaverse Roadmap Overview the authors sketch out four scenarios of how a combination of forces driving the development of immersive, richly connected information technologies may play out over the next decade. But what has struck me more recently about the roadmap scenarios is that the four worlds could also represent four pathways to a Singularity. Not just in terms of the technologies, but—more importantly—in terms of the social and cultural choices we make while building those technologies.

 

The four metaverse worlds emerged from a relatively commonplace scenario structure. The authors arrayed two spectra of possibility against each other, thereby offering four outcomes. Analysts sometimes refer to this as the "four-box" method, and it's a simple way of forcing yourself to think through different possibilities.

 

This is probably the right spot to insert this notion: scenarios are not predictions, they're provocations. They're ways of describing different future possibilities not to demonstrate what will happen, but to suggest what could happen. They offer a way to test out strategies and assumptions—what would the world look like if we undertook a given action in these four futures?

 

To construct the scenario set the authors selected two themes likely to shape the ways in which the Metaverse unfolds: the spectrum of technologies and applications ranging from augmentation tools that add new capabilities to simulation systems that model new worlds; and the spectrum ranging from intimate technologies, those that focus on identity and the individual, to external technologies, those that provide information about and control over the world around you. These two spectra collide and contrast to produce four scenarios.
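Laid out explicitly, that four-box looks like the trivial sketch below - my own illustration, just to make the structure of the crossing obvious before walking through each quadrant; only the labels come from the Roadmap itself.

```python
# The Roadmap's two spectra, crossed to give the four scenarios discussed
# below. Purely illustrative; the pairings follow the Roadmap text.
TECHNOLOGY = ("Augmentation", "Simulation")   # what the tools do
FOCUS = ("Intimate", "External")              # where they point

SCENARIOS = {
    ("Simulation", "Intimate"): "Virtual Worlds",
    ("Simulation", "External"): "Mirror Worlds",
    ("Augmentation", "External"): "Augmented Reality",
    ("Augmentation", "Intimate"): "Lifelogging",
}

for tech in TECHNOLOGY:
    for focus in FOCUS:
        print(f"{tech:12} x {focus:8} -> {SCENARIOS[(tech, focus)]}")
```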

 

The first, Virtual Worlds, emerges from the combination of Simulation and Intimate technologies. These are immersive representations of an environment, one where the user has a presence within that reality, typically as an avatar of some sort. Today, this means World of Warcraft, Second Life, PlayStation Home and the like.

 

Over the course of the Virtual Worlds scenario, we'd see the continued growth and increased sophistication of immersive networked environments, allowing more and more people to spend substantial amounts of time engaged in meaningful ways online. The ultimate manifestation of this scenario would be a world in which the vast majority of people spend essentially all of their work and play time in virtual settings, whether because the digital worlds are supremely compelling and seductive, or because the real world has suffered widespread environmental and economic collapse.

 

The next scenario, Mirror Worlds, comes from the intersection of Simulation and Externally-focused technologies. These are information-enhanced virtual models or “reflections” of the physical world, usually embracing maps and geo-locative sensors. Google Earth is probably the canonical present-day version of an early Mirror World.

 

While undoubtedly appealing to many individuals, in my view, the real power of the Mirror World setting falls to institutions and organizations seeking a more complete, accurate and nuanced understanding of the world's transactions and underlying systems. The capabilities of Mirror World systems are enhanced by a proliferation of sensors and remote data gathering, giving these distributed information platforms a global context. Geospatial, environmental and economic patterns could be easily represented and analyzed. Undoubtedly, political debates would arise over just who does, and does not, get access to these models and databases.

 

Thirdly, Augmented Reality looks at the collision of Augmentation and External technologies. Such tools would enhance the external physical world for the individual, through the use of location-aware systems and interfaces that process and layer networked information on top of our everyday perceptions.

 

Augmented Reality makes use of the same kinds of distributed information and sensory systems as Mirror Worlds, but does so in a much more granular, personal way. The AR world is much more interested in depth than in flows: the history of a given product on a store shelf; the name of the person waving at you down the street (along with her social network connections and reputation score); the comments and recommendations left by friends at a particular coffee shop, or bar, or bookstore. This world is almost vibrating with information, and is likely to spawn as many efforts to produce viable filtering tools as there are projects to assign and recognize new data sources.

 

Lastly, we have Lifelogging, which brings together Augmentation and Intimate technologies. Here, the systems record and report the states and life histories of objects and users, enhancing observation, recall, and communication. I've sometimes discussed one version of this as the "participatory panopticon."

Here, the observation tools of an Augmented Reality world get turned inward, serving as an adjunct memory. Lifelogging systems are less apt to be attuned to the digital comments left at a bar than to the spoken words of the person at the table next to you. These tools would be used to capture both the practical and the ephemeral, like where you left your car in the lot and what it was that made your spouse laugh so much. Such systems have obvious political implications, such as catching a candidate's gaffe or a bureaucrat's corruption. But they also have significant personal implications: what does the world look like when we know that everything we say or do is likely to be recorded?

 

This underscores a deep concern that crosses the boundaries of all four scenarios: trust.

 

"Trust" encompasses a variety of key issues: protecting privacy and being safely visible; information and transaction security; and, critically, honesty and transparency. It wouldn't take much effort to turn all four of these scenarios into dystopias. The common element of the malevolent versions of these societies would be easy to spot: widely divergent levels of control over and access to information, especially personal information. The ultimate importance of these scenarios isn't just the technologies they describe, but the societies that they create.

 

So what do these tell us about a Singularity?

 

Across the four Metaverse scenarios, we can see a variety of ways in which the addition of an intelligent system would enhance the audience's experience. Dumb non-player characters and repetitive bots in virtual worlds, for example, might be replaced by virtual people essentially indistinguishable from characters controlled by human users. Efforts to make sense of the massive flows of information in a Mirror World setting would be enormously enhanced with the assistance of sophisticated machine analysts. Augmented Reality environments would thrive with truly intelligent agent systems, knowing what to filter and what to emphasize. In a Lifelogging world, an intelligent companion in one's mobile or wearable system would be needed in order to figure out how to index and catalog memories in a personally meaningful way; it's likely that such a system would need to learn how to emulate your own thought processes, becoming a virtual shadow.

 

None of these systems would truly need to be self-aware, self-modifying intelligent machines—but in time, each could lead to that point.

 

But if the potential benefits of these scenario worlds would be enhanced with intelligent information technology, so too would the dangers. Unfortunately, avoiding dystopian outcomes is a challenge that may be trickier than some expect, and is one with direct implications for all of our hopes and efforts for bringing about a future that would benefit human civilization, not end it.

 

It starts with a basic premise: software is a human construction. That's obvious when considering code written by hand over empty pizza boxes and stacks of paper coffee cups. But even the closest process we have to entirely computer-crafted software—emergent, evolutionary code—still betrays the presence of a human maker: evolutionary algorithms may have produced the final software, and may even have done so in ways that remain opaque to human observers, but the goals of the evolutionary process, and the selection mechanism that drives the digital evolution towards these goals, are quite clearly of human origin.
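A toy evolutionary algorithm makes the point concrete. The code below "evolves" a string, but notice where the humanity sits: the target and the fitness function - the definition of what counts as better - are written by the programmer. This is my own minimal sketch, not anyone's production code.

```python
import random

TARGET = "HELLO WORLD"                         # the human-chosen goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """The selection mechanism - entirely a designer's decision."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Random starting population: the "emergent" part.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]                # survival of the "fittest"
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]

print(generation, population[0])
```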

 

To put it bluntly, software, like all technologies, is inherently political. Even the most disruptive technologies, the innovations and ideas that can utterly transform society, carry with them the legacies of past decisions, the culture and history of the societies that spawned them. Code inevitably reflects the choices, biases and desires of its creators.

 

This will often be unambiguous and visible, as with digital rights management. It can also be subtle, as with operating system routines written to benefit one application over its competitors (I know some of you reading this are old enough to remember "DOS isn't done 'til Lotus won't run"). Sometimes, code may be written to reflect an even more dubious bias, as with the allegations of voting machines intentionally designed to make election-hacking easy for those in the know. Much of the time, however, the inclusion of software elements reflecting the choices, biases and desires of its creators will be utterly unconscious, the result of what the coders deem obviously right.

 

We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots—or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.

 

So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.

 

This isn't intrinsically bad; that emerging digital minds will reflect the interests of their human creators is a lever that gives us a real chance to make sure that a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know that there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.

 

Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society—or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.

 

For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.

 

We need an Open Singularity.

 

At minimum, this means expanding the conversation about the shape that a singularity might take beyond a self-selected group of technologists and philosophers. An "open access" singularity, if you will. Ray Kurzweil's books and lectures are a solid first step, but the public discourse around the singularity concept needs to reflect a wider diversity of opinion and perspective.

 

If the singularity is as likely and as globally, utterly transformative as many here believe, it would be profoundly unethical to make it happen without including all of the stakeholders in the process—and we are all stakeholders in the future.

 

World-altering decisions made without taking our vast array of interests into account are intrinsically flawed, likely fatally so. They would become catalysts for conflicts, potentially even the triggers for some of the "existential threats" that may arise from transformative technologies. Moreover, working to bring in diverse interests has to happen as early in the process as possible. Balancing and managing a global diversity of needs won't be easy, but it will be impossible if democratization is thought of as a bolt-on addition at the end.

 

Democracy is a messy process. It requires give-and-take, and an acknowledgement that efficiency is less important than participation.

 

We may not have an answer now as to how to do this, how to democratize the singularity. If this is the case—and I suspect that it is—then we have added work ahead of us. The people who have embraced the possibility of a singularity should be working at least as hard on making possible a global inclusion of interests as they do on making the singularity itself happen. All of the talk of "friendly AI" and "positive singularities" will be meaningless if the only people who get to decide what that means are the few hundred who read and understand this blog posting.

 

My preferred pathway would be to "open source" the singularity, to bring in the eyes and minds of millions of collaborators to examine and co-create the relevant software and models, seeking out flaws and making the code more broadly reflective of a variety of interests. Such a proposal is not without risks. Accidents will happen, and there will always be those few who wish to do others harm. But the same is true in a world of proprietary interests and abundant secrecy, and those are precisely the conditions that can make effective responses to looming disasters difficult. With an open approach, you have millions of people who know how dangerous technologies work, know the risks that they hold, and are committed to helping to detect, defend against and respond to crises. That these are, in Bill Joy's term, "knowledge-enabled" dangers means that knowledge also enables our defense; knowledge, in turn, grows faster as it becomes more widespread. This is not simply speculation; we've seen time and again, from digital security to the global response to influenza, that open access to information-laden risks ultimately makes them more manageable.

 

The Metaverse Roadmap offers a glimpse of what the next decade might hold, but does so recognizing that the futures it describes are not end-points, but transitions. The choices we make today about commonplace tools and everyday technologies will shape what's possible, and what's imaginable, with the generations of technologies to come. If the singularity is in fact near, the fundamental tools of information, collaboration and access will be our best hope for making it happen in a way that spreads its benefits and minimizes its dangers—in short, making it happen in a way that lets us be good ancestors.

 

If we're willing to try, we can create a future, a singularity, that's wise, democratic and sustainable—a future that's open. Open as in transparent. Open as in participatory. Open as in available to all. Open as in filled with an abundance of options.

 

The shape of tomorrow remains in our grasp, and will be determined by the choices we make today. Choose wisely.

The Robot With a Biological Brain

Meet Gordon, probably the world's first robot controlled exclusively by living brain tissue.

Stitched together from cultured rat neurons, Gordon's primitive grey matter was designed at the University of Reading by scientists who unveiled the neuron-powered machine on Wednesday. Their groundbreaking experiments explore the vanishing boundary between natural and artificial intelligence, and could shed light on the fundamental building blocks of memory and learning.

"The purpose is to figure out how memories are actually stored in a biological brain," said Kevin Warwick, a professor at the University of Reading and one of the robot's principle architects.

Observing how the nerve cells cohere into a network as they fire off electrical impulses, he said, may also help scientists combat neurodegenerative diseases that attack the brain such as Alzheimer's and Parkinson's. "If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spinoffs," he said.

Gordon has a brain composed of 50,000 to 100,000 active neurons. Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (roughly three-by-three inch) array of 60 electrodes. This "multi-electrode array" (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robot, and receiving impulses delivered by sensors reacting to the environment. Because the brain is living tissue, it must be housed in a special temperature-controlled unit -- it communicates with its "body" via a Bluetooth radio link.
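To picture the closed loop, here is a rough Python sketch of the sense-stimulate-respond cycle described above. The class and method names, the electrode groupings and the update rate are all hypothetical stand-ins for illustration; this is not the Reading team's actual software.

```python
import time

class MultiElectrodeArray:
    """Stand-in for the 60-electrode dish housing the cultured neurons."""
    def stimulate(self, electrode_ids, amplitude_uv):
        pass                     # deliver a pulse to the chosen electrodes

    def read_spikes(self):
        return {}                # electrode id -> recent firing rate


class RobotBody:
    """Stand-in for the wheeled body linked to the dish over Bluetooth."""
    def read_sonar(self):
        return 1.0               # distance to the nearest obstacle, metres

    def set_wheel_speeds(self, left, right):
        pass


def control_loop(mea, body):
    while True:
        # Sensor -> brain: an approaching wall becomes stimulation on a
        # fixed group of "input" electrodes.
        if body.read_sonar() < 0.3:
            mea.stimulate(electrode_ids=range(8), amplitude_uv=50)

        # Brain -> wheels: firing on designated "output" electrodes is
        # summed into left and right motor commands.
        spikes = mea.read_spikes()
        left = sum(rate for eid, rate in spikes.items() if eid < 30)
        right = sum(rate for eid, rate in spikes.items() if eid >= 30)
        body.set_wheel_speeds(left, right)

        time.sleep(0.05)         # modest update rate over the radio link
```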

The robot has no additional control from a human or computer.

From the very start, the neurons get busy. "Within about 24 hours, they start sending out feelers to each other and making connections," said Warwick. "Within a week we get some spontaneous firings and brain-like activity" similar to what happens in a normal rat -- or human -- brain, he added. But without external stimulation, the brain will wither and die within a couple of months.

"Now we are looking at how best to teach it to behave in certain ways," explained Warwick. To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot's sensors. As it confronts similar situations, it learns by habit. To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities -- several MEA "brains" that the scientists can dock into the robot. "It's quite funny -- you get differences between the brains," said Warwick. "This one is a bit boisterous and active, while we know another is not going to do what we want it to."

Mainly for ethical reasons, it is unlikely that researchers at Reading or the handful of laboratories around the world exploring the same terrain will be using human neurons any time soon in the same kind of experiments. But rat brain cells are not a bad stand-in: much of the difference between rodent and human intelligence, speculates Warwick, could be attributed to quantity, not quality.

Rat brains are composed of about one million neurons, the specialised cells that relay information across the brain via chemicals called neurotransmitters. Humans have 100 billion.

"This is a simplified version of what goes on in the human brain where we can look -- and control -- the basic features in the way that we want. In a human brain, you can't really do that," he says.

A Functional Brain Model

Those of you who have heard my lectures know I often talk about "smarter-than-human intelligence" and, with its advent, the impact on the human race as we move closer to the idea of "the Singularity."

In order to achieve the Singularity, we first must fully map and understand the human brain, and one such ambitious project - to create an accurate computer model of the brain - has reached an impressive milestone. Scientists in Switzerland working with IBM researchers have shown that their computer simulation of the neocortical column, arguably the most complex part of a mammal's brain, appears to behave like its biological counterpart. By demonstrating that their simulation is realistic, the researchers say, these results suggest that an entire mammal brain could be completely modeled within three years, and a human brain within the next decade. Right on target for my futurist predictions to date.


"What we're doing is reverse-engineering the brain," says Henry Markram, codirector of the Brain Mind Institute at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, bluebrain_x220.jpgwho led the work, called the Blue Brain project, which began in 2005. By mimicking the behavior of the brain down to the individual neuron, the researchers aim to create a modeling tool that can be used by neuroscientists to run experiments, test hypotheses, and analyze the effects of drugs more efficiently than they could using real brain tissue.

The model of part of the brain was completed last year, says Markram. But now, after extensive testing comparing its behavior with results from biological experiments, he is satisfied that the simulation is accurate enough that the researchers can proceed with the rest of the brain.

"It's amazing work," says Thomas Serre, a computational-neuroscience researcher at MIT. "This is likely to have a tremendous impact on neuroscience."

The project began with the initial goal of modeling the 10,000 neurons and 30 million synaptic connections that make up a rat's neocortical column, the main building block of a mammal's cortex. The neocortical column was chosen as a starting point because it is widely recognized as being particularly complex, with a heterogeneous structure consisting of many different types of synapse and ion channels. "There's no point in dreaming about modeling the brain if you can't model a small part of it," says Markram.

The model itself is based on 15 years' worth of experimental data on neuronal morphology, gene expression, ion channels, synaptic connectivity, and electrophysiological recordings of the neocortical columns of rats. Software tools were then developed to process this information and automatically reconstruct physiologically accurate 3-D models of neurons and their interconnections.

The neuronal circuits were tested by simulating specific input stimuli and seeing how the circuits behaved, compared with those in biological experiments. Where gaps in knowledge appeared about how certain parts of the model were supposed to behave, the scientists went back to the lab and performed experiments to identify the kinds of behavior that needed to be reproduced. In fact, about a third of the team of 35 researchers was devoted to carrying out such experiments, says Markram.
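The workflow described here is essentially a simulate-compare-refine loop. Below is a deliberately simple sketch of that loop; every function is a placeholder of my own invention, and the real project runs detailed neuron models on IBM supercomputers rather than anything like this.

```python
def run_simulation(params, stimulus):
    """Placeholder column model: returns a predicted response."""
    return params["gain"] * stimulus

def lab_measurement(stimulus):
    """Placeholder for an electrophysiological recording from real tissue."""
    return 2.0 * stimulus

def refine(params, error):
    """Nudge model parameters to shrink the mismatch with biology."""
    params["gain"] -= 0.1 * error
    return params

params = {"gain": 0.5}
stimuli = [1.0, 2.0, 3.0, 4.0]

for round_ in range(20):
    errors = []
    for s in stimuli:
        predicted = run_simulation(params, s)
        observed = lab_measurement(s)
        errors.append(predicted - observed)
    # A persistent mismatch is what sends the team back to the bench for
    # new experiments; here it simply drives a parameter tweak.
    params = refine(params, sum(errors) / len(errors))

print(round(params["gain"], 2))   # converges toward the "biological" value
```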

Through an iterative process of testing, the simulation has gradually been refined to the point where Markram is confident that it behaves like a real neocortical column.

However, none of these results have so far been published in the peer-reviewed literature, says Christof Koch, a professor of biology and engineering at Caltech. And this is by no means the first computer model of the brain, he points out. "This is an evolutionary process rather than a revolutionary one," he says. As long ago as 1989, Koch created a 10,000-neuron simulation, albeit in a far simpler model.

Furthermore, Koch is skeptical about how quickly the brain model can progress. Any claims that the human brain can be modeled within 10 years are so "ridiculous" that they are not worth discussing, he says.

Rat brains have about 200 million neurons, while human brains have in the region of 50 to 100 billion neurons - roughly 250 to 500 times as many. "That is a big scale-up," admits Markram.

But he is confident that his model is robust enough to be expanded indefinitely. What's more, he believes that the level of detail of the model can also be taken further. "It's at quite a high resolution," he says. "It's still at a cellular level, but we want to look at the molecular level." Doing so would enable simulation-based drug testing to be carried out by showing how specific molecules affect proteins, receptors, and enzymes.

"I wouldn't be surprised if they could do it," says Serre. "However, it's not clear what they could get out of it," he says. If you want this model to be useful, you have to be able to understand how the behavior relates to specific brain functions. So far, it is not clear that the Blue Brain project has done this, he says.