FAQ
1. What is the goal of the EPFL Blue Brain Project?
The goal is to pioneer simulation neuroscience by developing all the biological algorithms, scientific processes and software needed to digitally reconstruct and simulate the brain. The first use case is the rodent brain, and more specifically the mouse brain.
Simulation neuroscience is an approach to mapping the brain. It is not feasible to map every detail of the brain experimentally because there are too many parts (over 20,000 genes, more than 100,000 different types of proteins, more than a trillion organic molecules in a single cell, nearly 100 billion neurons, up to 1,000 trillion synapses and over 800 different brain regions in the human brain), too many complex relationships between all these parts, and too many variations of the brain across individuals, genders, ages, and species. Working out how to fill the gaps in our knowledge in a general way, from sparse data and fundamental principles, is the central challenge; solving it would allow one to build a detailed map of any brain in the distant future.
By around 2024, the Blue Brain Project aims to have reached a cellular-level model of an entire mouse brain. It will be a first draft at the cellular level of detail: a digital model with all the neurons (around 100 million), most of the types of neurons (around 1,000 different types) in full morphological detail (with all their tree-like branches), and most of the synapses that they form (around a trillion).
2. What is the proof of concept?
In 2015, Blue Brain reached an important milestone with the publication of a first draft of the digital reconstruction of neocortical microcircuitry (Markram et al., 2015). The study confirmed the feasibility of digitally reconstructing a part of the neocortical tissue in great detail from sparse experimental data, using fundamental principles to fill vast gaps in our knowledge. We showed that if you copy biology closely enough, the digital brain tissue naturally behaves like the real brain tissue. This means one can now study this digital tissue almost as one would study real brain tissue. In fact, we could reproduce many experimental studies and go beyond them to perform in-silico experiments that are not even technically possible today in the laboratory. We are now scaling this core strategy up to reconstruct whole brain regions and then the whole mouse brain.
3. What are Blue Brain’s main achievements?
Blue Brain follows a rolling four-year roadmap with specific scientific milestones to achieve during this period as we work towards our ultimate goal of digitally reconstructing the entire mouse brain.
The scientific milestones, which are verified by independent scientists, guide Blue Brain in its day-to-day science and, when achieved and shared, help other brain initiatives and the wider scientific community achieve their goals.
Furthermore, objective metrics of impact include over 200 published papers and preprints to date. These papers are frequently cited and have thousands of views. The 2017 paper, ‘Cliques of Neurons bound into Cavities Provide a Missing Link between Structure and Function’, for example, has been viewed over 300,000 times, while the 2021 paper, ‘A Machine-Generated View of the Role of Blood Glucose Levels in the Severity of COVID-19’, has over 100,000 views.
The Project has released over 31,000 model neurons, over a million data traces (via the Channelpedia and the NMC Portal) and millions of predictions of missing data in its web portals. These portals have been referenced in published papers, and to date the NMC Portal has been accessed more than 130,000 times by 26,000+ users. We have released over 120 open-source repositories on GitHub as well as 9 major libraries. We have also developed a MOOC series, which to date has seen over 18,000 student registrations.
At the start of 2020, two of our collaborators used the digital reconstructions we have developed to make new scientific discoveries. Working with scientists at the Hebrew University of Jerusalem, we formulated a unique analytical approach to the challenge of reducing the complexity of neuron models while retaining their key input/output functions and their computational capabilities. ‘Neuron_Reduce’ is a new computational tool that gives the scientific community a straightforward way to simplify complex neuron models of any cell type while faithfully preserving their input-output properties and significantly reducing simulation run-time. Meanwhile, an international research collaboration with the Florey Institute uncovered evidence that a common neurotransmitter can selectively regulate the excitability of neurons.
2020 also saw Blue Brain begin its contribution to the fight against COVID-19 through translational knowledge and expertise and the development and open sourcing of new software and tools.
In 2021, Blue Brain open sourced the Brain Molecular Atlas and digitally reconstructed the architecture of the Neuro-Glia-Vascular system.
2022 brought us closer to being able to build digital twins of brains, when we found a way to use only mathematics to automatically draw neurons in 3D.
Click here to read more about the achievements and the BBP timeline.
Click here to read more about Blue Brain’s Milestones.
4. Why does one need simulation neuroscience to understand the brain?
The central pillar of neuroscience is an approach called experimental neuroscience, which allows us to experimentally measure all aspects of the brain’s structure and function, in health and in disease. The second pillar is theoretical neuroscience, which allows us to translate our knowledge of the brain into concepts, describe these concepts mathematically, and build abstract models to test them. Many other sciences (physics, astronomy, chemistry, etc.) have matured further to gain a third pillar – simulation.
Simulating a complex system implies first copying that system as accurately as possible. For example, the space shuttle Columbia was digitally reconstructed, with every panel, screw and wire exactly as it was in real life, before that model of the shuttle could be used to work out how the shuttle crashed. Of course, the big question for neuroscience, which everyone struggles with and debates over, is whether one can build such accurate copies of the brain. There are just too many pieces and connections, and we will never be able to measure them all – so a very different approach is needed. Finding that approach is the most important achievement that the Blue Brain Project can make.
Those are the first baby steps of simulation neuroscience – an approach to copying the brain when one has only a few pieces of the puzzle. If one can solve this problem in a principled and generalizable way, then one can build the brain at any age, of any gender or species, and with any disease. As with the space shuttle, one can then simulate all these parts to get at mechanisms in the brain that would be very difficult to reach with an experimental or theoretical approach.
Neuroscience needs this third pillar to complement the other two pillars to understand the structure of the brain – how it works, and what can go wrong.
5. How does building the rat and mouse brain help understand the human brain?
We need to solve the general problem of how to digitally reconstruct any brain from extremely sparse data (like the challenge of finishing a trillion-piece puzzle when you only have a few pieces to start with). Once this is solved, one can apply the process to any brain (most of the components and interactions are inferred, not experimentally measured).
At Blue Brain, we started with the rodent brain because there is much more data to begin with (more pieces to start with), which makes the problem a little easier. Rodents are mammals, so we can also tackle all the mammal-specific challenges, bringing us a step closer to the adjustments needed to simulate the human brain. There are simpler and more “primitive” brains, of course, but spending all this time and money on working out how to reconstruct those brains may not translate so easily to mammalian brains, and even less to the human brain.
We also needed to start with the rodent brain because we have to wait for big enough computers to reconstruct and simulate the human brain. Years ago, following the roadmaps of the computer industry, we thought they would be ready sooner. However, it has turned out to be increasingly difficult to improve the performance and memory of supercomputers because of the physical limitations computer development is facing. Despite continual progress, as of today we are still far from computers powerful enough for detailed brain simulation.
6. When will the Blue Brain Project simulate the human brain?
It is not the goal of the Blue Brain Project to simulate the human brain.
The Blue Brain is currently funded to focus on building the foundations for a future project to simulate the human brain, by first simulating the mouse brain. One needs massive funding and a massive international collaboration to start such a project; one just has to look at what it took to carry out some previous large projects. It took 13 years and around 250 scientists for the Human Genome Project to create the first draft map of the human genome, at a cost of $3 billion. The Large Hadron Collider cost around $4.75 billion and took 10 years to build. It took 11 years of massive effort, with around 400,000 people at its peak, to get to the moon, at a cost of $153 billion in today’s terms. Estimates for getting humans to Mars in the next 30 years range up to as much as $1 trillion.
From start to finish one could produce a rough first draft simulation of the human brain within about 10 years. It would then take many more years to get every detail (e.g. every molecule) into these models.
In 2013, Blue Brain Founder and Director, Prof. Henry Markram, also conceived and headed up the European Human Brain Project (HBP), a Europe-wide effort to advance simulation neuroscience, which initially focused on creating a simulation of the human brain. Markram led the HBP’s executive committee and headed a consortium of over one hundred research groups that secured one of the two first flagship grants of the European Commission (EC), amounting to just short of 0.5 billion EUR in direct funding from the EC over its 10-year period. In 2016, the HBP was restructured, shifting the focus of the project towards an infrastructure for wider neuroscience research.
A project or initiative to simulate the human brain has therefore not yet begun.
7. Would a digital model of the human brain be conscious?
It is not the goal of the Blue Brain Project to simulate consciousness and never has been.
We do not know what is needed for consciousness, so no one can really answer the question, let alone whether a digital model can become conscious. One can only speculate. If it turns out that every atomic interaction in the brain is important for consciousness, then one will need to simulate all these interactions to simulate consciousness, which is unlikely to become possible for decades, if not centuries. If, however, one only needs to simulate the basic interactions between the neurons, then one could begin seeing something that would look like consciousness. But simulating consciousness is not the same as consciousness itself, so it would mostly be useful for studying the mechanisms that underlie the emergence of consciousness. The media has often focused on this aspect, but we have never made simulating consciousness a goal of the Blue Brain Project (nor even of the Human Brain Project).
8. How is the Blue Brain Project funded and for how long is the funding?
The Blue Brain Project (BBP) is supported and funded by the Swiss Government, with the funding of around $22 million per year distributed by the governing board of the two Swiss Federal Institutes (ETHZ and EPFL).
Additional funding comes from scientific grants such as the European Commission’s Human Brain Project (HBP), in which the Blue Brain Project participates by making its informatics tools accessible for other modeling efforts. This funding from the HBP grant accounts for about 5% of the BBP’s budget.
BBP’s current funding is for 10 years from 2013 to 2024 to reach the milestone of a first draft simulation of the mouse brain.
9. How accurate are Blue Brain’s models and simulations?
“All models are wrong, but some are useful”. Don’t take that statement as a reason to discard modeling. Without models, we cannot make any meaningful predictions. You could not walk down a flight of stairs if your brain did not constantly predict the height of the steps, your momentum, your position in space, your muscle control, and so on. Your brain has a model of how to walk stairs, which gets you up and down safely most of the time, so this model is pretty useful. But once in a while you hit a step that is a little higher than usual, you weren’t paying attention, you didn’t update your model, and you stumble…
Now, when we want to explicitly describe the actions in the world around us, e.g. an apple falling from a tree, we can use mathematics and Newtonian physics. We can write down the equations, share them, solve them accurately and use them to make predictions. This is very useful, but Einstein and others showed us that the Newtonian description of the movements of an apple does not hold at very high speeds (approaching the speed of light) or when things get very small. Does that make Newtonian physics wrong?
When we model more complex systems, e.g. when we do a weather forecast or build climate models, the same principles apply. We use mathematics to describe the physical properties and interactions to some degree of detail and some degree of completeness. In a good number of cases, we can quite accurately predict the next day’s weather. In this setting, what matters is to feed the models with data within biological ranges of precision and to establish a process of validation, showing under which conditions your model makes accurate predictions.
When we model a piece of a brain, it is very much the same. First, our models are built by gathering data from state-of-the-art cell morphology and physiology research; we call this process data-driven model building. Second, these biologically realistic simulations can, in turn, be validated by comparing modeling results with experimental data on the behavior of real brain networks. In these two ways, our models are anchored in experimental data and accurately reflect those measurements at multiple scales.
The methods and software that we have built help us ‘reverse engineer’ the brain. We build models, test them and then rebuild them again. This cycle is repeated until the model becomes consistent with all the data and knowledge we have about the brain. As we turn this wheel, the models get more accurate. We also find out what mistakes there are in our data and knowledge, and in this way, we can make many predictions and make new advances. Therefore, each time we turn the wheel of building, testing and building again, the models become more and more precise copies of regions of the mouse brain and our predictions become more and more accurate.
Read more about Blue Brain’s data-driven research approach – https://www.epfl.ch/research/domains/bluebrain/research-2/
10. What is the difference between the Blue Brain Project and the Human Brain Project?
The EPFL Blue Brain Project (BBP) is a Swiss-funded large-scale science initiative implemented in the form of a research center at EPFL. The Human Brain Project is a collaboration of more than 100 groups at different universities, EPFL Blue Brain among them, funded by a research grant from the European Commission.
Historically, Prof. Markram and the Swiss Blue Brain team, together with other European groups, developed a proposal for a major European community project, the Human Brain Project (HBP), in 2010, and won the grant in 2013.
The HBP project proposal was to build reconstructions and simulations of the human brain, to use them for clinical research, and ultimately to emulate the brain’s computational capabilities in new computing technologies.
After multiple rounds of peer review, culminating in a review and decision by a panel of more than a dozen experts, including Nobel Prize Laureates, the HBP was selected as one of the European Commission’s first flagship projects in January 2013. Operations began in October 2013, and the BBP led the project for the first two years, reaching all the scientific milestones that were set. In 2016, the HBP was restructured, with resources originally earmarked for brain simulation being spread among neuroscience research groups more widely.
Since then, scientists and engineers of the Blue Brain team have contributed to some aspects of the Human Brain Project, notably to the HBP’s Neuroinformatics, Neurorobotics and High Performance Computing Platforms, and to EBRAINS.
11. Why the ‘Blue’ Brain Project?
The Blue Brain Project began in July 2005 as a Swiss research initiative at the intersection of neuroscience and Big Data.
Henry Markram at École polytechnique fédérale de Lausanne (EPFL) and International Business Machines (IBM) collaborated with a vision to get supercomputers ready for brain simulations.
IBM’s BlueGene/L supercomputer was one of the first machines with sufficient computing power to handle Blue Brain’s initial goal: to model the neural microcircuitry of a rodent neocortical column. Accordingly, EPFL procured such a supercomputer for the Blue Brain Project in 2005. As the Project’s models grew in complexity, simulating larger networks of neurons in greater biological detail, the supercomputer was upgraded accordingly, through to a BlueGene/Q (also known as ‘Blue Brain 4’).
Blue Brain currently manages its supercomputing activities with a Hewlett Packard Enterprise (HPE) next-generation supercomputer housed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. The new supercomputer has been named ‘Blue Brain 5’.
Blue Brain 5 is customized for us to tackle the many different mathematical operations we have to run when we build, simulate, visualize and analyze digital copies of brain tissue, and eventually whole brains.
Blue Brain 5 is a set of tightly integrated high-performance resources. It comprises an HPE SGI 8600 cluster system with 327 nodes, 0.8 petaflops, 64 terabytes (TB) of aggregated bandwidth, a 100 Gbps EDR InfiniBand interconnect, 2.3 TB of high-speed DRAM, 96 TB of DDR4 DRAM, and 160 TB of SSD storage. It has recently undergone its phase 2 upgrade, adding 880 nodes to the system.
12. How exactly are HPC systems like Blue Brain 5 leveraged by the Project?
Modeling an individual neuron in Blue Brain today requires solving around 20,000 ordinary differential equations. When modeling entire brain regions, we need to solve roughly 100 billion equations in each time step of the simulation (one fortieth of a millisecond).
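To make the scale concrete, those figures imply on the order of five million detailed neurons being advanced together (roughly 100 billion equations divided by about 20,000 equations per neuron). The Python sketch below is purely illustrative of what “solving equations in each time step” means: it advances a single leaky membrane compartment with a fixed 0.025 ms step (one fortieth of a millisecond). It is not Blue Brain’s simulation code, which builds on dedicated simulators such as NEURON and CoreNEURON, and all parameter values here are hypothetical.

DT_MS = 0.025    # integration time step: one fortieth of a millisecond
C_M = 1.0        # membrane capacitance (illustrative units)
G_LEAK = 0.1     # leak conductance (illustrative)
E_LEAK = -70.0   # leak reversal potential in mV (illustrative)

def step(v_mv, i_inject):
    """Advance one leaky compartment by one time step (forward Euler)."""
    dv_dt = (-G_LEAK * (v_mv - E_LEAK) + i_inject) / C_M
    return v_mv + DT_MS * dv_dt

if __name__ == "__main__":
    v = E_LEAK
    for _ in range(4000):            # 4,000 steps of 0.025 ms = 100 ms of simulated time
        v = step(v, i_inject=1.0)    # constant injected current
    print(f"membrane potential after 100 ms: {v:.2f} mV")

A morphologically detailed neuron model replaces this single equation with thousands of coupled equations, one set per compartment and ion channel, which is where the figure of around 20,000 equations per neuron comes from.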
Simulation is not the only supercomputing task we have. Digitally building the brain is also a major challenge for supercomputers. For example, in a piece of brain the size of a pinhead, there are roughly a billion points where the branches of neurons touch each other. We need to find those touches and then apply biological rules to each touch point to decide whether a synapse can form there, based on principles of how synapses form. Most of the touch points do not form synapses, so one ends up with around 40 million touch points where synapses can form. Performing this calculation took us two weeks on our first supercomputer in 2005; today it takes seconds, and we can scale it up to trillions and trillions of touch points on our latest supercomputer.
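As a rough illustration of the two stages just described – finding candidate touches and then applying rules to decide which may become synapses – here is a minimal Python sketch. The geometry, the distance threshold, the survival fraction and the function names are all hypothetical stand-ins; Blue Brain’s actual touch detection operates on full 3D morphologies in parallel on a supercomputer rather than on simple lists of points.

import itertools
import math
import random

TOUCH_DISTANCE_UM = 1.0    # hypothetical distance below which two branch points "touch"
SURVIVAL_FRACTION = 0.04   # hypothetical fraction of touches allowed to become synapses

def find_touches(points_neuron_a, points_neuron_b):
    """Return all pairs of 3D points, one from each neuron, closer than the touch distance."""
    touches = []
    for p, q in itertools.product(points_neuron_a, points_neuron_b):
        if math.dist(p, q) < TOUCH_DISTANCE_UM:
            touches.append((p, q))
    return touches

def prune_touches(touches):
    """Apply a simple stochastic rule standing in for the biological filtering rules."""
    return [t for t in touches if random.random() < SURVIVAL_FRACTION]

A brute-force all-pairs comparison like this one would never scale to billions of touch points; spatial indexing and massive parallelism are what allow the real calculation to run in seconds.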
Visualization and analysis also need different kinds of supercomputing architectures. With Blue Brain 5, HPE has put together a heterogeneous compute architecture that closely matches all the different workloads we have.
13. How does Blue Brain use Deep and Machine Learning?
Machine learning and deep learning are specific approaches applied in science and engineering to accelerate a diversity of workflows. Accordingly, the Blue Brain Project uses machine learning in a number of its activities, e.g. text and data mining, the registration of microscope images against a reference atlas, or the classification of neurons, to name a few. However, the biophysical models of brain tissue pursued in the Blue Brain Project are not based on the type of deep networks with sigmoidal activation functions used in machine learning; they are far more complex and biologically accurate.