Andreas Kluth

What is Consciousness? Scientists Compete to Find Out

In the world of science, there’s so much to be excited about right now. But I’m especially intrigued by a set of experiments that may, eventually, raise human consciousness.

I mean that literally: The goal of this research is to understand what exactly “consciousness” is and how it works. Which animals have it? Why do people sometimes lose it? Could artificial intelligence ever make our machines self-aware?

But I’m also talking about raising consciousness in a meta-sense. The way these studies are being conducted could point us to a better approach for doing research in science and other fields. And, in this seemingly post-truth world, it even hints at a way to settle some of our other conflicts with intellectual integrity.

The method is called adversarial collaboration. In science as in life, people usually have lots of theories about stuff. Logically, those can’t all be true at the same time. And yet many theories live on indefinitely in the safety of their intellectual silos. So the solution is to invite proponents of conflicting narratives to identify some point of contradiction that can be tested. That would let us falsify the wrong theories, which is a good definition of progress.

This notion isn’t totally new. In 1919, Arthur Eddington, a British astronomer, used a solar eclipse to test two conflicting theories — Isaac Newton’s account of gravity and Albert Einstein’s general relativity. (Einstein’s won.) But there’s been no large-scale research of this kind with the active participation of dueling scientists.

The Templeton World Charity Foundation wants to change that. The nonprofit funds research on some of humanity’s biggest questions, especially those at the intersection of science and spirituality. That includes consciousness.

Humans have always been fascinated and perplexed by the concept. Most famously, the French philosopher Rene Descartes expressed it as “cogito, ergo sum” — I think, therefore I am. While I appreciate his epistemological subtlety, I’ve always felt that was a bit like saying, “I know it when I see it.”

Why do humans usually have consciousness? What happens when we lose it, as in a coma or dreamless sleep, during seizures or anesthesia? Why does injury to the cerebellum, which contains 69 billion of the 86 billion neurons in our brains, not cause a loss of consciousness, whereas damage to other regions does?

These questions also have moral significance. Do newborn babies have consciousness? What about preterm ones? Fetuses? Apes and other primates almost certainly do, but what about octopuses? Bees? Fruit flies? More disturbingly, will our machines and algorithms — which already defeat humans at chess and may soon be better at driving our cars — one day become conscious?

Dawid Potgieter, a South African who runs the Templeton project, told me that his team identified about a dozen plausible theories of consciousness. Then they paired them in such a way that experiments could disprove one theory in each pair. Eventually, he wants to run about five or six face-offs.

The first one is now underway, with six labs, spread across the US, Europe and China, scanning and wiring up participants — and all duplicating each other’s work to eliminate biases. These experiments pit the so-called “global workspace theory” (GWT), defended by Stanislas Dehaene at the Collège de France in Paris, against “integrated information theory” (IIT), as championed by Giulio Tononi at the University of Wisconsin in Madison.

Both theories are so complex that in trying to understand them I nearly lost consciousness. So I asked Lucia Melloni at the Max Planck Institute in Frankfurt to explain them to me with some mental shortcuts. She’s the (neutral and independent) organizer of this adversarial collaboration.

To grasp Dehaene’s GWT, she told me, picture your nervous system as an enormous theater. At the outset, all the neurons sit in the dark, whispering and nudging each other — that is, firing and exchanging information — but not yet conscious of anything. But then somebody, the “workspace,” gets up on the stage. All the lights now shine on this entity, and it has the attention of all the neurons in the audience. This workspace broadcasts one message to the exclusion of the other chatter. What we call consciousness is simply what it feels like to perceive this broadcast.

In IIT, by contrast, consciousness is not a message but a causal structure, and a darned complex one at that. In the metaphor Tononi chose for me, it rests on a grid of neurons that, like a two-dimensional map of Manhattan, supports the three-dimensional city rising up from it. But all the neurons in the structure must be integrated to give me the experience of perceiving this city — where it starts and ends and so forth — so they must all be able to cause effects in one another.

The theories thus start with completely different frameworks. But, as Dehaene and Tononi agreed, they make certain predictions that conflict with each other. One is that in GWT the prefrontal cortex should show the most activity, whereas in IIT it’s the back of the brain that should light up in the same experiments. So one or the other must be wrong.

It takes a lot of courage and integrity to enter such a competition. Nobody enjoys finding out that their research career has been in vain. The only thing worse, however, is to stay wrong even longer. Adversarial collaboration is therefore a great way of focusing the mind.

It’s also — how else to put it — beautiful. Melloni described to me how inspiring it was to see both teams trying to fully understand the opposing theory to identify points of overlap. “You have to be totally honest about what the other says,” she told me. “You can’t just stay in your own bubble. You must listen really carefully.” And you have to want, and eventually submit to, the truth. Seems like an exercise worth doing in any sphere of life — even in politics.

Bloomberg