We know that they have the same sensors – called nociceptors – that cause us to flinch or cry when we are hurt. And they certainly behave like they are sensing something unpleasant. When a chef places them in boiling water, for example, they twitch their tails as if they are in agony.
But are they actually "aware" of the sensation?
When you or I perform an action, our minds are filled with a complex conscious experience. We cannot simply assume that this is also true for other animals, however – especially those with brains so different from our own. It is perfectly feasible – some scientists would even argue probable – that a creature like a lobster lacks any kind of internal experience, in contrast to the rich world inside our heads.
"With a dog, who behaves pretty much like us, who is in a body that is not very different from ours, and who has a brain that is not very different from ours, it's much more plausible that it sees things and hears things very much like we do, than to say that it is completely 'dark inside', so to speak," says Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison. "But when it comes down to a lobster, all bets are off."
The question of whether other brains – animal or artificial – are conscious is one of the great puzzles of science.
Tononi may have a solution to these puzzles. His "integrated information theory" is one of the most exciting theories of consciousness to have emerged over the past few years, and although it is not yet proven, it provides some testable hypotheses that may soon give a definitive answer.
Tononi says his fascination began as a teenager, with a "typically adolescent" preoccupation with ethics and philosophy. "I realized that knowing what consciousness is and how it comes about is crucial to understanding our place in the universe and what we do with our lives," he says.
At that age, he did not know the best path to pursue those questions – would it be mathematics? Or philosophy? – but he eventually settled on medicine. And the clinical experience helped fertilize his young mind. "There is really something special about having direct exposure to neurological cases and psychotic cases," he says. "It really forces you to face directly what happens to patients when they lose consciousness, or lose components of consciousness, in ways that are really difficult to imagine if you have not seen it actually happen."
In his published research, however, Tononi built his reputation with some pioneering work on sleep – a less controversial field. "At that time you could not even talk about consciousness," he says. But he kept mulling over the question, and in 2004 he published the first iteration of his theory.
It begins with a set of axioms that define what consciousness actually is. It is structured, for example – if you look at the space around you, you can distinguish the position of objects relative to each other. It is also specific and "differentiated" – each experience will be different depending on the particular circumstances, meaning there is a huge number of possible experiences. And it is integrated. If you look at a red book on a table, its shape and color and location – although initially processed separately in the brain – are all held together at once in a single conscious experience. We even combine information from many different senses – what Virginia Woolf described as the "incessant shower of innumerable atoms" – into a single sense of the here and now.
From these axioms, Tononi suggests that we can identify a person's (or an animal's, or even a computer's) consciousness from the level of "information integration" that is possible in the brain (or CPU). According to his theory, the more information that is shared and processed between many different components to contribute to that single experience, the higher the level of consciousness.
Perhaps the best way to understand what this means in practice is to compare the brain's visual system to a digital camera. The camera captures the light hitting each pixel of the image sensor – which, in total, is a huge amount of information. But the pixels are not "talking" to each other or sharing information: each one independently records a tiny part of the scene. Without that integration, the camera cannot have a rich conscious experience.
Like the digital camera, the human retina contains many sensors that initially capture small elements of the scene. But that data is then shared and processed across many different regions of the brain. Some areas work on the colors, adapting the raw data to make sense of the light levels, so that we can still recognize colors even in very different conditions. Others examine the contours, which may involve guessing which parts of an object are obscured – if a coffee cup sits in front of part of the book, for example – so you still get a sense of the overall shape. Those regions then share that information, passing it further up the hierarchy to combine the different elements – and out comes the conscious experience of all that is in front of us.
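The contrast between the camera's isolated pixels and the brain's shared processing can be made concrete with a toy calculation. This is only an illustrative sketch – it is not Tononi's formal measure (usually written Φ) – but it captures the intuition: the mutual information between two units is zero when each records its part of the scene independently, and positive when their states constrain one another.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between two variables,
    estimated from a list of observed (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts for x
    py = Counter(y for _, y in pairs)    # marginal counts for y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# "Camera" pixels: every combination of states is equally likely,
# so knowing one pixel tells you nothing about the other.
camera = [(0, 0), (0, 1), (1, 0), (1, 1)]

# "Integrated" units: the state of one fully constrains the other.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]

print(mutual_information(camera))   # 0.0 – independent parts
print(mutual_information(coupled))  # 1.0 – one bit shared between the parts
```

Each system stores the same total amount of raw data; only in the second does information actually flow between the parts – the property the theory treats as the raw material of consciousness.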
The same goes for our memories. Unlike a digital camera's library of photos, we do not store each experience separately. They are combined and cross-linked to form a meaningful narrative. Every time we experience something new, it is integrated with that previous information. It is the reason that the taste of a single madeleine can trigger a memory from our distant childhood – and it is all part of our conscious experience.
At least, that's the theory – and it's compatible with many observations and experiments across medicine.
One study, published in 2015, examined the brains of participants under various forms of anesthesia, including propofol and xenon. To probe information integration, the team applied a magnetic field above the scalp to stimulate a small area of the cortex underneath – a standard non-invasive technique known as transcranial magnetic stimulation (TMS). In awake participants, the pulse sets off a complex ripple of activity, with many different regions responding, which Tononi takes to be a sign of information integration between different groups of neurons.
But the brains of people under propofol and xenon did not show that response – the brainwaves generated were much simpler than the hubbub of activity in the awake brain. By altering the levels of important neurotransmitters, the drugs appear to have "broken down" the brain's information integration – and this corresponded to the participants' complete lack of awareness during the experiment. Their experience had faded to black.
As a further comparison, the team also looked at participants under ketamine. Although the drug renders you unresponsive to the outside world – meaning that it is also used as an anesthetic – patients often report wild dreams, as opposed to the pure "blank" experienced under propofol or xenon. Sure enough, Tononi's team found that the responses to the TMS were far more complex than those under the other anesthetics, reflecting their altered state of consciousness. They were disconnected from the outside world, but their minds were still very much turned on during their drug-induced fantasies.
Tononi has found similar results when examining different sleep stages. During non-REM sleep – in which dreams are rarer – the responses to TMS were less complex; but during REM sleep, which often coincides with dreaming, information integration appears to be higher.
He stresses that this is not "proof" that his theory is correct, but it shows that he could be working along the right lines.
Tononi's theory also coincides with the experiences of people with various forms of brain damage. The cerebellum, for instance, is the walnut-shaped, pinkish-gray mass at the base of the brain, and its prime responsibility is coordinating our movements. It contains four times as many neurons as the cortex, the bark-like outer layer of the brain – the majority of the brain's total neurons. Yet some people lack a cerebellum (either because they were born without it, or they lost it through brain damage) and are still capable of conscious perception, going on to lead relatively long and "normal" lives without any loss of awareness.
These cases would not make sense if the sheer number of neurons were all that mattered for the creation of conscious experience. In line with Tononi's theory, however, the cerebellum's processing mostly occurs locally, rather than exchanging and integrating signals across the brain – meaning it plays a minimal role in awareness.
Measures of the brain's responses to TMS also seem to predict the consciousness of patients in a non-communicative and vegetative state – a finding with potentially profound clinical applications.
Daniel Toker, a neuroscientist at the University of California, Berkeley, says the idea that information integration is necessary for consciousness is very "intuitive" to other scientists, but much more evidence is required. "The broader perspective in the field is that it's an interesting idea, but it's pretty much completely untested," he says.
It all comes down to mathematics. With previous techniques, the time needed to measure information integration across a network increased "super-exponentially" with the number of nodes under consideration – meaning that, even with the best technology, the calculation could take longer than the life of the universe. But Toker has recently proposed an ingenious shortcut that can cut those calculations down to a matter of minutes, which he has tested on recordings from a pair of macaques. This could be the first step to putting the theory on a much firmer experimental footing. "We are really in the early stages of all this," says Toker.
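Toker's bottleneck is easy to see with a short calculation. The exact partition scheme varies between versions of the theory, but an exhaustive measure of integrated information means searching over the ways a network can be split into parts – and merely counting those partitions (the Bell numbers) already blows up super-exponentially. A toy sketch, not the actual measure Toker computes:

```python
def bell(n):
    """Bell number B(n): the number of ways to partition a set of n elements,
    computed via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]           # each new row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]                 # the first entry of row n is B(n)

for n in (5, 10, 20):
    print(n, bell(n))
# A network of just 20 nodes already has ~5 * 10**13 possible partitions,
# which is why exact integrated-information measures do not scale to real brains.
```

This counting argument is only a lower bound on the work involved: each candidate partition must also be evaluated, which is what makes shortcuts like Toker's necessary in practice.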
Only then can we begin to answer the really big questions – such as comparing the consciousness of different types of brain. Even if Tononi's theory does not prove to be true, however, Toker thinks it has pushed other neuroscientists to think more mathematically about the question of consciousness – which could inspire future theories.
And should integrated information theory be right, it would be truly game-changing – with implications far beyond neuroscience and medicine. Proof of consciousness in a creature such as a lobster, for example, could transform the fight for animal rights.
It would also answer some long-standing questions about artificial intelligence. Tononi argues that the basic architecture of today's computers – made from networks of transistors – precludes the level of information integration that is necessary for consciousness. Even if they can be programmed to behave like a human, they would never have our rich inner life.
"There is a sense, according to some, that sooner rather than later computers will be cognitively as good as we are – not just in some tasks, such as playing Go or chess, recognizing faces, or driving cars, but in everything," says Tononi. "But if integrated information theory is correct, computers could behave exactly like you and me – indeed you might [even] be able to have a conversation with them that is as rewarding, or more rewarding, than one with you or me – and yet there would literally be nobody there." Again, it comes down to the question of whether intelligent behavior has to arise from consciousness – and Tononi's theory suggests it does not.
He emphasizes this is not just a question of computational power, or the kind of software that is used: "The physical architecture is always more or less the same, and it is not at all conducive to consciousness." If so, the kind of moral dilemmas seen in series like Humans and Westworld may never become a reality.
It could even help us understand the ways we interact with each other. Thomas Malone, director of the Massachusetts Institute of Technology's Center for Collective Intelligence and author of the book Superminds, recently applied the theory to teams of people – in the lab, and in real-world settings including the editors of Wikipedia entries. He has shown that estimates of the integrated information shared by team members can predict group performance on various tasks. Although the concept of "group consciousness" may seem like a stretch, he thinks that Tononi's theory might help us understand how large bodies of people sometimes begin to think, feel, remember, decide, and act as one entity.
He admits this is still a great deal of speculation: we cannot yet be sure that integrated information is a sign of consciousness even in the individual. "But I think it's very intriguing to think what this might mean for the possibility of groups being conscious."
For now, we still cannot be sure whether a lobster, a computer, or even a society is conscious – but in the future, Tononi's theory may help us understand minds that are very alien to our own.
David Robson is a senior journalist at BBC Future. He is @d_a_robson on Twitter. This piece contains original artwork by Emmanuel Lafont, an Argentina-born visual artist currently working in Spain.