Integrated Information Theory doesn’t Address the Hard Problem

Just in case you are not aware, Hakwan Lau has started a blog, In Consciousness we Trust, where he is blogging his work on his upcoming book on consciousness. He has lately been taking aim at the Integrated Information Theory of Consciousness and has a nice (I think updated) version of his talk (mentioned previously here) in his post How to make IIT (and other Theories of Consciousness) Respectable. I have some small quibbles with some of what he says but overall we agree on a lot (surprised? 😉). At any rate, I was led to this paper by Sasai, Boly, Mensen, and Tononi arguing that they have achieved a “functional split brain” in an intact subject. This is very interesting, and I enjoyed the paper a lot, but right at the beginning it has this troublesome set of sentences:

A remarkable finding in neuroscience is that after the two cerebral hemispheres are disconnected to reduce epileptic seizures through the surgical sectioning of around 200 million connections, patients continue to behave in a largely normal manner (1). Just as remarkably, subsequent experiments have shown that after the split-brain operation, two separate streams of consciousness coexist within a single brain, one per hemisphere (2, 3). For example, in many such studies, each hemisphere can successfully perform various cognitive tasks, including binary decisions (4) or visual attentional search (5), independent of the other, as well as report on what it experiences. Intriguingly, anatomical split brains can even perform better than controls in some dual-task conditions (6, 7).

Really?!?! Experiments have shown this? I was surprised to read such a bold statement of a rather questionable assumption. In the first place, I think it is important to note that these patients do not verbally report on what the non-speaking hemisphere ‘experiences’. I have argued that these kinds of (anatomical) split brains may have just one stream of consciousness (associated with the hemisphere capable of verbally reporting) and that the other ‘mute’ hemisphere is processing information non-consciously.

This is one of the problems that I personally have with the approach that IIT takes. They start with ‘axioms’ which are really (question-begging) assumptions about the way that consciousness is, and they tout this as a major advance in consciousness research because it takes the Hard Problem seriously. But does it? As they put it,

The reason why some neural mechanisms, but not others, should be associated with consciousness has been called ‘the hard problem’ because it seems to defy the possibility of a scientific explanation. In this Opinion article, we provide an overview of the integrated information theory (IIT) of consciousness, which has been developed over the past few years. IIT addresses the hard problem in a new way. It does not start from the brain and ask how it could give rise to experience; instead, it starts from the essential phenomenal properties of experience, or axioms, and infers postulates about the characteristics that are required of its physical substrate.

But this inversion doesn’t serve to address the Hard Problem (by the way, I agree with the way they formulate it for the most part). I agree that the Hard Problem is one of trying to explain why a given neural activation is associated with a certain conscious experience rather than another one, or none at all. And I even agree that in order to address this problem we need a theory of what consciousness is, but IIT isn’t that kind of theory. And this is because of the ‘fundamental identity claim’ of IIT: that an experience is identical to a conceptual structure, where ‘experience’ means phenomenally conscious experience and ‘conceptual structure’ is a technical term of Integrated Information Theory.
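
To fix ideas about what that technical term denotes, here is a rough sketch, in Python, of the kind of data object a ‘conceptual structure’ is in IIT 3.0. The field names are my own invention; IIT defines these pieces formally in terms of cause-effect repertoires (probability distributions over the system’s past and future states).

```python
# A rough sketch of a "conceptual structure" as a data object, per
# IIT 3.0. Field names are my own; IIT defines these pieces formally
# in terms of cause-effect repertoires (probability distributions
# over the system's past and future states).

from dataclasses import dataclass

@dataclass
class Concept:
    mechanism: frozenset      # the subset of elements, e.g. {"A", "B"}
    cause_repertoire: dict    # P(past system states | mechanism's state)
    effect_repertoire: dict   # P(future system states | mechanism's state)
    small_phi: float          # how irreducible this single concept is

@dataclass
class ConceptualStructure:
    concepts: list            # all irreducible concepts of the complex
    big_phi: float            # Phi-max: irreducibility of the whole
```

On the identity claim, an experience just is an object of this shape: big_phi gives its quantity and the arrangement of concepts gives its quality.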

This is a postulated identity, and they do want to try to test it, but even if it were successfully confirmed, would it really offer us an explanation of why the experiences are associated with a particular brain activity? To see that the answer is no, consider their own example from Figure 1 of their paper and what they say about it.

[Figure 1 of the paper (nrn.2016.44): from consciousness to its physical substrate]

They begin,

The true physical substrate of the depicted experience (seeing one’s hands on the piano) and the associated conceptual structure are highly complex. To allow a complete analysis of conceptual structures, the physical substrate illustrated here was chosen to be extremely simple [1,2]: four logic gates (labelled A, B, C and D, where A is a Majority (MAJ) gate, B is an OR gate, and C and D are AND gates; the straight arrows indicate connections among the logic gates, the curved arrows indicate self-connections) are shown in a particular state (ON or OFF).

So far so good. We have a simplified cause-effect structure in order to make the claim clear.
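
To make the toy system fully explicit, here is a minimal simulation sketch. The paper fixes the gate types (A is MAJ, B is OR, C and D are AND) but the exact wiring is only shown in their figure, so the connectivity assumed below is an illustrative guess rather than their actual network:

```python
from itertools import product

# Four binary gates: A is a MAJ (majority) gate, B an OR gate,
# C and D AND gates. The wiring below is an assumption made for
# illustration; the paper's exact connections appear only in its
# Figure 1.

def step(state):
    """One synchronous update of (A, B, C, D)."""
    A, B, C, D = state
    return (
        int(B + C + D >= 2),  # A: MAJ of B, C, D (assumed inputs)
        int(A or C),          # B: OR of A and C (assumed inputs)
        int(A and B),         # C: AND of A and B (assumed inputs)
        int(B and C),         # D: AND of B and C (assumed inputs)
    )

# The complete transition table of the network.
for state in product((0, 1), repeat=4):
    print(state, "->", step(state))
```

For a system this small the transition table exhausts every physical fact about its dynamics, which is what makes it such a clean test case for the identity claim.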

The analysis of this system, performed according to the postulates of IIT, identifies a conceptual structure supported by a complex constituted of the elements A, B and C in their current ON states. The borders of the complex, which include elements A, B, and C but exclude element D, are indicated by the green circle. According to IIT, such a complex would be a physical substrate of consciousness.

So, when A=B=C=1 (i.e. ON), this system is having a conscious experience (!). As they say,

The fundamental identity postulated by IIT claims that the set of concepts and their relations that compose the conceptual structure are identical to the quality of the experience. This is how the experience feels — what it is like to be the complex ABC in its current state 111. The intrinsic irreducibility of the entire conceptual structure (Φmax, a non-negative number) reflects how much consciousness there is (the quantity of the experience). The irreducibility of each concept (φmax) reflects how much each phenomenal distinction exists within the experience. Different experiences correspond to different conceptual structures.
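
For concreteness, here is a deliberately crude toy version of the irreducibility idea that Φ formalizes. This is not the actual IIT 3.0 algorithm, which compares cause-effect repertoires using an earth mover’s distance; it merely measures how much the dynamics change when the network is cut in two, using the assumed wiring from the sketch above:

```python
from itertools import combinations, product

def step(state):
    """The assumed four-gate wiring from the previous sketch."""
    A, B, C, D = state
    return (int(B + C + D >= 2), int(A or C), int(A and B), int(B and C))

def cut_step(state, part):
    """Update the gates with every connection crossing the partition
    severed (severed inputs are clamped to 0 here; real IIT injects
    noise instead, so this is only a crude stand-in)."""
    other = {0, 1, 2, 3} - part
    mask = lambda keep: tuple(s if i in keep else 0
                              for i, s in enumerate(state))
    from_part = step(mask(part))    # what the first part computes alone
    from_other = step(mask(other))  # what the second part computes alone
    return tuple(from_part[i] if i in part else from_other[i]
                 for i in range(4))

def toy_phi():
    """Minimum, over bipartitions, of how many of the 16 global states
    get a different next state once the system is cut."""
    damage = []
    for r in (1, 2):
        for part in combinations(range(4), r):
            part = set(part)
            damage.append(sum(step(s) != cut_step(s, part)
                              for s in product((0, 1), repeat=4)))
    return min(damage)

print("toy irreducibility of the network:", toy_phi())
```

Notice what the calculation outputs: a number attached to a structure. Even the real Φmax and φmax values are just that, which is exactly the trouble raised next.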

Ok then. Here we have a simple system that is having a conscious experience, ex hypothesi, and we know everything about this system. We know that it has the concepts specified by IIT, but what is its conscious experience like? What is it like to be this simple system of 4 logic gates when its elements A, B, and C are on? We aren’t told, and there doesn’t seem to be any way to figure it out based on IIT. It seems to me that there should be no conscious experience associated with this activity, so it is easy to ‘conceive of a physical duplicate of this system with no conscious experience’…is this a zombie system? That is tongue in cheek, but I guess that IIT proponents will need to say that since the identity is necessary I can’t really conceive of it (or that I can but it is not really possible). Can’t we conceive of two of these systems with inverted conscious experiences (but the same conceptual structures)? Why or why not? I can’t see anything in IIT that would help to answer these questions.

If IIT is attempting to provide a solution to the Hard Problem of Consciousness, then it should allow us to know what the conscious experience of this system is like, but it seems like it could be having any, or none (how difficult would it then be to extend this to Nagel’s bat!?!?). There are some who might object that this is asking too much. Isn’t this more like Ned Block’s “Harder Problem” than Chalmers’ Hard Problem? Here I suppose that I disagree with the overly narrow way of putting the Hard Problem. It isn’t merely the question of why this brain state is associated with a particular phenomenal quality rather than none at all; the Hard Problem is why any physical, functional state is associated with phenomenal quality at all. Sure, brain states are one kind of physical state, and so the problem arises there, but more generally the Hard Problem is answering the question of why any physical state is associated with any qualitative state at all, instead of another or none at all.

IIT’s proponents, and Tononi in particular, seem committed to giving us an answer. For instance, in his Scholarpedia article on IIT Tononi says,

IIT employs the postulates to derive, for any particular system of elements in a state, whether it has consciousness, how much, and of which kind.

But how do we do this for the 4 logic gates?

How do we do it in our own case?


10 thoughts on “Integrated Information Theory doesn’t Address the Hard Problem”

  1. I’m breathtakingly clueless about 99.99% of what this post means, but am curious to know what branches of science/philosophy were involved in creating Figure 1.

    (1) The “experience” picture of the hands playing the piano I totally get. I’ve had that experience.

    (2) What discipline was responsible for generating the “physical substrate” circle? Neuroscience? Brain MRIs of someone having the experience? But it just looks like a simple circuit board to me.

    (3) How was the “associated conceptual structure” generated? By applying some mathematical formula or algorithm to the “physical substrate” circle?

    (4) Let’s say a bunch of people who understood this stuff were given just the “experience” picture and the “physical substrate” circle. If you asked them to draw the resulting “conceptual structure”, would they all come up with the same circle as the one depicted in Figure 1? And would they be able to derive all the bar graphs on the right, or are they based on some undisclosed experimental data?

    (5) Sorry that my questions are so basic, but sometimes when I read things like this I wonder whether it’s me or someone’s just pulling my leg.

  2. “If IIT is attempting to provide a solution to the Hard Problem of Consciousness, then it should allow us to know what the conscious experience of this system is like, but it seems like it could be having any, or none (how difficult would it then be to extend this to Nagel’s bat!?!?).”

    Although I am very sympathetic to the view you develop here, I think they are aware of this problem. In fact, there is a recent article by Tsuchiya entitled “‘What is it like to be a bat?’ — a pathway to the answer from the integrated information theory”. To answer this question (in a rather unsatisfying way), it seems that some proponents of IIT try to apply category theory (a mathematical framework, related to set theory, used to characterize relationships between different domains) to link patterns of integrated information in the brain with various kinds of experiences.

    Tsuchiya develops this view in two papers: http://www.sciencedirect.com/science/article/pii/S0168010215002989 + http://onlinelibrary.wiley.com/doi/10.1111/phc3.12407/abstract
    Both papers, it seems to me, try to address the kind of problems you’re talking about here.

    • I just finished reading both of the articles and I think you are right that they are addressing the kind of problems I was talking about, and I also agree that it is rather unsatisfying… I will try to say what I find unsatisfactory in another post, but I am interested to hear what you found to be lacking in their approach (which I found intriguing).

      • Well, I think IIT has many problems as a theory of consciousness. I didn’t particularly think about this aspect of the theory, so I’d just say the following:

        (1) As they recognize themselves, it’s not easy to characterize categories of consciousness: “The question of whether or not the domain of qualia in the narrow- and/or broad-sense can be considered as a category turns out not straightforward to answer. While we believe there are no fundamental problems in regarding the domain of qualia in the narrow and/or broad sense as a category, we need more research to address this question.” I honestly don’t have the slightest idea of the way in which one could do that, and, to me, it really sounds like a category mistake (just like saying that my grandma is similar to a prime number). Hence, it is difficult to see how mathematical formalisms and conscious experiences could be said to be “similar”.

        (2) Let’s say that they can solve this problem. The fact that two patterns of integrated information are similar, say between humans and bats, doesn’t entail that the corresponding experiences are similar. Patterns of integrated information in different animals could be similar for many reasons that have nothing to do with the experiences being similar. To solve this problem, you can say that pattern of integrated information p always correlates with experiences of type e. But first, how do you know that? Second, then what you have is a correlation between two domains (p and e), and that doesn’t address the hard problem, because you don’t explain *why* experience e correlates with pattern of integrated information p.

        (3) If problem (1) and (2) are solved, I don’t see why other theories of consciousness wouldn’t be able to do the exact same thing and claim that they solve the hard problem. Let’s say that I’m a global workspace theorist. I see a pattern of neural activations of the global workspace neurons + neurons that encode given contents, and I know that this pattern correlates with experiences of type e. Then, I can develop an abstract mathematical model describing this type of neural activations. And then, I can use category theory in the exact same way as IIT theorists do in order to describe the relations between my mathematical model and experiences. And therefore, I can use the global workspace theory to know whether bats have experiences that are similar to my own experiences. It seems to me that if IIT is allowed to do all of that, then other theories can do the same thing, and there’s nothing special about IIT. But maybe there’s something really special about IIT that I don’t get.

        Conclusion: There are several premises that are really difficult to accept. It’s not clear at all that this solves the hard problem; at best what you add is just a new level of correlation (instead of a correlation between brain states and experiences, you get the following: brain states – mathematical model – experiences). But *even if* we accept all these premises, we then have to accept them for other theories as well, and then I don’t see why IIT would be in a better position than other theories of consciousness to address the hard problem.

        • Ah I see you posted this while I was finishing up my post. I just skimmed your comments and I think we are largely in agreement…I have a couple of questions but I’ll quiet down and see if anyone wants to support IIT first 😉

        • I just read your post, and there’s a lot of overlap indeed! We both agree that (1) and (2) are problems that the theory needs to solve.
