Giulio Tononi on Consciousness as Integrated Information

On Wednesday I attended the inaugural lecture of the new NYU Center for Mind and Brain by Giulio Tononi. Tononi's basic idea is that where there is information there is consciousness, but when you have integrated, complex information you get the kind of consciousness that matters to us. I don't have time to lay out the thesis in detail, but you can see something along these lines in this YouTube video or see the full paper here. (I also recommend this talk by Christof Koch, especially around the 23-minute mark.)
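
For those who want a taste of the formalism: in Tononi's early (2004) formulation (later versions of IIT revise the details considerably), the effective information between two parts of a system and the phi of a subset are defined roughly as

$$\mathrm{EI}(A \to B) = \mathrm{MI}\big(A^{H_{\max}};\, B\big), \qquad \Phi(S) = \mathrm{EI}\big(\mathrm{MIP}(S)\big),$$

where $A^{H_{\max}}$ means $A$'s outputs are replaced by maximum-entropy noise, MI is mutual information, and the MIP is the minimum information partition, the bipartition of $S$ across which (normalized) effective information is lowest. A "complex" is then a subset whose $\Phi$ is a local maximum. Take this as a hedged sketch, not a full statement of the theory.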

The discussion afterwards was very interesting, in particular because he addressed Eric Schwitzgebel's recent argument that if Tononi is right then the United States is (probably) conscious. His response was that this is ruled out by one of his phenomenological axioms, in particular the one he calls exclusion. The intuitive idea behind it is that the system with the highest amount of integrated information (what he calls phi) wins out and is the locus of phenomenal consciousness; all other information (and so lower levels of consciousness) is excluded. So, in a system like the U.S., he denies that the U.S. has a higher phi than any of the individuals in it. Thus, by the exclusion principle, the U.S. is not conscious even though we are. He did admit that if it ever became the case that the U.S. came to have a higher phi than any of its individuals, then it would become a conscious subject and those individuals would cease to be conscious! Dave Chalmers asked him whether, if some person were shrunk down and used to replace one of his (Dave's) neurons, that person (the one shrunk) would then cease to be conscious, and he said yes!
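
To make the winner-take-all logic of exclusion concrete, here is a minimal sketch in Python. The phi values, element sets, and overlap test are hypothetical placeholders of mine, not Tononi's numbers or algorithm; the point is only that among overlapping candidate systems, the maximal-phi system excludes the rest.

```python
def conscious_complexes(candidates):
    """Winner-take-all reading of the exclusion postulate: among
    overlapping candidate systems, only a candidate whose phi exceeds
    that of every candidate it overlaps is a locus of consciousness.
    Names, phi values, and element sets are hypothetical."""
    winners = []
    for name, phi, elements in candidates:
        rivals = [c for c in candidates
                  if c[0] != name and c[2] & elements]  # shared elements
        if all(phi > rival_phi for _, rival_phi, _ in rivals):
            winners.append(name)
    return winners

# Each brain's elements are among the USA's elements, so each overlaps it.
candidates = [
    ("Alice's brain", 12.0, {"alice"}),
    ("Bob's brain",   11.0, {"bob"}),
    ("USA",            0.5, {"alice", "bob"}),  # hypothetical lower phi
]
print(conscious_complexes(candidates))  # ["Alice's brain", "Bob's brain"]
# If the USA's phi rose above every citizen's, it alone would survive
# and the individuals would be excluded -- Tononi's concession above.
```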

Another interesting idea that came out was that Tononi thinks there can be consciousness in the absence of any neural firing at all. This is because the mathematical formalization of phi works over the possible states the system could be in, not the ones it is actually in. In fact, phi will be at its highest value when the system is in good working order but is not actually being used, because no actual states are ruling out possible past or future states. He mentioned contemplative states as one possible case where this happens, and it may help to explain certain studies that found deactivation of brain areas in association with intense hallucinogenic experiences. I asked him whether he thought this would be the case even if there were absolutely no activity in the brain, and he said yes; that is, yes, as long as the neurons could still be stimulated and function normally. So, according to him, if you take out one component of that system, so that it is damaged instead of just not doing anything, then you will lower the phi and so should expect a change in the conscious experience of the system. He went so far as to say that if we were able to do this experiment and there were no change in conscious experience, then he would take his theory to be empirically refuted.
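
To see why phi can be high with zero activity, consider the "effective information" at the core of early phi formulations: it is computed by perturbing the system across all its possible inputs, so an intact-but-silent mechanism scores the same as an active one, while a damaged one scores lower. Here is a toy sketch, assuming a deterministic mechanism and my own illustrative functions (full phi would further minimize this quantity across partitions):

```python
import itertools
from collections import Counter
from math import log2

def effective_information(mechanism, n_bits):
    """Effective information of a deterministic mechanism over n-bit
    states: the mutual information I(X; Y) when the input X is perturbed
    with maximum entropy (uniform over all 2**n_bits states).  For a
    deterministic map this reduces to H(Y), the entropy of the outputs.
    It depends only on what the mechanism COULD do, not on any actual
    activity."""
    states = itertools.product([0, 1], repeat=n_bits)
    outputs = Counter(mechanism(s) for s in states)
    total = 2 ** n_bits
    return sum((c / total) * log2(total / c) for c in outputs.values())

# Two toy mechanisms on 2-bit states (illustrative assumptions):
copy = lambda s: s             # intact: preserves both bits
constant = lambda s: (0, 0)    # "damaged": ignores its input

print(effective_information(copy, 2))      # 2.0 bits
print(effective_information(constant, 2))  # 0.0 bits
```

The "constant" mechanism plays the role of the damaged component: because it can no longer discriminate among its possible inputs, the measure drops even though neither mechanism is actually doing anything, which is exactly the empirically testable difference Tononi committed himself to.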

I didn't follow up with him on this, but I wondered whether on his view this would account for near-death experiences. It is, I think, pretty standardly thought that if there is conscious experience when there is no brain activity at all, then this would show that physicalism is not true. Tononi would seem to have an interesting response to this: in the few moments when brain activity has ceased but the neurons have not yet ceased being functional, he would predict very vivid consciousness (though perhaps without content?). I am not sure about this, as I tend to be on the other side of the issue. When I have looked at near-death cases (I commented on a paper making this argument against physicalism a few years back and did some research), it seems fairly clear that the conscious experience occurs just before and just after the stopping and starting of brain activity, respectively. So I am unconvinced that there is any good evidence that there is indeed conscious experience in the total absence of brain activity. Still, very interesting…

There is a lot more interesting stuff that came up but I don’t have the time to talk about it now.

12 thoughts on “Giulio Tononi on Consciousness as Integrated Information”

    • Hey Pete, I agree that it is wildly counter-intuitive, but I guess I am more reluctant to see that as a knock-down argument. I think the paradigm of this kind of thing was the Einstein-Podolsky-Rosen objection to the Copenhagen interpretation of quantum mechanics. They argued that if it is right then it leads to the absurd conclusion that we can have entanglement, but now, decades later, entanglement is empirically confirmed (even if you don't think it has been, or think that there are other interpretations of the empirical results, it is at least a viable option and not a knock-down argument against that interpretation of QM)… In general, I think it is a good idea to get all of the theoretical implications of a certain theory out on the table and then try to empirically test them…

      • I suppose that if QM didn't give us wonderful things like CD players, i.e. if it didn't work empirically, it wouldn't have been nearly as popular. So I guess it's hard to tell at this point about IIT.

        • Hi Richard, nice post and I agree it was a very interesting talk.

          I don't think (in reply to Pete) that the Chalmers argument is such a slam dunk… Giulio asserted that if we could shrink a person so that they would be able to perform the function of a neuron, then they would cease to be conscious, because they would become part of a system with higher phi. But presumably this would only be the case if the person were just doing the job of a neuron, i.e. if they had very simple functionality and no human-like consciousness. If, on the other hand, they were sitting there saying “fire”, “don't fire”, “fire a lot”, etc., and ALSO thinking to themselves “wow, this is boring” with their own miniature brain, then they would have a whole consciousness and high phi of their own, but very little connection with the rest of the system except through their limited repertoire of instructions.

          My main worry about the theory is along the lines of Ned's: that it doesn't (yet) distinguish between complexity, intelligence, etc., and consciousness. This debate was around some time ago when it came to measures of awareness – Hakwan and others pointed out that it's important to dissociate these measures from task performance. This is now widely agreed upon. I feel that as IIT matures something similar is going to be needed there too – showing differences in phi after controlling for differences in performance, for instance…

  1. I think it's very important to tease apart different aspects of IIT, and more pertinently to distinguish between the concept of a maximal system (MS) and IIT's specific realization of this concept.

    First, note that a concept like MS is pretty much a criterion a theory of consciousness has to provide; that is, it must offer a measurable quantity that (a) singles out consciousness and (b), just as importantly, singles out SYSTEMS.

    A mathematical formulation that fails to meet these conditions results, of course, in (a) everything being conscious to some extent and (b) every conceivable agglomerate of things being not only a system, but a conscious one.

    A particularly unpalatable result of violating these principles is that it rules out personal identity: if every subgroup of neurons in your brain is a conscious system, then you (the present what-it's-like…) are the result of one such subgroup, and will be snuffed out the instant a neuron from that group dies; in other words, your being able to read these words is nothing short of a miracle.

    In short, IIT is unique in that it attempts to address this problem head on; it's pretty much one of the only theories out there that does.

    If you go into the particulars, then MS under IIT has a lot of problems stemming from its reliance on information theory, which is descriptive rather than generative (as, for example, a set of differential equations is). Worse still, like any information-based framework, it is entirely at the mercy of arbitrary extrinsic definitions where spatial and temporal grain are concerned. But that's a different story.

  2. How I wish I could have gone to this! I’d love to hear Chalmers question Tononi.

    FWIW, I think the most complicated conceptual aspect of Tononi's theory is the exclusion principle mentioned above. And Tononi has only recently doubled down on it, having left the topic more ambiguous in earlier papers.

    Personally, I would guess that timescale plays a big part in exclusion, such that the exclusion principle would hold only for phi-complexes which reach their maximum at a certain timescale. At the timescale of milliseconds, neurons can be thought of as the “active” elements of the world, and exclusion holds; this is why there is only one “me” and not “layers” of me, and also why there aren't three minds generated when you have a conversation with another person. But, I suspect, at the timescale of a day (or year, or hour, it doesn't matter) the complex that is America as a whole may generate its own maximal phi-value, probably a very low one, but a value nonetheless. Why not grant this system some mental autonomy? Or, to put it another way, why can't a bee AND the hive both be conscious, at different timescales? I don't see why this shouldn't be the case.

    Also, why would the lower-level complexes have to disappear when embedded in systems that generate their own phi-value at timescales different from the brain's, as in Chalmers's example? The neurons would generate no information in the “America-complex” because they are not (strictly speaking) elements of THAT system. At the “America” scale, neurons become mere system noise. (I think Steve Fleming's comment was making a similar point.)

    And even if, as Tononi thinks, a single person's consciousness did “disappear” when they were shrunk down to a neuron (the mind subsumed into the larger mind system), it seems to me an open question whether the INFORMATION content of that consciousness (the person's memories, hopes, tastes, knowledge) could be integrated into the new supra-mind. Or would that information be withheld from the system, its content truly lost when subsumed?

    But these issues are REALLY conceptually complex and often feel fraught with paradox. While I think Tononi is mostly right with IIT (it really is a revolutionary theory), and also mostly correct about exclusion in important ways, I doubt the last word has been said on the topic. The fact that Chalmers zeroed in on this aspect of the theory is most telling; no one has thought through the mind/body problem with greater depth, except maybe Tononi, that is. Quite a match-up.

    BTW, is the video or audio online anywhere?

  3. I see Steve's point as a valid defense against Chalmers, but not a strong one…

    I imagine myself as unknowingly part of a greater system. I do everything as I do now, but every action conveys information to the system, and everything that causes me to act is information from the system (like the Matrix). As it happens, this information creates a higher-level consciousness that might have a higher phi than me, or might not. I'm not sure, but apparently that marginal difference determines whether I am conscious or not… and yet I know I'm conscious by its definition…

    And then, maybe similar to what phi is saying above…

    What if on one timescale my phi is higher but on another timescale it is lower? Or at a different instant in time it is lower… How can I make sense of whom I should be attributing consciousness to?

    I can see that there might tend to be a degree of exclusion draining the lesser levels of meaning, but I can't see it as a rule…

  4. As to the second part: I'm finding it hard to imagine a contemplative state where there is not something limiting the options and thus keeping you in the contemplative state…

    I.e., in a situation where you have a massive number of options for states at the next instant in time, entropy will ensure you are in one of those states at that next instant.

    And also, if you are in a state where there is no activity, you probably won't be recording anything for later recall, either to answer near-death questions or in terms of HOT at the higher or lower levels.

  5. I trained with Stan Grof, who is the foremost researcher into psychedelics and non-ordinary states of consciousness. Yet few academics doing modern studies of psychedelics and their effects have heard of him, despite the fact that he is still writing. Academics must keep up with the history of their topic.

    The effects of psilocybin, for example, had been mapped out 40 years ago with much greater insight than that offered by scientists of, for example, Professor Nutt's calibre. Grof charted the effects of sequential use of psychedelics: participants start with sensory effects, then biographical, then birth, then transpersonal. Set and setting are key factors. Memories of events from deep anaesthesia are also recalled. All this upturns the medical model of drug effect/response, a model to which writers here, I am sure, are too easily committed.

    This is why I predict that there will be no advances in this topic. The realm of experiences initiated by these methods simply won't fulfill the criteria for support of medical/scientific world views. Examination of brain biology is fruitless: any brain maps are mapped according to the reports of people's experiences, and it is the latter that set the conditions for identification of the former, not vice versa – unless we believe in animism, in the idea, for example, that the brain really does have states with values, like a “hyperactive” state with regard to moods like depression.

  6. What if I only part-time sub for a neuron? Let's say I get the neuronal tasks, i.e. the inputs, delivered to my desk (no shrinkage necessary, I think), and on my own time decide to complete them. Do I then lose consciousness whenever I start working on that particular stack of papers? (And if so, does this mean that I become part of a system with larger phi whenever I start working on my tax returns? Because it certainly seems like I lose an important part of myself when I do so…)

    But I still think this objection need not be fatal: consciousness may be wherever it finds itself, similar to how the refrigerator light is always on when you look, or how a flashlight always concludes that there is light in whatever direction it looks, and thus that there is light everywhere. So if consciousness is in such a manner dependent on introspection, then the whole system one is part of would, if it focused its attention on the part you constitute, there find consciousness, thus leading to you finding yourself conscious whenever you check.

    I also wonder about the implications of this idea for the temporal aspect of consciousness: perhaps the higher-phi complex one evolves into, whenever a complex with higher phi emerges, is just the consciousness of the next moment in time? I.e., I am now some complex with some value of phi, but eventually a different complex emerges with a higher value of phi. Since that will typically be something with just a slightly different structure from the complex that is me, it will have an only slightly different phenomenological experience; perhaps this is thus the next moment in time I perceive myself to be in?
