12 years!

I just realized that I recently passed the 12 year mark of blogging here at Philosophy Sucks! The top-5 most viewed posts haven't changed all that much from my ten-year reflections. Philosophy blogging isn't what it used to be (which is both good and bad, I would say) but this blog continues to be what it always has been: a great way for me to work out ideas, jot down notes, and get excellent feedback really quickly (that isn't Facebook). Thanks to everyone who has contributed over these 12 years!

The five most viewed posts written since the ten-year anniversary are below.

5. Prefrontal Cortex, Consciousness, and…the Central Sulcus?

4. Do we live in a Westworld World?

3. Consciousness and Category Theory

2. Integrated Information Theory is not a Theory of Consciousness

1. My issues with Dan Dennett


Prefrontal Cortex, Consciousness, and…the Central Sulcus?

The question of whether the prefrontal cortex (PFC) is crucially involved in conscious experience is one that I have been interested in for quite a while. The issue has flared up again recently, especially with defenders of the Integrated Information Theory of Consciousness advancing an anti-PFC account of consciousness (as in Christof Koch's piece in Nature). I have talked about IIT before (here, here, and here) and I won't revisit it, but I did want to address one issue in Koch's recent piece. He says,

A second source of insights are neurological patients from the first half of the 20th century. Surgeons sometimes had to excise a large belt of prefrontal cortex to remove tumors or to ameliorate epileptic seizures. What is remarkable is how unremarkable these patients appeared. The loss of a portion of the frontal lobe did have certain deleterious effects: the patients developed a lack of inhibition of inappropriate emotions or actions, motor deficits, or uncontrollable repetition of specific action or words. Following the operation, however, their personality and IQ improved, and they went on to live for many more years, with no evidence that the drastic removal of frontal tissue significantly affected their conscious experience. Conversely, removal of even small regions of the posterior cortex, where the hot zone resides, can lead to a loss of entire classes of conscious content: patients are unable to recognize faces or to see motion, color or space.

So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content?

The claim that loss of the prefrontal cortex does not affect conscious experience is often leveled at theories that invoke activity in the prefrontal cortex as a crucial element of conscious experience, like the Global Workspace Theory and the higher-order theory of consciousness in its neuronal interpretation by Hakwan Lau and Joe LeDoux (which I am happy to have helped out a bit in developing). But this claim is mistaken, or at least subject to important empirical objections. Koch does not say which cases he has in mind (and he does not include any references in the Nature paper) but we can get some idea from a recent exchange in the Journal of Neuroscience.

One case in particular is often cited as evidence that consciousness survives extensive damage to the frontal lobe. In their recent paper Odegaard, Knight, and Lau have argued that this is incorrect. Below is figure 1 from their paper.

Figure 1a from Odegaard, Knight, and Lau

This is the brain of Patient A, who was reportedly the first patient to undergo bilateral frontal lobectomy. In it the central sulcus is labeled in red along with Brodmann's areas 4, 6, 9, and 46. Labeled this way, it is clear that an extensive amount of (right) prefrontal cortex is intact (basically everything anterior to area 6 would be preserved PFC). If that is right, then this was hardly a complete bilateral lobectomy! There is more than enough preserved PFC to account for the preserved conscious experience of Patient A.

Boly et al have a companion piece in the Journal of Neuroscience as well as a response to the Odegaard paper (Odegaard et al responded to Boly and made these same points). Below is figure R1C from the response by Boly et al.

Figure R1C from response by Melanie Boly, Marcello Massimini, Naotsugu Tsuchiya, Bradley R. Postle, Christof Koch, and Giulio Tononi

Close attention to figure R1C shows that Boly et al have placed the central sulcus in a different location than Odegaard et al did. In the Odegaard et al paper the central sulcus is marked behind where the white "3, 1, 2" numerals occur in the Boly et al image. If Boly et al are correct then, as they assert, pretty much the entire prefrontal cortex was removed in the case of Patient A, and if that is the case then of course there is strong evidence that there can be conscious experience in the absence of prefrontal activity.

So here we have some experts in neuroscience, among them Robert T. Knight and Christof Koch, disagreeing about the location of the central sulcus in the Journal of Neuroscience. As someone who cares about neuroscience and consciousness (and has to teach it to undergraduates) this is distressing! And as someone who is not an expert on neurophysiology I tend to go with Knight (surprised? he is on my side, after all!), but even if you are not convinced you should at least be convinced of one thing: it is not clear that there is evidence from "neurological patients in the first half of the 20th century" which suggests that the prefrontal cortex is not crucially involved in conscious experience. What is clear is that it seems a bit odd to keep insisting that there is while ignoring the empirical arguments of experts in the field.

On a different note, I thought it was interesting that Koch made this point.

IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious—even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.

This is a topic for another day but I would have thought you could have integrated information in a simulated system.

Papa don’t Teach (again!)


The Brown Boys

2018 is off to an eventful start in the Brown household. My wife and I have just welcomed our newborn son Caden (pictured with older brother Ryland and myself to the right) and I will soon be going on parental leave until the end of April. For various reasons I had to finish the last two weeks of the short Winter semester after Caden was born (difficult!). That is all wrapped up now and there is just one thing left to do before officially clocking out.

Today I will be co-teaching a class with Joseph LeDoux at NYU. Joe is teaching a course on The Emotional Brain and he asked me to come in to discuss issues related to our recent paper. I initially recorded the presentation below to get a feel for how long it would run (I went a bit overboard, I think) but I figured that once it was done I would post it. The animations didn't work out (I used PowerPoint instead of Keynote), I lost some of the pictures, and I was heavily rushed and sleep-deprived (plus I seem to be talking very slowly when I listen back to it), but at any rate any feedback is appreciated. Since this was to be presented to a neuroscience class I tried to emphasize some of the points made recently by Hakwan Lau at his blog.

Consciousness and Category Theory

In the comments on the previous post I was alerted by Matthias Michel to a couple of papers that I had not yet read. The first was a paper in Neuroscience Research which came out in 2016, and the second was a paper in Philosophy Compass that came out in March 2017.

After reading these I realized that I had heard an early version of this material when I was part of a plenary session with Tsuchiya in Tucson back in April of 2016. The title of his talk is the same as the title of the Philosophy Compass paper and some of the same ideas are floated there. I had intended to write something about this after my talk but I apparently didn't get to it (yet?). I am in the midst of battling a potty-training toddler so it may not be anytime soon, but I did want to get out a few (inchoate) reactions to these papers now that I have read them.

Both of these papers were very interesting. The first was interesting because it is the first time I have seen proponents of IIT acknowledge that they need to examine their 'axioms' more carefully. Are these axioms self-evident? Not to many people! Might there be alternate formulations? Yes! At the very least there should be some discussion of higher-order awareness (or awareness at all). Ideally there would be an axiom like:

Awareness: Consciousness is for one. If one is in no way aware of oneself as being in a mental state then one is not consciously in that mental state.

Of course they don't want to add anything like this because, as it stands, the theory clearly assumes (without argument) that higher-order theories of consciousness are false. This is a problem that will not go away for IIT. But I'll come back to that (by the way, the first 'axiom' of IIT sometimes seems to me to suggest a higher-order interpretation, so one might assimilate the proposed Awareness axiom to an unpacking of the first axiom).

The central, and very interesting, idea of these papers is that category theory can help IIT address the hard problem (and some of the issues I raised in the previous post). There are a lot of mathematical details that are not relevant (yet) but the basic idea is that category theory lets us look at the structure a mathematical object has and compare it to the structure of other mathematical objects. They want to exploit this by making one category out of the integrated information cause-effect space and another for qualia, and then use category theory to examine how similar these two categories are.
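For readers who want the formal backdrop, here is a minimal sketch of the textbook definitions being invoked (the labels $\mathcal{Q}$ for the qualia category and $\mathcal{I}$ for the IIT cause-effect category are my own shorthand, not notation from the papers). A category consists of objects, arrows $f : A \to B$ between objects, an associative composition operation $g \circ f$, and an identity arrow $\mathrm{id}_A$ for each object $A$. A functor $F : \mathcal{Q} \to \mathcal{I}$ maps objects to objects and arrows to arrows while preserving this structure:

\[ F(g \circ f) = F(g) \circ F(f), \qquad F(\mathrm{id}_A) = \mathrm{id}_{F(A)}. \]

The proposal, as I read it, is that the existence of well-behaved functors between $\mathcal{Q}$ and $\mathcal{I}$ would measure how much structure the two domains share.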

First, can qualia form a category? They address this issue in the first paper but (to use a low-hanging pun) this looks like a category mistake. Qualia are not mathematical objects. I suppose you could form the set of qualia and that would be a mathematical (i.e. abstract) object. But if you show that this structure overlaps with IIT's, have you shown anything about qualia themselves? Only if the structure captured in this category exhausts the nature of qualia, but that is highly controversial! My guess is that there will be many categories we could construct that would have functors to both the category of qualia and the category of IIT structures. So, take the category of the set of Munsell color chips (not the experience of them, the actual chips). Won't they stand in relations to each other that can be mapped onto the IIT domain in pretty much exactly the same way as the set of qualia!? If so, then is IIT Naive Realism? That is a joke, but the point is that one would not want to claim that this shows that IIT is a theory of color chips. All we would have shown is that a similar structure runs in common through two mathematical structures that at first seemed unrelated. That is interesting, but I don't see how it can help us.
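To put the worry a bit more formally (my gloss, not the papers'): functors are blind to what objects intrinsically are. If the chips, ordered by some relation like 'is a redder shade than', form a category $\mathcal{M}$ that is isomorphic to the qualia category $\mathcal{Q}$ via some $h : \mathcal{M} \to \mathcal{Q}$, then every functor $F : \mathcal{Q} \to \mathcal{I}$ yields a functor $F \circ h : \mathcal{M} \to \mathcal{I}$, so $\mathcal{I}$ shares exactly as much structure with the chips as it does with the qualia.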

To their credit they recognize that this is a bit controversial and here is what they say about the issue:

In the narrow sense, a quale refers to a particular content of consciousness, which can be compared or characterized as a particular aspect of one moment of experience or a quale in the broad sense (Balduzzi and Tononi, 2009; Kanai and Tsuchiya, 2012). Can category theory consider any qualia we experience as objects or arrows? Some qualia in the narrow sense are straightforward to consider as objects: a quale for a particular object or its particular aspect, such as color. There are, however, some aspects of experience that are apparently difficult to consider as objects. For example, we can experience a distance between the two cups, which is a relationship between the objects but itself has no physical object form. Such abstract conscious perception can be naturally regarded as a relationship between objects: an arrow. Further, there are some types of qualia that seem to emerge out of many parts, such as a face. A whole face is perceived as something more than a collection of its constituent parts; there is something special about a whole face. Psychological and neuroscientific studies of faces point to configural processing, that is, a web of spatial relationship among the constituent parts of a face is critical in perception of a whole face (Maurer et al., 2002). In category theory, a complicated object, like a quale for a face, can be considered as an object that contains many arrows. Considered this way, any quale in the narrow sense can be considered as either an object, an arrow, or an object or arrow that contains any combinations of them.

But even if this is OK with you (and you set aside questions about whether 'to the right of' can be an arrow in category theory (will it obey the axiom of composition?)), what goes into the qualia category? They seem to assume that (at least some of) it is non-controversial, but that isn't so clear to me. Even so, what about Nagel's bat? In order to use this procedure we would have to already know what kinds of qualities, conscious experiences, the bat had in order to form the category. But we have no idea what kinds of 'objects' and 'arrows' to populate that category with! That was kinda Nagel's point!
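As for the parenthetical worry about composition, here is one way to make it concrete (again my gloss, not the papers'): a set equipped with a single binary relation forms a category just in case the relation is reflexive and transitive, i.e. a preorder. 'To the right of' (read as a strict ordering by position) is plausibly transitive, which gives us composition: if $a$ is right of $b$ and $b$ is right of $c$, then $a$ is right of $c$. But it is not reflexive; nothing is to the right of itself, so the required identity arrows $\mathrm{id}_A$ would have to be added by hand.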

To hammer home the point about populating the qualia category, recall the logic gates that serve as simple illustrations of IIT. How are we to use this approach on them? We know what IIT says, and so we can form that category without any problems. But what goes into the category of 'qualia' for the logic-gate system? We have no idea. In response to a question about Scott Aaronson's objection Tsuchiya says that the expander grid may have a huge conscious field but would not have any visual experience. But what justifies this assertion?

They conclude their paper with the following remarks:

We proposed the three steps to apply the category theory approach in consciousness studies. First, we need to characterize our own phenomenological experience with detailed and structured descriptions to the extent to accept the domain of qualia as a category.

This may prove to be a difficult task, and not just for reasons having to do with higher-order awareness. Phenomenology is tricky stuff and it is notoriously hard to get people to agree on it (N.B. this is an understatement!), and since that is the case this general strategy seems doomed.


Another frustrating assertion with minimal evidence comes in the second paper linked above, and it has to do with the No-Report paradigm.

No-report paradigms have implied that certain parts of the brain areas, such as the prefrontal areas, may not be related to consciousness, but more to do with the act of the reports (Koch, Massimini, Boly, & Tononi, 2016).

If one buys this then one will see the irreducible IIT 'concepts' as corresponding to phenomenally conscious states, but if instead one thinks that these results are overrated then one will see these irreducible IIT 'concepts' as picking out mental representations that may or may not be conscious. Thus we cannot extrapolate from the results of IIT until the debate with higher-order theories is resolved.

And that cannot happen until the proponents of IIT actually address the empirical case for higher-order theories. This is something that they have been very reluctant to do, and when they discuss other theories of consciousness they studiously avoid any mention of higher-order theories. Higher-order theories need to be taken as seriously as Global Workspace, local re-entry, and the other theories one finds in neuroscience, and for the same reason: because there is significant (though not decisive) evidence in their favor.

But OK, what about the limited claim that we could in principle know whether the bat's phenomenology was more like our seeing or our hearing? If we could generate the relevant categories for human conscious visual experience and auditory experience, and then generate the IIT category for the bat's echolocation, we could compare them and see whether the bat's category resembles our visual or our auditory category. According to Tsuchiya, if we found that it resembled the IIT category for our auditory experiences (instead of our visual ones), or vice versa, then we would have some evidence that the bat experienced the world in the way we do when hearing.
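In the notation sketched earlier, the test would presumably run as follows (this is my reconstruction of the proposal, not anything the authors state in these terms): build the IIT cause-effect category $\mathcal{I}_{\mathrm{bat}}$ from the bat's echolocation system, build $\mathcal{Q}_{\mathrm{vis}}$ and $\mathcal{Q}_{\mathrm{aud}}$ from human visual and auditory phenomenology, and then ask whether the better-behaved (more nearly structure-preserving and invertible) functors run from $\mathcal{Q}_{\mathrm{vis}}$ or from $\mathcal{Q}_{\mathrm{aud}}$ into $\mathcal{I}_{\mathrm{bat}}$.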

But this seems to me to be a fundamental misunderstanding of Nagel's point. His point was that there is no reason to expect that the bat's experience would be anything like our seeing or our hearing. To know what it is like for the bat requires that we take up the bat's point of view (according to Nagel). It is not clear that this proposal addresses that issue at all! Even if we found that the bat's brain integrated information in the way our brain integrates auditory information, which results in the conscious experience of hearing for us, even if (stress on the IF) we discovered that, why should we think that the bat's experience was just like our experience of hearing? The point Nagel wanted to make was that conscious experience seems somehow essentially bound up with the idea of subjectivity, of being accessible only from one's own point of view. This is entirely missed in the proposal by Tsuchiya et al.