Fanselow and Pennington on LeDoux and Colleagues

Things have been really hectic around here lately and I have been meaning to post something on this for a while now.

Recently Michael Fanselow and Zachary Pennington, both at UCLA, have argued against the kind of position developed by LeDoux and colleagues. This includes his paper with the psychiatrist Daniel Pine, his paper with me developing a higher-order theory of fear and anxiety, and his paper with the cognitive behavioral therapist Stefan Hofmann. The papers are linked to below.

LeDoux and Pine responded here:

There is also a bit of a response in our recent general piece on these issues here:

I think the responses do a good job, but there is one passage that I think needs more attention.

This is from the ‘Psychiatric Dark Ages’ paper where Fanselow and Pennington say,

5. A logical inconsistency within the two-system framework

The two-system framework formally states that fear as a subjective experience arises from the neural circuitry that gives rise to working memory and conscious recollection, and more specifically, to episodic memory (LeDoux & Brown, 2017; LeDoux, 2017). As an example of an episodic memory, I can recall the what, where and when of yesterday’s breakfast. This includes my memory for the flavors I experienced. I can use this memory to flexibly guide today’s choices—yesterday I had bacon, better stick to oatmeal today. The neural circuits that support such episodic memories are also the neural systems that allow animals to take alternate paths when the one normally used is blocked. And in the two-systems framework, they support the subjective emotion of fear. The question then becomes what is unique about fear that differentiates it from other cognitions? The answer to this question is immediately apparent if one looks at LeDoux and colleagues’ schematics [Figure 1b (LeDoux & Pine, 2016), Figure 2a (LeDoux, 2017) and Figure 5 (LeDoux & Brown, 2017)]: it is the input from the subcortical defensive system, and in the case of LeDoux and Brown, feedback from the behavioral responses generated by the subcortical defensive circuits. In other words, the unique qualities of subjective fear in the two-system framework reduce to the more parsimonious single generator model, where conscious fear reflects one component of an integrated response. Indeed, the additional machinery needed to generate subjective report probably adds additional noise, rendering it, as many previous to us have suggested, a less pure and objective measure of fear.

The central argument here seems to be that, since we allow that activity in the amygdala and other lower-order areas influences the subjective experience of fear, you could have had that same subjective experience without the higher-order activity. This simply doesn’t follow.

On the view LeDoux and I developed, the unique qualities of subjective fear come from the unique contents of certain higher-order representations. It is entirely plausible that activity from the subcortical defensive system may cause the appropriate higher-order representations to have a specific kind of content, which in turn results in a specific subjective experience. When that activity is missing and fear is still felt, the experience may be subjectively different because of the missing causal contribution from the subcortical defensive circuit. This does not collapse the view into a first-order view.

What the HOROR view is committed to, though, is this: if we could, by some means other than the normal route, mimic the causal input of the subcortical circuits, then we could produce the higher-order state with the appropriate content without any activity in the defensive survival circuits, and that would result in the exact same subjective experience of fear.

 

Ian Phillips on Simple Seeing

A couple of weeks ago I attended Ian Phillips’ CogSci talk at CUNY. Things have been hectic but I wanted to get down a couple of notes before I forget.

He began by reviewing change blindness and inattentional blindness. In both of these phenomena subjects sometimes fail to recognize (or report) changes that occur right in front of their faces. These cases can be interpreted in two distinct ways. On one interpretation one is conscious only of what one is able to report on, or attend to. So if there is a doorway in the background that is flickering in and out of existence as one searches the two pictures looking for a difference, and when asked one says that one sees no difference between the two pictures, then one does not consciously experience the doorway or its absence. This is often dubbed the ‘sparse’ view and it is interpreted as the claim that conscious perception contains a lot less detail than we naively assume.

Fred Dretske was a well-known defender of a view which distinguishes two components of seeing. There is what he called ‘epistemic seeing’ which, when a subject sees that p, “ascribes visually based knowledge (and so a belief) to [the subject]”. This was opposed to ‘simple seeing’ which “requires no knowledge or belief about the object seen” (all quoted material is from Phillips’ handout). This ‘simple seeing’ is phenomenally conscious but the subject fails to know that they have that conscious experience.

This debate is well known and has been around for a while. In the form I am familiar with, it is a debate between first-order and higher-order theories of consciousness. If one is able to have a phenomenally conscious experience in the absence of any kind of belief about that state, then the higher-order thought theory, on which conscious perception requires a kind of higher-order cognitive state about the first-order state, is false. The response developed by Rosenthal, and that I find pretty plausible, is that in change blindness cases the subject may be consciously experiencing the changing element but not conceptualizing it as the thing which is changing. This, to me, is just a higher-order version of the kinds of claims that Dretske is making, which is to say that this is not a ‘sparse’ view. Conscious perception can be as rich and detailed as one likes and this does not require ‘simple seeing’. Of course, the higher-order view is also compatible with the claim that conscious experience is sparse, but that is another story.

At any rate, Phillips was not concerned with this debate. He was more concerned with the arguments that Dretske gave for simple seeing. He went through three of Dretske’s arguments and argued that each one had an easy rejoinder from the sparse camp (or the higher-order camp). The first he called ‘conditions’ and it involved the claim that when someone looks at (say) a pair of pictures for 3-5 minutes, scanning every detail to see if there is any difference between the two, we would ordinarily say that they have seen everything in the two pictures. I mean, they were looking right at it and their eyes are not defective! The problem with this line of argument is that it does not rule out the claim that they unconsciously saw the objects in question. The next argument, from blocking, meets the same objection. Dretske claims that if you are looking for your friend and no one is standing in front of them blocking them from your sight, then we can say that you did see your friend even if you deny it. The third argument involved the claim that when searching the crowd for your friend you saw that no one was naked. But this meets a similar objection to the previous two arguments. One could easily not have (consciously) seen one’s friend and just inferred that, since you didn’t see anyone naked, your friend was clothed as well.

Phillips then went on to offer a different way of interpreting simple seeing based on signal detection theory. The basic intuition for simple seeing, as Phillips sees it, lies in the idea that the visual system delivers information to us, and then there is what we do with that information. The basic metaphor was a letter being delivered. The delivery of the letter (the placing of it into the mailbox) is one thing; your getting the letter and understanding its contents is another. Simple seeing can then be thought of as the informative part, and the cognitive noticing, attending, higher-order thought, etc., can be thought of as a second independent stage. Signal detection theory, on his view, offers a way to capture this distinction.

Signal detection theory starts with treating the subject as an information channel. It then quantifies this, usually by having the subject perform a yes/no task and then looking at how many times they got it right (hits) versus how many times they got it wrong (false alarms). False alarms, specifically, involve the subject saying they see something but being wrong about it, because there was no visual stimulus. This is distinguished from ‘misses’, where there was a target but the subject did not report it. The ‘sensitivity to the world’ is called d’, pronounced “d prime”. On top of this there is another value which is computed, called ‘c’. c, for criterion, is thought of as measuring a bias in the subject’s response and is typically computed from the average of the (z-transformed) hit and false-alarm rates. One can think of the criterion as giving you a sense of how ‘liberal’ or ‘conservative’ the subject’s responses are. If they say they saw something all the time then they seemingly have a very liberal criterion for determining whether they saw something (that is to say, they are biased towards saying ‘yes I saw it’ and are presumably mistaking noise for a signal). If they never say they saw it then they are very conservative (they are biased towards saying ‘no I didn’t see it’). This gives us a sense of how much of the noise in the system the subject treats as actually carrying information.
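
To make the computations concrete, here is a minimal sketch of the standard equal-variance signal detection measures (the function name and the example rates are mine, purely for illustration):

    # Minimal sketch of equal-variance signal detection measures.
    # Illustrative only; the hit/false-alarm rates are made up.
    from scipy.stats import norm

    def sdt_measures(hit_rate, fa_rate):
        """Return (d', c) from a subject's hit and false-alarm rates."""
        z_hit = norm.ppf(hit_rate)  # z-transform of the hit rate
        z_fa = norm.ppf(fa_rate)    # z-transform of the false-alarm rate
        d_prime = z_hit - z_fa      # sensitivity to the world
        c = -0.5 * (z_hit + z_fa)   # criterion: negative = liberal, positive = conservative
        return d_prime, c

    print(sdt_measures(0.84, 0.31))  # roughly d' = 1.49, c = -0.25 (a liberal subject)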

The suggestion made by Phillips was that this distinction could be used to save Dretske’s view if one took d’ to track simple seeing and c to track the subject’s knowledge. He then went on to talk about empirical cases. The first involved memory across saccades and came from Hollingworth and Henderson, Accurate Visual Memory for Previously Attended Objects in Natural Scenes; the second from Mitroff and Levin, Nothing Compares 2 Views: Change Blindness can occur despite preserved access to the changed information; and the third from Ward and Scholl, Inattentional blindness reflects limitations on perception, not memory. Each of these can be taken to suggest that there is “evidence of significant underlying sensitivity in [change blindness] and [inattentional blindness]”.

He concluded by talking about blindsight as a possible objection. Dretske wanted to avoid treating blindsight as a case of simple seeing (that is, of there being phenomenal consciousness that the subject was unaware, in any cognitive sense, of having). Dretske proposed that what was missing was the availability of the relevant information to act as a justifying reason for the subject’s actions. Phillips then went on to suggest various responses to this line of argument. Perhaps blindsight subjects who do not act on the relevant information (say, by not grabbing the glass of water in the area of their scotoma) are having the relevant visual experience but are simply unwilling to move (though how would we distinguish this from their not having the relevant visual experience?). Perhaps blindsight patients can be thought of as adjusting their criterion, and so as choosing the interval with the strongest response, and if so this can be thought of as reason-responsive. Finally, perhaps, even though they are guessing, they really can be thought of as knowing that the stimulus is there?

In discussion afterwards I asked whether he thought this line of argument was susceptible to the same criticism he had leveled against Dretske’s original arguments. One could interpret d’ as tracking conscious visual processing that the subject doesn’t know about, or one could interpret it as tracking the amount of information represented by the subject’s mental states independently of what the subject was consciously experiencing (at least to some extent). So, one might think, d’ is good, so the subject represents information about the stimulus that is able to guide their behavior, but that may be going on while the subject is conscious of some of it but not all of it, or of different aspects of it, etc. So there is no real reason to think of d’ as tracking simple (i.e. unconceptualized, unnoticed, uncategorized, etc.) content that is conscious as opposed to non-conscious. He responded that he did not think that this constituted an argument. Rather, he was trying to offer a model that captured what he took to be Dretske’s basic intuition, which was that there was the information represented by the visual system, which was conscious, and then there was the way that we were aware of that information. This view was sometimes cast as unscientific, and he thought of the signal detection material as providing a framework that, if interpreted in the way he suggested, could capture, and thus make scientifically acceptable, something like what Dretske (and other first-order theorists) want.

There was a lot of good discussion, much of which I don’t remember, but I do remember Ned Block asking about Phillips’ response to cases like the famous Dretske example of a wall, painted a certain color, having a piece of wallpaper in one spot. The little square of wallpaper has been painted and so is the same color as the wall. If one is looking at the wall and doesn’t see that there is a piece of wallpaper there, does one see (in the simple seeing kind of way) the wallpaper? Phillips seemed to be saying we do (but don’t know it), and Block asked whether it wasn’t the case that when we see something we represent it visually. Phillips responded by saying that on the kind of view he was suggesting that wasn’t the case. Block didn’t follow up and didn’t come out after, so I didn’t get the chance to pursue that interesting exchange.

Afterwards I pressed him on the issue I raised. I wondered what he thought about the kinds of cases, discussed by Hakwan Lau (and myself), where d’ is matched but subjects give differing answers to questions like ‘how confident are you that you saw it?’ or ‘rate the visibility of the thing seen’. In those cases we have, due to matched d’, the same information content (worldly sensitivity), and yet one subject says they are guessing while the other says they are confident they saw it (or one rates its visibility lower while the other rates it higher, so as more visible). Taking this seriously seems to suggest that there is a difference in what it is like for these subjects (a difference in phenomenal consciousness) while there is no difference in what they represent about the world (so no difference at the first-order level). The difference in what it is like for them seems to track the way in which they are aware of the first-order information (as tracked by their visibility/confidence ratings). If so then this suggests that d’ doesn’t track phenomenal consciousness. Phillips responded by suggesting that there may be a way to talk about simple seeing involving differences in what it is like for the subject but didn’t elaborate.
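
The structure of these cases can be illustrated with made-up numbers (not Hakwan’s actual data): two subjects can have exactly matched d’ while differing sharply in criterion, and so in what they are willing to report:

    # Toy illustration with invented rates, not data from Lau's studies.
    from scipy.stats import norm

    def sdt_measures(hit_rate, fa_rate):
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return z_hit - z_fa, -0.5 * (z_hit + z_fa)

    # Subject A says 'I saw it' freely; Subject B says they are just guessing.
    print(sdt_measures(0.84, 0.31))  # d' ~ 1.49, c ~ -0.25 (liberal)
    print(sdt_measures(0.69, 0.16))  # d' ~ 1.49, c ~ +0.25 (conservative)
    # Same d' (same first-order sensitivity to the world), yet the two
    # subjects differ in how they report being aware of that information.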

I still am not sure how he responds to the argument Hakwan and I have given. If there is differing conscious experience with the same first-order states in each case, then the difference in conscious experience can only be captured (or is best captured) by some kind of difference in our (higher-order) awareness of those first-order states.

In addition, now that I have thought about it a bit, I wonder how he would respond to Hakwan’s argument (stemming more from his own version of the higher-order thought theory) that the setting of the criterion, which Phillips appeals to in the blindsight cases, depends on a higher-order process and so amounts to a cognitive state having a constitutive role in determining how the first-order state is experienced. This suggests that an ‘austere’ notion of simple seeing, on which no cognitive states are involved in phenomenal consciousness, is harder to find than Phillips originally thought.

Remembering Jerry Fodor

I was very sad to find out about the passing of Jerry Fodor today. He was obviously an iconic figure in philosophy, and though I had only a brief interaction with him he made a big impact. I sat in on the Research Seminar in Mind and Language that he ran along with Christopher Peacocke in the Spring of 2004 and I also took his class on Concepts at NYU in the Spring of 2005 (through the CUNY Consortium). Sadly this was before I started blogging and so I don’t have anything on either one written up (I recall having some notes on paper but those have been lost).

I do remember that I was also taking David Armstrong’s class on Truthmakers at CUNY and David Rosenthal’s class on Consciousness, Thought, and Language. For my final paper I ended up writing a version of what became The Mark of the Mental that was 50-plus pages long! I saw it as walking the line between Fodor’s views and Rosenthal’s views. I sent a draft of it to Jerry before it was due and he asked to meet with me to talk about it. I remember being very surprised to have heard back from him at all, let alone that he wanted to meet with me one-on-one to discuss it. He came up to the Graduate Center and we spent hours arguing about the paper. I forget exactly what we argued about, but I remember thinking that I could not believe that he would take the time to come and sit down with me at all. I took a lot of notes during the discussion (all lost now), but I remember he gave me very valuable feedback and I really enjoyed talking with him. I actually can’t find the original version of the paper anywhere (I must have lost it when my old computer crashed back in 2007/2008), which is too bad.

Since I thought the paper nicely straddled the line between issues raised in both Fodor’s and Rosenthal’s classes, I ended up submitting the paper to both of them. I figured at 50-plus pages it was really like two papers, and I wanted to get feedback from both of them. About a week later I got a message from Rosenthal saying he needed to talk to me. It turns out that it had somehow come to light that I had submitted it to both of them for credit. David explained to me that I could not do that (I believe he said “you would not try to pay for two different things with numerically the same money, would you?”). I felt really bad after that, as I had genuinely thought it was not a big deal at all. After hashing out the matter I was informed that I would have to pick one of them to submit it to. I chose to submit it for David’s class, and so I never did get to hear what Jerry thought of the final version of the paper. I never spent any time with him after that, and though I saw him speak on several occasions, I was too embarrassed to go up and talk to him.

He could be very intimidating (and sometimes downright mean) but he was also very lively and I will always remember that he took the time to come and talk to a student that he didn’t know very well at all to provide excellent feedback on a paper he must have thought was very bad.

RIP.

The Biological Chinese Room (?)

I am getting ready to head out to New York Medical College to give Grand Rounds in the department of Psychiatry and Behavioral Sciences on the Neurobiology of Consciousness. I am leaving in just a bit, but as I was getting ready I had a strange thought about Searle’s Chinese Room argument that I thought I would jot down very quickly. I assume we are all familiar with the traditional version of the argument. We have you (or Searle) locked in a room receiving input in a foreign language and looking up proper responses in a giant rule book to return the proper output. In effect the person in the room is performing the job that a computer would: taking syntactic representations and transforming them according to formally specified rules. The general idea is that since Searle doesn’t thereby understand Chinese, there must be more to understanding it than formal computation.

Now, I don’t want to get bogged down in going over the myriad responses and counter-responses that have appeared since Searle first gave this argument, but it did occur to me that we could give a biological version of it that would target the biological view of consciousness that Searle prefers. Indeed, I think it also would work against Block’s recent claim that some kind of analog computation suffices for phenomenal consciousness (see his talk at Google (and especially the questions at the end)). So the basic idea is this. Instead of having the person in the room implement formal computations, have them implement analog ones by playing the role of neurons. They would be sequestered in the room as usual and would receive input in the form of neurotransmitters. They would then respond with the appropriate neurotransmitters. We can imagine the entire room is hooked up in such a way that the Chinese speaker on the outside is speaking normally, or typing or whatever, and this gets translated into neurochemical activity, which is what the person in the room receives. They respond in kind and this gets translated into speech on the other end. Searle still wouldn’t understand Chinese.

So it seems that either this refutes the biological view of consciousness or it suggests what is wrong with the original Chinese Room argument…any thoughts?

Revisiting my Dissertation

Nine years ago I defended my dissertation and then I promptly forgot about it. Part of the reason was that I was distracted with the Shombie Wars (believe me, I *never* expected to write a paper on zombies!) and starting Consciousness Online, but the biggest part of the story was that I was sick of working on it. I had spent two years writing it officially, but I had had the core idea for the dissertation in 2002 (developing ideas from my days as an undergraduate) and had written several versions of it for various seminars I had taken. By the time I had decided to pursue this as my dissertation project I had already been working on it (off and on) for four years. So after six years of reading, re-reading, writing, and re-writing I had a hard time even thinking about this material!

Looking back on it now I think the main “result” still stands up. Just after I defended, hybrid expressivist views became popular and I thought that maybe I had been scooped (more than I already had been by Blackburn!), but no one has developed, or even seemed to notice, the kind of hybrid view I formulated and defended (i.e. one where the speech act in moral discourse involves expressing an emotion and, at the same time, the belief that the emotion is the correct one to have towards the relevant state of affairs, moral character, etc.)…though to be honest I have grown more out of touch with the literature on metaethics…so maybe there is some devastating objection I am not aware of?

At some point I may try to look into it but in the meantime below are links to the blog posts I wrote while working on the dissertation.

  1. Introducing Frigidity
  2. What Kripke Really Thinks
  3. The Meaning and Use of ‘is True’
  4. Truth, Justification, and the Quasi-Realist Way
  5. Meaning and Justification
  6. A Simple Argument for Moral Realism
  7. Emotive Realism
  8. Truth and Necessity
  9. Varieties of Rigidity
  10. Devitt on the A Priori 
  11. Meta-Metaethics and the NJRPA
  12. Emotive Realism Ch. 1
  13. Emotive Realism Ch. 2
  14. Some Moral Truths are Analytic
  15. (Finally) Responding to Roman
  16. Moral Truthmakers
  17. Empiricism as the Default Position
  18.  Introducing Dr. Richard Brown

Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons) but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as a practical convincing case. This point is made especially clearly by David Chalmers in his paper The Singularity: A Philosophical Analysis (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’):

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (that I know of), but it is often taken to involve, in some way, producing a functional isomorph of the human brain. Thus partial uploading would be the partial production of a functional isomorph of the human brain. In particular we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading as Chalmers and others describe it, but there are people who seem to be doing things that look a bit like partial uploading. First one might think of cochlear implants. What we can do now is impressive, but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerves in (what we hope) are appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the computational device plays any role in constituting the conscious experience, or in being part of the subject’s mind. It looks to me like these are akin to fancy glasses. They causally interact with the systems that produce consciousness but do not show that the mind can be replaced by a silicon computer.

The case of the artificial hippocampus gives us another nice test case. While still in its early development, it certainly seems like a real possibility that the next generation of people with memory problems may have neural prosthetics as an option (there is even a startup trying to make it happen, and here is a nice video of Theodore Berger presenting the main experimental work).

What we can do now is fundamentally limited by our lack of understanding about what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. They record from an intact hippocampus (in rats) while the animal performs some memory task and then have a computer analyze and predict what the output of the hippocampus would have been. When compared to the actual output of hippocampal cells the prediction is pretty good, and the hope is that they can then use this to stimulate post-hippocampal neurons as they would have been stimulated if the hippocampus were intact. This has been done as proof of principle in rats (not in real time) and now in monkeys, in real time and in the prefrontal cortex as well!
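
As a toy sketch of the record-analyze-stimulate pipeline just described (Berger and colleagues actually use nonlinear multi-input multi-output models of spike trains; here I swap in a plain linear least-squares map, and all of the names and data are invented):

    # Toy version of the prosthetic pipeline: record input- and output-stage
    # spike counts from the intact circuit, fit a predictive model, then use
    # its predictions to drive stimulation when the output stage is impaired.
    # A linear least-squares map stands in for the real nonlinear MIMO model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Phase 1: recordings from the intact circuit (rows = trials,
    # columns = binned spike counts per cell); simulated here.
    input_spikes = rng.poisson(5.0, size=(200, 10))   # e.g. layer 2/3 cells
    true_map = rng.normal(0.0, 0.3, size=(10, 6))
    output_spikes = input_spikes @ true_map + rng.normal(0.0, 0.5, size=(200, 6))  # e.g. layer 5

    # Phase 2: fit the input -> output model from the recordings.
    W, *_ = np.linalg.lstsq(input_spikes, output_spikes, rcond=None)

    # Phase 3: with the output stage impaired, predict what it would have
    # done on a new trial and use that prediction as the stimulation pattern.
    new_trial = rng.poisson(5.0, size=(1, 10))
    print("stimulation pattern:", np.round(new_trial @ W, 2))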

The monkey work was really interesting. They had the animal perform a task which involved viewing a picture and then waiting through a delay period. After the delay period the animal was shown many pictures and had to pick out the one it saw before (this is one version of a delayed match-to-sample task). While the animals were doing this the researchers recorded the activity of cells in the prefrontal cortex (specifically layers 2/3 and 5). When they introduced a drug into the region which was known to impair performance on this kind of task, the animal’s performance was very poor (as expected), but if they stimulated the animal’s brain in the way that their computer program predicted the deactivated region would have responded (specifically, they stimulated the layer 5 neurons (via the same electrode they previously used to record) in the way that the model predicted they would have been stimulated by layer 2/3), the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory for the animal’. He then shows that if you do this with an animal that has an intact brain it does better than it did before. This can be used to enhance the performance of a neurotypical brain!

They say they are doing human trials, but I haven’t heard anything about that. Even so, this is impressive in that they use it successfully in rats for long-term memory in the hippocampus and then also in monkeys, in the prefrontal cortex, for working memory. In both cases they seem to get the same result. It starts to look like it is hard to deny that the computer is ‘forming’ the memory and transmitting it for storage. So something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case, where we have a system causally interacting with the brain but it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experiences. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph), but if we get the same pattern of results for working memory in humans, are we heading towards Chalmers’ acceptance scenario?

Consciousness and Category Theory

In the comments on the previous post I was alerted, by Matthias Michel, to a couple of papers that I had not yet read. The first was a paper in Neuroscience Research which came out in 2016:

And the second was a paper in Philosophy Compass that came out in March 2017:

After reading these I realized that I had heard an early version of this material when I was part of a plenary session with Tsuchiya in Tucson back in April of 2016. The title of his talk was the same as the title of the Philosophy Compass paper and some of the same ideas were floated. I had intended to write something about this after my talk but I apparently didn’t get to it (yet?). I am in the midst of battling a potty-training toddler so it may not be anytime soon, but I did want to get out a few (inchoate) reactions to these papers now that I have read them.

Both of these papers were very interesting. The first was interesting because it is the first time I have seen proponents of IIT acknowledge that they need to examine their ‘axioms’ more carefully. Are these axioms self-evident? Not to many people! Might there be alternate formulations? Yes! At the very least there should be some discussion of higher-order awareness (or of awareness at all). Ideally there should be an axiom like:

Awareness: Consciousness is for one. If one is in no way aware of oneself as being in a mental state then one is not consciously in that mental state.

Of course they don’t want to add anything like this because as it stands the theory clearly assumes (without argument) that higher-order theories of consciousness are false. This is a problem that will not go away for IIT. But I’ll come back to that (by the way, the first ‘axiom’ of IIT sometimes seems to me to suggest a higher-order interpretation so one might assimilate this to an unpacking of the first axiom).

The central, and very interesting, idea of these papers is that category theory can help IIT address the hard problem (and some of the issues I raised in the previous post). There are a lot of mathematical details that are not relevant (yet), but the basic idea is that category theory lets us look at the structure that a mathematical object has and compare it to the structure of other mathematical objects. They want to exploit this by making a category out of the integrated-information cause-effect space and one for qualia, and then using category theory to examine how similar these two categories are.
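
For readers who want the comparison tool pinned down: the standard device for comparing categories is the functor (this is just the textbook definition, nothing specific to their papers). A functor F : C → D sends each object X of C to an object F(X) of D, and each arrow f : X → Y to an arrow F(f) : F(X) → F(Y), in such a way that

    F(id_X) = id_{F(X)},    F(g ∘ f) = F(g) ∘ F(f)

that is, identities and composition, the structure, are preserved. The proposal is then, roughly, to ask how much of the structure of the qualia category survives a functor into the IIT category.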

First, can qualia form a category? They address this issue in the first paper, but (to use a low-hanging pun) this looks like a category mistake. Qualia are not mathematical objects. I suppose you could form the set of qualia and that would be a mathematical (i.e. abstract) object. But if you show that this structure overlaps with IIT, have you shown anything about qualia themselves? Only if the structure captured in this category exhausts the nature of qualia, but that is highly controversial! My guess is that there will be many categories that we could construct that would have some functors to both the category of qualia and the category of IIT structures. So, take the category of the set of Munsell color chips (not the experience of them, the actual chips). Won’t they stand in relations to each other that can be mapped onto the IIT domain in pretty much exactly the same way as the set of qualia!? If so, then is IIT Naive Realism? That is a joke, but the point is that one would not want to claim that this shows that IIT is a theory of color chips. All we have shown is that there is a similar structure running through two mathematical structures that at first seemed unrelated. That is interesting, but I don’t see how it can help us.
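
The worry can be put semi-formally (my notation, not theirs). Suppose the chip category is structurally isomorphic to the qualia category via some G : C_chips → Q_qualia. Then any functor F : Q_qualia → I_IIT composes to give

    F ∘ G : C_chips → I_IIT

so whatever structural match F reveals between qualia and the IIT cause-effect space, F ∘ G reveals an equally good match between the color chips and that space. Structural agreement alone cannot tell us the theory is about qualia rather than chips.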

To their credit they recognize that this is a bit controversial and here is what they say about the issue:

In the narrow sense, a quale refers to a particular content of consciousness, which can be compared or characterized as a particular aspect of one moment of experience or a quale in the broad sense (Balduzzi and Tononi, 2009; Kanai and Tsuchiya, 2012). Can category theory consider any qualia we experience as objects or arrows? Some qualia in the narrow sense are straightforward to consider as objects: a quale for a particular object or its particular aspect, such as color. There are, however, some aspects of experience that are apparently difficult to consider as objects. For example, we can experience a distance between the two cups, which is a relationship between the objects but itself has no physical object form. Such abstract conscious perception can be naturally regarded as a relationship between objects: an arrow. Further, there are some types of qualia that seem to emerge out of many parts, such as a face. A whole face is perceived as something more than a collection of its constituent parts; there is something special about a whole face. Psychological and neuroscientific studies of faces point to configural processing, that is, a web of spatial relationship among the constituent parts of a face is critical in perception of a whole face (Maurer et al., 2002). In category theory, a complicated object, like a quale for a face, can be considered as an object that contains many arrows. Considered this way, any quale in the narrow sense can be considered as either an object, an arrow, or an object or arrow that contains any combinations of them.

But even if this is ok with you (and you set aside questions about whether ‘to the right of’ can be an arrow in category theory (will it obey the axiom of composition?)), what goes into the qualia category? They seem to assume that (at least some of) it is non-controversial, but that isn’t so clear to me. Even so, what about Nagel’s bat? In order to use this procedure we would have to already know what kinds of qualities (conscious experiences) the bat had in order to form the category. But we have no idea what kinds of ‘objects’ and ‘arrows’ to populate that category with! That was kinda Nagel’s point!

To hammer this point home, recall the logic gates that serve as simple illustrations of IIT. How are we to use this approach on them? We know what IIT says and so we can form that category without any problems. But what goes into the category of ‘qualia’ for the logic-gate system? We have no idea. In response to a question about Scott Aaronson’s objection, Tsuchiya says that the expander grid may have a huge conscious field but would not have any visual experience. But what justifies this assertion?

They conclude their paper with the following remarks:

We proposed the three steps to apply the category theory approach in consciousness studies. First, we need to characterize our own phenomenological experience with detailed and structured descriptions to the extent to accept the domain of qualia as a category.

This may prove to be a difficult task, and not just for reasons having to do with higher-order awareness. Phenomenology is tricky stuff and it is notoriously hard to get people to agree on it (N.B. this is an understatement!), and since that is the case this general strategy seems doomed.

 

Another frustrating assertion with minimal evidence comes in the second paper linked to above, and it has to do with the no-report paradigm.

No-report paradigms have implied that certain parts of the brain areas, such as the prefrontal areas, may not be related to consciousness, but more to do with the act of the reports (Koch, Massimini, Boly, & Tononi, 2016).

If one buys this then one will see the IIT irreducible ‘concepts’ as corresponding to phenomenally conscious states, but if instead one thinks that these results are overrated then one will see these irreducible IIT ‘concepts’ as picking out mental representations that may or may not be conscious. Thus we cannot extrapolate from the results of IIT until the debate with higher-order theories is resolved.

And that cannot happen until the proponents of IIT actually address the empirical case for higher-order theories. This is something that they have been very reluctant to do, and when they discuss other theories of consciousness they studiously avoid any mention or discussion of higher-order theories. Higher-order theories need to be taken as seriously as Global Workspace, local re-entry, and the other theories one finds in neuroscience, and for the same reason: because there is significant (though not decisive) evidence in favor of the theory.

But ok, what about the limited claim that we could in principle know whether the bat’s phenomenology is more like our seeing or our hearing? If we could generate the relevant categories for human conscious visual experience and auditory experience, and then generate the IIT category for the bat’s echolocation, we could compare them and see whether the bat’s category resembles our visual or our auditory category. According to Tsuchiya, if we found that it resembled the IIT category for our auditory experiences (instead of our visual ones), or vice versa, then we would have some evidence that bats experience the world in the same way we do.

But this seems to me to be a fundamental misunderstanding of Nagel’s point. His point was that there is no reason to expect that the bat’s experience would be anything like our seeing or our hearing. To know what it is like for the bat requires that we take up the bat’s point of view (according to Nagel). It is not clear that this proposal addresses that issue at all! Even if we found that the bat’s brain integrated information in the way our brain integrates auditory information, the way which results in the conscious experience of hearing for us, even if (stress on the IF) we discovered that, why should we think that the bat’s experience was just like our experience of hearing? The point that Nagel wanted to make was that conscious experience seems somehow essentially bound up with the idea of subjectivity, of being accessible only from one’s own point of view. This is entirely missed in the proposal by Tsuchiya et al.