SEP Entry on Higher-Order Theories Gets Worse

I am currently very, very busy, and I am not just talking about my attempt to get the true secret ending in my NG+ playthrough of Black Myth: Wukong, or how many celestial ribbons I need to upgrade my gear 😉 But seriously, things are pretty hectic for me all around. At LaGuardia I am teaching four classes in our short six-week winter semester (General Psychology, Ethics and Moral Issues, Critical Thinking, and Introduction to Philosophy) and at New York University I am filling in as an adjunct for one semester, teaching two undergraduate classes (Philosophical Applications of Cognitive Science and Minds and Machines). To top it off, I am teaching the Neuroscience and Philosophy of Consciousness class at the Graduate Center with Tony Ro. That is a lot, even for me! (LaGuardia’s spring semester starts in March so I’ll worry about that later!)…I am also working on a couple of papers, but that is pretty much going to have to wait until I am not teaching five days a week.

But even so, I just noticed (well I noticed a week or so ago but see above, I’ve been busy!) that there was an update to the Stanford Encyclopedia of Philosophy entry on higher-order theories of consciousness and I had to make a couple of comments about it.

There is a lot I would complain about in this article in general, and I have long used it as an example of the way in which the introductory material on higher-order theories is misleading and confusing, but I will set that aside for now and focus on the part directly relating to my views about the higher-order theory, which I quote below.

Brown (2015) challenges the common basic assumption that HOT theory is even a relational theory at all in the way that many have interpreted it (i.e., as including two distinct mental states related to each other). Instead, HOT theory is better construed as a HOROR theory, that is, higher-order representation of a representation, regardless of whether or not the target mental state exists. In this sense, HOT theory is perhaps better understood as a non-relational theory.

I have a lot of problems with this paragraph! First, it cites my paper on this from 10 years ago but nothing that I have written on the topic since then! It is true that when I wrote the cited paper I had not worked out my position in all of its details, but I have done a lot of work since then trying to do so. But even in that early paper I do not challenge the basic assumption that the higher-order thought theory is even a relational theory.

What I do is argue that the Traditional Higher-Order Thought theory, as it is usually talked about, is ambiguous between a relational (like Gennaro) and a non-relational (like Rosenthal) version. Rosenthal’s theory is a Traditional non-relational HOT theory and Gennaro has a Traditional relational theory. My actual argument is that higher-order theories need to be mixed non-traditional theories that incorporate relational and non-relational elements for different jobs, but that is another story altogether! I also reject the theoretical posit of higher-order thoughts. I do not think of the kind of higher-order representations I posit as ordinary folk-psychological thoughts that I could think on my own at will. That is the Rosenthal view and I have always found it to be a bit strange. But ok, these hairs can be split another day.

Gennaro continues saying,

…if the qualitative conscious experience always goes with the HOT (including in cases of misrepresentation noted in section 4), then it seems the first-order state plays no relevant role in the theory.

Gennaro means this as an objection to the HOROR theory and its claim that the higher-order state is itself the phenomenally conscious state. Amusingly, Gennaro here fails to realize that Rosenthal’s own theory is a non-relational theory! So if the objection he raises is supposed to work against my view then it also works against Rosenthal’s.

He then cites Rosenthal’s objection to me without realizing that, given the way he has set things up, this amounts to Rosenthal objecting to Rosenthal! More importantly, in my book I respond directly to the points being made, including to the quote that he uses. He says, “…Rosenthal (2022) points out that Brown’s modified view conflates

a state’s being qualitatively conscious with a necessary condition for qualitative consciousness…there’s rarely anything it’s like to be in a HO state, and HO states are almost never conscious….[i]t’s the first-order state that’s qualitatively conscious. (Rosenthal 2022: 251–252)

On my view Gennaro and Rosenthal are here trying to identify phenomenal consciousness with state-consciousness, something which, as I argue in my book, itself stands in need of an argument. Yes, the higher-order representation is a necessary condition for the first-order mental state to be state-conscious, but I argue that state consciousness should be separated from phenomenal consciousness. A state is phenomenally conscious when there is something that it is like for the subject to be in that state. Rosenthal and Gennaro seem to agree implicitly with this. Rosenthal says that there is rarely anything that it is like to be in the relevant higher-order state, and it is clear that he intends this to mean that we are rarely aware of ourselves as being in the higher-order state (i.e., the higher-order state is not usually state-conscious).

I agree with all of that! But in order for this to count as an objection to my view it must be the case that there cannot be something that it is like for one when one is not aware of the state one is in. This amounts to the transitivity principle (which I argue is the uniting feature of the Traditional higher-order approach), and I explicitly reject the transitivity principle!

In my book I give what I call my HORORibly simple argument (as a nod to Lycan’s original simple argument), as follows.

1. A phenomenally conscious state is one which, when one is in that state, there is something that it is like for one to be in it.

2. The state that, when one is in that state, there is something that it is like for one to be in it, is the state of inner awareness.

3. Thus, the phenomenally conscious state is the state of inner awareness.

4. Inner awareness is a representation of one’s own mind.

Thus, a phenomenally conscious state is a representation of one’s own mind.
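Put in a slightly more formal dress (this regimentation is just a schematization of the numbered premises above, reading premise 2 as at least entailing a conditional; let $P(x)$ mean x is phenomenally conscious, $W(x)$ that there is something it is like for one to be in x, $I(x)$ that x is a state of inner awareness, and $R(x)$ that x is a representation of one’s own mind):

```latex
\begin{align*}
&\text{1. } \forall x\,\big(P(x) \leftrightarrow W(x)\big) \\
&\text{2. } \forall x\,\big(W(x) \rightarrow I(x)\big) \\
&\text{3. } \forall x\,\big(P(x) \rightarrow I(x)\big) && \text{from 1, 2} \\
&\text{4. } \forall x\,\big(I(x) \rightarrow R(x)\big) \\
&\text{C. } \forall x\,\big(P(x) \rightarrow R(x)\big) && \text{from 3, 4}
\end{align*}
```

On this rendering the argument is straightforwardly valid (two hypothetical syllogisms), so any dispute has to target the premises.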

Put this way, we can see that Rosenthal and I disagree about premise 2 and maybe premise 1, but I am not confusing or conflating any kind of necessary condition for anything else. I am denying that what Rosenthal calls ‘qualitative consciousness’ (insert eye roll) is phenomenal consciousness. It is state-consciousness, and they are not the same thing (though they are related).

Let me stress that I feel weird about being somewhat indignant about someone not reading my book or being unaware of my views. I don’t usually expect that anyone will have any familiarity with my work before criticizing my views! However, it does seem to me that in this case (writing an entry for the most widely read online encyclopedia of philosophy) it should have been done. As it is, this article pretty badly misunderstands my position and makes no attempt to get it correct. I might be overly suspicious, but one can’t help but take this somewhat personally. This entry was originally written by Peter Carruthers and has recently been taken over by Rocco Gennaro, which explains a lot.

I have previously reviewed Gennaro’s book and papers for NDPR and written on this blog about the way in which I think he misunderstands Rosenthal’s approach to higher-order theories. He responded to my post and I invited him to come on Consciousness Live! and have a discussion with me, but he declined (the offer still stands on my end). Gennaro was at one of my talks at Tucson and afterwards asked me a question that directly pertains to the complaint above, and we talked about it over dinner. A less paranoid person might think that the pattern of citations suggests that he wrote it before my book came out and the review process took a long time. Perhaps; but the general theme of my work has been made clear to Gennaro for some time. He also definitely knows my email/how to contact me if he wanted to clarify some of my views! All I can say is that had I botched something this badly I would want to correct it immediately.

Ah well, as a humble community college teacher I am still sort-of honored to be mentioned at all in this prestigious scholarly source (and one assumes some editorial heavy-handedness was applied to get even that given how ridiculous all of this is). Maybe someday someone will update that entry to reflect the actual landscape of the debate about higher-order theories of consciousness. The only real question is whether that might get done before we get GTA 6, haha, I mean before the whole debate is empirically mooted!

Animal Consciousness and the Unknown Power of the Unconscious Mind

Things are about to get really (I mean really) busy for me and so I probably won’t be doing much besides running around frantically until August 2026 (seriously, even by my standards it’s going to be a rough ride for a while). Of course I will post the Consciousness Live! discussions once they start (Sept 18) and I am looking forward to Block’s presentation at the NYU Philosophy of Mind discussion group, so I may try to get to something here and there. At any rate, we have been having some very interesting discussions in the philosophy of animal consciousness and society class. We have been discussing the markers and ‘tests’ approach and we read Bayne et al.’s Tests for Consciousness in Humans and Beyond and Hakwan Lau’s The End of Consciousness (there was another paper but I’ll leave it aside for now). There were a lot of good points that came up in the class but I want to focus on the issue that is important to me, which is the methodological/evidential one I discussed in the previous post on this class.

Andrews seems to be trying to frame things by making a distinction between two positions you might have towards animals. The first is that we assume that an animal, or a particular organism, is not conscious at all and then we look for markers that would raise our credence that the animal is conscious. So, we look at fish and see if they behave a certain way with respect to tissue damage, etc. If the fish is damaged and seeks a pain reliever then probably that indicates it is conscious, and if it doesn’t then not. The second position assumes that animals are conscious but that we need to establish that they have this or that specific conscious experience. As I am understanding this at this point, she sees the marker approach as belonging to the first camp and the tests approach as belonging to the second, though I might have misunderstood that point.

I can see why, if you are arguing with a certain type of philosopher/scientist, this may be how you are thinking of things, but I do not think it helps with the methodological challenge to studying animal consciousness. This can be seen in the responses to the argument that I gave in the previous post. That argument relied on the empirical claim that anything that you associate with consciousness could likely be done without consciousness. So when I point out that blindsight seems to suggest that you can have sophisticated behavior without consciousness, one response was to say, ‘yeah but that doesn’t show that the blindsight patient has no conscious experience’. Another was ‘yeah but the blindsight subject is a conscious subject’. These are subtly different.

The first is taking the blindsight argument to be suggesting the conclusion that animals are not conscious. The second is suggesting the conclusion that being conscious played an important role in the process that led to the now unconscious behavior. So, the blindsight subject was normally sighted for a period of their life and they had normal visual perception and consciousness. Perhaps that played an important role in their learning how to do what they did and now, even though the process is automatic and can be done unconsciously, that doesn’t mean it could always be done unconsciously. These are good and interesting points but they do not defuse the methodological tension that I am pressing.

As I have said before, I don’t take the issue to be whether animals are conscious or not, since I take that to be intuitively obvious; and you may take it to be intuitively obvious that they are not conscious. That is irrelevant, since I do not base my belief in animal consciousness on science. If you were to ask me whether science supports my belief about animals, I would say that at this point we do not have scientific evidence that animals are, or are not, conscious, because of this methodological issue.

Suppose there is a behavior, neural process, or function which you take to be associated with consciousness (as a test, marker, or whatever). I will take as my example a certain pattern of neural activation in the fusiform face area. Suppose that we found that pattern when people looked at faces but not when they looked at houses. Does that indicate that finding that pattern is good evidence that they consciously saw the face? No. The reason is that we have found that same pattern of activation in cases that we have good reason to think are unconscious. (Side note: that could be disputed and it is interesting to think about those arguments, but let’s save that for later.) So, this pattern shows up when the subject consciously sees the face and also when the subject does not consciously see the face (but the face is present). Now suppose that someone finds this kind of pattern in a non-human animal. Is that evidence that the animal consciously sees a face? Or is it evidence that this process occurs unconsciously, as it did in some human cases? Unless we had some way of telling the two kinds of neural activations apart, we should conclude neither that the animal consciously saw the face nor that it unconsciously did so.

More to the point it would be irresponsible to loudly proclaim that this is evidence that the animal did consciously see the face until the issue above was resolved. None of this suggests that the animal is unconscious. It only suggests that the proposed marker/test is insufficient to establish that until we know the extent of the unconscious mind.
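The evidential point here can be sketched with a toy Bayesian calculation. All the numbers below are made up purely for illustration (nothing here is an empirical estimate of anything about the fusiform face area); the point is just the shape of the inference:

```python
# A toy sketch of the evidential point: a marker only supports the
# conscious-seeing hypothesis to the extent that it is MORE likely
# under conscious processing than under unconscious processing.
# All probabilities below are hypothetical, for illustration only.

def posterior_conscious(prior, p_marker_if_conscious, p_marker_if_unconscious):
    """Bayes' theorem: probability of conscious seeing given that the
    marker (e.g., a pattern of neural activation) was observed."""
    numerator = p_marker_if_conscious * prior
    denominator = numerator + p_marker_if_unconscious * (1 - prior)
    return numerator / denominator

prior = 0.5  # an agnostic starting point

# If the marker occurs about as often in unconscious processing as in
# conscious seeing, observing it leaves us exactly where we started:
print(posterior_conscious(prior, 0.9, 0.9))  # 0.5 -- no evidence either way

# The marker is diagnostic only if it is much rarer in unconscious cases:
print(posterior_conscious(prior, 0.9, 0.1))  # 0.9
```

In other words, until we know how often the marker shows up in unconscious processing (the second likelihood), observing it in an animal tells us nothing either way.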

From there one might want to mount the more general argument that anything could be done unconsciously. That is an empirical question that the field should take seriously. Most reasonable people I know of are not saying we should think animals are unconscious, or that science suggests that only mammals/birds are conscious. We are saying that we don’t really know how powerful the unconscious mind is; this hasn’t been fully investigated empirically. We have some reason to think it is quite powerful indeed, and some reason to think maybe not. Until we resolve this issue we should be cautious about grand declarations about what science has shown about animals and should seriously address these methodological issues.