SEP Entry on Higher-Order Theories Gets Worse

I am currently very, very busy, and I am not just talking about my attempt to get the true secret ending in my NG+ playthrough of Black Myth: Wukong, or how many celestial ribbons I need to upgrade my gear 😉 But seriously, things are pretty hectic for me all around. At LaGuardia I am teaching four classes in our short six-week winter semester (General Psychology, Ethics and Moral Issues, Critical Thinking, and Introduction to Philosophy), and at New York University I am filling in as an adjunct for one semester teaching two undergraduate classes (Philosophical Applications of Cognitive Science and Minds and Machines). To top it off, I am teaching the Neuroscience and Philosophy of Consciousness class at the Graduate Center with Tony Ro. That is a lot, even for me! (LaGuardia’s spring semester starts in March so I’ll worry about that later!)…I am also working on a couple of papers, but that is pretty much going to have to wait until I am not teaching five days a week.

But even so, I just noticed (well I noticed a week or so ago but see above, I’ve been busy!) that there was an update to the Stanford Encyclopedia of Philosophy entry on higher-order theories of consciousness and I had to make a couple of comments about it.

There is a lot I would complain about in this article in general, and I have long used it as an example of the way in which the introductory material on higher-order theories is misleading and confusing, but I will set that aside for now and focus on the part directly relating to my views about the higher-order theory, which I quote below.

Brown (2015) challenges the common basic assumption that HOT theory is even a relational theory at all in the way that many have interpreted it (i.e., as including two distinct mental states related to each other). Instead, HOT theory is better construed as a HOROR theory, that is, higher-order representation of a representation, regardless of whether or not the target mental state exists. In this sense, HOT theory is perhaps better understood as a non-relational theory.

I have a lot of problems with this paragraph! First, it cites my paper on this from 10 years ago but nothing that I have written on this topic since then! It is true that when I wrote the cited paper I had not worked out my position in all of its details, but I have done a lot of work since then trying to do so. But even in that early paper I do not challenge the basic assumption that the higher-order thought theory is even a relational theory.

What I do argue is that the Traditional Higher-Order Thought theory, as it is usually talked about, is ambiguous between a relational version (like Gennaro’s) and a non-relational version (like Rosenthal’s). Rosenthal’s theory is a Traditional non-relational HOT theory and Gennaro has a Traditional relational theory. My actual argument is that higher-order theories need to be mixed non-traditional theories that incorporate relational and non-relational elements for different jobs, but that is another story altogether! I also reject the theoretical posit of higher-order thoughts. I do not think of the kind of higher-order representations I posit as ordinary folk-psychological thoughts that I could think on my own at will. That is the Rosenthal view and I have always found it to be a bit strange. But ok, these hairs can be split another day.

Gennaro continues saying,

…if the qualitative conscious experience always goes with the HOT (including in cases of misrepresentation noted in section 4), then it seems the first-order state plays no relevant role in the theory.

Gennaro means this as an objection to the HOROR theory and its claim that the higher-order state is itself the phenomenally conscious state. Amusingly, Gennaro here fails to realize that Rosenthal’s own theory is a non-relational theory! So if the objection he raises is supposed to work against my view then it also works against Rosenthal’s.

He then cites Rosenthal’s objection to me without realizing that, the way he has set things up, this amounts to Rosenthal objecting to Rosenthal! More importantly, I respond directly to the points being made in my book, including to the quote that he uses. He says, “…Rosenthal (2022) points out that Brown’s modified view conflates

a state’s being qualitatively conscious with a necessary condition for qualitative consciousness…there’s rarely anything it’s like to be in a HO state, and HO states are almost never conscious….[i]t’s the first-order state that’s qualitatively conscious. (Rosenthal 2022: 251–252)

On my view Gennaro and Rosenthal are here trying to identify phenomenal consciousness with state-consciousness, something which, I argue in my book, itself stands in need of a supporting argument. Yes, the higher-order representation is a necessary condition for the first-order mental state to be state-conscious, but I argue that state consciousness should be separated from phenomenal consciousness. A state is phenomenally conscious when there is something that it is like for the subject to be in that state. Rosenthal and Gennaro seem to agree implicitly with this. Rosenthal says that there is rarely anything that it is like to be in the relevant higher-order state, and it is clear that he intends this to mean that we are rarely aware of ourselves as being in the higher-order state (i.e. the higher-order state is not usually state-conscious).

I agree with all of that! But in order for this to count as an objection to my view it must be the case that there cannot be something that it is like for one when one is not aware of the state one is in. This amounts to the transitivity principle (which I argue is the uniting feature of the Traditional higher-order approach), and I explicitly reject the transitivity principle!

In my book I give what I call my HORORibly simple argument (as a nod to Lycan’s original simple argument), as follows.

1. A phenomenally conscious state is one which, when one is in that state, there is something that it is like for one to be in it.

2. The state that, when one is in that state, there is something that it is like for one to be in it, is the state of inner awareness.

3. Thus, the phenomenally conscious state is the state of inner awareness.

4. Inner awareness is a representation of one’s own mind.

5. Thus, a phenomenally conscious state is a representation of one’s own mind.

Put this way, we can see that Rosenthal and I disagree about premise 2 and maybe premise 1, but I am not confusing or conflating any kind of necessary condition for anything else. I am denying that what Rosenthal calls ‘qualitative consciousness’ (insert eye roll) is phenomenal consciousness. It is state-consciousness, and they are not the same thing (though they are related).

Let me stress that I feel weird about being somewhat indignant about someone not reading my book or being unaware of my views. I don’t usually expect that anyone will have any familiarity with my work before criticizing my views! However, it does seem to me that in this case (writing an entry for the most widely read online encyclopedia of philosophy) the homework should have been done. As it is, this article pretty badly misunderstands my position and makes no attempt to get it right. I might be overly suspicious, but one can’t help but take this somewhat personally. This entry was originally written by Peter Carruthers and has recently been taken over by Rocco Gennaro, which explains a lot.

I have previously reviewed Gennaro’s book and papers for NDPR and written on this blog about the way in which I think he misunderstands Rosenthal’s approach to higher-order theories. He responded to my post and I invited him to come on Consciousness Live! and have a discussion with me, but he declined (the offer still stands on my end). Gennaro was at one of my talks at Tucson and afterwards asked me a question that directly pertains to the complaint above, and we talked about it over dinner. A less paranoid person might think that the pattern of citations suggests he wrote the entry before my book came out and the review process took a long time. Perhaps; but the general theme of my work has been clear to Gennaro for some time. He also definitely knows my email/how to contact me if he wanted to clarify some of my views! All I can say is that had I botched something this badly I would want to correct it immediately.

Ah well, as a humble community college teacher I am still sort of honored to be mentioned at all in this prestigious scholarly source (and one assumes some editorial heavy-handedness was applied to get even that, given how ridiculous all of this is). Maybe someday someone will update that entry to reflect the actual landscape of the debate about higher-order theories of consciousness. The only real question is whether that might get done before we get GTA 6, haha, I mean before the whole debate is empirically mooted!

Animal Consciousness and the Unknown Power of the Unconscious Mind

Things are about to get really (I mean really) busy for me, and so I probably won’t be doing much besides running around frantically until August 2026 (seriously, even by my standards it’s going to be a rough ride for a while). Of course I will post the Consciousness Live! discussions once they start (Sept 18), and I am looking forward to Block’s presentation at the NYU Philosophy of Mind discussion group, so I may try to get to something here and there. At any rate, we have been having some very interesting discussions in the philosophy of animal consciousness and society class. We have been discussing the markers and ‘tests’ approach, and we read Bayne et al.’s “Tests for Consciousness in Humans and Beyond” and Hakwan Lau’s “The End of Consciousness” (there was another paper but I’ll leave it aside for now). There were a lot of good points that came up in the class, but I want to focus on the issue that is important to me, which is the methodological/evidential one I discussed in the previous post on this class.

Andrews seems to be trying to frame things by making a distinction between two positions you might take towards animals. The first assumes that animals, or a particular organism, are not conscious at all, and then we look for markers that would raise our credence that the animal is conscious. So, we look at fish and see if they behave a certain way with respect to tissue damage, etc. If the fish is damaged and seeks a pain reliever then that probably indicates it is conscious, and if it doesn’t then not. The second position assumes that animals are conscious but that we need to establish that they have this or that specific conscious experience. As I understand it at this point, she sees the marker approach as belonging to the first camp and the tests approach as belonging to the second, though I might have misunderstood that point.

I can see why, if you are arguing with a certain type of philosopher/scientist, this may be how you are thinking of things, but I do not think it helps with the methodological challenge to studying animal consciousness. This can be seen in the responses to the argument that I gave in the previous post. That argument relied on the empirical claim that anything you associate with consciousness could likely be done without consciousness. So when I point out that blindsight seems to suggest that you can have sophisticated behavior without consciousness, one response was to say, ‘yeah, but that doesn’t show that the blindsight patient has no conscious experience’. Another was ‘yeah, but the blindsight subject is a conscious subject’. These are subtly different.

The first takes the blindsight argument to be suggesting the conclusion that animals are not conscious. The second suggests that being conscious played an important role in the process that led to the now unconscious behavior. So, the blindsight subject was normally sighted for a period of their life and had normal visual perception and consciousness. Perhaps that played an important role in their learning how to do what they do, and even though the process is now automatic and can be performed unconsciously, that doesn’t mean it could always have been done unconsciously. These are good and interesting points, but they do not defuse the methodological tension that I am pressing.

As I have said before, I don’t take the issue to be whether animals are conscious or not, since I take that to be intuitively obvious; you may take it to be intuitively obvious that they are not conscious. Either way this is irrelevant, since I do not base my beliefs about animal consciousness on science. If you were to ask me whether science supports my belief about animals, I would say that at this point we do not have scientific evidence that animals are, or are not, conscious, because of this methodological issue.

Suppose there is a behavior, neural process, or function which you take to be associated with consciousness (as either a test, a marker, or whatever). I will take as my example a certain pattern of neural activation in the fusiform face area. Suppose that we found that pattern when people looked at faces but not when they looked at houses. Does that indicate that finding that pattern is good evidence that they consciously saw the face? No. The reason is that we have found that same pattern of activation in cases that we have good reason to think are unconscious. (Side note: that could be disputed, and it is interesting to think about those arguments, but let’s save that for later.) So, this pattern shows up when the subject consciously sees the face and also when the subject does not consciously see the face (but the face is present). Now suppose that someone finds this kind of pattern in a non-human animal. Is that evidence that the animal consciously sees a face? Or is it evidence that this process occurs unconsciously, as it did in some human cases? Unless we had some way of telling the two kinds of neural activations apart, we should conclude neither that the animal consciously saw the face nor that it unconsciously did so.

More to the point, it would be irresponsible to loudly proclaim that this is evidence that the animal consciously saw the face until the issue above is resolved. None of this suggests that the animal is unconscious. It only suggests that the proposed marker/test is insufficient to establish consciousness until we know the extent of the unconscious mind.

From there one might want to mount the more general argument that anything could be done unconsciously. That is an empirical question that the field should take seriously. Most reasonable people I know of are not saying we should think animals are unconscious, or that science suggests that only mammals/birds are conscious. We are saying that we don’t really know how powerful the unconscious mind is; this hasn’t been fully investigated empirically. We have some reason to think it is quite powerful indeed, and some reason to think maybe not. Until we resolve this issue we should be cautious about grand declarations about what science has shown about animals and seriously address these methodological issues.

Philosophy of Animal Consciousness

The fall 2025 semester is off and running. I have a lot going on this semester, with Consciousness Live! kicking off in September, and teaching my usual 5 classes at LaGuardia. Since the Graduate Center Philosophy Program recently hired Kristin Andrews, I have been sitting in on the philosophy of animal consciousness and society class she is offering. We are very early in the semester but the class is very interesting, and I think that Andrews will have a positive impact on the culture at the Grad Center, which is very nice!

It also allows me to address some issues that have long bothered me. As those who know me are aware, I was raised vegetarian and am now vegan. I strongly believe in animal rights and yet also reluctantly accept the role that animals play in scientific research (at least for now). I have always considered it beyond obvious that animals are conscious and that vegetarianism/veganism is required on moral grounds because of the suffering of animals (though I would also say there are other reasons not to eat meat).

At the same time I have long argued that we have a conundrum on our hands when it comes to animals. All we have are third-person methods to address their psychological states, and they cannot verbally report. In addition, we know that many things that seemingly involve consciousness can be done unconsciously. More specifically, we can see in the human case that there seem to be instances where people can do things without being able to report on them (like blindsight). Given this, the question arises as to whether any particular piece of evidence one offers in support of the claim that animals are conscious truly supports that claim (given that the relevant behavior might be produced unconsciously).

These two claims are not in tension since the first is a moral claim and the second is an epistemic/evidential/methodological claim.

To be honest I have largely avoided talking about animals and consciousness, since to me it is a hot-button topic that has caused many fights and lost friendships over the years. When one grows up the way I did, one sees a great moral tragedy taking place right out in the open as though it is perfectly normal. It is mind-numbingly hard to “meet people where they are” on this issue (for me; to be clear, I view this as a shortcoming on my part). Trying to convince people that animals are conscious, or trying to convince them that since they are they should be treated in a certain way, and to be met with the lowest level of response over and over, takes a very special personality type to endure (and I lack it).

Then I met and started working with Joe LeDoux, who has very different views about animals. When I first met Joe he seemed to think that animals did not have experience at all. He also seemed to think that people like Peter Carruthers and Daniel Dennett shared his view, and so that it was somewhat mainstream in philosophy. I remember once he said “there is no evidence that any rat has ever felt fear,” and I was like, but you study fear in rats, so…uh, ????

Over the course of much discussion (and only slightly less whiskey) we gradually clarified that his view was that mammals are most likely conscious, but that we cannot say what their consciousness is like since they don’t have language. In particular they don’t have the concept ‘fear’ and so can’t be aware of themselves as being afraid. So, whatever their experience is like in a threatening condition, it is probably wrong to say that it is fear, since fear does seem to involve an awareness of oneself as being in danger. Joe thinks rats can’t have this kind of mental state but I am not so sure. This is an interesting question and I’ll return to it below.

Joe and I largely agreed on the methodological issue, even if we disagreed on which animals might be conscious. The way this has shown up in my own thinking is that I have tried to use this methodological argument to suggest that we won’t learn much about human consciousness from animal models. This suggests we should stop using them in this kind of research until we have a theory of phenomenal consciousness in the human case. Then we can see how far it extends.

This now brings me to Andrews. She has been arguing that we need to change the default assumption in science from one that holds we need to demonstrate that animals are conscious to just accepting this as the background default view: all animals are conscious. Her argument for this is, in part, that we don’t have any good way to determine whether animals are conscious (i.e. the marker approach fails). She also argues that we need what she calls a “secure” theory of consciousness which could answer these questions. Since we don’t have that, we should just assume that animals are conscious. This, she continues, would allow us to make progress on other issues in the science of consciousness.

So it seems we agree on quite a bit. We both think that only a well-established “secure” theory of consciousness would allow us to definitively answer the question about animals. We both agree that the marker approach isn’t successful (though for slightly different reasons). We also both agree that the “demarcation” problem of trying to figure out which animals are conscious or where to draw the line between animals that are and are not conscious should be put aside for now.

But I don’t agree that we should change the default assumption. This is because I don’t think the default assumption is that animals are not conscious. The default assumption is this: any behavior that can be associated with consciousness can be produced without consciousness. That should not be changed without good empirical reason because we have good empirical reasons to accept it. However, even if we did change that default assumption we would still face the methodological challenge above with respect to the particular qualities, or what it is like for the animal. So, for now at least, I still think the science of consciousness is best done in humans.

Consciousness Without First-Order Representations

I am getting ready to head out to San Diego for the Association for the Scientific Study of Consciousness 17th annual meeting. I have organized a symposium on the Role of the Prefrontal Cortex in Conscious Experience, which will feature talks by Rafi Malach, Joe Levine, Doby Rahnev, and me! A rehearsal of my talk is below. As usual, any feedback is appreciated.

Also relevant are the following papers:

1. (Lau & Brown) The Emperor’s New Phenomenology? The Empirical Case for Conscious Experience without First-Order Representations

2. (Brown 2012) The Brain and its States

The ASSC Students have also set up the following online debate forum: http://theasscforum.blogspot.com/2013/06/symposium-1-prefrontal-cortex.html

A longer video explaining the Rahnev results can be found here: http://www.youtube.com/watch?v=_gQYdGRbkpE

Lots of ways to get involved in the discussion!

[cross-posted at Brains]