SEP Entry on Higher-Order Theories Gets Worse

I am currently very, very busy, and I am not just talking about my attempt to get the true secret ending in my NG+ playthrough of Black Myth: Wukong, or how many celestial ribbons I need to upgrade my gear 😉 But seriously, things are pretty hectic for me all around. At LaGuardia I am teaching four classes in our short six-week winter semester (General Psychology, Ethics and Moral Issues, Critical Thinking, and Introduction to Philosophy), and at New York University I am filling in as an adjunct for one semester, teaching two undergraduate classes (Philosophical Applications of Cognitive Science and Minds and Machines). To top it off, I am teaching the Neuroscience and Philosophy of Consciousness class at the Graduate Center with Tony Ro. That is a lot, even for me! (LaGuardia’s spring semester starts in March so I’ll worry about that later!)…I am also working on a couple of papers, but that is pretty much going to have to wait until I am not teaching five days a week.

But even so, I just noticed (well I noticed a week or so ago but see above, I’ve been busy!) that there was an update to the Stanford Encyclopedia of Philosophy entry on higher-order theories of consciousness and I had to make a couple of comments about it.

There is a lot I would complain about in this article in general, and I have long used it as an example of the way in which the introductory material on higher-order theories is misleading and confusing, but I will set that aside for now and focus on the part directly relating to my views about the higher-order theory, which I quote below.

Brown (2015) challenges the common basic assumption that HOT theory is even a relational theory at all in the way that many have interpreted it (i.e., as including two distinct mental states related to each other). Instead, HOT theory is better construed as a HOROR theory, that is, higher-order representation of a representation, regardless of whether or not the target mental state exists. In this sense, HOT theory is perhaps better understood as a non-relational theory.

I have a lot of problems with this paragraph! First, it cites my paper on this from 10 years ago but nothing that I have written on this topic since then! It is true that when I wrote the cited paper I had not worked out my position in all of its details, but I have done a lot of work since then trying to do so. But even in that early paper I do not challenge the basic assumption that the higher-order thought theory is even a relational theory.

What I do is to argue that the Traditional Higher-Order Thought theory, as it is usually talked about, is ambiguous as between a relational (like Gennaro) and non-relational (like Rosenthal) version. Rosenthal’s theory is a Traditional non-relational HOT theory and Gennaro has a Traditional relational theory. My actual argument is that higher-order theories need to be mixed non-traditional theories that incorporate relational and non-relational elements for different jobs, but that is another story altogether! I also reject the theoretical posit of higher-order thoughts. I do not think of the kind of higher-order representations I posit as ordinary folk-psychological thoughts that I could think on my own at will. That is the Rosenthal view and I have always found it to be a bit strange. But ok, these hairs can be split another day.

Gennaro continues saying,

…if the qualitative conscious experience always goes with the HOT (including in cases of misrepresentation noted in section 4), then it seems the first-order state plays no relevant role in the theory.

Gennaro intends this as an objection to the HOROR theory and its claim that the higher-order state is itself the phenomenally conscious state. Amusingly, Gennaro here fails to realize that Rosenthal’s own theory is a non-relational theory! So if the objection he raises is supposed to work against my view then it also works against Rosenthal’s.

He then cites Rosenthal’s objection to me without realizing that, given the way he has set things up, this amounts to Rosenthal objecting to Rosenthal! More importantly, in my book I respond directly to the points being made, including to the quote that he uses. He says, “…Rosenthal (2022) points out that Brown’s modified view conflates

a state’s being qualitatively conscious with a necessary condition for qualitative consciousness…there’s rarely anything it’s like to be in a HO state, and HO states are almost never conscious….[i]t’s the first-order state that’s qualitatively conscious. (Rosenthal 2022: 251–252)

On my view, Gennaro and Rosenthal are here trying to identify phenomenal consciousness with state-consciousness, something which, I argue in my book, itself stands in need of supporting argument. Yes, the higher-order representation is a necessary condition for the first-order mental state to be state-conscious, but I argue that state consciousness should be separated from phenomenal consciousness. A state is phenomenally conscious when there is something that it is like for the subject to be in that state. Rosenthal and Gennaro seem to agree implicitly with this. Rosenthal says that there is rarely anything that it is like to be in the relevant higher-order state, and it is clear that he intends this to mean that we are rarely aware of ourselves as being in the higher-order state (i.e. the higher-order state is not usually state-conscious).

I agree with all of that! But in order for this to count as an objection to my view it must be the case that there cannot be something that it is like for one when one is not aware of the state one is in. This amounts to the transitivity principle (which I argue is the uniting feature of the Traditional higher-order approach), and I explicitly reject the transitivity principle!

In my book I give what I call my HORORibly simple argument (as a nod to Lycan’s original simple argument), as follows:

1. A phenomenally conscious state is one which, when one is in that state, there is something that it is like for one to be in it.

2. The state that, when one is in that state, there is something that it is like for one to be in it, is the state of inner awareness.

3. Thus, the phenomenally conscious state is the state of inner awareness.

4. Inner awareness is a representation of one’s own mind.

5. Thus, a phenomenally conscious state is a representation of one’s own mind.
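Since the argument is just a chain of biconditionals plus one conditional, its validity can be checked mechanically. Here is a minimal sketch in Lean (my own illustrative formalization; the predicate names are invented for the purpose, and I render the identity claims of premises 2 and 3 as biconditionals for simplicity):

```lean
-- Illustrative formalization of the HORORibly simple argument.
-- Invented predicate labels:
--   PhenCon s : s is a phenomenally conscious state
--   WIL s     : there is something it is like for one to be in s
--   InnerAw s : s is a state of inner awareness
--   RepMind s : s is a representation of one's own mind
variable {State : Type} (PhenCon WIL InnerAw RepMind : State → Prop)

theorem hororibly_simple
    (p1 : ∀ s, PhenCon s ↔ WIL s)        -- premise 1
    (p2 : ∀ s, WIL s ↔ InnerAw s)        -- premise 2
    (p4 : ∀ s, InnerAw s → RepMind s)    -- premise 4
    : ∀ s, PhenCon s → RepMind s :=      -- conclusion (5)
  fun s h => p4 s ((p2 s).mp ((p1 s).mp h))
```

Premise 3 falls out as the intermediate step (p1 composed with p2), and rejecting premise 2, as Rosenthal does, blocks the derivation at exactly the point one would expect.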

Put this way, we can see that Rosenthal and I disagree about premise 2 and maybe premise 1, but I am not confusing or conflating a necessary condition with anything else. I am denying that what Rosenthal calls ‘qualitative consciousness’ (insert eye roll) is phenomenal consciousness. It is state-consciousness, and they are not the same thing (though they are related).

Let me stress that I feel weird about being somewhat indignant about someone not reading my book or being unaware of my views. I don’t usually expect that anyone will have any familiarity with my work before criticizing my views! However, it does seem to me that in this case (writing an entry for the most widely read online encyclopedia of philosophy) it should have been done. As it is, this article pretty badly misunderstands my position and makes no attempt to get it correct. I might be overly suspicious, but one can’t help but take this somewhat personally. This entry was originally written by Peter Carruthers and has recently been taken over by Rocco Gennaro, which explains a lot.

I have previously reviewed Gennaro’s book and papers for NDPR and written on this blog about the way in which I think he misunderstands Rosenthal’s approach to higher-order theories. He responded to my post and I invited him to come on Consciousness Live! and have a discussion with me but he declined (the offer still stands on my end). Gennaro was at one of my talks at Tucson and afterwards asked me a question that directly pertains to the complaint above, and we talked about it over dinner. A less paranoid person might think that the pattern of citations suggests that he wrote it before my book came out and the review process took a long time. Perhaps; but the general theme of my work has been made clear to Gennaro for some time. He also definitely knows my email/how to contact me if he wanted to clarify some of my views! All I can say is that had I botched something this badly I would want to correct it immediately.

Ah well, as a humble community college teacher I am still sort-of honored to be mentioned at all in this prestigious scholarly source (and one assumes some editorial heavy-handedness was applied to get even that given how ridiculous all of this is). Maybe someday someone will update that entry to reflect the actual landscape of the debate about higher-order theories of consciousness. The only real question is whether that might get done before we get GTA 6, haha, I mean before the whole debate is empirically mooted!

Philosophy of Animal Consciousness

The fall 2025 semester is off and running. I have a lot going on this semester, with Consciousness Live! kicking off in September and my usual five classes at LaGuardia. Since the Graduate Center Philosophy Program recently hired Kristin Andrews I have been sitting in on the philosophy of animal consciousness and society class she is offering. We are very early in the semester but the class is very interesting and I think that Andrews will have a positive impact on the culture at the Grad Center, which is very nice!

It also allows me to address some issues that have long bothered me. As those who know me are aware, I was raised vegetarian and am now vegan. I strongly believe in animal rights and yet also reluctantly accept the role that animals play in scientific research (at least for now). I have always considered it beyond obvious that animals are conscious and that vegetarianism/veganism is required on moral grounds because of the suffering of animals (but I would also say there are other reasons not to eat meat).

At the same time I have long argued that we have a conundrum on our hands when it comes to animals. All we have are third-person methods to address their psychological states and they cannot verbally report. In addition, we know that many things that seemingly involve consciousness can be done unconsciously. More specifically, we can see in the human case that there seem to be instances where people can do things without being able to report on them (like blindsight). Given this, the question opens up as to whether any particular piece of evidence one offers in support of the claim that animals are conscious truly supports that claim (given that the behavior might be produced unconsciously).

These two claims are not in tension since the first is a moral claim and the second is an epistemic/evidential/methodological claim.

To be honest, I have largely avoided talking about animals and consciousness since for me it is a hot-button topic that has caused many fights and lost friendships over the years. When one grows up the way I did, one sees a great moral tragedy taking place right out in the open as though it were perfectly normal. It is mind-numbingly hard for me to “meet people where they are” on this issue (to be clear, I view this as a shortcoming on my part). Trying to convince people that animals are conscious, or trying to convince them that since they are they should be treated in a certain way, and being met with the lowest level of response over and over, takes a very special personality type to endure (and I lack it).

Then I met and started working with Joe LeDoux, who has very different views about animals. When I first met Joe he seemed to think that animals did not have experience at all. He also seemed to think that people like Peter Carruthers and Daniel Dennett shared his view, and so that it was somewhat mainstream in philosophy. I remember once he said “there is no evidence that any rat has ever felt fear,” and I was like, but you study fear in rats, so…uh, ????

Over the course of much discussion (and only slightly less whiskey) we gradually clarified that his view was that mammals are most likely conscious but we cannot say what their consciousness is like since they don’t have language. In particular, they don’t have the concept ‘fear’ and so can’t be aware of themselves as being afraid. So, whatever their experience is like in a threatening condition, it is probably wrong to say that it is fear, since that does seem to involve an awareness of oneself as being in danger. Joe thinks rats can’t have this kind of mental state but I am not so sure. This is an interesting question and I’ll return to it below.

Joe and I largely agreed on the methodological issue, even if we disagreed on which animals might be conscious. The way this has shown up in my own thinking is that I have tried to use this methodological argument to suggest that we won’t learn much about human consciousness from animal models. This suggests we should stop using them in this kind of research until we have a theory of phenomenal consciousness in the human case. Then we can see how far it extends.

This now brings me to Andrews. She has been arguing that we need to change the default assumption in science from one that holds we need to demonstrate that animals are conscious to just accepting this as the background default view: all animals are conscious. Her argument for this is, in part, that we don’t have any good way to determine whether animals are conscious (i.e. the marker approach fails). She also argues that we need what she calls a “secure” theory of consciousness which could answer these questions. Since we don’t have that, we should just assume that animals are conscious. This, she continues, would allow us to make progress on other issues in the science of consciousness.

So it seems we agree on quite a bit. We both think that only a well-established “secure” theory of consciousness would allow us to definitively answer the question about animals. We both agree that the marker approach isn’t successful (though for slightly different reasons). We also both agree that the “demarcation” problem of trying to figure out which animals are conscious or where to draw the line between animals that are and are not conscious should be put aside for now.

But I don’t agree that we should change the default assumption. This is because I don’t think the default assumption is that animals are not conscious. The default assumption is this: any behavior that can be associated with consciousness can be produced without consciousness. That should not be changed without good empirical reason because we have good empirical reasons to accept it. However, even if we did change that default assumption we would still face the methodological challenge above with respect to the particular qualities, or what it is like for the animal. So, for now at least, I still think the science of consciousness is best done in humans.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. I wrote my dissertation trying to show that it was possible to take seriously the role that the emotions play in our moral psychology which is seemingly revealed by contemporary cognitive neuroscience, and which I take to suggest that one of the basic premises of emotivism is true. But at the same time I wanted to preserve the space for one to also take seriously some kind of moral realism. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions but I have always been attracted to a rather simplistic view on which the differing conscious emotions differ with respect to the way in which they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but has fallen out of favor in recent years. I also felt that in fleshing out such an account the higher-order approach to consciousness would come in handy. This idea was really made clear when I reviewed the book Feelings and Emotions: The Amsterdam Symposium. I felt that it would be a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008 and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to but have always found something else at the moment to work on and that is why it is especially nice to have been working with Joseph LeDoux explicitly combining the two. I am very happy with the result and look forward to any discussion.

cfp: Fifth Online Consciousness Conference

UPDATE: The deadline is fast approaching!

I am pleased to announce the call for papers for the Fifth Online Consciousness Conference, which is scheduled for February 15-March 1, 2013.

Invited talk by Daniel C. Dennett, Tufts University

Special Session on Self-Consciousness organized by John Schwenkler, Mount St. Mary’s University. Invited participants include Katja Crone and Joel Smith. This session will also include two submitted talks by graduate students or recent PhDs. If you want your paper to be considered specifically for this special session please indicate so in your submission email.

Papers in any area of consciousness studies are welcome (construed widely so as to include philosophy of mind and philosophy of cognitive science, as well as the cognitive sciences) and should be roughly 3,000-4,000 words. Submissions made suitable for blind review should be sent to consciousnessonline@gmail.com by January 5th 2013.

For papers that are accepted, an audio/visual presentation (e.g. narrated powerpoint or video of talk) is strongly encouraged but not required.

Program Committee:
Richard Brown
David Chalmers
Adam Pautz
Susanna Siegel

For more information visit the conference website at http://consciousnessonline.com

Find Consciousness Online on Facebook!

Peter Godfrey-Smith on Evolution And Memory

On Friday I attended the first session of the CUNY Cognitive Science Speaker Series. The talk seemed to me to be based largely on this paper. I only have a few moments, but I thought I would jot down the gist of the talk while it is fresh in my mind.

Godfrey-Smith wanted to take the ‘sender-receiver’ model of communication developed by David Lewis and apply it to debates about memory. On the Lewis model we have a sender that has access to a part of the world that the receiver does not, a sign that is passed between them, and a receiver that is able to take the sign and produce an action. Godfrey-Smith’s guiding idea is that when you have this kind of set-up in the psychology of an organism and the signaling takes place over time, then you have memory.
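To make the Lewis model concrete, here is a toy simulation of a sender-receiver signaling game (my own illustrative sketch, not anything from the talk): the sender sees a world state hidden from the receiver and emits a sign, the receiver maps the sign to an action, and simple reinforcement on successful rounds lets a signaling convention emerge.

```python
import random

def play(n_states=3, rounds=5000, seed=0):
    """Simulate a Lewis sender-receiver game with urn-style reinforcement.

    Returns the success rate of the learned (greedy) sender and receiver
    policies, i.e. the fraction of world states communicated correctly.
    """
    rng = random.Random(seed)
    states = list(range(n_states))
    # Every state starts with equal weight on every sign, and every sign
    # with equal weight on every action.
    sender = {s: {m: 1.0 for m in states} for s in states}
    receiver = {m: {a: 1.0 for a in states} for m in states}

    def choose(weights):
        # Sample a key with probability proportional to its weight.
        r = rng.uniform(0, sum(weights.values()))
        for k, w in weights.items():
            r -= w
            if r <= 0:
                return k
        return k  # fallback for floating-point edge cases

    for _ in range(rounds):
        state = rng.choice(states)     # world state only the sender sees
        sign = choose(sender[state])   # sender encodes the hidden state
        act = choose(receiver[sign])   # receiver decodes the sign
        if act == state:               # success reinforces both choices
            sender[state][sign] += 1.0
            receiver[sign][act] += 1.0

    # Evaluate the greedy policies after learning.
    hits = 0
    for s in states:
        sign = max(sender[s], key=sender[s].get)
        act = max(receiver[sign], key=receiver[sign].get)
        hits += (act == s)
    return hits / n_states
```

The point of the toy is just that the sender’s signal can carry information about a state the receiver never observes; Godfrey-Smith’s move is to let the “channel” between sender and receiver run over time within one organism.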

One of the main points he wanted to make in the first half of his talk was that the ‘constructive’ nature of episodic memory does not show that truth preservation fails to be one of its main functions. He was aiming to oppose a group of scientists working on memory who hold what he called the ‘future first’ hypothesis about episodic memory. Roughly speaking the idea is this. Our ability to imagine future events and their outcomes is crucial for us and gives us an evolutionary advantage over those that cannot do it. What we have found out is that the neural areas that underlie our ability to do this are also largely the same ones involved in our ability to remember our past experiences. The ‘future first’ hypothesis is the idea that our ability to remember our own past experiences is simply a by-product of our ability to imagine future events and their outcomes. This is supposed to be further supported by the fact that episodic memory is thought of as ‘constructive’ in the sense that it is often wrong about the details of past experiences and tends to ‘construct’ memories along the most likely scenarios.

Godfrey-Smith argued that if we think of memory in terms of the sender-receiver model then we should not immediately expect that the constructive nature of episodic memory means that it is not truth-tracking. It is perfectly conceivable that the senders produce truth-tracking representations and that the receivers, who may be a bit smarter, ‘improvise’ from there. We can then go on and ask just how much deviation from the facts there is in the sender’s signal.

In the second half of his talk he went on to discuss the controversy in cognitive science over whether there is any kind of reader in the brain; that is, anything in the brain akin to the ‘head’ in a Turing machine, something which interprets whatever message has been sent by the sender. He argued that the dominant view in the sciences is that there is no such reader. Godfrey-Smith went on to argue that there must be a reader, but that there is no sender. DNA, for instance, on this view is not an instance of information being sent. It is information that happens to be able to be read (as a result of natural selection), but it has not been ‘written’ because, to put it crudely, nothing had the purpose of sending the message. In discussion he wanted to back away from talk of intention and purpose as ‘shorthand’ for a longer answer, but I couldn’t make out what the longer answer was supposed to be.