Animal Consciousness and the Unknown Power of the Unconscious Mind

Things are about to get really (I mean really) busy for me and so I probably won’t be doing much besides running around frantically until August 2026 (seriously, even by my standards it’s going to be a rough ride for a while). Of course I will post the Consciousness Live! discussions once they start (Sept 18), and I am looking forward to Block’s presentation at the NYU Philosophy of Mind discussion group, so I may try to get to something here and there. At any rate, we have been having some very interesting discussions in the philosophy of animal consciousness and society class. We have been discussing the markers and ‘tests’ approach and we read Bayne et al.’s Tests for Consciousness in Humans and Beyond and Hakwan Lau’s The End of Consciousness (there was another paper but I’ll leave it aside for now). A lot of good points came up in the class, but I want to focus on the issue that is important to me, which is the methodological/evidential one I discussed in the previous post on this class.

Andrews seems to be trying to frame things by distinguishing two positions you might take towards animals. The first assumes that animals, or a particular organism, are not conscious at all, and then looks for markers that would raise our credence that the animal is conscious. So we look at fish and see whether they behave a certain way with respect to tissue damage, etc. If the fish is damaged and seeks a pain reliever then that probably indicates it is conscious, and if it doesn’t then not. The second position assumes that animals are conscious but holds that we need to establish that they have this or that specific conscious experience. As I understand it at this point, she sees the marker approach as belonging to the first camp and the tests approach as belonging to the second, though I might have misunderstood that point.

I can see why, if you are arguing with a certain type of philosopher/scientist, this may be how you are thinking of things, but I do not think it helps with the methodological challenge to studying animal consciousness. This can be seen in the responses to the argument I gave in the previous post. That argument relied on the empirical claim that anything you associate with consciousness can likely be done without consciousness. So when I point out that blindsight seems to suggest that you can have sophisticated behavior without consciousness, one response was to say, ‘yeah but that doesn’t show that the blindsight patient has no conscious experience’. Another was ‘yeah but the blindsight subject is a conscious subject’. These are subtly different.

The first is taking the blindsight argument to be suggesting the conclusion that animals are not conscious. The second is suggesting the conclusion that being conscious played an important role in the process that led to the now unconscious behavior. So, the blindsight subject was normally sighted for a period of their life and they had normal visual perception and consciousness. Perhaps that played an important role in their learning how to do what they did and now, even though the process is automatic and can be done unconsciously, that doesn’t mean it could always be done unconsciously. These are good and interesting points but they do not defuse the methodological tension that I am pressing.

As I have said before, I don’t take the issue to be whether animals are conscious or not, since I take that to be intuitively obvious; and you may take it to be intuitively obvious that they are not conscious. That is irrelevant, since I do not base my beliefs about animal consciousness on science. If you were to ask me whether science supports my belief about animals, I would say that at this point we do not have scientific evidence that animals are, or are not, conscious, because of this methodological issue.

Suppose there is a behavior, neural process, or function which you take to be associated with consciousness (as a test, marker, or whatever). I will take as my example a certain pattern of neural activation in the fusiform face area. Suppose that we found that pattern when people looked at faces but not when they looked at houses. Is finding that pattern good evidence that they consciously saw the face? No. The reason is that we have found that same pattern of activation in cases that we have good reason to think are unconscious (side note: that could be disputed, and it is interesting to think about those arguments, but let’s save that for later). So, this pattern shows up when the subject consciously sees the face and also when the subject does not consciously see the face (but the face is present). Now suppose that someone finds this kind of pattern in a non-human animal. Is that evidence that the animal consciously sees a face? Or is it evidence that this process occurs unconsciously, as it did in some human cases? Unless we had some way of telling the two kinds of neural activations apart, we should conclude neither that the animal consciously saw the face nor that it saw it unconsciously.

More to the point it would be irresponsible to loudly proclaim that this is evidence that the animal did consciously see the face until the issue above was resolved. None of this suggests that the animal is unconscious. It only suggests that the proposed marker/test is insufficient to establish that until we know the extent of the unconscious mind.

From there one might want to mount the more general argument that anything could be done unconsciously. That is an empirical question that the field should take seriously. Most reasonable people I know of are not saying we should think animals are unconscious, or that science suggests that only mammals/birds are conscious. We are saying that we don’t really know how powerful the unconscious mind is; this hasn’t been fully investigated empirically. We have some reason to think it is quite powerful indeed, and some reason to think maybe not. Until we resolve this issue we should be cautious about grand declarations about what science has shown about animals and should seriously address these methodological issues.

Philosophy of Animal Consciousness

The fall 2025 semester is off and running. I have a lot going on this semester, with Consciousness Live! kicking off in September and my usual 5 classes at LaGuardia. Since the Graduate Center Philosophy Program recently hired Kristin Andrews, I have been sitting in on the philosophy of animal consciousness and society class she is offering. We are very early in the semester, but the class is very interesting and I think that Andrews will have a positive impact on the culture at the Grad Center, which is very nice!

It also allows me to address some issues that have long bothered me. As those who know me are aware, I was raised vegetarian and am now vegan. I strongly believe in animal rights and yet also reluctantly accept the role that animals play in scientific research (at least for now). I have always considered it beyond obvious that animals are conscious and that vegetarianism/veganism is required on moral grounds because of the suffering of animals (though I would say there are also other reasons not to eat meat).

At the same time I have long argued that we have a conundrum on our hands when it comes to animals. All we have are third-person methods to address their psychological states and they cannot verbally report. In addition we know that many things that seemingly involve consciousness can be done unconsciously. More specifically we can see in the human case that there seem to be instances where people can do things without being able to report on them (like blindsight). Given this the question opens up as to whether any particular piece of evidence one offers in support of the claim that animals are conscious truly supports that claim (given that it might be done unconsciously).

These two claims are not in tension since the first is a moral claim and the second is an epistemic/evidential/methodological claim.

To be honest I have largely avoided talking about animals and consciousness, since to me it is a hot-button topic that has caused many fights and lost friendships over the years. When one grows up the way I did, one sees a great moral tragedy taking place right out in the open as though it were perfectly normal. It is mind-numbingly hard to “meet people where they are” on this issue (for me; to be clear, I view this as a shortcoming on my part). Trying to convince people that animals are conscious, or that since they are they should be treated in a certain way, and being met with the lowest level of response over and over, takes a very special personality type to endure (and I lack it).

Then I met and started working with Joe LeDoux, who has very different views about animals. When I first met Joe he seemed to think that animals did not have experience at all. He also seemed to think that people like Peter Carruthers and Daniel Dennett shared his view, and so that it was somewhat mainstream in philosophy. I remember once he said “there is no evidence that any rat has ever felt fear,” and I was like, but you study fear in rats, so…uh, ????

Over the course of much discussion (and only slightly less whiskey) we gradually clarified that his view was that mammals are most likely conscious but that we cannot say what their consciousness is like, since they don’t have language. In particular they don’t have the concept ‘fear’ and so can’t be aware of themselves as being afraid. So, whatever their experience is like in a threatening condition, it is probably wrong to say that it is fear, since that does seem to involve an awareness of oneself as being in danger. Joe thinks rats can’t have this kind of mental state, but I am not so sure. This is an interesting question and I’ll return to it below.

Joe and I largely agreed on the methodological issue, even if we disagreed on which animals might be conscious. The way this has shown up in my own thinking is that I have tried to use this methodological argument to suggest that we won’t learn much about human consciousness from animal models. This suggests we should stop using them in this kind of research until we have a theory of phenomenal consciousness in the human case. Then we can see how far it extends.

This now brings me to Andrews. She has been arguing that we need to change the default assumption in science from one that holds we need to demonstrate that animals are conscious to just accepting this as the background default view: all animals are conscious. Her argument for this is, in part, that we don’t have any good way to determine whether animals are conscious (i.e. the marker approach fails). She also argues that we need what she calls a “secure” theory of consciousness which could answer these questions. Since we don’t have that, we should just assume that animals are conscious. This, she continues, would allow us to make progress on other issues in the science of consciousness.

So it seems we agree on quite a bit. We both think that only a well-established “secure” theory of consciousness would allow us to definitively answer the question about animals. We both agree that the marker approach isn’t successful (though for slightly different reasons). We also both agree that the “demarcation” problem of trying to figure out which animals are conscious or where to draw the line between animals that are and are not conscious should be put aside for now.

But I don’t agree that we should change the default assumption. This is because I don’t think the default assumption is that animals are not conscious. The default assumption is this: any behavior that can be associated with consciousness can be produced without consciousness. That should not be changed without good empirical reason because we have good empirical reasons to accept it. However, even if we did change that default assumption we would still face the methodological challenge above with respect to the particular qualities, or what it is like for the animal. So, for now at least, I still think the science of consciousness is best done in humans.

Gottlieb on Brown

I have been interested in the relationship between the transitivity principle and transparency for quite a while now. This issue has come up again in a recent paper by Joseph Gottlieb fittingly called Transitivity and Transparency. The paper came out in Analytic Philosophy in 2016, but he actually sent it to me beforehand. I read it and we had some email conversation about it (and this influenced my Introspective Consciousness paper (here is the Academia.edu session I had on it)), but I never got the chance to formulate any clear thoughts on it. So I figured I would give it a shot now.

There is a lot going on in the paper, so I will focus for the most part on his response to some of my early work on what would become HOROR theory. He argues that what he calls Non-State-Relational Transitivity is not an ‘acceptable consistency gloss’ on the transitivity principle. So what is a consistency gloss? The article is technical (it did come out in Analytic Philosophy, after all!). For Gottlieb, a consistency gloss amounts to a precisification of the transitivity principle that renders it compatible with what he calls Weak Transparency. He defines these terms as follows:

TRANSITIVITY: Conscious mental states are mental states we are aware of in some way.

W-TRANSPARENCY: For at least one conscious state M, it is impossible to:

(a) TRANSPARENCY-DIRECT: Stand in a direct awareness relation to M; or
(b) TRANSPARENCY-DE RE: Stand in a de re awareness relation to M; or
(c) TRANSPARENCY-INT: Stand in an introspective awareness relation to M.

His basic claim, then, is that there is no way of making precise the statement of transitivity above in such a way as to render it consistent with the weak version of transparency that he thinks should count as a truism or platitude.

Of course my basic claim, one that I have made since the beginning of thinking about these issues, is that there is a way of doing this, but it requires a proper understanding of what the transitivity principle says. If we do not interpret the theory as claiming that a first-order state is made conscious by the higher-order state (as Gottlieb does in TRANSITIVITY above), but instead think of transitivity as telling us that a conscious experience is one that makes me aware of myself as being in first-order states, then we have a way to satisfy Weak Transparency.

So what is Gottlieb’s problem with this way of interpreting the transitivity principle? He has a section of the paper discussing this kind of move. He says,

4.3 Non-State-Relational Transitivity

As it stands, TRANSITIVITY posits a relation between a higher-order state and a first-order state. But not all Higher-Order theorists construe TRANSITIVITY this way. Instead, some advance:

  • NON-STATE-RELATIONAL TRANSITIVITY: A conscious mental state is a mental state whose subject is aware of itself as being in that state.

NON-STATE-RELATIONAL TRANSITIVITY is an Object-Side Precisification. And it appears promising. For it says that we are aware of ourselves as being in conscious states, not simply that we are aware of our conscious states. These are different claims.

I agree that this is an importantly different way of thinking about the transitivity principle. However, I do not think that I actually endorse this version of the transitivity principle. As it is stated here NON-STATE-RELATIONAL TRANSITIVITY is still cast in terms of the first-order state.

What I mean by that is that when we ask the question ‘which mental state is phenomenally conscious?’ the current proposal would answer ‘the mental state the subject is aware of itself as being in’. Now, I do think that this is most likely the way that Rosenthal and Weisberg think of non-state-relational transitivity, but it is not the way that I think about it.

I have not put this in print yet (though it is in a paper in draft stage), but the way I would reformulate the transitivity principle is as follows (or at least along these general lines):

  • A mental state is phenomenally conscious only if it appropriately makes one aware of oneself as being in some first-order mental state

This way of putting things emphasizes the claim that the higher-order state itself is the phenomenally conscious state.
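Put schematically, the contrast between the two readings looks like this (the notation is mine, not Gottlieb’s; S is a subject, M a first-order state, and H a higher-order state):

\[
\begin{aligned}
&\textsc{transitivity (state-relational):}\\
&\qquad M \text{ is conscious} \iff \exists H\, \big(H \text{ makes } S \text{ aware of } M\big)\\[4pt]
&\textsc{horor-style reformulation:}\\
&\qquad H \text{ is phenomenally conscious} \Rightarrow \exists M\, \big(H \text{ appropriately makes } S \text{ aware of } S \text{ as being in } M\big)
\end{aligned}
\]

Two things change at once: the state that counts as conscious moves from M to H, and the biconditional gives way to a mere necessary condition.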

Part of what I think is going on here is that there is an ambiguity in terms like ‘awareness’. When we say that we are aware of a first-order state, or whatever, what we should mean, from the higher-order perspective, is that the higher-order state aims at or targets or represents or whatever the first-order state. I have toyed with the idea that the ‘targeting’ relation boils down to a kind of causal-reference relation. But then we can also ask ‘how does it appear to the subject?’ and there it is not the case that we should say that it appears to the subject that they are aware of the first-order state. The subject will seemingly be aware of the items in the environment and this is because of the higher-order content of the higher-order representation.

Gottlieb thinks that non-state-relational transitivity,

 …will do nothing with respect to W-TRANSPARENCY…For presumably there will be (many!) cases where I am in the conscious state I am aware of myself as being in, and so cases where we will still need to ask in what sense I am aware of those states, and whether that sense comports with W-TRANSPARENCY. NON-STATE-RELATIONAL TRANSITIVITY doesn’t obviously speak to this latter question, though; the awareness we have of ourselves is de re, and presumably direct, but whether that’s also true of the awareness we have of our conscious states is another issue. So as it stands, NON-STATE-RELATIONAL TRANSITIVITY is not a consistency gloss.

I think it should be clear by now that this may apply to the kind of view he discusses, and that this view may even be one you could attribute to Rosenthal or Weisberg, but it is not the kind of view that I have advocated.

According to my view the higher-order state is itself the phenomenally conscious state; it is the one there is something it is like for one to be in. What, specifically, it is like will depend on the content of the higher-order representation. That is to say, the way the state describes one’s own self determines what it is like for you. When the first-order state is there it, the first-order state, will be accurately described, but that is beside the point. W-transparency is clearly met by the HOROR version of higher-order theory. And if what I said above holds water, then it is still a higher-order theory which endorses a version of the transitivity principle but is able to simultaneously capture many of the intuitions touted as evidence for first-order theories.

Chalmers on Brown on Chalmers

I just found out that the double special issue of the Journal of Consciousness Studies devoted to David Chalmers’ paper The Singularity: A Philosophical Analysis recently came out as a book! I had a short paper in that collection that stemmed from some thoughts I had about zombies and simulated worlds (I posted about them here and here). Dave responded to all of the articles (here) and I just realized that I never wrote anything about that response!

I have always had a love/hate relationship with this paper. On the one hand I felt like there was an idea worth developing, one that started to take shape back in 2009. On the other hand there was a pretty tight deadline for the special issue and I did not feel like I had really got ahold of what the main idea was supposed to be, in my own thinking. I felt rushed and secretly wished I could wait a year or two to think about it. But this was before I had tenure and I thought it would be a bad move to miss this opportunity. The end result is that I think the paper is flawed but I still feel like there is an interesting idea lurking about that needs to be more fully developed. Besides, I thought, the response from Dave would give me an opportunity to think more deeply about these issues and would be something I could respond to…that was five years ago! Well, I guess better late than never so here goes.

My paper was divided into two parts. As Dave says,

First, [Brown] cites my 1990 discussion piece “How Cartesian dualism might have been true”, in which I argued that creatures who live in simulated environments with separated simulated cognitive processes would endorse Cartesian dualism. The cognitive processes that drive their behavior would be entirely distinct from the processes that govern their environment, and an investigation of the latter would reveal no sign of the former: they will not find brains inside their heads driving their behavior, for example. Brown notes that the same could apply even if the creatures are zombies, so this sort of dualism does not essentially involve consciousness. I think this is right: we might call it process dualism, because it is a dualism of two distinct sorts of processes. If the cognitive processes essentially involve consciousness, then we have something akin to traditional Cartesian dualism; if not, then we have a different sort of interactive dualism.

Looking back on this now, I think I can say that part of the idea I had was that what Dave here calls ‘process dualism’ is really what lies behind the conceivability of zombies. Instead of testing whether (one thinks that) dualism or physicalism is true about consciousness, the two-dimensional argument against materialism is really testing whether one thinks that consciousness is grounded in biological or functional/computational properties. This debate is distinct from, and orthogonal to, the debate about physicalism/dualism.
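For readers who want the target argument on the table, here is the standard schematic of the two-dimensional argument as I understand it (my reconstruction, with P the conjunction of microphysical truths and Q a phenomenal truth):

\[
\begin{aligned}
&1.\ P \wedge \neg Q \text{ is conceivable.}\\
&2.\ \text{If } P \wedge \neg Q \text{ is conceivable, then } P \wedge \neg Q \text{ is primarily possible.}\\
&3.\ \text{If } P \wedge \neg Q \text{ is primarily possible, then } P \wedge \neg Q \text{ is secondarily possible (or Russellian monism is true).}\\
&4.\ \text{If } P \wedge \neg Q \text{ is secondarily possible, then materialism is false.}\\
&\therefore\ \text{Materialism is false (or Russellian monism is true).}
\end{aligned}
\]

On the reading I am suggesting, assent to premise 1 tracks whether one takes consciousness to be biologically rather than functionally/computationally grounded, not whether one is a dualist.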

In the next part of the response Dave addresses my attempted extension of this point to try to reconcile the conceivability of zombies with what I called ‘biologism’. Biologism was supposed to be a word to distinguish the debate between the physicalist and the dualist from the debate between the biologically-oriented views of the mind as against the computationally oriented views. At the time I thought this term was coined by me and it was supposed to be an umbrella term that would have biological materialism as a particular variant. I should note before going on that it was only after the paper was published that I became aware that this term has a history and is associated with certain views about ‘the use of biological explanations in the analysis of social situations‘. This is not what I intended and had I known that beforehand I would have tried to coin a different term.

The point was to try to emphasize that this debate was supposed to be distinct from the debate about physicalism, and that one could endorse this kind of view even if one rejected biological materialism. The family of views I was interested in defending can be summed up as holding that consciousness is ultimately grounded in or caused by some biological property of the brain, and that a simulation of the brain would lack that property. This is compatible with materialism (=identity theory) but also with dualism. One could be a dualist and yet hold that only biological agents could have the required relation to the non-physical mind. Indeed, I would say that in my experience this is the view of the vast majority of those who accept dualism (by which I mostly mean my students). Having said that, it is true that in my own thinking I lean towards physicalism (though as a side-side note I do not hold that physicalism is true, only that we have no good reason to reject it), and it is certainly true that in the paper I say that this can be used to make the relevant claim about biological materialism.

At any rate, here is what Dave says about my argument.

Brown goes on to argue that simulated worlds show how one can reconcile biological materialism with the conceivability and possibility of zombies. If biological materialism is true, a perfect simulation of a biological conscious being will not be conscious. But if it is a perfect simulation in a world that perfectly simulates our physics, it will be a physical duplicate of the original. So it will be a physical duplicate without consciousness: a zombie.

I think Brown’s argument goes wrong at the second step. A perfect simulation of a physical system is not a physical duplicate of that system. A perfect simulation of a brain on a computer is not made of neurons, for example; it is made of silicon. So the zombie in question is a merely functional duplicate of a conscious being, not a physical duplicate. And of course biological materialism is quite consistent with functional duplicates.

It is true that from the point of view of beings in the simulation, the simulated being will seem to have the same physical structure that the original being seems to us to have in our world. But this does not entail that it is a physical duplicate, any more than the watery stuff on Twin Earth that looks like water really is water. (See note 7 in “The Matrix as metaphysics” for more here.) To put matters technically (nonphilosophers can skip!), if P is a physical specification of the original being in our world, the simulated being may satisfy the primary intension of P (relative to an inhabitant of the simulated world), but it will not satisfy the secondary intension of P. For zombies to be possible in the sense relevant to materialism, a being satisfying the secondary intension of P is required. At best, we can say that zombies are (primarily) conceivable and (primarily) possible— but this possibility merely reflects the (secondary) possibility of a microfunctional duplicate of a conscious being without consciousness, and not a full physical duplicate. In effect, on a biological view the intrinsic basis of the microphysical functions will make a difference to consciousness. To that extent the view might be seen as a variant of what is sometimes known as Russellian monism, on which the intrinsic nature of physical processes is what is key to consciousness (though unlike other versions of Russellian monism, this version need not be committed to an a priori entailment from the underlying processes to consciousness).

I have to say that I am sympathetic with the way Dave diagnoses the flaw in the paper’s argument. It is a mistake to think of the simulated world, with its simulated creatures, as being a physical duplicate of our world in the right way, especially if the simulation is taking place in the original non-simulated world. If the biological view is correct then it is just a functional duplicate (true, a microfunctional duplicate), but not a physical duplicate.
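In the two-dimensional vocabulary of Dave’s reply, the diagnosis can be put like this (my gloss, using his physical specification P and writing ‘sim’ for the simulated being):

\[
\begin{aligned}
&\text{sim satisfies the primary intension of } P \quad \text{(from inside the simulation it seems to fit } P\text{)},\\
&\text{sim does not satisfy the secondary intension of } P \quad \text{(it is not made of what } P \text{ rigidly picks out)},\\
&\therefore\ \text{sim is a microfunctional duplicate of the original, not a physical duplicate.}
\end{aligned}
\]

And, as the quoted passage notes, it is only the secondary possibility of a zombie that bears on materialism.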

While I think this is right, I also think the issues are complicated. For example, take the typical Russellian pan(proto)psychism that is currently being explored by Chalmers and others. This view is touted as being compatible with the conceivability of zombies because we can conceive of a duplicate of our physics as long as we mean the structural, non-intrinsic properties. Since physics, on this view, describes only these structural features, we can count the zombie world as having our physics in the narrow sense. The issues here are complex, but this looks superficially just like the situation described in my paper. The simulated world captures all of the structural features of physics but leaves out whatever biological properties are necessary, and in this sense the reasoning of the paper holds up.

This is why I think the comparison with Russellian monism invoked by Dave is helpful. In fact when I pitched my commentary to Dave I included this comparison with Russellian monism but it did not get developed in the paper. At any rate, I think what it helps us to see is the many ways in which we can *almost* conceive of zombies. This is a point that I have made going back to some of my earliest writings about zombies. If the identity theory is true, or if some kind of biological view about consciousness is true, then there is some (as yet to be discovered) property/properties of biological neural states which necessitate/cause/just are the existence of phenomenal consciousness. Since we don’t know what this property is (yet) and since we don’t yet understand how it could necessitate/cause/etc. phenomenal consciousness, we may fail to include it in our conceptualization of a ‘zombie world’. Or we may include it and fail to recognize that this entails a contradiction. I am sympathetic to both of these claims.

On the one hand, we can certainly conceive of a world very nearly physically just like ours. This world may have all/most of the same physical properties, excepting certain necessary biological properties, and as a result the creatures will behave in ways indistinguishable from us (given certain other assumptions). On the other hand we may conceive of the zombie twin as a biologically exact duplicate, in which case we fail to see that this is not actually a conceivable situation. If we knew the full biological story we would be, or at least could be, in a position to see that we had misdescribed the situation, in just the same way as someone who did not know enough chemistry might think they could conceive of H2O failing to be water (in a world otherwise physically just like ours). This is what I take to be the essence of the Kripkean strategy. We allow that the thing in question is a metaphysical possibility but then argue that it is actually misdescribed in the original argument. In misdescribing it we think (mistakenly) that we have conceived of a certain situation being true, but really we have conceived of a slightly different situation being true, and this one is compatible with physicalism.

Thus while I think the issues are complex and that I did not get them right in the paper, I still think the paper is morally correct. The extent to which biological materialism resembles Russellian monism is the extent to which the zombie argument is irrelevant.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences of the United States of America). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. I wrote my dissertation trying to show that it is possible to take seriously the role that the emotions play in our moral psychology, which is seemingly revealed by contemporary cognitive neuroscience and which I take to suggest that one of the basic premises of emotivism is true. But at the same time I wanted to preserve space for one to also take seriously some kind of moral realism. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions, but I have always been attracted to a rather simplistic view on which the differing conscious emotions differ with respect to the way they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but it has fallen out of favor in recent years. I also felt that in fleshing out such an account the higher-order approach to consciousness would come in handy. This idea was really driven home when I reviewed the book Feelings and Emotions: The Amsterdam Symposium. I felt that it would be a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008 and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to, but have always found something else to work on at the moment, which is why it is especially nice to have been working with Joseph LeDoux, explicitly combining the two. I am very happy with the result and look forward to any discussion.

Consciousness Without First-Order Representations

I am getting ready to head out to San Diego for the Association for the Scientific Study of Consciousness 17th Annual meeting. I have organized a symposium on the Role of the Prefrontal Cortex in Conscious Experience, which will feature talks by Rafi Malach, Joe Levine, Doby Rahnev, and me! A rehearsal of my talk is below. As usual, any feedback is appreciated.

Also relevant are the following papers:

1. (Lau & Brown) The Emperor’s New Phenomenology? The Empirical Case for Conscious Experience without First-Order Representations

2. (Brown 2012) The Brain and its States

The ASSC Students have also set up the following online debate forum: http://theasscforum.blogspot.com/2013/06/symposium-1-prefrontal-cortex.html

A longer video explaining the Rahnev results can be found here: http://www.youtube.com/watch?v=_gQYdGRbkpE

Lots of ways to get involved in the discussion!

[cross-posted at Brains]