Gottlieb on Brown

I have been interested in the relationship between the transitivity principle and transparency for quite a while now. The issue has come up again in a recent paper by Joseph Gottlieb fittingly called Transitivity and Transparency. The paper came out in Analytic Philosophy in 2016, but he actually sent it to me beforehand. I read it and we had some email correspondence about it (which influenced my Introspective Consciousness paper; here is the Academia.edu session I had on it), but I never got the chance to formulate any clear thoughts on it. So I figured I would give it a shot now.

There is a lot going on in the paper, so I will focus for the most part on his response to some of my early work on what would become HOROR theory. He argues that what he calls Non-State-Relational Transitivity is not an ‘acceptable consistency gloss’ on the transitivity principle. So what is a consistency gloss? The article is technical (it did come out in Analytic Philosophy, after all!). For Gottlieb, a consistency gloss amounts to a precisification of the transitivity principle that renders it compatible with what he calls Weak Transparency. He defines these terms as follows,

TRANSITIVITY: Conscious mental states are mental states we are aware of in some way.

W-TRANSPARENCY: For at least one conscious state M, it is impossible to:

(a) TRANSPARENCY-DIRECT: Stand in a direct awareness relation to M; or
(b) TRANSPARENCY-DE RE: Stand in a de re awareness relation to M; or
(c) TRANSPARENCY-INT: Stand in an introspective awareness relation to M.

His basic claim, then, is that there is no way of making precise the statement of transitivity above in such a way as to render it consistent with the weak version of transparency that he thinks should count as a truism or platitude.

Of course my basic claim, one that I have made since I began thinking about these issues, is that there is a way of doing this, but it requires a proper understanding of what the transitivity principle says. If we do not interpret the theory as claiming that a first-order state is made conscious by the higher-order state (as Gottlieb does in TRANSITIVITY above), but instead take transitivity to be telling us that a conscious experience is one that makes me aware of myself as being in first-order states, then we have a way to satisfy Weak Transparency.

So what is Gottlieb’s problem with this way of interpreting the transitivity principle? He has a section of the paper discussing this kind of move. He says,

4.3 Non-State-Relational Transitivity

As it stands, TRANSITIVITY posits a relation between a higher-order state and a first-order state. But not all Higher-Order theorists construe TRANSITIVITY this way. Instead, some advance:

  • NON-STATE-RELATIONAL TRANSITIVITY: A conscious mental state is a mental state whose subject is aware of itself as being in that state.

NON-STATE-RELATIONAL TRANSITIVITY is an Object-Side Precisification. And it appears promising. For it says that we are aware of ourselves as being in conscious states, not simply that we are aware of our conscious states. These are different claims.

I agree that this is an importantly different way of thinking about the transitivity principle. However, I do not think that I actually endorse this version of the transitivity principle. As it is stated here NON-STATE-RELATIONAL TRANSITIVITY is still cast in terms of the first-order state.

What I mean by that is that when we ask the question ‘which mental state is phenomenally conscious?’ the current proposal would answer ‘the mental state the subject is aware of itself as being in’. Now, I do think that this is most likely the way that Rosenthal and Weisberg think of non-state-relational transitivity, but it is not the way that I think about it.

I have not put this in print yet (though it is in a paper in draft stage) but the way I would reformulate the transitivity principle would be as follows (or at least along these general lines),

  • A mental state is phenomenally conscious only if it appropriately makes one aware of oneself as being in some first-order mental state.

This way of putting things emphasizes the claim that the higher-order state itself is the phenomenally conscious state.
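
To make the contrast explicit, here is a rough side-by-side gloss of the two readings. The logical shorthand and the predicate names are my own (S stands for the subject, m for a first-order state, h for the higher-order state), so take this as an illustration rather than anything Gottlieb or I commit to in print:

    \text{NSRT:} \quad \text{Conscious}(m) \leftrightarrow \text{AwareOfSelfAsIn}(S, m)
    \text{HOROR:} \quad \text{Conscious}(h) \rightarrow \exists m\, [\text{FirstOrder}(m) \wedge \text{MakesAwareOfSelfAsIn}(h, S, m)]

On the first reading the phenomenally conscious state is the first-order state m that the subject is aware of itself as being in; on the second it is the higher-order state h that does the making-aware.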

Part of what I think is going on here is that there is an ambiguity in terms like ‘awareness’. When we say that we are aware of a first-order state, what we should mean, from the higher-order perspective, is that the higher-order state aims at, or targets, or represents (or whatever) the first-order state. I have toyed with the idea that the ‘targeting’ relation boils down to a kind of causal-reference relation. But then we can also ask ‘how does it appear to the subject?’, and there we should not say that it appears to the subject that they are aware of the first-order state. The subject will seemingly be aware of the items in the environment, and this is because of the content of the higher-order representation.

Gottlieb thinks that non-state-relational transitivity,

 …will do nothing with respect to W-TRANSPARENCY…For presumably there will be (many!) cases where I am in the conscious state I am aware of myself as being in, and so cases where we will still need to ask in what sense I am aware of those states, and whether that sense comports with W-TRANSPARENCY. NON-STATE-RELATIONAL TRANSITIVITY doesn’t obviously speak to this latter question, though; the awareness we have of ourselves is de re, and presumably direct, but whether that’s also true of the awareness we have of our conscious states is another issue. So as it stands, NON-STATE-RELATIONAL TRANSITIVITY is not a consistency gloss.

I think it should be clear by now that this may apply to the kind of view he discusses, and that this view may even be one you could attribute to Rosenthal or Weisberg, but it is not the kind of view that I have advocated.

According to my view, the higher-order state is itself the phenomenally conscious state; it is the one there is something that it is like for one to be in. What, specifically, it is like will depend on the content of the higher-order representation. That is to say, the way the state describes one's self determines what it is like for one. When the first-order state is there it will be accurately described, but that is beside the point. W-transparency is clearly met by the HOROR version of higher-order theory. And if what I said above holds water, then HOROR is still a higher-order theory which endorses a version of the transitivity principle while simultaneously capturing many of the intuitions touted as evidence for first-order theories.

Chalmers on Brown on Chalmers

I just found out that the double special issue of the Journal of Consciousness Studies devoted to David Chalmers’ paper The Singularity: A Philosophical Analysis recently came out as a book! I had a short paper in that collection that stemmed from some thoughts I had about zombies and simulated worlds (I posted about them here and here). Dave responded to all of the articles (here) and I just realized that I never wrote anything about that response!

I have always had a love/hate relationship with this paper. On the one hand, I felt like there was an idea worth developing, one that started to take shape back in 2009. On the other hand, there was a pretty tight deadline for the special issue and I did not feel like I had really gotten hold of what the main idea was supposed to be in my own thinking. I felt rushed and secretly wished I could wait a year or two to think about it. But this was before I had tenure and I thought it would be a bad move to miss the opportunity. The end result is that I think the paper is flawed, but I still feel like there is an interesting idea lurking about that needs to be more fully developed. Besides, I thought, the response from Dave would give me an opportunity to think more deeply about these issues and would be something I could respond to…that was five years ago! Well, I guess better late than never, so here goes.

My paper was divided into two parts. As Dave says,

First, [Brown] cites my 1990 discussion piece “How Cartesian dualism might have been true”, in which I argued that creatures who live in simulated environments with separated simulated cognitive processes would endorse Cartesian dualism. The cognitive processes that drive their behavior would be entirely distinct from the processes that govern their environment, and an investigation of the latter would reveal no sign of the former: they will not find brains inside their heads driving their behavior, for example. Brown notes that the same could apply even if the creatures are zombies, so this sort of dualism does not essentially involve consciousness. I think this is right: we might call it process dualism, because it is a dualism of two distinct sorts of processes. If the cognitive processes essentially involve consciousness, then we have something akin to traditional Cartesian dualism; if not, then we have a different sort of interactive dualism.

Looking back on this now, I think I can say that part of the idea I had was that what Dave here calls ‘process dualism’ is really what lies behind the conceivability of zombies. Rather than testing whether (one thinks that) dualism or physicalism is true about consciousness, the two-dimensional argument against materialism is really testing whether one thinks that consciousness is grounded in biological or in functional/computational properties. This debate is distinct from, and orthogonal to, the debate between physicalism and dualism.

In the next part of the response Dave addresses my attempt to extend this point to reconcile the conceivability of zombies with what I called ‘biologism’. Biologism was supposed to be a word that distinguishes the debate between the physicalist and the dualist from the debate between biologically oriented and computationally oriented views of the mind. At the time I thought the term was coined by me, and it was supposed to be an umbrella term that would have biological materialism as a particular variant. I should note before going on that it was only after the paper was published that I became aware that the term has a history and is associated with certain views about ‘the use of biological explanations in the analysis of social situations’. This is not what I intended, and had I known that beforehand I would have tried to coin a different term.

The point was to emphasize that this debate was supposed to be distinct from the debate about physicalism, and that one could endorse this kind of view even if one rejected biological materialism. The family of views I was interested in defending can be summed up as holding that consciousness is ultimately grounded in or caused by some biological property of the brain, and that a simulation of the brain would lack that property. This is compatible with materialism (= identity theory) but also with dualism. One could be a dualist and yet hold that only biological agents could have the required relation to the non-physical mind. Indeed, I would say that in my experience this is the view of the vast majority of those who accept dualism (by which I mostly mean my students). Having said that, it is true that in my own thinking I lean towards physicalism (though, as a side note, I do not think that physicalism is true, only that we have no good reason to reject it), and it is certainly true that in the paper I say that this can be used to make the relevant claim about biological materialism.

At any rate, here is what Dave says about my argument.

Brown goes on to argue that simulated worlds show how one can reconcile biological materialism with the conceivability and possibility of zombies. If biological materialism is true, a perfect simulation of a biological conscious being will not be conscious. But if it is a perfect simulation in a world that perfectly simulates our physics, it will be a physical duplicate of the original. So it will be a physical duplicate without consciousness: a zombie.

I think Brown’s argument goes wrong at the second step. A perfect simulation of a physical system is not a physical duplicate of that system. A perfect simulation of a brain on a computer is not made of neurons, for example; it is made of silicon. So the zombie in question is a merely functional duplicate of a conscious being, not a physical duplicate. And of course biological materialism is quite consistent with functional duplicates.

It is true that from the point of view of beings in the simulation, the simulated being will seem to have the same physical structure that the original being seems to us to have in our world. But this does not entail that it is a physical duplicate, any more than the watery stuff on Twin Earth that looks like water really is water. (See note 7 in “The Matrix as metaphysics” for more here.) To put matters technically (nonphilosophers can skip!), if P is a physical specification of the original being in our world, the simulated being may satisfy the primary intension of P (relative to an inhabitant of the simulated world), but it will not satisfy the secondary intension of P. For zombies to be possible in the sense relevant to materialism, a being satisfying the secondary intension of P is required. At best, we can say that zombies are (primarily) conceivable and (primarily) possible— but this possibility merely reflects the (secondary) possibility of a microfunctional duplicate of a conscious being without consciousness, and not a full physical duplicate. In effect, on a biological view the intrinsic basis of the microphysical functions will make a difference to consciousness. To that extent the view might be seen as a variant of what is sometimes known as Russellian monism, on which the intrinsic nature of physical processes is what is key to consciousness (though unlike other versions of Russellian monism, this version need not be committed to an a priori entailment from the underlying processes to consciousness).
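
For readers who want the two-dimensional bookkeeping behind that last point spelled out, here is a minimal sketch in the standard notation, where P is a complete physical specification of the original being, Q says that the being is conscious, and the subscripts mark primary versus secondary possibility; the formalization is my gloss, not something Dave spells out in the response:

    \Diamond_1(P \wedge \neg Q) \quad \text{(some world, considered as actual, verifies } P \text{ without } Q\text{)}
    \Diamond_2(P \wedge \neg Q) \quad \text{(some world, considered as counterfactual, satisfies } P \text{ without } Q\text{)}
    \Diamond_2(P \wedge \neg Q) \rightarrow \neg\, \text{materialism}

On the biological view the simulated being gives us at most the first of these, since it satisfies the primary but not the secondary intension of P, and only the second would count against materialism.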

I have to say that I am sympathetic with Dave's diagnosis of the flaw in the paper's argument. It is a mistake to think of the simulated world, with its simulated creatures, as being a physical duplicate of our world in the right way, especially if the simulation is taking place in the original, non-simulated world. If the biological view is correct then the simulated being is just a functional duplicate (a microfunctional duplicate, true), but not a physical duplicate.

While I think this is right, I also think the issues are complicated. Take, for example, the typical Russellian pan(proto)psychism currently being explored by Chalmers and others. This view is touted as being compatible with the conceivability of zombies because we can conceive of a duplicate of our physics so long as we mean the structural, non-intrinsic properties. Since physics, on this view, describes only these structural features, we can count the zombie world as having our physics in the narrow sense. The issues here are complex, but this looks superficially just like the situation described in my paper. The simulated world captures all of the structural features of physics but leaves out whatever biological properties are necessary, and in this sense the reasoning of the paper holds up.

This is why I think the comparison with Russellian monism invoked by Dave is helpful. In fact, when I pitched my commentary to Dave I included this comparison with Russellian monism, but it did not get developed in the paper. At any rate, I think what it helps us see is the many ways in which we can *almost* conceive of zombies. This is a point that I have made going back to some of my earliest writings about zombies. If the identity theory is true, or if some kind of biological view about consciousness is true, then there is some (as yet to be discovered) property or properties of biological neural states which necessitate/cause/just are the existence of phenomenal consciousness. Since we don't know what this property is (yet), and since we don't yet understand how it could necessitate/cause/etc. phenomenal consciousness, we may fail to include it in our conceptualization of a ‘zombie world’. Or we may include it and fail to recognize that this entails a contradiction. I am sympathetic to both of these claims.

On the one hand, we can certainly conceive of a world very nearly physically just like ours. This world may have all or most of the same physical properties, excepting certain necessary biological properties, and as a result the creatures in it will behave in ways indistinguishable from us (given certain other assumptions). On the other hand, we may conceive of the zombie twin as a biologically exact duplicate, in which case we fail to see that this is not actually a conceivable situation. If we knew the full biological story we would be, or at least could be, in a position to see that we had misdescribed the situation, in just the same way as someone who did not know enough chemistry might think they could conceive of H2O failing to be water (in a world otherwise physically just like ours). This is what I take to be the essence of the Kripkean strategy. We allow that the thing in question is a metaphysical possibility but then argue that it is misdescribed in the original argument. In misdescribing it we think (mistakenly) that we have conceived of a certain situation obtaining, but really we have conceived of a slightly different situation obtaining, and that one is compatible with physicalism.

Thus, while I think the issues are complex and that I did not get them right in the paper, I still think the paper is morally correct. To the extent that biological materialism resembles Russellian monism, the zombie argument is irrelevant to it.

A Higher-Order Theory of Emotional Consciousness

I am very happy to be able to say that the paper I have been writing with Joseph E. LeDoux is out in PNAS (Proceedings of the National Academy of Sciences of the United States of America). In this paper we develop a higher-order theory of conscious emotional experience.

I have been interested in the emotions for quite some time now. In my dissertation I tried to show that it is possible to take seriously the role that the emotions play in our moral psychology, a role seemingly revealed by contemporary cognitive neuroscience and one which I take to suggest that one of the basic premises of emotivism is true, while at the same time preserving the space to also take some kind of moral realism seriously. In the dissertation I was more concerned with the philosophy of language than with the nature of the emotions, but I have always been attracted to a rather simplistic view on which the differing conscious emotions differ with respect to the way they feel subjectively (I explore this as a general approach to the propositional attitudes in The Mark of the Mental). The idea that emotions are feelings is an old one in philosophy but has fallen out of favor in recent years. I also felt that the higher-order approach to consciousness would come in handy in fleshing out such an account. This idea really came into focus when I reviewed the book Feelings and Emotions: The Amsterdam Symposium, and I felt that it would be a good idea to approach the science of emotions with the higher-order theory of consciousness in mind.

That was back in 2008, and since then I have not really followed up on any of the ideas in my dissertation. I have always wanted to, but have always found something else to work on in the moment, which is why it is especially nice to have been working with Joseph LeDoux on a project that explicitly combines the two. I am very happy with the result and look forward to any discussion.

Clip Show ‘011

It’s that time of year again! Here are the top posts of 2011 (see last year’s clip show and the best of all time)

–Runner Up– News Flash: Philosophy Sucks!

Philosophy is unavoidable; that is part of why it sucks!

10. Epiphenomenalism and Russellian Monism

Is Russellian Monism committed to epiphenomenalism about consciousness? Dave Chalmers argues that it is not.

9. Bennett on Non-Reductive Physicalism

Karen Bennett argues that the causal exclusion argument provides an argument for physicalism and that non-reductive physicalism is not ruled out by it. I argue that she is wrong and that the causal exclusion argument does cut against non-reductive physicalism.

8. The Zombie Argument Requires Phenomenal Transparency

Chalmers argues that the zombie argument goes through even without an appeal to the claim that the primary and secondary intension of ‘consciousness’ coincide. I argue that it doesn’t. Without an appeal to transparency we cannot secure the first premise of the zombie argument.

7. The Problem of Zombie Minds

Does conceiving of zombies require that we be able to know that zombies lack consciousness? It seems like we can’t know this, so there may be a problem with conceiving of zombies. I came to be convinced that this isn’t quite right, but it is still a good post (plus I think we can use the response here in a way that helps the physicalist who wants to say that the truth of physicalism is conceivable…more on that later, though)

6. Stazicker on Attention and Mental Paint

Can we have phenomenology that is indeterminate? James Stazicker thinks so.

5. Consciousness Studies in 1000 words (more) or less

I was asked to write a short piece highlighting some of the major figures and debates in the philosophical study of consciousness for an intro textbook. This is what I came up with

4. Cohen and Dennett’s Perfect Experiment

Dennett’s response to the overflow argument and why I think it isn’t very good

3. My Musical Autobiography

This was a big year for me in that I came into possession of some long-lost recordings of my death metal band from the 1990s, as well as some pictures. This prompted me to write up a brief autobiography of my musical ‘career’.

2. You might be a Philosopher

A collection of philosophical jokes that I wrote plus some others that were prompted by mine.

1. Phenomenally HOT

Some reflections on Ned Block and Jake Berger’s response to my claim that higher-order thoughts just are phenomenal consciousness