Sensory Qualities, the Meta-Problem of Consciousness, and the Relocation Story

I have been so swamped lately with teaching, research and Consciousness Live! that I haven’t been able to do much else, but I have had a couple of blog posts kicking around in my head that I wanted to get to. I’ll try to jot them down when I get the chance.

Chalmers’ Meta-Problem of Consciousness is at this point well known. The central issue there is: why do we think there is a Hard Problem of Consciousness? Chalmers takes the main physicalist response to be some kind of Illusionism. The source of the ‘problem judgements’ (i.e. about Mary, zombies, inverts, etc.) is some kind of introspective illusion. Consciousness seems introspectively to have certain properties that it does not in fact have. When I first read Dave’s paper I suggested an alternative account in terms of our (tacitly) having a bad theory of what phenomenal consciousness is. An account like this can be seen in the work of David Rosenthal (which is why I was surprised his commentary on Chalmers’ paper did not bring up these issues directly).

The account basically proceeds as follows. We begin with the common sense fact that experience seems to present objects in the environment as having properties like color. These properties seem to peskily resist mathematization and so in the modern period they are moved into the head. However, they are moved into the head as we consciously experience them. Thus we arrive at the idea that we have this simple phenomenal property because that is how the physical object seemed to be when we consciously experienced seeing it. But now when we come to theorize about this simple property we find that there is not much to say. It seems simple, or primitive, because we are thinking of it as we first encountered it in experience. We thus arrive at a view on which consciousness is itself ‘built into’ the mental qualities and the only way to know about these mental qualities is via introspecting our first-personal experience.

This relocation-story-based explanation of the problem judgements (as involving a bad theoretical conception of what consciousness is) seems to me different from the one involving introspective error. If we don’t see phenomenal consciousness as some primitive property built into every mental quality, then we can try to construct independent theories of each. On the one hand we construct a theory of the mental qualities (independently of whether they are conscious) and on the other hand we can construct a theory of phenomenal consciousness.

Since we have separated phenomenal consciousness from mental quality we can see that phenomenal consciousness just is an awareness of mental qualities. That in turn suggests that we look for an account of that kind of awareness. Perhaps it is a cognitive kind of higher-order awareness, or some kind of deflationary first-order awareness, or maybe even some kind of first-order acquaintance.

When I floated this idea to Dave his response was that we would encounter the very same problem once we tried to explain our awareness of mental qualities and so this isn’t really a solution to the meta-problem. After all, the whole thing started because of phenomenal consciousness! There is a sense in which I agree with this but also a sense in which I don’t. I don’t agree with it because the problem seems different now. If we really can separate mental qualities from phenomenal consciousness and give independent accounts of each then we can construct theories and evaluate them. Are inverts possible according to the theory? What about zombies? Suppose it turns out that we could construct a plausible theory on which they weren’t?

True, some would find these theories implausible, but now we can ask: is the reason they find them implausible an implicit acceptance of an alternative theory of what phenomenal consciousness is? So, instead of a theory of consciousness having to explain why people find consciousness puzzling, I see the right strategy as one where we explain why people find consciousness puzzling by attributing to them a (possibly implicitly-held) bad theory of consciousness.

We solve the meta-problem the same way we solve the ‘regular’ Hard Problem on this view, which is by getting people to think of consciousness differently (not in the sense of thinking of it as an illusion but in the sense of coming to hold a different theory about what it is).

I am not sure I 100% agree with this response to the Meta-Problem but it is one that I haven’t seen explicitly explored and I think it deserves some attention!

Consciousness Live! Season 4!

I am very excited about my upcoming guests on Consciousness Live! There are possibly more in the works, so stay tuned, either by checking back here or by following me on Twitter for updates @onemorebrown

All times listed in Eastern Standard Time.

To be scheduled. Check back for updates!

2021 here we come!

2020 was a rough year and I am ready to put it in the rear view! Luckily for me and my family we were able to carry on online and the year went by for the most part like normal.

I taught 17 classes in 2020, almost all (13) of them completely online! That’s up from last year. I had taught online before, so with the addition of Zoom I explored synchronous online classes. I have been at CUNY for 17 years (four at Brooklyn College and 13 at LaGuardia), and I’ve never seen anything like this!

Since we were all trapped at home for quite a while I redoubled my efforts on Consciousness Live! having 28 conversations over the course of the year. I learn a lot from these discussions and I have been very glad to be able to have them…I have plans to do more in 2021, though likely not at the same volume!

I feel like I got a fair bit of writing done but only managed to get two papers to see the light of day. One, with Jake Berger, was for a book edited by Josh Weisberg in honor of David Rosenthal. The other, with Joe LeDoux, was a short comment on a paper by Graziano et al.

My blogging was light. I have been attending a lot of talks and also a seminar with Romina Padro and Saul Kripke at the Graduate Center on the Adoption Problem and the Epistemology of Logic. I have been meaning to write a series of blog posts about the class but have not been able to get to it.

Coming up on Consciousness Live!

It has been an exhausting year, but one of the things that has kept me going is having some great conversations with an amazingly diverse group of people who share my love of all things consciousness. I have already done 19 this year (more than either of the previous two years!) but I have at least nine more coming up to round out the year! Check back here for updates or follow me on Twitter @onemorebrown

September

October

November

December

Introspection and the Content of Higher-Order Thoughts

I am finally getting around to working on a paper on higher-order thought theory and introspection. I started thinking about this back in 2015 and presented an early version of it at the CUNY Cognitive Science Speaker Series (draft of paper here…sadly it is at Academia.edu, which I do not use anymore). That was just a month before my son was born and I don’t think I had it quite nailed down. I put it on the back burner and then got caught up doing all kinds of other things.

But I’m back on it now, and am working with Adriana Renero. I am excited about this project because I don’t think that this aspect of the theory has been given enough attention. The basic idea that I had was that the traditional model of introspection offered by higher-order theorists (that one has a conscious higher-order state) needed to be supplemented. It seems to me that when one has an ordinary conscious experience of blue one is representing the first-order state as presenting a property of the physical world, and when one introspects one represents the first-order state as presenting a property of one’s mind. In ordinary conscious experience it seems to me like I am being presented with objects which have colors, or make sounds, etc., but when I introspect I seem to be presented with properties of my own experience. Thus conscious experience and introspection of this sort both rely on second-order thoughts that represent the relevant first-order states. Both of these second-order thoughts deploy a concept of the relevant mental quality: one concept attributes it to the physical object and the other attributes it to one’s own experience.

Rosenthal seems to agree with this, for example saying:

When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality. One thinks of the quality differently when one’s attention has shifted from the tomato to one’s experience of it. One then reconceptualizes the quality one is aware of as a property of the experience: one then becomes conscious of that quality as the qualitative aspect of an experience in virtue of which that experience represents a red tomato

Consciousness and Mind, p. 121

However, the way he fleshes out this distinction is in terms of *conscious* thoughts. So, when one is conceptualizing the mental quality as a property of the tomato, on his view, this amounts to one having, in addition to the higher-order state which renders one conscious, conscious thoughts about the tomato. When one ‘reconceptualizes’ it as a property of experience, one’s higher-order state is itself conscious. Thus the difference for him is one of what one’s conscious thoughts are representing. But this doesn’t seem to me to do the trick.

The reason for this is that it seems to me that this would still be the case even if I had no conscious thoughts about the object I am perceiving. Suppose I am consciously perceiving a blue box and yet I am not consciously thinking about the blue box. In such a case it still seems to me that my conscious experience presents the blueness of the box as a property of the box itself. To bring this out even more we can consider the case of an animal that does not have any conscious thoughts, say a squirrel. Our squirrel may nonetheless have conscious experiences and it seems to me strange to think that the squirrel’s experience does not present the blueness of the box as a property of the box.

Another issue here is that the relevant higher-order thought is the same throughout on Rosenthal’s account. So it must conceptualize the blueness of the box as a property of my experience the entire time. So why think that I ‘reconceptualize’ it when I have a conscious higher-order thought?

The same seems true for the case of introspection. If I am introspecting my experience of the box then it seems to me that the blueness is a property of the experience even if I am not having any conscious thoughts about the mental blue quality. I am not denying that I ever consciously think about my experience, only that this is required for introspection.

So what, on my view, is the content of these higher-order states? My current thinking is that in the case of typical conscious experience one has a higher-order thought with the content ‘I am seeing blue’ and when one introspects one has a higher-order state with the content ‘I am in a blue* state’ or ‘I am experiencing mental blue’. Of course to see blue is just to be in a blue* state and these two intentional contents are different ways of saying the same thing but they still seem to me to result in different experiences.

I am still thinking through this and any feedback would be appreciated!

No Euthyphro Dilemma for Higher-order Theories

I just came across Daniel Stoljar’s forthcoming paper A Euthyphro Dilemma for Higher-order theories. In it he tries to present a kind of dilemma for the higher-order thought theory but I find his reasoning highly suspect.

He assumes throughout that the higher-order theory is offering a definition of ‘consciousness,’ which is not exactly right. At least as I understand the theory it is an empirical conjecture about the nature of phenomenal consciousness and so not in the business of offering a definition. However, if we mean by definition something like what Socrates is seeking, viz., the thing which all conscious states have in common in virtue of which they count as conscious states, then there is a sense in which the higher-order view is after a definition, so I will go along with him on this.

The basic thrust of the paper is that we can ask two questions: one is ‘are we aware of ourselves as being in the state because the state is conscious?’ and the other is ‘is the state conscious because we are aware of ourselves as being in it?’ Obviously the first ‘horn’ is not going to be taken, as it effectively assumes that the higher-order theory is in fact false. The second ‘horn’ is the one the higher-order theorist will take. So, what is the problem with it? Here is what Stoljar says:

Alternatively, if you say the second, that the state is conscious because you believe you are in it, you need to deal with the possibility of being in the state and yet failing to believe that you are. On the higher-order thought theory, the state is in that case no longer conscious. But as before that is questionable. Suppose you are so consumed by the fox that you completely forget (and so have no beliefs about) what you are doing, at least for a short interval. On the face of it, you remain conscious of the fox, and so your state of perceiving the fox remains conscious. If so, it can’t be the case that the state is conscious because you believe that you are in it. After all, you do not believe this, having temporarily forgotten completely what you are doing.

I am not sure how ‘on the face of it’ is supposed to work! It seems as though he is just assuming that the theory is false and then saying ‘aha! The theory could be false!’ Even if we interpret him charitably it seems like he is assuming that the higher-order states in question would be like conscious beliefs. Calling the higher-order thoughts beliefs is a bit of a misnomer since I take beliefs to be dispositions to have occurrent assertoric thoughts. But as long as one means by ‘belief’ something like an occurrent thought then we can go along with this as well. If one is ‘so absorbed in the fox’ that one forgets (consciously) what one is doing it does not follow that one has no unconscious thoughts about oneself.

Stoljar recognizes this and goes on to say:

Friends of the theory may insist that you do hold the belief in question. Maybe the belief is not so demanding. Or maybe it is suppressed or inarticulate, not the sort of belief that you could formulate in words if asked. Maybe, but it doesn’t matter. For even if you do believe you are in the state of perceiving the fox, it doesn’t follow that this state is conscious because you believe this. Further, even if you do believe this, it remains as true as ever that, if you didn’t, the state of perceiving would nevertheless be conscious. After all, even if you didn’t believe that you are in the state of perceiving the fox, you would still focus on the fox, and so be conscious of it, as much as before.

I find this passage to be extremely puzzling and I am not sure how to interpret it. There are arguments given for the higher-order theory and this does not address any of them. Further, there is no justification given for the final claim, that even if one did not have the relevant higher-order thought one would still be (phenomenally) conscious of the fox in the same way. What reason is there to accept this? It is just assumed by fiat. So there is no dilemma for higher-order theories here. There is just someone with differing intuitions about what conscious states are.

Stoljar goes on to consider a version of the view that is closer to what is actually defended by Rosenthal. He says:

Rosenthal says you must believe that you are in the state in a way that is non-perceptual and non-inferential (Rosenthal 2005).

This is incorrect. What Rosenthal says is that the relevant higher-order state must be arrived at in a way that does not subjectively seem to be inferential. That is compatible with its actually being the product of inference. But OK, subtle points aside, what is the issue? He goes on to say:

But even this is not sufficient. Suppose again you are in S and an amazing and unlikely thing happens. Before you even open Linguistic Inquiry, you get banged on the head and freakishly come to believe that you are in S. In this case, three things are true: you are in S, you believe you are in S, and you came to believe this in a way that is neither perceptual nor inferential. Even so it does not follow that S is conscious; on the contrary, it remains as unconscious as it was before.

But again, what reason is there to think this? If one is in a higher-order state to the effect that one is in S, and this is arrived at in a way that subjectively seems to be non-inferential, then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.

Stoljar also briefly discusses the argument from empty higher-order thoughts, saying:

It is worth noting that many proponents of the higher-order theory insist on a different response to this objection. They say the belief can be empty but that the state that is conscious exists not as such but only according to the belief, rather as certain things may exist not as such but only according to the National Inquirer. I won’t attempt to discuss this idea here, since it is extensively discussed elsewhere; see, e.g., (Rosenthal 2011, Weisberg 2011, Berger 2014, Brown 2015, Gottlieb 2020). But it is worth noting that interpreting the view this way has the consequence that it is no longer a definition of a conscious state in the way that it is normally taken to be, and as I have taken it to be throughout this discussion. After all, a definition of a conscious state either is or entails something of the form ‘x is a conscious state if and only if x is…’. This entails in turn that the state that is conscious must turn up on the right-hand side of the definition. But if you say that something is a conscious state if and only if you believe such and such, and if the belief in question does not entail the existence of the relevant state, then the state does not turn up as it should on the right-hand side; hence you have not defined anything.

But again, this is incorrect. According to Rosenthal the state which turns up on the right-hand side is the state you represent yourself as being in; whether or not one is actually in that state is irrelevant!

There is a lot more to say about these issues, and other issues in Stoljar’s paper but I have to help get the kids their lunch!

Shombies vs. Zombies vs. Anti-Zombies and Popular Sessions from the Online Consciousness Conference

Ten years ago, way back in February 2010, the 2nd online consciousness conference would have been just starting and the papers from the first conference were coming out in the Journal of Consciousness Studies.

Even though I would change some things if I could, I am still very happy with my paper Deprioritizing the A Priori Arguments Against Physicalism. I think it is especially cool that this paper is cited by both the Stanford Encyclopedia of Philosophy’s entry on Zombies and the Wikipedia entry on Philosophical Zombies. In addition I have yet to see a good response to the argument I developed there. David Chalmers assimilates the objection to a ‘meta-modal’ objection involving conceiving that physicalism is true (or that necessarily (P → Q) is possibly true). I went to Tucson in 2012 to talk about this and we talked about it a bit here (and I wrote up a version here) but I have never seen a real response to the actual argument.

If the best response, as the SEP and Dave’s 2D argument against Materialism paper/chapter suggest (though to be fair they are talking about conceiving that physicalism is true, which is not what I am talking about), is that they find shombies inconceivable, then they have revealed that the a priori arguments should be deprioritized (that’s always been my point). I find zombies inconceivable and they find shombies inconceivable. How can we tell who is doing it right? These thought experiments can give an individual who finds the first premise plausible (the conceivability of zombies/shombies) some reason to think that their view (physicalism, dualism, whatever) is rational to hold, but they cannot be used as a way to show that some metaphysical view about the mind/consciousness is actually true. In this sense they are sort of like the ‘victorious’ Ontological Argument of Plantinga.

I would also say that I am more convinced than ever that shombies are not Frankish’s Anti-Zombies. In fact given Keith’s views on illusionism I am pretty sure he is committed to the claim that shombies, as I envision them, must be inconceivable (or not possible).

Oh yeah, this was supposed to be a post about the Online Consciousness Conference 🙂 Below are links to the most viewed sessions from the five conferences as well as to the most commented on sessions.

Most viewed sessions

Most commented on sessions

Consciousness Live! Season 3

I am happy to announce the opening line-up for the new season of Consciousness Live! I originally intended to try to limit these to the summer but then I realized I am just as busy then as now, so why not let people pick whatever time is best for them? There may be more to come and I will announce timing info when I have them scheduled.

Sounds like a lot of fun!!

…And the Conscious State is…

Not too long ago Jake Berger and I presented a paper we are working on at the NYU philosophy of mind discussion session. There was a lot of very interesting discussion and there are a couple of themes I plan on writing about (if I ever get the chance; I am teaching four classes in our short six-week winter semester and it is a bit much).

One very interesting objection that came up, and was discussed in email afterwards, was whether HOT theory has the resources to say which first-order state is the conscious state. Ned Block raised this objection in the following way. Suppose I have two qualitative first-order states that are, say, slightly different shades of red. When these states are unconscious there is nothing that it is like for the subject to be in them (ex hypothesi). Now suppose I have an appropriate higher-order thought to the effect that I am seeing red (but not some particular shade of red). The content of the higher-order thought does not distinguish between the two first-order states, so there is no good reason to think that one of them is conscious and the other is not. Yet common sense seems to indicate that one of them could be conscious and the other non-conscious, so there is a problem for higher-order thought theory.

The basic idea behind the objection is that there could be two first-order states that are somewhat similar in some way, and there could be a fact of the matter about which of the two first-order states is conscious while there is a higher-order thought that does not distinguish between the two states. David’s views about intentional content tend toward descriptivism and so he thinks that the way in which a higher-order thought refers to its target first-order state is via describing it. I tend to have more sympathy with causal/historical accounts of intentional content (I even wrote about this back in 2007: Two Concepts of Transitive Consciousness) than David does, but I take it that in this kind of case he thinks these kinds of considerations will answer Block’s challenge.

But stepping back from the descriptivism vs. causal theories of reference for a second, I think this objection helps to bring out the differences between the way in which David thinks about higher-order thought theory and the way that I tend to think about it.

David has presented the higher-order thought theory as a theory of conscious states. It is presented as giving an answer to the following question:

  • How can the very same first-order state occur consciously and also non-consciously?

The difference between these two cases is that when the state is conscious it is accompanied by a higher-order thought to the effect that one is currently in the state. Putting things this way makes Block’s challenge look pressing. We want to know which first-order state is conscious!

I tend to think of the higher-order thought theory as a theory of phenomenal consciousness. It makes the claim that phenomenal consciousness consists in having the appropriate higher-order thought. By phenomenal consciousness I mean that there is something that it is like for the organism in question. I want to distinguish phenomenal consciousness from state consciousness. A state is state-conscious when it is the target of an appropriate higher-order awareness. A state is phenomenally conscious when there is something that it is like for one to be in the state. A lot of confusion is caused because people use ‘conscious state’ for both of these notions. A state of which I am aware is naturally called a conscious state, but so too is a state which there is something that it is like to be in.

Block’s challenge thus has two different interpretations. On one he is asking how the higher-order awareness refers to its target state. That is, he wants to know which first-order state I am aware of in his case. On the other interpretation he is asking which first-order state is such that there is something that it is like for the subject to be in it. The way I understand Rosenthal’s view is that he wants to give the same answer to both questions. The target of the higher-order state is the one that is ‘picked out’ by the higher-order state. And what it is like for the subject to be in that target first-order state consists in there being the right kind of higher-order awareness. Having the appropriate higher-order state is all there is to there being something that it is like to be in the first-order state.

I tend to think that maybe we want to give different answers to these two challenges. Regardless of which first-order state is targeted by the higher-order awareness the state which there is something that it is like for the subject to be in is the higher-order state itself. This higher-order state makes one aware of being in a first-order state, and that is just what phenomenal consciousness is. Thus it will seem to you as though you are in a first-order state (it will seem to you as though you are seeing red when you consciously see red). For that reason I think it is natural to say that the higher-order state is itself phenomenally conscious (by which I mean it is the state which there is something that it is like to be in). I agree that we intuitively think it is the first-order states which are phenomenally conscious but I don’t think that carries much weight when we get sufficiently far into theorizing.

While I agree that it does sound strange to say that the first-order state is not phenomenally conscious I think this is somewhat mitigated by the fact that we can none the less say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is that the state is targeted by an appropriate higher-order awareness. The advantage to putting things in this way is that it makes it clear what the higher-order theory is a theory of and that the objection from Block is clearly assuming that first-order states must be phenomenally conscious.