Coming up on Consciousness Live!

It has been an exhausting year, but one of the things that has kept me going is having some great conversations with an amazingly diverse group of people who share my love of all things consciousness. I have already done 19 this year (more than in either of the previous two years!) but I have at least nine more coming up to round out the year! Check back here for updates or follow me on Twitter @onemorebrown.

Introspection and the Content of Higher-Order Thoughts

I am finally getting around to working on a paper on higher-order thought theory and introspection. I started thinking about this back in 2015 and presented an early version of it at the CUNY Cognitive Science Speaker Series (draft of the paper here…sadly it is at a site which I do not use anymore). That was just a month before my son was born and I don’t think I had it quite nailed down. I put it on the back burner and then got caught up doing all kinds of other things.

But I’m back on it now, and am working with Adriana Renero. I am excited about this project because I don’t think that this aspect of the theory has been given enough attention. The basic idea that I had was that the traditional model of introspection offered by higher-order theorists (that introspection consists in having a conscious higher-order state) needed to be supplemented. It seems to me that when one has an ordinary conscious experience of blue one represents the first-order state as presenting a property of the physical world, and when one introspects one represents the first-order state as presenting a property of one’s mind. In ordinary conscious experience it seems to me like I am being presented with objects which have colors, or make sounds, etc., but when I introspect I seem to be presented with properties of my own experience. Thus conscious experience and introspection of this sort both rely on second-order thoughts that represent the relevant first-order states. Both of these second-order thoughts deploy a concept of the relevant mental quality: one concept attributes it to the physical object and the other attributes it to one’s own experience.

Rosenthal seems to agree with this, saying, for example:

When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality. One thinks of the quality differently when one’s attention has shifted from the tomato to one’s experience of it. One then reconceptualizes the quality one is aware of as a property of the experience: one then becomes conscious of that quality as the qualitative aspect of an experience in virtue of which that experience represents a red tomato

Consciousness and Mind page 121

However, the way he fleshes out this distinction is in terms of *conscious* thoughts. So, when one is conceptualizing the mental quality as a property of the tomato, on his view, this amounts to one having, in addition to the higher-order state which renders one conscious, conscious thoughts about the tomato. When one ‘reconceptualizes’ it as a property of experience, one’s higher-order state is itself conscious. Thus the difference for him is a difference in what one’s conscious thoughts are representing. But this doesn’t seem to me to do the trick.

The reason for this is that it seems to me that this would still be the case even if I had no conscious thoughts about the object I am perceiving. Suppose I am consciously perceiving a blue box and yet I am not consciously thinking about the blue box. In such a case it still seems to me that my conscious experience presents the blueness of the box as a property of the box itself. To bring this out even more we can consider the case of an animal that does not have any conscious thoughts, say a squirrel. Our squirrel may nonetheless have conscious experiences, and it seems to me strange to think that the squirrel’s experience does not present the blueness of the box as a property of the box.

Another issue here is that the relevant higher-order thought is the same throughout on Rosenthal’s account. So it must conceptualize the blueness of the box as a property of my experience the entire time. So why think that I ‘reconceptualize’ it when I have a conscious higher-order thought?

The same seems true for the case of introspection. If I am introspecting my experience of the box then it seems to me that the blueness is presented as a property of the experience even if I am not having any conscious thoughts about the mental blue quality. I am not denying that I ever consciously think about my experience, only denying that doing so is required for introspection.

So what, on my view, is the content of these higher-order states? My current thinking is that in the case of typical conscious experience one has a higher-order thought with the content ‘I am seeing blue’, and when one introspects one has a higher-order state with the content ‘I am in a blue* state’ or ‘I am experiencing mental blue’. Of course, to see blue is just to be in a blue* state, and these two intentional contents are different ways of saying the same thing, but they still seem to me to result in different experiences.

I am still thinking through this and any feedback would be appreciated!

No Euthyphro Dilemma for Higher-order Theories

I just came across Daniel Stoljar’s forthcoming paper A Euthyphro Dilemma for Higher-order Theories. In it he tries to present a kind of dilemma for the higher-order thought theory, but I find his reasoning highly suspect.

He assumes throughout that the higher-order theory is offering a definition of ‘consciousness,’ which is not exactly right. At least as I understand the theory, it is an empirical conjecture about the nature of phenomenal consciousness and so is not in the business of offering a definition. However, if we mean by ‘definition’ something like what Socrates is seeking, viz., the thing which all conscious states have in common in virtue of which they count as conscious states, then there is a sense in which the higher-order view is after a definition, so I will go along with him on this.

The basic thrust of the paper is that we can ask two questions, one is ‘are we aware of ourselves as being in the state because the state is conscious?’ and the other is ‘is the state conscious because we are aware of ourselves as being in it?’ Obviously the first ‘horn’ is not going to be taken, as it effectively assumes that the higher-order theory is in fact false. The second ‘horn’ is the one the higher-order theorist will take. So, what is the problem with it? Here is what Stoljar says:

Alternatively, if you say the second, that the state is conscious because you believe you are in it, you need to deal with the possibility of being in the state and yet failing to believe that you are. On the higher-order thought theory, the state is in that case no longer conscious. But as before that is questionable. Suppose you are so consumed by the fox that you completely forget (and so have no beliefs about) what you are doing, at least for a short interval. On the face of it, you remain conscious of the fox, and so your state of perceiving the fox remains conscious. If so, it can’t be the case that the state is conscious because you believe that you are in it. After all, you do not believe this, having temporarily forgotten completely what you are doing.

I am not sure how ‘on the face of it’ is supposed to work! It seems as though he is just assuming that the theory is false and then saying ‘aha! The theory could be false!’ Even if we interpret him charitably, it seems like he is assuming that the higher-order states in question would be like conscious beliefs. Calling the higher-order thoughts beliefs is a bit of a misnomer, since I take beliefs to be dispositions to have occurrent assertoric thoughts. But as long as one means by ‘belief’ something like an occurrent thought then we can go along with this as well. If one is ‘so consumed by the fox’ that one forgets (consciously) what one is doing, it does not follow that one has no unconscious thoughts about oneself.

Stoljar recognizes this and goes on to say:

Friends of the theory may insist that you do hold the belief in question. Maybe the belief is not so demanding. Or maybe it is suppressed or inarticulate, not the sort of belief that you could formulate in words if asked. Maybe, but it doesn’t matter. For even if you do believe you are in the state of perceiving the fox, it doesn’t follow that this state is conscious because you believe this. Further, even if you do believe this, it remains as true as ever that, if you didn’t, the state of perceiving would nevertheless be conscious. After all, even if you didn’t believe that you are in the state of perceiving the fox, you would still focus on the fox, and so be conscious of it, as much as before.

I find this passage extremely puzzling and I am not sure how to interpret it. There are arguments given for the higher-order theory, and this does not address any of them. Further, no justification is given for the final claim, that even if one did not have the relevant higher-order thought one would still be (phenomenally) conscious of the fox in the same way. What reason is there to accept this? It is just assumed by fiat. So there is no dilemma for higher-order theories here. There is just someone with differing intuitions about what conscious states are.

Stoljar goes on to consider a version of the view that is closer to what is actually defended by Rosenthal. He says:

Rosenthal says you must believe that you are in the state in a way that is non-perceptual and non-inferential (Rosenthal 2005).

This is incorrect. What Rosenthal says is that the relevant higher-order state must be arrived at in a way that does not subjectively seem to be inferential. That is compatible with its actually being the product of inference. But OK, subtle points aside, what is the issue? He goes on to say:

But even this is not sufficient. Suppose again you are in S and an amazing and unlikely thing happens. Before you even open Linguistic Inquiry, you get banged on the head and freakishly come to believe that you are in S. In this case, three things are true: you are in S, you believe you are in S, and you came to believe this in a way that is neither perceptual nor inferential. Even so it does not follow that S is conscious; on the contrary, it remains as unconscious as it was before.

But again, what reason is there to think this? If one is in a higher-order state to the effect that one is in S, and this is arrived at in a way that subjectively seems to be non-inferential, then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.

Stoljar also briefly discusses the argument from empty higher-order thoughts, saying:

It is worth noting that many proponents of the higher-order theory insist on a different response to this objection. They say the belief can be empty but that the state that is conscious exists not as such but only according to the belief, rather as certain things may exist not as such but only according to the National Inquirer. I won’t attempt to discuss this idea here, since it is extensively discussed elsewhere; see, e.g., (Rosenthal 2011, Weisberg 2011, Berger 2014, Brown 2015, Gottlieb 2020). But it is worth noting that interpreting the view this way has the consequence that it is no longer a definition of a conscious state in the way that it is normally taken to be, and as I have taken it to be throughout this discussion. After all, a definition of a conscious state either is or entails something of the form ‘x is a conscious state if and only if x is…’. This entails in turn that the state that is conscious must turn up on the right-hand side of the definition. But if you say that something is a conscious state if and only if you believe such and such, and if the belief in question does not entail the existence of the relevant state, then the state does not turn up as it should on the right-hand side; hence you have not defined anything.

But again, this is incorrect. According to Rosenthal, the state which turns up on the right-hand side is the state you represent yourself as being in; whether or not one is actually in that state is irrelevant!

There is a lot more to say about these issues, and other issues in Stoljar’s paper but I have to help get the kids their lunch!

Shombies vs. Zombies vs. Anti-Zombies and Popular Sessions from the Online Consciousness Conference

Ten years ago, way back in February 2010, the 2nd online consciousness conference would have been just starting and the papers from the first conference were coming out in the Journal of Consciousness Studies.

Even though I would change some things if I could, I am still very happy with my paper Deprioritizing the A Priori Arguments Against Physicalism. I think it is especially cool that this paper is cited by both the Stanford Encyclopedia of Philosophy’s entry on Zombies as well as the Wikipedia entry on Philosophical Zombies. In addition I have yet to see a good response to the argument I developed there. David Chalmers assimilates the objection to a ‘meta-modal’ objection involving conceiving that physicalism is true (or that necessarily (P –> Q) is possibly true). I went to Tucson in 2012 to talk about this and we talked about it a bit here (and I wrote up a version here), but I have never seen a real response to the actual argument.

If the best response, as the SEP and Dave’s 2D argument against Materialism paper/chapter suggest (though to be fair they are talking about conceiving that physicalism is true, which is not what I am talking about), is that they find shombies inconceivable, then they have revealed that the a priori arguments should be deprioritized (that’s always been my point). I find zombies inconceivable and they find shombies inconceivable. How can we tell who is doing it right? These thought experiments can give an individual who finds the first premise plausible (the conceivability of zombies/shombies) some reason to think that their view (physicalism, dualism, whatever) is rational to hold, but they cannot be used as a way to show that some metaphysical view about the mind/consciousness is actually true. In this sense they are sort of like Plantinga’s ‘victorious’ Ontological Argument.

I would also say that I am more convinced than ever that shombies are not Frankish’s Anti-Zombies. In fact given Keith’s views on illusionism I am pretty sure he is committed to the claim that shombies, as I envision them, must be inconceivable (or not possible).

Oh yeah, this was supposed to be a post about the Online Consciousness Conference 🙂 Below are links to the most viewed sessions from the five conferences as well as to the most commented on sessions.

Most viewed sessions

Most commented on sessions

Consciousness Live! Season 3

I am happy to announce the opening line-up for the new season of Consciousness Live! I originally intended to try to limit these to the summer but then I realized I am just as busy then as now so why not let people pick when is best for them? There may be more to come and I will announce timing info when I have them scheduled.

Sounds like a lot of fun!!

…And the Conscious State is…

Not too long ago Jake Berger and I presented a paper we are working on at the NYU philosophy of mind discussion session. There was a lot of very interesting discussion, and there are a couple of themes I plan on writing about (if I ever get the chance; I am teaching four classes in our short six-week winter semester and it is a bit much).

One very interesting objection that came up, and was discussed in email afterwards, was whether HOT theory has the resources to say which first-order state is the conscious state. Ned Block raised this objection in the following way. Suppose I have two qualitative first-order states that are, say, slightly different shades of red. When these states are unconscious there is nothing that it is like for the subject to be in them (ex hypothesi). Now suppose I have an appropriate higher-order thought to the effect that I am seeing red (but not some particular shade of red). The content of the higher-order thought does not distinguish between the two first-order states, so there is no good reason to think that one of them is conscious and the other is not. Yet common sense seems to indicate that one of them could be conscious and the other non-conscious, so there is a problem for higher-order thought theory.

The basic idea behind the objection is that there could be two first-order states that are somewhat similar in some way, and there could be a fact of the matter about which of the two first-order states is conscious even though the higher-order thought does not distinguish between the two states. David’s views about intentional content tend toward descriptivism, and so he thinks that the way in which a higher-order thought refers to its target first-order state is by describing it. I tend to have more sympathy with causal/historical accounts of intentional content than David does (I even wrote about this back in 2007: Two Concepts of Transitive Consciousness), but in this kind of case he does think that these descriptivist considerations will answer Block’s challenge.

But stepping back from descriptivism vs. causal theories of reference for a second, I think this objection helps to bring out the differences between the way in which David thinks about higher-order thought theory and the way that I tend to think about it.

David has presented the higher-order thought theory as a theory of conscious states. It is presented as giving an answer to the following question:

  • How can the very same first-order state occur consciously and also non-consciously?

The difference between these two cases is that when the state is conscious it is accompanied by a higher-order thought to the effect that one is currently in the state. Putting things this way makes Block’s challenge look pressing. We want to know which first-order state is conscious!

I tend to think of the higher-order thought theory as a theory of phenomenal consciousness. It makes the claim that phenomenal consciousness consists in having the appropriate higher-order thought. By phenomenal consciousness I mean that there is something that it is like for the organism in question. I want to distinguish phenomenal consciousness from state consciousness. A state is state-conscious when it is the target of an appropriate higher-order awareness. A state is phenomenally conscious when there is something that it is like for one to be in the state. A lot of confusion is caused because people use ‘conscious state’ for both of these notions. A state of which I am aware is naturally called a conscious state, but so too is a state which there is something that it is like to be in.

Block’s challenge thus has two different interpretations. On one he is asking how the higher-order awareness refers to its target state. That is, he wants to know which first-order state am I aware of in his case. On the other interpretation he is asking which first-order state is there something that it is like for the subject to be in. The way I understand Rosenthal’s view is that he wants to give the same answer to both questions. The target of the higher-order state is the one that is ‘picked out’ by the higher-order state. And what it is like for the subject to be in that target first-order state consists in there being the right kind of higher-order awareness. Having the appropriate higher-order state is all there is to there being something that it is like to be in the first-order state.

I tend to think that maybe we want to give different answers to these two challenges. Regardless of which first-order state is targeted by the higher-order awareness the state which there is something that it is like for the subject to be in is the higher-order state itself. This higher-order state makes one aware of being in a first-order state, and that is just what phenomenal consciousness is. Thus it will seem to you as though you are in a first-order state (it will seem to you as though you are seeing red when you consciously see red). For that reason I think it is natural to say that the higher-order state is itself phenomenally conscious (by which I mean it is the state which there is something that it is like to be in). I agree that we intuitively think it is the first-order states which are phenomenally conscious but I don’t think that carries much weight when we get sufficiently far into theorizing.

While I agree that it does sound strange to say that the first-order state is not phenomenally conscious, I think this is somewhat mitigated by the fact that we can nonetheless say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is being targeted by an appropriate higher-order awareness. The advantage of putting things this way is that it makes clear what the higher-order theory is a theory of, and that Block’s objection is clearly assuming that first-order states must be phenomenally conscious.

2019 in Review

I had a busy year in 2019!

I taught three fewer classes than I usually do, and so I taught only 13 classes in 2019 (Spring, Summer, Fall, and Winter ‘semesters’). That is the same number as the year I was on parental leave.

I teach a lot of courses, but I also get to teach a lot of different classes. Besides my typical philosophy courses (Intro to Phil, Phil of Religion, Logic, Ethics) I get to teach a variety of science classes, including Introduction to Neuroscience, General Psychology, and a capstone seminar that I usually do on consciousness and some aspect of science. In the Spring semester I am teaching Life in the Universe, which is really cool. In addition this semester I got to co-teach a class on neuroscience and philosophy of consciousness at the Graduate Center with Tony Ro, which was awesome!

I like teaching but I feel like I would be better at it if I did less of it. And I do wish I could do more teaching at the graduate level.

I also had my paper with Hakwan Lau and Joe LeDoux come out in Trends in Cognitive Sciences. This journal works very quickly: we spent most of early 2019 working on the paper, and it came out in September 2019! The whole experience of writing that paper was intense and a bit surreal.

On the other side of the spectrum, my paper with Hakwan Lau for the Ned Block volume was written a long time ago but just came out along with Block’s response (see my response here).

In addition I wrote some book reviews, gave a couple of talks, wrote a couple of blog posts, and of course completed my second season of Consciousness Live!

Looking back further, back in December 2009 I was a newly-married Assistant Professor and was organizing the very first jam session at the Parkside Lounge (after the American Philosophical Association meeting in Times Square, December 28th 2009). (I was also just realizing that the way things had been presented to me at CUNY had slightly skewed my intuitions about philosophy overall.) (In addition I was in the middle of organizing the second Online Consciousness Conference.) I recently found an old recording of one of the tunes we played. I don’t remember who was playing what, or when it was recorded (maybe sometime in 2008), but it is some combination of core NC/DC members. Next year, in December 2020, we will be coming up on the 10-year anniversary of the Qualia Fest; might the world be ready for another one?

10000 lies performed by NC/DC
The New York Consciousness Collective at the Parkside Lounge December 28th 2009

Since that time life has changed a lot! I don’t get to play as much music, but I have been promoted to Full Professor, celebrated my 10-year wedding anniversary, welcomed two sons, bought a house, and moved out of Brooklyn. It’s been an eventful decade both personally and professionally! I wonder what I’ll be writing in 2029?

Consciousness is (Probably) a Biological Phenomenon

In light of the very interesting interview with Dave Chalmers in the Opinionator, I thought I would revisit some of my objections to the notion of artificial consciousness (AC). I am somewhat of a skeptic about artificial consciousness in a way that I am not about AGI (artificial general intelligence). My suspicion is that intelligence and consciousness are indeed dissociable, and so we could end up with artificial systems that were intelligent in the sense of being able to solve problems but which would not be conscious (nothing that it is like for them to solve problems or ‘think’ about them). In the past I have called the view I am interested in Biologism about consciousness, but then I became aware of the problematic history of that word and I have not come up with something better…Biologicalism maybe? Or just go with Searle’s Biological Naturalism? Here I’ll just call it the Biological view. Below is a draft I wrote a while ago exploring the case against the biological view…I still think it is pretty good and so would appreciate any feedback. (By the way, just to be clear, I am open to being convinced that the biological view is wrong; it may just be my last common-sense prejudice about consciousness refusing to succumb to good arguments. If so, what are these good arguments?)

Whatever you call it, the issue is whether consciousness is fundamentally a biological phenomenon or whether it is an ‘organizational invariant’ (Chalmers 1995; Chalmers 1996), or is ‘substrate independent’ (Bostrom 2003). On one side of the debate are those who affirm that it is an organizational invariant and thus hold that a properly organized computer program could in fact be conscious in just the way that you or I ordinarily are. Because of this I will call this view computationalism about consciousness. On the other side of the debate are those who hold that consciousness is a distinctively biological phenomenon and that a maximally specific computer simulation of those biological processes would not result in consciousness.

The debate over computationalism is independent of the debate between the physicalist and non-physicalist. A non-physicalist may hold that a suitably programmed computer would come to instantiate non-physical qualia (as David Chalmers (1996) has suggested); all that would be required is that there be fundamental laws of nature that relate the computational states to non-physical qualia. A physicalist, meanwhile, may hold that the implemented program itself is conscious. Physicalists who endorse computationalism usually do so by endorsing some kind of (computational) functionalism, whereas physicalists who endorse the biological view usually do so by endorsing some kind of type identity theory.

Though there are those who defend the biological view (Searle 2004, Block 2009), the dominant view is computationalism. This is perhaps in part due to the popularity of functionalist views. However, it is very rarely argued for. It is much more common to hear remarks to the effect that ‘neurons aren’t magic’. That is not an argument. Nor is it enough to cite arguments for functionalism. For instance, the argument from multiple realizability at most suggests that there may be other biological configurations that are conscious. It is quite another thing to suggest that non-biological systems could be conscious.

Of course one might invoke the conceivability of non-biological creatures that are conscious. But doing so begs the question. It may be the case that when we imagine that there is a silicon system that is conscious, a Commander Data type robot, we imagine something that is the equivalent of XYZ. Given that we know that water is essentially H2O, we know that the XYZ world is not a candidate to be our world. Our world could not have been the XYZ world. If the biological view is true then the same is true for the Commander Data world. It is conceivable, but it is not a way our world could have been. Given this, what we need is an independent argument for computationalism.

Perhaps the best (only?) argument for computationalism is David Chalmers’ Dancing Qualia and Fading Qualia arguments. Recently Chalmers (2010) has come to change his mind about the strength of the dancing qualia argument. His change of heart was motivated by empirical findings from cognitive neuroscience, in particular change blindness. Here I will argue that we have further empirical findings that motivate questioning the strength of the fading qualia argument. What this shows is that one can reasonably think that the biological view is true. And absent any reasons to think that it isn’t true, it should probably be the default view. Before introducing the empirical results I first look at the original arguments and the reasons that Chalmers has given for his change of heart on their relative strength. In section three I introduce the empirical results from the partial report experiments. The upshot is that the original dancing and fading qualia arguments look less plausible, and so the biological view of consciousness is more than likely true after all.

II. Flip-Flopping on Dancing and Fading Qualia

These arguments both have roughly the same form. We start with a fully biological creature that is fully conscious. For vividness we can imagine that they are having an intense headache, a migraine say, while watching a movie and eating a box of sour patch kids. We then replace one of their neurons with a functionally identical computer chip. We can even imagine that we are able to do so in such a way that the subject is unaware that it is happening (the wonders of nanotechnology being what they are and all). We then imagine a series of replacements like this, with the second in the line having two neurons replaced, the third three neurons, etc., until we reach the other end, where we have completely replaced the brain with computer chips. At each stage in the series the subject we end up with is functionally identical to me, or you. Given this there will be no way that the subject’s behavior can change in any way. As the neurons are being replaced the subject continues to complain of the headache, asking for the volume to be turned down and remarking that the taste of the sour patch kids helps to distract from the migraine, etc. If we assume that computationalism is false, then in the fading qualia case we would have to imagine that as we replace the neurons with silicon our conscious experience fades, as a light on a dimmer switch would, even though our behavior continues to be the same throughout. In the dancing qualia case we imagine that we have a switch that activates and deactivates the group of silicon chips that has replaced the neural circuitry responsible for a certain conscious experience. As we flip the switch our conscious experience blinks in and out of existence yet, again, our behavior continues to be the same throughout.

So, if we assume that computationalism is false, then in each case we end up with subjects that are radically out of touch with their own conscious experiences. They say that they have an intense headache but that isn’t true. In one case the headache is dim and in the other it is blinking in and out of existence. In our world we typically do not find ourselves in this kind of position. It seems plausible that we are not radically out of touch with our own conscious experience. So it is much more plausible to think that, while these scenarios are possible in some sense, they do not describe the actual world with its actual laws. In other words, it is safe to assume that in our actual world the complete silicon brain would be conscious in just the way that the biological creature originally was. 

These arguments are not presented as strict reductios but are rather offered to show how strange the consequences of the biological view are. If the biological view is true then we can have systems that are radically out of touch with their own conscious experience and, for all we know, it could be happening right now! Since this seems prima facie implausible, we have a prima facie case against the biological view and for computationalism.

Originally Chalmers argued that the dancing qualia argument was stronger than the fading qualia argument. But in what sense is the one argument supposed to be stronger than the other? In his original paper on this (Chalmers 1995) he seems to suggest that the dancing qualia argument is stronger because it has a stronger conclusion. The fading qualia argument, if successful, establishes only that the property of being conscious is an organizational invariant. So if it works it shows that there will be something that it is like for my silicon isomorph, but it does not show that our conscious experiences will be the same. For all the fading qualia argument shows, when I and my silicon isomorph are in the same computational state I may be consciously experiencing red while the isomorph experiences blue, or even some color that is completely alien to me. The dancing qualia argument, on the other hand, is supposed to establish the stronger conclusion that the silicon isomorph is not only conscious but that their experience is exactly the same as my conscious experience (given that we are in computationally identical states).

In his 1996 book The Conscious Mind he indicates another sense of strength. In this second sense the dancing qualia argument is thought to lead to an intuitively more bizarre outcome and so is much harder to accept. The subject is having a very vivid conscious experience blink in and out of existence; how could they fail to notice that? He suggests that one might be able to bite the bullet on fading qualia but that dancing qualia are just too strange to be real (Chalmers 1996, p. 270).

Chalmers has since come to reverse this judgment based on considerations of change blindness. Here is what he says in a footnote in his recent book The Character of Consciousness (Chalmers 2010), which I reproduce in its entirety:

I still find this [dancing qualia] hypothesis very odd, but I am now inclined to think it is something less than a reductio. Work on change blindness has gotten us used to the idea that large changes in consciousness can go unnoticed. Admittedly, those changes are made outside of attention, and unnoticed changes in the contents of attention would be much stranger, but it is perhaps not so strange as to be ruled out in all circumstances. Russellian monism…also provides a natural model in which such changes could occur. In The Conscious Mind I suggested that this “dancing qualia” argument was somewhat stronger than the “fading qualia” argument given there; I would now reverse that judgment (page 24 note 7)

In what sense, then, are we to take the reversal of strength that Chalmers indicates in this footnote? It seems implausible that it should be the first sense: it cannot be that the fading qualia argument now establishes that computational isomorphs have the same kind of conscious experience as I do. So it must be that Chalmers now finds the fading qualia scenario to be more intuitively bizarre than the dancing qualia scenario.

This, in turn, suggests that it is not such a high cost for those attracted to the biological view to bite this bullet. If this is right then Chalmers will have to back off of the claim that the computational isomorph's experience is just like mine, but he can fall back on the fading qualia argument and insist that the cost is too high to bite the bullet on that argument. If so then consciousness itself –that is, the property of there being something that it is like for the system– may still be an organizational invariant, and so a more modest form of computationalism may still be true.

It is striking that empirical results have such a dramatic effect on our intuitions about strangeness. What can seem intuitively bizarre from the armchair can turn out to be empirically verified. One might think that the mere fact that change blindness shows us how wrong we can be in our intuitive assessment of these kinds of thought experiments should give us pause in endorsing the strength of the fading qualia case. I think that this all by itself is a prima facie reason to doubt the fading qualia argument, but even if one resists this somewhat empirically jaded suspicion, there are further empirical results that should cause us to reassess the fading qualia argument as well.

III. Partial Report and Fading Qualia

In this section I will present a brief sketch of the partial report paradigm. The interpretation of its results is currently hotly debated in the literature (Kouider 2010, Block 2012, Lau & Rosenthal 2012, Brown 2012, Lau & Brown 2019). The argument I will present does not rely on any one specific interpretation turning out to be correct. As I will show, the argument relies on only the most general interpretation of the experimental results. In fact the interpretation is so general that all of the relevant parties agree on it.

In these experiments subjects are presented with an array of letters or objects arranged in some particular fashion (e.g. in a city-block grid, or in a circular clock-face arrangement). After a brief presentation subjects are asked to freely report the identities of the objects they saw. Subjects, overall, cannot report the identities of all of the objects or letters. However, if a subject is cued, by a tone say, to recall a specific row or item, they do very well. The debate has centered on whether subjects consciously experience all of the letters or objects or whether they instead represent them in some sparse or generic way. One side of the debate holds that consciousness is rich and that there is more in our conscious experience than we are able to report, while the other side holds that consciousness is sparse and that we experience less than we think we do (at least consciously). This debate, while interesting, does not concern us here.

All parties to this debate agree that there are these generic representations involved. Experimental results have shown that subjects can correctly report only 3-4 out of 12 objects, while further calculations suggest that at least 10.4 of the letters must be represented in sufficient detail to allow identification. This suggests that subjects may have some partial or degraded conscious representations of the remaining items, which would explain why they are not able to name those letters. Of course it may also be the case that they do consciously experience them but simply forget their identities. To test this, experimenters have replaced some of the letters in the display with things that are not letters (an upside-down 'r' or an '@' sign); subjects failed to report anything abnormal and, when asked, said that there were only letters present.

So while all parties agree that some of the objects are represented in a partial or indeterminate way, the question has been whether only the 3-4 items the subjects get right are represented in full detail, with all of the others being partial or degraded representations, or whether instead only a few of the representations are degraded, with the majority represented in full detail (consciously but not accessed). Even those who believe that phenomenal consciousness overflows access are committed to there being at least some degraded or partial representations in these situations. Yet the subjects believe that they see all of the letters or objects.

Thus we seem to have ended up showing that normal subjects can in fact have partially degraded experiences and yet be unaware of it. This gives us motivation to question the fading qualia argument. Now, it is true that the fading qualia scenario involves something much, much more radical than what is happening in the partial report cases. In the partial report case the subject has some partially degraded experience and thinks it is non-degraded; in the fading qualia case the subject thinks it is having an intense headache even though the actual intensity of the headache consists in just a few bits. But what these kinds of considerations suggest is that it is not such a drastic scenario after all.

Chalmers does consider one kind of empirical evidence in his original article: the case of someone who denies that they are blind. In that kind of case, he argues, it is plausible that the subject is rationally defective. They have no visual information and yet they believe that they do. Because of this non-standard way in which their experience and their beliefs are connected, such cases give us no reason to change our minds. However, the cases we have been considering here do not have this problem. Subjects in these paradigms are not suffering from any kind of irrationality. They are simply asked to look at a set of objects briefly presented on a screen and then to report what they have seen. It is true that the objects in question are presented relatively briefly and subjects do not get to look at the stimuli for an extended time, as we would in ordinary life.

To be sure, this is not as radical as the kind of case imagined in the fading qualia scenario, where one's conscious experience is severely degraded, but it does suggest that such a scenario is not as strange as one might have thought. It is basically a very severe case of what is going on in our everyday conscious lives, exactly parallel to the change blindness case for dancing qualia. The partial report paradigm is just one of several recent empirical findings that seem to vindicate fading qualia. For reasons of space I cannot go into other findings from so-called 'inattentional inflation', where subjects overestimate the visibility of stimuli outside of where they are attending.

IV. Conclusion

The fading and dancing qualia thought experiments were never offered as strict reductios of the biological view but were instead aimed at showing that it was unlikely to be true because it entailed some unlikely or implausible consequences about the relationship between our conscious experience and our knowledge of that conscious experience. However, as we have seen, we have good empirical reason to think that these results are a good deal less implausible than they appear to be from the armchair. Of course this doesn’t show that the biological view is true. Rather, it shows that there is not much cost in biting the bullet in both cases and so one can reasonably hold that it is true and admit that there might be cases of dancing and fading qualia. Science has shown us that the world is far stranger than any of us could have ever imagined. 

The Curious Case of my Interview/Discussion with Ruth Millikan

I started my YouTube interview/discussion series Consciousness Live! last summer and scheduled Ruth Millikan as the second guest. We tried to livestream our conversation on July 4th 2018 and spent hours trying to get Google Hangouts Live to work. When it didn't, I tried to record a video call and failed horribly (though I did record a summary of some of the main points as I remembered them).

Ruth agreed to do the interview again and so we tried to livestream it Friday June 6th 2019, almost a year after our first attempt (in the intervening year I had done many of these interviews with almost no problems). We couldn't get Google Hangouts to work (again!) but I had heard you could now record Skype calls, so we tried that. We got about 35 minutes in and the internet went out (I put the clips up here).

Amazingly, Ruth agreed to try again and so we met the morning of Monday June 10th. I had a fancy setup ready to go: our Skype call running through OBS Studio, which I was using to stream live to my YouTube channel. It worked for about half an hour and then something went screwy. After that I decided to just record the Skype call the way we had ended up doing the previous Friday. The call dropped three times but we kept going. Below is an edited version of the various calls we made on Monday June 10th.

Anyone who knows Ruth personally will not be surprised. She is well known for her generosity with her time and her love of philosophical discussion. My thanks to Ruth for such an enjoyable series of conversations, and I hope viewing them is almost as much fun!