Consciousness Live! Season 3

I am happy to announce the opening line-up for the new season of Consciousness Live! I originally intended to try to limit these to the summer, but then I realized I am just as busy then as now, so why not let people pick whatever time is best for them? There may be more guests to come, and I will announce timing info once the episodes are scheduled.

Sounds like a lot of fun!!

…And the Conscious State is…

Not too long ago Jake Berger and I presented a paper we are working on at the NYU philosophy of mind discussion session. There was a lot of very interesting discussion and there are a couple of themes I plan on writing about (if I ever get the chance; I am teaching four classes in our short six-week winter semester and it is a bit much).

One very interesting objection that came up, and was discussed in email afterwards, was whether HOT theory has the resources to say which first-order state is the conscious state. Ned Block raised this objection in the following way. Suppose I have two qualitative first-order states that are, say, slightly different shades of red. When these states are unconscious there is nothing that it is like for the subject to be in them (ex hypothesi). Now suppose I have an appropriate higher-order thought to the effect that I am seeing red (but not some particular shade of red). The content of the higher-order thought does not distinguish between the two first-order states, so there is no good reason to think that one of them is conscious and the other is not. Yet common sense seems to indicate that one of them could be conscious and the other non-conscious, so there is a problem for higher-order thought theory.

The basic idea behind the objection is that there could be two first-order states that are somewhat similar in some way, and there could be a fact of the matter about which of the two first-order states is conscious while there is a higher-order thought that does not distinguish between the two states. David’s views about intentional content tend toward descriptivism, and so he thinks that the way in which a higher-order thought refers to its target first-order state is via describing it. I tend to have more sympathy with causal/historical accounts of intentional content than David does (I even wrote about this back in 2007: Two Concepts of Transitive Consciousness), but I think that in this kind of case he holds that descriptivist considerations will answer Block’s challenge.

But stepping back from the descriptivism vs. causal theories of reference for a second, I think this objection helps to bring out the differences between the way in which David thinks about higher-order thought theory and the way that I tend to think about it.

David has presented the higher-order thought theory as a theory of conscious states. It is presented as giving an answer to the following question:

  • How can the very same first-order state occur consciously and also non-consciously?

The difference between these two cases is that when the state is conscious it is accompanied by a higher-order thought to the effect that one is currently in the state. Putting things this way makes Block’s challenge look pressing. We want to know which first-order state is conscious!

I tend to think of the higher-order thought theory as a theory of phenomenal consciousness. It makes the claim that phenomenal consciousness consists in having the appropriate higher-order thought. By phenomenal consciousness I mean that there is something that it is like for the organism in question. I want to distinguish phenomenal consciousness from state consciousness. A state is state-conscious when it is the target of an appropriate higher-order awareness. A state is phenomenally conscious when there is something that it is like for one to be in the state. A lot of confusion is caused because people use ‘conscious state’ for both of these notions. A state of which I am aware is naturally called a conscious state, but so too is a state which there is something that it is like to be in.

Block’s challenge thus has two different interpretations. On one he is asking how the higher-order awareness refers to its target state. That is, he wants to know which first-order state am I aware of in his case. On the other interpretation he is asking which first-order state is there something that it is like for the subject to be in. The way I understand Rosenthal’s view is that he wants to give the same answer to both questions. The target of the higher-order state is the one that is ‘picked out’ by the higher-order state. And what it is like for the subject to be in that target first-order state consists in there being the right kind of higher-order awareness. Having the appropriate higher-order state is all there is to there being something that it is like to be in the first-order state.

I tend to think that maybe we want to give different answers to these two challenges. Regardless of which first-order state is targeted by the higher-order awareness the state which there is something that it is like for the subject to be in is the higher-order state itself. This higher-order state makes one aware of being in a first-order state, and that is just what phenomenal consciousness is. Thus it will seem to you as though you are in a first-order state (it will seem to you as though you are seeing red when you consciously see red). For that reason I think it is natural to say that the higher-order state is itself phenomenally conscious (by which I mean it is the state which there is something that it is like to be in). I agree that we intuitively think it is the first-order states which are phenomenally conscious but I don’t think that carries much weight when we get sufficiently far into theorizing.

While I agree that it does sound strange to say that the first-order state is not phenomenally conscious I think this is somewhat mitigated by the fact that we can none the less say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is that the state is targeted by an appropriate higher-order awareness. The advantage to putting things in this way is that it makes it clear what the higher-order theory is a theory of and that the objection from Block is clearly assuming that first-order states must be phenomenally conscious.

2019 in Review

I had a busy year in 2019!

I taught three fewer classes than I usually do and so I only taught 13 classes in 2019 (Spring, Summer, Fall, Winter ‘semesters’). That is the same number as the year I was on parental leave.

I teach a lot of courses but I also get to teach a lot of different classes. Besides my typical philosophy courses (Intro to Phil, Phil Religion, Logic, Ethics) I get to teach a variety of science classes including Introduction to Neuroscience, General Psychology, and a capstone seminar that I usually do on consciousness and some aspect of science. In the Spring semester I am teaching Life in the Universe, which is really cool. In addition, this semester I got to co-teach a class on neuroscience and philosophy of consciousness at the Graduate Center with Tony Ro, which was awesome!

I like teaching but I feel like I would be better at it if I did less of it. And I do wish I could do more teaching at the graduate level.

I also had my paper with Hakwan Lau and Joe LeDoux come out in Trends in Cognitive Sciences. This journal works very quickly: we spent most of early 2019 working on this paper, which came out in September 2019! The whole experience of writing that paper was intense and a bit surreal.

On the other side of the spectrum, my paper with Hakwan Lau for the Ned Block volume was written a long time ago but just came out along with Block’s response (see my response here).

In addition I wrote some book reviews, gave a couple of talks, wrote a couple of blog posts, and of course completed my second season of Consciousness Live!

Looking back further, back in December 2009 I was a newly-married Assistant Professor and was organizing the very first jam session at the Parkside Lounge (after the American Philosophical Association meeting in Times Square December 28th 2009). (I also was just realizing that the way things had been presented to me at CUNY had slightly skewed my intuitions about philosophy overall.) (In addition I was also in the middle of organizing the second Online Consciousness Conference.) I recently found an old recording of one of the tunes we played. I don’t remember who was playing what, or when it was recorded (maybe sometime in 2008) but it is some combination of core NC/DC members. Next year, in December 2020, we will be coming up on the 10-year anniversary of the Qualia Fest; might the world be ready for another one?

10000 lies performed by NC/DC
The New York Consciousness Collective at the Parkside Lounge December 28th 2009

Since that time life has changed a lot! I don’t get to play as much music, but I have been promoted to Full Professor, celebrated my 10-year wedding anniversary, welcomed two sons, bought a house, and moved out of Brooklyn. It’s been an eventful decade both personally and professionally! I wonder what I’ll be writing in 2029?

Consciousness is (Probably) a Biological Phenomenon

In light of the very interesting interview with Dave Chalmers in the Opinionator I thought I would revisit some of my objections to the notion of artificial consciousness (AC). I am somewhat of a skeptic about artificial consciousness in a way that I am not about AGI (artificial general intelligence). My suspicion is that intelligence and consciousness are indeed dissociable and so we could end up with artificial systems that were intelligent in the sense of being able to solve problems, but which would not be conscious (nothing that it is like for them to solve problems or ‘think’ about them). In the past I have called the view I am interested in Biologism about consciousness but then I became aware of the problematic history of that word and I have not come up with something better…Biologicalism maybe? Or just go with Searle’s Biological Naturalism? Here I’ll just call it the Biological view. Below is a draft I wrote a while ago exploring the case against the biological view…I still think it is pretty good and so would appreciate any feedback. (by the way, just to be clear I am open to being convinced that the biological view is wrong, it may just be my last common sense prejudice about consciousness refusing to succumb to good arguments, if so what are these good arguments?)

Whatever you call it, the issue is whether consciousness is fundamentally a biological phenomenon or whether it is an ‘organizational invariant’ (Chalmers 1995; Chalmers 1996) or ‘substrate independent’ (Bostrom 2003). On one side of the debate are those who affirm that this is so and thus hold that a properly organized computer program could in fact be conscious in just the way that you or I ordinarily are. Because of this I will call this view computationalism about consciousness. On the other side of the debate are those who hold that consciousness is a distinctively biological phenomenon and that a maximally specific computer simulation of those biological processes would not result in consciousness.

The debate over computationalism is independent of the debate between the physicalist and non-physicalist. A non-physicalist may hold that a suitably programmed computer would come to instantiate non-physical qualia (as David Chalmers (1996) has suggested). All that would be required is that there be fundamental laws of nature that relate the computational states to non-physical qualia. A physicalist, meanwhile, may hold that the implemented program itself is conscious. Physicalists who endorse computationalism usually do so by endorsing some kind of (computational) functionalism, whereas physicalists who endorse the biological view usually do so by endorsing some kind of type identity theory.

Though there are those who defend the biological view (Searle 2004, Block 2009), the dominant view is computationalism. This is perhaps in part due to the popularity of functionalist views. However, computationalism itself is very rarely argued for. It is much more common to hear remarks to the effect that ‘neurons aren’t magic’. That is not an argument. Nor is it enough to cite arguments for functionalism. For instance, the argument from multiple realizability at most suggests that there may be other biological configurations that could be conscious. It is quite another thing to suggest that non-biological systems could be conscious.

Of course one might invoke the conceivability of non-biological creatures that are conscious. But doing so begs the question. It may be the case that when we imagine that there is a silicon system that is conscious, a Commander Data-type robot, we imagine something that is the equivalent of XYZ. Given that we know that water is essentially H2O, we know that the XYZ world is not a candidate to be our world. Our world could not have been the XYZ world. If the biological view is true then the same is true for the Commander Data world. It is conceivable, but it is not a way our world could have been. Given this, what we need is an independent argument for computationalism.

Perhaps the best (only?) argument for computationalism is David Chalmers’ Dancing Qualia and Fading Qualia arguments. Recently Chalmers (2010) has come to change his mind on the strength of the dancing qualia argument. His change of heart was motivated by empirical findings from cognitive neuroscience, and in particular change blindness. Here I will argue that we have further empirical findings that motivate a questioning of the strength of the fading qualia argument. What this shows is that one can reasonably think that the biological view is true. And absent any reasons to think that it isn’t true it should probably be the default view.  Before introducing the empirical results I first turn to looking at the original arguments and the reasons that Chalmers has given for his change of heart on their relative strength. In section three I introduce the empirical results from the partial report experiments. The upshot is that the original dancing and fading qualia arguments look less plausible and so the biological view of consciousness is more than likely true after all. 

II. Flip-Flopping on Dancing and Fading Qualia

These arguments both have roughly the same form. We start with a fully biological creature that is fully conscious. For vividness we can imagine that they are having an intense headache, a migraine say, while watching a movie and eating a box of sour patch kids. We then replace one of their neurons with a functionally identical computer chip. We can even imagine that we are able to do so in such a way that the subject is unaware that it is happening (the wonders of nanotechnology being what they are and all). We then imagine a series of replacements like this, with the second in the line having two neurons replaced, the third three neurons, etc., until we reach the other end where we have completely replaced the brain with computer chips. At each stage in the series the subject we end up with is functionally identical to me, or you. Given this there will be no way that the subject’s behavior can change in any way. As the neurons are being replaced the subject continues to complain of the headache, asking for the volume to be turned down and remarking that the taste of the sour patch kids helps to distract from the migraine, etc. If we assume that computationalism is false then in the fading qualia case we would have to imagine that as we replace the neurons with silicon our conscious experience fades as a light on a dimmer switch would, even though our behavior continues to be the same throughout. In the dancing qualia case we imagine that we have a switch that activates and deactivates the group of silicon chips that has replaced the neural circuitry that is responsible for a certain conscious experience. As we flip the switch our conscious experience is blinking in and out of existence yet, again, our behavior continues to be the same throughout.

So, if we assume that computationalism is false, then in each case we end up with subjects that are radically out of touch with their own conscious experiences. They say that they have an intense headache but that isn’t true. In one case the headache is dim and in the other it is blinking in and out of existence. In our world we typically do not find ourselves in this kind of position. It seems plausible that we are not radically out of touch with our own conscious experience. So it is much more plausible to think that, while these scenarios are possible in some sense, they do not describe the actual world with its actual laws. In other words, it is safe to assume that in our actual world the complete silicon brain would be conscious in just the way that the biological creature originally was. 

These arguments are not presented as strict reductios but are rather offered to show how strange the biological view is and the consequences of the view. If the biological view is true then we can have systems that are radically out of touch with their own conscious experience and, for all we know, it could be happening right now! Since this seems prima facie implausible we have a prima facie case against the biological view and for computationalism. 

Originally Chalmers argued that the dancing qualia argument was stronger than the fading qualia argument. But in what sense is the one argument supposed to be stronger than the other? In his original paper on this (Chalmers 1995) he seems to suggest that the dancing qualia argument is stronger because it has a stronger conclusion. The fading qualia argument, if successful, establishes only that the property of being conscious is an organizational invariant. So if it works it shows that there will be something that it is like for my silicon isomorph, but it does not show that our conscious experiences will be the same. For all the fading qualia argument shows, when I and my silicon isomorph are in the same computational state I may be consciously experiencing red while the isomorph experiences blue, or even some color that is completely alien to me. The dancing qualia argument, on the other hand, is supposed to suggest the stronger conclusion that the silicon isomorph is not only conscious but that their experience is exactly the same as my conscious experience (given that we are in computationally identical states).

In his 1996 book The Conscious Mind he indicates another sense of strength. In this second sense the dancing qualia argument is thought to lead to an intuitively more bizarre outcome and so is much harder to accept. The subject is having a very vivid conscious experience blink in and out of existence; how could they fail to notice that? He suggests that one might be able to bite the bullet on fading qualia but dancing qualia are just too strange to be real (Chalmers 1996 p 270).

Chalmers has since come to reverse this decision based on considerations of change blindness. Here is what he says in a footnote in his recent book The Character of Consciousness (Chalmers 2010), which I reproduce in its entirety,

I still find this [dancing qualia] hypothesis very odd, but I am now inclined to think it is something less than a reductio. Work on change blindness has gotten us used to the idea that large changes in consciousness can go unnoticed. Admittedly, those changes are made outside of attention, and unnoticed changes in the contents of attention would be much stranger, but it is perhaps not so strange as to be ruled out in all circumstances. Russellian monism…also provides a natural model in which such changes could occur. In The Conscious Mind I suggested that this “dancing qualia” argument was somewhat stronger than the “fading qualia” argument given there; I would now reverse that judgment (page 24 note 7)

In what sense, then, are we to take the reversal of strength that Chalmers indicates in the above footnote? It seems implausible that it should be the first notion. It can’t be the case that now the fading qualia argument establishes that computational isomorphs have the same kind of conscious experience as I do. So it must be the case that Chalmers now finds the fading qualia scenario to be more intuitively bizarre than the dancing qualia scenario. 

This, in turn, suggests that it is not such a high cost for those attracted to the biological view to bite this bullet. If this is right then Chalmers will have to back off of the claim that the computational isomorph’s experience is just like mine, but he can fall back on the fading qualia argument and insist that the cost is too high to bite the bullet on that argument. If so then consciousness itself (that is, the property of there being something that it is like for the system) may still be an organizational invariant and so a more modest form of computationalism may still be true.

It is striking that empirical results have such a dramatic effect on our intuitions about strangeness. What can seem intuitively bizarre from the armchair can turn out to be empirically verified. One might think that the mere fact that change blindness shows us how wrong we can be in our intuitive assessment of these kinds of thought experiments would give us pause in endorsing the strength of the fading qualia case. I think that this all by itself is a prima facie reason to doubt the fading qualia argument, but even if one resists this somewhat empirically jaded suspicion there are further empirical results that should cause us to re-assess the fading qualia argument as well.

III. Partial Report and Fading Qualia

In this section I will present a brief sketch of the partial report paradigm and the inattentional inflation results. The interpretation of the results is currently hotly debated in the literature (Kouider 2010, Block 2012, Lau & Rosenthal 2012, Brown 2012, Lau & Brown 2019). The argument I will present does not rely on any one specific interpretation turning out to be correct. As I will show, the argument relies on only the most general interpretation of the experimental results. In fact the interpretation is so general that all of the relevant parties agree on it.

In these experiments subjects are presented with an array of letters or objects arranged in some particular fashion (e.g. in a city-block grid, or in a clock-face circular arrangement). After a brief presentation subjects are asked to freely report the identity of the objects they saw. Subjects, overall, cannot report the identities of all of the objects or letters. However, if a subject is cued, by a tone say, to recall a specific row or item they do very well. The debate has centered on whether subjects consciously experience all of the letters or objects or whether they instead represent them in some sparse or generic way. One side of the debate holds that consciousness is rich and that there is more in our conscious experience than we are able to report, while the other side holds that consciousness is sparse and that we experience less than we think we do (at least consciously). This debate, while interesting, does not concern us here.

All parties to this debate agree that there are these generic representations involved. Experimental results have shown that in free report subjects can correctly report only 3-4 out of 12 objects, while calculations based on cued performance suggest that at least 10.4 of the letters must be represented in sufficient detail so as to allow identification. This suggests that subjects may have some partial or degraded conscious representations, as that would explain why they are not able to name the remaining letters. Of course it may also be the case that they do consciously experience them but just forget their identities. To test this, experimenters have replaced some of the letters in the display with things that are not letters (an upside-down ‘r’ or an ‘@’ sign), and subjects failed to report anything abnormal and, when asked, said that there were only letters present.
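The 10.4 figure can be reconstructed with simple arithmetic from cued performance. The numbers below are illustrative assumptions (the exact cued-report averages come from the experimental papers, not from this post): if the cued row is chosen at random, accuracy on it estimates the fraction of the whole display that was represented in identifiable detail.

```python
# Sperling-style partial-report estimate (illustrative numbers, not the
# actual data from the experiments discussed above).
items_total = 12        # letters in the whole display
row_size = 4            # letters per cued row
cued_correct = 3.47     # assumed average correct when a single row is cued

# Accuracy on a randomly cued row estimates the fraction of the whole
# display represented in enough detail to allow identification.
available = cued_correct / row_size * items_total
print(round(available, 1))  # → 10.4
```

The contrast with free report (only 3-4 of 12) is what drives the overflow debate: far more items appear to be available than subjects can freely access.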

So while all parties agree that some of the objects are represented in a partial or indeterminate way, the question has been whether only the 3-4 the subjects get right are represented in full detail (conscious but not accessed), with all of the others being partial or degraded representations, or whether instead it is only a few of the representations that are degraded, with the majority being represented in full detail. Even those who believe that phenomenal consciousness overflows access are committed to there being at least some degraded or partial representations in these situations. But these subjects believe that they see all of the letters or objects.

Thus we seem to have ended up showing that normal subjects can in fact have partially degraded experiences and yet be unaware of it. This provides us with motivation to question the fading qualia argument. Now, it is true that the fading qualia case is much, much more radical than what is happening in the partial report cases. In the partial report case the subject has some partially degraded experience and thinks it is non-degraded. In the fading qualia case the subject thinks it is having an intense headache even though the actual intensity of the headache has faded to just a few bits. But what these kinds of considerations suggest is that it is not such a drastic scenario after all.

Chalmers does consider one kind of empirical evidence in his original article. He considers the case of someone who denies that they are blind. In that kind of case, he argues, it is plausible that the subject is rationally defective. They have no visual information and yet they believe that they do. Because of this non-standard way that their experience and their beliefs are connected, this case doesn’t give us any reason to change our minds. However, in the case that we have been considering here we do not have this problem. Subjects are not suffering from any kind of irrationality in these paradigms. They are simply asked to look at a bunch of objects that are briefly presented on the screen and then to report on what they had seen. It is true that the objects in question are presented relatively briefly and subjects do not get to look at the stimuli for an extended time, as we would in ordinary life.

To be sure, this is not as radical as the kind of case imagined in the fading qualia scenario, where one’s conscious experience is severely degraded, but it does suggest that the scenario is not as strange as one might have thought. It is basically a very severe case of what is going on in our everyday conscious lives, exactly parallel to the change blindness case for dancing qualia. The partial report paradigm is just one of several new empirical findings that seem to vindicate fading qualia. For reasons of space I cannot go into other findings from so-called ‘inattentional inflation’, where subjects overestimate the visibility of stimuli outside of where they are attending.

IV. Conclusion

The fading and dancing qualia thought experiments were never offered as strict reductios of the biological view but were instead aimed at showing that it was unlikely to be true because it entailed some unlikely or implausible consequences about the relationship between our conscious experience and our knowledge of that conscious experience. However, as we have seen, we have good empirical reason to think that these results are a good deal less implausible than they appear to be from the armchair. Of course this doesn’t show that the biological view is true. Rather, it shows that there is not much cost in biting the bullet in both cases and so one can reasonably hold that it is true and admit that there might be cases of dancing and fading qualia. Science has shown us that the world is far stranger than any of us could have ever imagined. 

The Curious Case of my Interview/Discussion with Ruth Millikan

I started my YouTube interview/discussion series Consciousness Live! last summer and scheduled Ruth Millikan as the second guest. We tried to livestream our conversation July 4th 2018 and we spent hours trying to get the Google Hangouts Live to work. When it didn’t I tried to record a video call and failed horribly (though I did record a summary of some of the main points as I remembered them).

Ruth agreed to do the interview again and so we tried to livestream it Friday June 6th 2019, almost a year after our first attempt (and since then I had done many of these with almost no problems). We couldn’t get Google Hangouts to work (again!) but I had heard you could now record Skype calls so we tried that. We got about 35 minutes in and the internet went out.

Amazingly Ruth agreed to try again and so we met the morning of Monday June 10th. I had a fancy setup ready to go. I had our Skype call running through OBS (Open Broadcaster Software) and was using that to stream live to my YouTube Channel. It worked for about half an hour and then something went screwy. After that I decided to just record the Skype call the way we had ended up doing the previous Friday. The call dropped three times but we kept going. Below is an edited version of the various calls we made on Monday June 10th.

Anyone who knows Ruth personally will not be surprised. She is well known for being generous with her time and her love of philosophical discussion. My thanks to Ruth for such an enjoyable series of conversations and I hope viewing it is almost as much fun!

[Unfortunately I accidentally deleted the video of our discussion; audio available here:]

12 years!

I just realized that I recently passed the 12-year mark of blogging here at Philosophy Sucks! The top-five most-viewed posts haven’t changed all that much from my 10-year reflections. Philosophy blogging isn’t what it used to be (which is both good and bad, I would say) but this blog continues to be what it always has been: a great way for me to work out ideas, jot down notes, and get excellent feedback really quickly (that isn’t Facebook). Thanks to everyone who has contributed over these 12 years!

The five most viewed posts written since the ten year anniversary are below. 

5. Prefrontal Cortex, Consciousness, and….the Central Sulcus?

4. Do we live in a Westworld World?

3. Consciousness and Category Theory

2. Integrated Information Theory is not a Theory of Consciousness

1. My issues with Dan Dennett


Theories of Perception and Higher-Order Theories of Consciousness: An Analogy

I recently came across a draft of a post that I thought I had actually posted a while ago…on re-reading it I don’t think I entirely agree with the way I put things back then, but I still kind of like it.


When one looks at philosophical theories of perception one can see three broad classes of theoretical approaches. These are sometimes known as ‘relationalism’ and ‘representationalism’ (and ‘disjunctivism’). According to relationalism (sometimes known as naive realism) perception is a relation between the perceiver and the object they perceive. So when I see a red apple, on this view, there are the apple and its redness, and I come to be related to them in the right way, and that counts as perceiving. Often a ‘window’ analogy is invoked. Perception is like a window through which we can look out into the world and in so doing come to be acquainted with the ways that the objects in the world are. Representationalism, on the other hand, holds that perception involves, well, representing the world to be some way or other, and this may diverge from the way the world is outside of perception.

I think a similar kind of debate has been occurring within the differing camps of higher-order theories of consciousness. In this debate the first-order state, which represents properties, objects, and events in the physical environment of the animal, takes the place of the physical object in the debates about perception. If one takes that perspective then one can see that we have versions of relationalism and representationalism in higher-order theories. Relationalists take the first-order state, and its properties, to be revealed in the act of becoming aware of it. Representationalists think that we represent the object as having various properties and that the experiences we have when we dream or hallucinate are literally the same ones we are aware of in ordinary experience. This is the famous argument from hallucination.

I think that the misrepresentation argument against higher-order theories of consciousness is actually akin to the argument from hallucination, and shows roughly the same thing, viz. that the relationalist version of higher-order theory is not in a position to explain what is in common between “veridical” higher-order states and empty higher-order states. As long as one accepts that these cases are phenomenologically the same, and some versions of higher-order theory commit you to that claim, then it seems to me that you must say that we are aware of the same thing in each case. In the perception debate representationalists tend to say that what we are aware of in each case are properties. So take my experience of a red ripe tomato and my “perfect” hallucination as of a red ripe tomato. In one case I am aware of an actual object, the tomato, and in the other case I am not aware of any object (it is a hallucination). But in both cases I am aware of the redness of the tomato, its roundness, etc.; in the good case these properties are instantiated in the tomato and in the bad case they are uninstantiated, but they are there in both cases. The representationalist can thus explain why the two cases are phenomenologically the same: in each case we represent the same properties as being present.

I think the representational version of higher-order theories of consciousness has to similarly commit to what is in common between veridical higher-order states and empty ones which are nonetheless phenomenologically indistinguishable. In one case we are aware of a first-order mental state (the one the higher-order state is about) and in the other case we are not (the state we represent ourselves as being in is one we are not actually in, thus the higher-order state is empty). So it must be the properties of the mental states that we are aware of in both cases. So if I am consciously seeing a red ripe tomato then I am in a first-order state which represents the tomato’s redness and roundness, etc., and I am representing that these properties are present and that there is a tomato present (this state can occur unconsciously, but we are considering its conscious occurrence). To consciously experience the redness of the tomato I need to have a higher-order state representing me as seeing a tomato. And what this means is that I have a higher-order state representing myself as being in a first-order visual state with such-and-such properties. The ‘such-and-such properties’ bit is filled in by one’s theory of what kinds of properties first-order mental states employ to represent properties in the environment. Suppose that, like Rosenthal, one thinks they do so by having a kind of qualitative (i.e. non-conceptual, non-intentional) property that represents these properties. Rosenthal posits ‘mental red’ as the way in which we represent the physical property objects have when they are red. He calls this red* and says that red* represents physical red in a distinctive non-conceptual, non-intentional way.

This is not a necessary feature of higher-order theories, but it gives us a way to talk about the issues in a definite way. The upshot of this discussion is that it is these properties which are common between veridical and hallucinatory higher-order states. When one has a conscious experience of seeing a red ripe tomato but there is no first-order visual representation of the tomato or its redness, one represents oneself as being in first-order states which represent the redness and roundness of the tomato; one is aware of the same properties one would be aware of in the veridical case, but these properties are uninstantiated.


Block’s Response to Lau and Brown on Inattentional Inflation

Ned was nice enough to point out that the proofs of his response to us are available online. I want to thank him for his engagement, but there is a lot I don’t agree with. I want to say something about each section, but first I wanted to address his claim that the argument from Inattentional Inflation is question begging. He is wrong about that.

He says,

Apparently, their argument is this:

  1. The first-order states were about the same in strength as evidenced by the equal performance on discriminating the gratings;
  2. But as reflected in the differing visibility judgments, the unattended case was higher in consciousness;
  3. To explain the higher degree of consciousness in the unattended case we cannot appeal to a first-order difference since there is no such difference (see premise 1). So the only available explanation has to appeal to the higher-order difference in judgments of visibility.

He then argues that the only reason we would have for accepting premise 2 of the above argument is a prior commitment to the higher-order thought theory, which is clearly question begging.

First I would object to the characterization of our argument. Premise 2 should not say that one case was higher in consciousness but rather that there were phenomenological differences between the two cases. If there is a difference in what it is like for someone when we have reason to think that there is no difference in their first-order states, then we have reason to think that phenomenology is not fully determined by first-order activity. Block seems very confused by this but isn’t there an obvious difference between clearly seeing something presented to you and just catching a quick glimpse of something or other presented near threshold?

I think that ultimately his argument in his reply to Inattentional Inflation (II) is that since we have two models that both predict the same pattern of results we cannot use the pattern of results as evidence for one model over the other. The two models are

  • (A) a first-order view on which equal task performance indicates no difference in conscious experience, and a difference in report indicates cognitive effects without necessarily affecting phenomenology.
  • (B) a higher-order view on which equal task performance does not indicate that conscious experience is the same, and a difference in report indicates an effect on phenomenology.

The question then comes down to which of these two models we should prefer.

In giving our answer to this Block edited a quote from us without indicating that in the text. We say “if a combined increase in the frequency of saying “yes I see the target” and higher visibility ratings is not good evidence that phenomenology has changed, what else can count?” and he quotes us as just saying if “higher visibility rating is not good evidence…” totally ignoring that we explicitly said it is the combination of both that we are relying on. This is misleading!

It is both of these together that lead us to think that there really is a difference between the two cases, and hence that (B) is the right interpretation. Subjects say they see the target more often and also rate it as more visible even though they are not doing a better job of detecting the stimulus. This has nothing to do with the fact that we are willing to defend a higher-order approach to consciousness.

It is too bad that Lau et al. do not collect anecdotes from participants, but I think just from our ordinary everyday experience we have some cases of inattentional inflation. Sometimes as I am sitting at my computer writing something I will think that I saw the little red icon in the right corner of the screen that alerts me to an email in my inbox. Sometimes I will check and it will indeed be there. Other times I check and there is no red marker. But it sure did seem like there was one there just before I looked! The idea is that something like this is going on in the experimental conditions. I predict that, if asked, subjects would be surprised to find out that (some of) their false alarms were indeed false.

Block goes on to attribute to me “in conversation” the claim that training and reward did not influence the results. It is funny because we say it in the paper! But I did emphasize this at the pub after LeDoux and I gave a talk at the NYU philosophy of mind discussion group. Anyway, in response to that Block says that it would nullify the findings of the original paper that this is an effect of judgment. But that is silly: our claim was that since there is reason to think there is a difference in phenomenology, and the relevant psychological/neurological difference is a difference in higher-order representation, there is reason to think that the higher-order state explains the difference in phenomenology.

Overall, then, I think it is really unfair to say that this argument is question begging. It does depend on there being an actual phenomenal difference when task performance is the same, but we think we have good reasons to believe that which are independent of the higher-order view.

Consciousness Science & The Emperor’s Arrival

Things have been hectic around here because I have been teaching 4 classes (4 preps) in our short 6-week winter session. It is almost over, just in time for our Spring semester to start! Even so February has been nice with a couple of publications coming out.

The first is Opportunities and Challenges for a Maturing Science of Consciousness. I was very happy to see this piece come out in Nature Human Behaviour. Matthias Michel, Steve Fleming, and Hakwan Lau did a great job of co-ordinating the 50+ co-authors (open-access viewable pdf here). As someone who was around as an undergraduate towards the beginning of the current enthusiasm for the science of consciousness, it was quite an honor to be included in this project!

In addition to that Blockheads! Essays on Ned Block’s Philosophy of Mind and Consciousness is out! This book has a lot of interesting papers (and replies from Ned) and I am really looking forward to reading it.



Hakwan Lau and I wrote our contribution back in 2011–2012, and a lot has happened in the seven years since then! Of course I had to read Ned’s response to our paper first, and I will have a lot to say in response (we actually have some things to say about it in our new paper together with Joe LeDoux), but for now I am just happy it is out!

Gennaro on Higher-Order Theories

I was asked to review the Bloomsbury Companion to the Philosophy of Consciousness and had some things to say about the chapter on higher-order theories of consciousness by Rocco Gennaro that I could not fit into a paragraph or two so I am extending them here.

In the fourth paper of this second section Rocco Gennaro gives us his interpretation of “Higher-Order Theories of Consciousness”. Higher-order theories of consciousness claim that consciousness as we ordinarily experience it requires a kind of inner awareness, an awareness of our own mental life. To consciously experience the red of a tomato is to be aware of oneself as seeing a red object. Gennaro offers a survey of the traditional higher-order accounts, but anyone reading this chapter who is new to the area would get a very biased account of the lay of the land. Specifically, there are three things that are misleading about Gennaro’s overview. The first is how he presents the theory itself. The second is how he responds to the classic misrepresentation objection to higher-order thought theories of consciousness. And the third is how he presents the case for whether or not the prefrontal cortex is a possible neural realizer of the relevant higher-order thoughts.

Gennaro interprets the higher-order theory as what I have called the ‘relational view’. As he says on page 156,

Conscious mental states arise when two unconscious mental states are related in a certain specific way, namely that one of them (the [higher-order representation]) is directed at the other ([mental state]).

This makes it clear that on his way of doing things it is necessary that there be two states, with one directed at the other, and that these two states together ‘give rise’ to a (phenomenally) conscious mental state. Rosenthal and those who follow him interpret the higher-order thought theory as what I have called the ‘non-relational view’. On the non-relational view consciousness consists in having the relevant higher-order state. There is some discussion of this distinction in Pete Mandik’s chapter at the end of the book (under the heading of ‘cognitive approaches to phenomenal consciousness’), but if one just read Gennaro’s chapter on higher-order theory one would be misled about the other approach.

This comes out clearly in Gennaro’s discussion of the ‘mismatch’ objection. A familiar objection to higher-order theories is that they allow the possibility of differing contents in higher-order and lower-order states. If one sees a red object but has a higher-order thought of the right kind that represents that one as seeing a green object, what is it like for the subject? The non-relational view answers that it is like seeing green even though one will behave as though one is seeing red. Gennaro disagrees and says that there must be a partial or complete match between the concepts in the HOT and the first-order state (or the concepts in the higher-order state must be more fine-grained than in the lower-order state or vice versa) or there is no conscious experience at all. He considers cases like associative agnosia, where someone can see a whistle and consciously see the silver color of it and its shape, can draw it really well, etc, but doesn’t know that it is a whistle. They just can’t identify what it is based on how it looks (though they can identify a whistle by its sound). Gennaro holds that the right way to interpret this is that the subject has a higher-order thought that represents the first-order representation of the whistle incompletely. It represents that one is seeing a silver object that has such and such a shape. But it does not represent that one is seeing a whistle (p 156). He argues that in a case of associative agnosia there is a partial match between the HO and FO state and that results in a conscious experience that lacks meaning.

First, it is strange to be talking in terms of ‘matching’ between contents. What determines whether there is a match? Gennaro talks of the ‘faculty of the understanding’ ‘operating on the data of the sensibility’ by ‘applying higher-order thoughts’, and of the higher-order state ‘registering’ the content of the first-order state, but it is not clear what these things really mean. Second, he makes the assumption that one consciously experiences the whistle as a whistle, or that high-level concepts figure in the phenomenology of a subject. This is a controversial claim, and even if it is true (or one thinks that it is) one should recognize that this is not a required part of the higher-order view. On the way Rosenthal has set the theory up one has higher-order thoughts of the appropriate kind about sensory qualities and their relations to each other, but one does not have concepts like ‘whistle’ in the consciousness-making higher-order thoughts. One will then come to judge/make an inference that one is seeing a whistle, which will result in a belief that one is seeing that whistle, but this belief will be a first-order belief (that is, a belief which is not about something mental; in this case it is about the whistle).

Gennaro says that these kinds of cases support the claim that there must be some kind of match between first-order and higher-order states, but it is not clear that they really do. What he has argued for is the claim that the content of the higher-order state determines what it is like for the subject. What reason do we have to think that the match between first-order and higher-order state is playing a role? In other words, what reason do we have to think that the same would not be the case when the first-order state represented red and the higher-order state represented one as seeing green, as the non-relational view holds?

His sole criticism of the non-relational view comes when he says,

but the problem with this view is that somehow the [higher-order thought] alone is what matters. Doesn’t this defeat the purpose of [higher-order thought] theory which is supposed to explain state consciousness in terms of a relation between two states? Moreover, according to the theory the [lower-order] state is supposed to be conscious when one has an unconscious HOT (p 155; italics in the original).

This is a really bad objection to the non-relational version of the higher-order thought theory. The first part merely asserts that there is no non-relational version of the higher-order thought theory. The second part is something that Rosenthal accepts. The lower-order state is conscious when one has an appropriate higher-order state because that is what that property consists in. What it is for a first-order state to have the property of being conscious, for Rosenthal, is for one to have an appropriate higher-order thought which attributes that first-order state to oneself.

In addition, Gennaro goes on to criticize the recent speculation by higher-order theorists that the prefrontal cortex is crucially involved in producing conscious experience. It is of course an open empirical question whether the prefrontal cortex is required for conscious experience and, if so, whether it is because it instantiates the relevant kind of higher-order awareness. However, Gennaro’s arguments are extremely weak and do nothing to cast doubt on this empirical hypothesis. He first appeals to work by Rafi Malach suggesting that there is decreased PFC activity when subjects are absorbed in watching a film. However, he does not note that Rosenthal and Lau responded to this. He then appeals to the fact that PFC activation is seen only when a report is required. This has also been recently addressed (by Lau). Finally, he appeals to lesion studies suggesting that there is no change in conscious experience when the PFC is lesioned. However, there is considerable controversy over the correct interpretation of these results, and Gennaro merely appeals to second- and third-hand literature reviews (see the recent debate in the Journal of Neuroscience between Lau and colleagues and Koch and colleagues).