Gottlieb and D’Aloisio-Montilla on Brown on Phenomenological Overflow

Last year I started to try to take note of papers that engage with my work in some way (previous posts here, here, here, here, here, here, and here). The hope was to get some thoughts down as a reference point for future paper writing. So far not much in that department has been happening; with a 3 year old and a 1 month old it is tough to find time to write (understatement!) but I am hoping I can “normalize” my schedule in the next few weeks and try to get some projects off of the back burner. At any rate I have belatedly noticed a couple of papers that came out and thought I would quickly jot down some notes.

The first paper is one by Joseph Gottlieb and came out in Philosophical Studies in October of 2017. It is called The Collapse Argument and argues that all of the currently available mentalistic first-order theories of consciousness turn out to really be versions of the higher-order theory of consciousness. I don’t know Joseph IRL (haha) but we have emailed about his papers several times, though I usually get back to him too late for it to matter on account of the 16 classes a year I have been teaching since 2015 (for anyone who cares: I am contractually obligated to teach 9 a year and in addition I teach another 7 as an adjunct (the maximum allowed by my contract)…sadly this is what is required in order for my family to live in New York!). I have blogged about his work here before (linked to above) but I really, really like this paper of his. First, I obviously agree with his conclusion and it is nice to see some discussion of this issue. I took some loose steps in this direction myself in the talk I gave at the Graduate Center’s Cognitive Science Speaker Series back in 2015. I thought about writing it up but then had my first son and then found out about Joseph’s paper, which is better than what I could have come up with anyway! I suppose the only place we might disagree is that I think this applies to Block’s first-order theory as well.

But even though I really like the paper there is a bit I would quibble about (but not very much). Gottlieb seems to take seriously my argument that higher-order theories are in principle compatible with phenomenological overflow but I am not sure I agree with how he puts it. He says,

As Richard Brown (2014) points out, HO theorists don’t need to claim that we are aware of our conscious states in all their respects. I might be aware that I am seeing letters (a fairly generic property) but not the identity of every letter I am seeing. In other words, I can be unaware of some of the information represented by the first-order state without the state itself being unconscious (ibid). What happens, then, is: I am phenomenally conscious of the entire 3 X 4 array, with representations of the identities of all the letters available prior to cuing. But only a small number (usually around four) ever get through, accessed by working memory. That’s overflow, and perfectly consistent with HO theory.

In the paper he is citing I was trying to make the point that the higher-order theories which deny overflow do not thereby also commit themselves to the existence of unconscious *states* which are doing the heavy lifting. If the states are targeted by the appropriate higher-order representation then those states are conscious. Yet one may not represent all of the properties of the state and so, even though the state is conscious, there is information encoded in the state which you are not aware of (and which is therefore unconscious). That unconscious information (that is to say, that aspect of the conscious state) is (presumably) what you come to be aware of when you get the cue in the relevant experiments. So it is a bit strange to see this part of the paper cited as supporting overflow (though I do think the position is compatible with overflow, I wasn’t thinking of it in this way). But I think I see his point. On the higher-order view it will be true to say that one has a phenomenally conscious experience of all of the letters and the details but only accesses a few (even though what it is like for one may not have all of the details, which is really what I think the overflow people mean to be saying).

This point, though, is I think the key difference between higher-order theories and Global Workspace theories (which is what Block is really targeting with his argument). The basic idea behind the higher-order approach is this. When one is presented with the stimulus all or most of the details of the stimulus are encoded in first-order visual states (that is, states which represent the details of the visual scene). Let’s call the sum-total representational state S. S represents all (or most) of the letters and their specific identities. One can have S without being aware that one is in S. In this case S is unconscious. Now suppose that one comes to have a (suitable) higher-order awareness that one is in S. According to the higher-order theory of consciousness one thereby comes to have a phenomenally conscious experience of S and becomes consciously aware of what S represents. But since one’s higher-order awareness is (on the theory) a cognitive thought-like state, it will describe its target. Thus one can be aware of S in different ways. Suppose that one is aware of S merely as a clock-like formation of rectangles. Then what it is like for one will be like seeing a clock-like formation of rectangles. Being aware of S seems to keep S online, and as one is cued one may come to have a different higher-order awareness of S. One may become aware of some of the details already encoded in S. One was already aware of them, in a generic way, but now one comes to be aware of the same details but just in more detail. Put more in terms of the higher-order theory, one’s higher-order thought(s) come to have a different content than they previously did. The first higher-order state represented you as merely seeing a bunch of rectangles and now you have a state that represents you as seeing a bunch of rectangles where the five-o’clock position is occupied by a horizontal bar (or whatever).
Notice that in this way of thinking about the case there are no unconscious states (except the higher-order ones). S is conscious throughout (just in different respects) and it will be true that subjects consciously see all of the letters (just not all of the details).

I want to keep this in mind as I turn to the second paper, but before I do I should mention another reason I like Gottlieb’s paper: it actually references this blog! I think this may be the first time my personal blog has been cited in a philosophy journal! I will have more to say about that at some point but for now: cool!

The second paper is by Nicholas D’Aloisio-Montilla and came out in Ratio in December 2017. It is called A Brief Argument for Consciousness without Access. This paper is very interesting and I am glad I became aware of it and D’Aloisio-Montilla’s work in general. He is trying to develop a case for phenomenological overflow based on empirical work on aphantasics. These are people who report lacking the ability to form mental imagery. I have to admit that I think of myself this way (with the exception of auditory imagery) so I find this very interesting. But at any rate the basic point seems to be that there is no correlation between one’s ability to form mental imagery (as measured in various ways) and one’s ability to perform the Sperling-like tasks under discussion in the overflow debate. His basic argument is that if you deny phenomenological overflow then you must think that unconscious representations are the basis of subjects’ abilities. Further, if that is the case then it must be because subjects form a (delayed) mental image of the original (unconscious) representation. But there is evidence that subjects don’t form mental images and so evidence that we should not deny overflow.

I disagree with the conclusion but it is nice to see this very interesting argument and I hope it gets some attention. Even so, I think there is some mis-characterization of my view related to what we have just been talking about in Gottlieb’s paper. D’Aloisio-Montilla begins by setting the problem up in the following way,

The reports of subjects [in Sperling-like tasks] imply that their phenomenology (i.e. conscious experience) of the grid is rich enough to include the identities of letters that are not reported (Block, 2011, p.1; Landman et al., 2003; cf. Phillips, 2011b). As Sperling (1960, p.1) notes, they ‘enigmatically insist that they have seen more than they can … report afterwards’. Introspection therefore suggests that subjects consciously perceive almost all 12 items of the grid, even if they are limited to accessing the contents of just one row (Block 2011; Carruthers, 2015). The ‘overflow’ argument uses this phenomenon as evidence in favor of the claim that the capacity of consciousness outstrips that of access. Overflow theorists maintain that almost all items of the grid are consciously represented by perceptual and iconic representations (D’Aloisio-Montilla, 2017; Block, 1995, 2007, 2011, 2014; Bronfman et al., 2014; for further discussion, see Burge, 2007; Dretske, 2006; Tye, 2006).

This is a nice statement of the overflow argument and the claim that it is the specific identities of the items of the grid which are consciously experienced, but this way of framing the argument begs the question against the higher-order interpretation. The reports in question do not imply rich phenomenology because, as we have just discussed, subjects are correct that they have consciously seen all of the letters even if they are wrong that they consciously experienced the details. Because of this the higher-order no-overflow theorist can accept that there is no correlation between mental imagery ability and Sperling-like task performance, and for pretty much the same reasons that the first-order theorist does: because there is a persisting conscious experience.

D’Aloisio-Montilla then goes on to give two objections to his interpretation of my account. He puts it this way,

A final way out for the no-overflow theorist might be to allow for a limited phenomenology of the cued item to occur without visual imagery (Brown, 2012, 2014; Carruthers, 2015). Brown (2012, p. 3) suggests that subjects can form a ‘generic’ experience of the memory array’s items while the array is visible, since attention can be thinly distributed to bring fragments of almost all items to both phenomenal and access consciousness. Phenomenology, for example, might include the fact that ‘there is a formation of rectangles in front of me’ without specifying the orientation of each rectangle (Block, 2014). However, there are still a number of problems with an appeal to generic phenomenology. First, subjects report no shift in the precision of their conscious experience when they are cued to a subset of items that they subsequently access (Block, 2007; Block, 2011).

First, I would point out that my goal has always been to show that the higher-order theory of consciousness is both a.) compatible with the existence of overflow but also b.) compatible with no-overflow views and gives a different account of this than Global Workspace Theories (or other working memory-based views). So I am not necessarily a ‘no-overflow theorist’ though I am someone who thinks that i.) overflow has not been established but assumed to exist and ii.) even if there is overflow it is mostly an argument against a particular version of the Global Workspace theory of consciousness, not generally against cognitive theories of consciousness.

But ok, what about his actual argument? I hope it is clear from what we have said above that one would not expect subjects to report ‘a shift in precision’ of their phenomenology. One has a conscious experience (generic or vague in certain respects) but in so doing one helps to maintain the first-order (detailed) state. When you get the cue you focus on the aspect of the state which you had only generically been aware of (by coming to have a higher-order awareness with a different content), but what it is like for you is just like feeling like you see all of the details and then focusing in on some of the details. No change in precision. But even so, these appeals to subjects’ reports are all a bit suspect. I use the Sperling stimulus in my classes every semester as a demo of iconic memory and an illustration of how philosophical issues connect to empirical ones, and my students seem to be mixed on whether they think they “see all of the letters”. Granted, we only do 10-20 trials in the classroom and not in the lab (in Sperling they did thousands of trials) and these are super informal reports made orally in the classroom…but I still think there is an issue here. I have long wanted there to be some experimental philosophy done on this question. It would be nice to see someone replicate Sperling’s results but also include some qualitative comments from subjects about their experience. I almost tried to get this going with Wesley Buckwalter years ago but it didn’t go through. I still think someone should do this and that the results would be useful in this debate.

D’Aloisio-Montilla goes on to say,

Second, subjects are still capable of generating a ‘specific’ image – that is, a visual image with specific content – when the cue is presented. Assuming that the cued item is generically conscious on the cue’s onset, imagery would necessarily be implicated in maintaining any persisting consciousness of the cued item (whether gist-like or specific) throughout the blank interval. Thus, we can still expect to see a correlation between imagery abilities and task performance, because subjects can generate either (1) a visual image with specific phenomenology, or (2) a visual image with generic phenomenology (Phillips, 2011a; Brown, 2014). In any case, subjects who generate a specific phenomenology of the cued item should perform better than those who rely solely on a gist-like experience, and so Brown’s interpretation is also called into question.

But again this seems to miss the point of the kind of no-overflow account the higher-order thought theory of consciousness delivers. It is not committed to mental imagery as a solution. Subjects have a persisting conscious experience which may be less detailed than it seems to them to be.

Sheesh, that is a lot and I am sure there is a lot more to say about it but nap time is over and I have to go and play Dinosaur now.

Papa don’t Teach (again!)


The Brown Boys

2018 is off to an eventful start in the Brown household. My wife and I have just welcomed our newborn son Caden (pictured with older brother Ryland and myself to the right) and I will soon be going on Parental Leave until the end of April. For various reasons I had to finish the last two weeks of the short Winter semester after Caden was born (difficult!). That is all wrapped up now and there is just one thing left to do before officially clocking out.

Today I will be co-teaching a class with Joseph LeDoux at NYU. Joe is teaching a course on The Emotional Brain and he asked me to come in to discuss issues related to our recent paper. I initially recorded the below presentation to get a feel for how long it was (I went a bit overboard I think) but I figured once it was done I would post it. The animations didn’t work out (I used powerpoint instead of Keynote), I lost some of the pictures, and I was heavily rushed and sleep-deprived (plus I seem to be talking very slowly when I listen back to it) but at any rate any feedback is appreciated. Since this was to be presented to a neuroscience class I tried to emphasize some of the points made recently by Hakwan Lau at his blog.

Ian Phillips on Simple Seeing

A couple of weeks ago I attended Ian Phillips’ CogSci talk at CUNY. Things have been hectic but I wanted to get down a couple of notes before I forget.

He began by reviewing change blindness and inattentional blindness. In both of these phenomena subjects sometimes fail to recognize (or report) changes that occur right in front of their faces. These cases can be interpreted in two distinct ways. On one interpretation one is conscious only of what one is able to report on, or attend to. So if there is a doorway in the background that is flickering in and out of existence as one searches the two pictures looking for a difference, and when asked one says that one sees no difference between the two pictures, then one does not consciously experience the doorway or its absence. This is often dubbed the ‘sparse’ view and it is interpreted as the claim that conscious perception contains a lot less detail than we naively assume.

Fred Dretske was a well known defender of a view which distinguishes two components of seeing. There is what he called ‘epistemic seeing’ which, when a subject sees that p, “ascribes visually based knowledge (and so a belief) to [the subject]”. This was opposed to ‘simple seeing’ which “requires no knowledge or belief about the object seen” (all quoted material is from Phillips’ handout). This ‘simple seeing’ is phenomenally conscious but the subject fails to know that they have that conscious experience.

This debate is well known and has been around for a while. In the form I am familiar with, it is a debate between first-order and higher-order theories of consciousness. If one is able to have a phenomenally conscious experience in the absence of any kind of belief about that state then the higher-order thought theory, on which consciousness requires a kind of higher-order cognitive state about the first-order state for conscious perception to occur, is false. The response developed by Rosenthal, and that I find pretty plausible, is that in change blindness cases the subject may be consciously experiencing the changing element but not conceptualize it as the thing which is changing. This, to me, is just a higher-order version of the kinds of claims that Dretske is making, which is to say that this is not a ‘sparse’ view. Conscious perception can be as rich and detailed as one likes and this does not require ‘simple seeing’. Of course, the higher-order view is also compatible with the claim that conscious experience is sparse but that is another story.

At any rate, Phillips was not concerned with this debate. He was more concerned with the arguments that Dretske gave for simple seeing. He went through three of Dretske’s arguments and argued that each one had an easy rejoinder from the sparse camp (or the higher-order camp). The first he called ‘conditions’ and involved the claim that when someone looks at (say) a picture for 3-5 minutes, scanning every detail to see if there is any difference between the two pictures, we would ordinarily say that they have seen everything in the two pictures. I mean, they were looking right at it and their eyes are not defective! The problem with this line of argument is that it does not rule out the claim that they unconsciously saw the objects in question. The next argument, from blocking, meets the same objection. Dretske claims that if you are looking for your friend and no-one is standing in front of them blocking them from your sight, then we can say that you did see your friend even if you deny it. The third argument involved the claim that when searching the crowd for your friend you saw that no-one was naked. But this meets a similar objection to the previous two arguments. One could easily not have (consciously) seen one’s friend and just inferred that, since you didn’t see anyone naked, your friend wasn’t naked either.

Phillips then went on to offer a different way of interpreting simple seeing based on signal detection theory. The basic intuition for simple seeing, as Phillips sees it, lies in the idea that the visual system delivers information to us and then there is what we do with the information. The basic metaphor was a letter being delivered. The delivery of the letter (the placing of it into the mailbox) is one thing; your getting the letter and understanding its contents is another. Simple seeing can then be thought of as the informative part, and the cognitive noticing, attending, higher-order thought, etc., can be thought of as a second independent stage. Signal detection theory, on his view, offers a way to capture this distinction.

Signal detection theory starts by treating the subject as an information channel. It then quantifies this, usually by having the subject perform a yes/no task and looking at how often they report a target when one is present (hits) versus how often they report a target when none is present (false alarms). False alarms, specifically, involve the subject saying they see something but being wrong about it, because there was no visual stimulus. This is distinguished from ‘misses’, where there was a target but the subject did not report it. The ‘sensitivity to the world’ is called d’, pronounced “d prime”. On top of this there is another value which is computed, called ‘c’. c, for criterion, is thought of as measuring a bias in the subject’s responses and is typically computed from the hit and false alarm rates. One can think of the criterion as giving you a sense of how ‘liberal’ or ‘conservative’ the subject’s responding is. If they say they saw something all the time then they seemingly have a very liberal criterion for determining whether they saw something (that is to say, they are biased towards saying ‘yes I saw it’ and are presumably mistaking noise for a signal). If they never say they saw it then they are very conservative (they are biased towards saying ‘no I didn’t see it’). This gives us a sense of how much of the noise in the system the subject treats as actually carrying information.
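For anyone who wants the arithmetic spelled out: under the standard equal-variance Gaussian model, d’ and c fall straight out of the hit and false-alarm rates. Here is a minimal sketch in Python (the function name and the trial counts are my own, purely for illustration):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from yes/no trial counts.

    Under the equal-variance Gaussian model:
        d' = z(H) - z(F)          where z is the inverse normal CDF,
        c  = -(z(H) + z(F)) / 2   with H the hit rate, F the false-alarm rate.
    Negative c marks a liberal observer, positive c a conservative one.
    """
    z = NormalDist().inv_cdf
    H = hits / (hits + misses)                              # hit rate
    F = false_alarms / (false_alarms + correct_rejections)  # false-alarm rate
    return z(H) - z(F), -(z(H) + z(F)) / 2

# An observer with an 80% hit rate and a 20% false-alarm rate comes out
# unbiased (c = 0) with a d' of about 1.68.
d_prime, c = sdt_measures(hits=80, misses=20, false_alarms=20, correct_rejections=80)
```

So two observers can match in d’ (the “sensitivity to the world” part) while differing in c (the response-bias part), which is exactly the wedge Phillips wants to exploit.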

The suggestion made by Phillips was that this distinction could be used to save Dretske’s view if one took d’ to track simple seeing and c to track the subject’s knowledge. He then went on to talk about empirical cases. The first, involving memory across saccades, came from Hollingworth and Henderson’s Accurate Visual Memory for Previously Attended Objects in Natural Scenes; the second from Mitroff and Levin’s Nothing Compares 2 Views: Change Blindness can occur despite preserved access to the changed information; and the third from Ward and Scholl’s Inattentional blindness reflects limitation on perception, not memory. Each of these can be taken to suggest that there is “evidence of significant underlying sensitivity in [change blindness] and [inattentional blindness]”.

He concluded by talking about blindsight as a possible objection. Dretske wanted to avoid treating blindsight as a case of simple seeing (that is, of there being phenomenal consciousness that the subject was unaware (in any cognitive sense) of having). Dretske proposed that what was missing was the availability of the relevant information to act as a justifying reason for the subject’s actions. Phillips then went on to suggest various responses to this line of argument. Perhaps blindsight subjects who do not act on the relevant information (say by not grabbing the glass of water in the area of their scotoma) are having the relevant visual experience but are simply unwilling to move (though how would we distinguish this from their not having the relevant visual experience?). Perhaps blindsight patients can be thought of as adjusting their criterion, and so as choosing the interval with the strongest response, and if so this can be thought of as reason responsive. Finally, perhaps, even though they are guessing, they really can be thought of as knowing that the stimulus is there?

In discussion afterwards I asked whether he thought this line of argument was susceptible to the same criticism he had leveled against Dretske’s original arguments. One could interpret d’ as tracking conscious visual processing that the subject doesn’t know about, or one could interpret it as tracking the amount of information represented by the subject’s mental states independently of what the subject was consciously experiencing (at least to some extent). So, one might think, the d’ is good, so the subject represents information about the stimulus that is able to guide its behavior, but that may be going on while the subject is conscious of some of it but not all of it, or of different aspects of it, etc. So there is no real reason to think of d’ as tracking simple (i.e. unconceptualized, unnoticed, uncategorized, etc.) content that is conscious as opposed to non-conscious. He responded that he did not think that this constituted an argument. Rather he was trying to offer a model that captured what he took to be Dretske’s basic intuition, which was that there was the information represented by the visual system, which was conscious, and then there was the way that we were aware of that information. This view was sometimes cast as unscientific and he thought of the signal detection material as providing a framework that, if interpreted in the way he suggested, could capture, and thus make scientifically acceptable, something like what Dretske (and other first-order theorists) want.

There was a lot of good discussion, a lot of which I don’t remember, but I do remember Ned Block asking about Phillips’ response to cases like the famous Dretske example of a wall, painted a certain color, having a piece of wallpaper in one spot. The little square of wallpaper has been painted and so is the same color as the wall. If one is looking at the wall and doesn’t see that there is a piece of wallpaper there, does one see (in the simple seeing kind of way) the wallpaper? Phillips seemed to be saying we did (but didn’t know it) and Block asked whether it wasn’t the case that when we see something we represent it visually, and Phillips responded by saying that on the kind of view he was suggesting that wasn’t the case. Block didn’t follow up and didn’t come out after, so I didn’t get the chance to follow up on that interesting exchange.

Afterwards I pressed him on the issue I raised. I wondered what he thought about the kinds of cases, discussed by Hakwan Lau (and myself) where the d’ is matched but subjects give differing answers to questions like ‘how confident are you that you saw it?’ or ‘rate the visibility of the thing seen’. In those cases we have, due to matched d’, the same information content (worldly sensitivity) and yet one subject says they are guessing while the other says they are confident they saw it (or rates its visibility lower while the other rates it higher (so as more visible)). Taking this seriously seems to suggest that there is a difference in what it is like for these subjects (a difference in phenomenal consciousness) while there is no difference in what they represent about the world (so at the first-order level). The difference in what it is like for them seems to track the way in which they are aware of the first-order information (as tracked by their visibility/confidence ratings). If so then this suggests that d’ doesn’t track phenomenal consciousness. Phillips responded by suggesting that there may be a way to talk about simple seeing involving differences in what it is like for the subject but didn’t elaborate.
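The matched-d’ point can be made concrete with a toy simulation (the numbers and the function here are mine, purely illustrative, not from any of the papers discussed): two simulated observers with identical first-order sensitivity but different report criteria end up saying “yes, I saw it” at very different rates on signal trials.

```python
import random

random.seed(0)

def yes_rate(d_prime, criterion, trials=100_000):
    """Proportion of 'yes, I saw it' responses on signal-present trials for
    an observer whose internal response is Gaussian with mean d_prime and
    unit variance, and who reports seeing whenever it exceeds `criterion`."""
    yeses = sum(random.gauss(d_prime, 1) > criterion for _ in range(trials))
    return yeses / trials

# Identical sensitivity (d' = 1.5), different criteria: the liberal observer
# reports seeing the stimulus far more often than the conservative one, even
# though the first-order information they carry about the world is the same.
liberal = yes_rate(1.5, criterion=0.25)
conservative = yes_rate(1.5, criterion=1.25)
```

On the reading Hakwan and I favor, the difference between these two observers is not in what they represent about the world but in how they are aware of that first-order information, which is the point pressed above.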

I still am not sure how he responds to the argument Hakwan and I have given. If there is differing conscious experience with the same first-order states in each case, then the difference in conscious experience can only be captured (or is best captured) by some kind of difference in our (higher-order) awareness of those first-order states.

In addition, now that I have thought about it a bit, I wonder how he would respond to Hakwan’s argument (stemming more from his own version of higher-order thought theory) that the setting of the criterion, in Phillips’ appeal to it in blindsight cases, depends on a higher-order process and so amounts to a cognitive state having a constitutive role in determining how the first-order state is experienced. This suggests that an ‘austere’ notion of simple seeing, where no cognitive states are involved in phenomenal consciousness, is harder to find than Phillips originally thought.

LeDoux and Brown on Higher-Order Theories and Emotional Consciousness

On Monday May 1st Joe LeDoux and I presented our paper at the NYU philosophy of mind discussion group. This was the second time that I have presented there (the first was with Hakwan (back in 2011!)). It was a lot of fun and there was some really interesting discussion of our paper.

There were a lot of inter-related points/objections that came out of the discussion but here I will focus on just a few themes that stood out to Joe and me after the discussion. I haven’t yet had the chance to talk with him extensively about this so this is just my take on the discussion.

One of the issues centered on our postulation that there are three levels of content in emotional consciousness. On the ‘traditional’ higher-order theory there is the postulation of two distinct states. One is ‘first-order’, where this means that the state represents something in the world (the animal’s body counts as being in the world in this sense). A higher-order mental state is one that has higher-order content, where this means that it represents a mental state as opposed to some worldly non-mental thing. It is often assumed that the first-order state will have some basic, some might even say ‘non-representational’ or non-conceptual, kind of content. We do not deny that there are states like this but we suggested that we needed to ‘go up a level’, so to speak.

Before delving into this I will say that I view this as an additional element in the theory. The basic idea of HOROR theory is just that the higher-order state is the phenomenally conscious state (because that is what phenomenal consciousness is). I am pretty sure that the idea of the lower-order state being itself a higher-order state is Joe’s idea, but to be fair I am not 100% sure. The idea was that the information coming in from the senses needed to be assembled in working memory in such a way as to allow the animal to connect memories, engage schemas, etc. We coined the term ‘lower-order’ to take the place of ‘first-order’. For us a lower-order state is just one that is the target of a higher-order representation. Thus the traditional first-order states would count as lower-order on our view, but so would additional higher-order states that were re-represented at a higher level.

Thus on the view we defended the lower-order states are not first-order states. These states represent first-order states and thus are higher-order in nature. When you see an apple, for example, there must be a lot of first-order representations of the apple, but these must be put together in working memory and result in a higher-order state which is an awareness of these first-order states. That higher-order representation is the ‘ground floor’ representation for our view. It is itself not conscious but it results in the animal behaving in appropriate ways. At this lower-order level we would characterize the content as something like ‘(I am) seeing an apple’. That is, there is an awareness of the first-order states and a characterization of those states as a seeing of red, but there is no explicit representation of the self. There is an implicit referring to the self, by which we mean these states are attributed to the creature who has them but not in any explicit way. This is why we think of this state as just an awareness of the first-order activity (plus a characterization of it). At the third level we have a representation of this lower-order state (which is itself a higher-order state in that it represents first-order states).

Now, again, I do not really view this three-layer approach as essential to the HOROR theory. I think HOROR theory is perfectly compatible with the claim that it is first-order states that count as the targets. But I do think there is an interesting issue at stake here, namely what role exactly the ‘I’ in ‘I am seeing a red apple’ is playing, and also whether first-order states can be enough to play the role of lower-order states. Doesn’t the visual activity related to the apple need to be connected to concepts of red and apple? If so then there needs to be higher-order activity that is itself not conscious.

Another issue focused on our methodological challenge to using animals in consciousness research. Speaking for myself I certainly think that animals are conscious, but since they cannot verbally report, and as long as we truly believe that the cognitive unconscious is as robust as is widely held, we cannot rule out that animal behavior is produced by non-conscious processes. What this suggests is that we need to be cautious when we infer from an animal’s behavior that its cause is a phenomenally conscious mental state. Of course that could be what is going on, but how do we establish that? It cannot be the default assumption as long as we accept the claims about the cognitive unconscious. Thus we do not claim that animals do or do not have conscious experience, but rather that the science of consciousness is best pursued in humans (for now at least). For me this is related to what I think of as the biggest confound in all of consciousness science: the confound of behavior. If an animal can perform a task then it is assumed this is because its mental states are conscious. But if this kind of task can be performed unconsciously then behavior by itself cannot guarantee consciousness.

One objection to this claim (sadly I forgot who made this…maybe they’ll remind me in the comments?) was that maybe verbal responses themselves are non-conscious. I asked whether the objector had in mind the kind of view that Dennett has, where there is just some sub-personal mechanism which results in an utterance of “I am seeing red” and this is all there is to the conscious experience of seeing red. The response was that, no, they had in mind that maybe the subjects are zombies with no conscious experience at all who were nonetheless able to answer the question “what do you see” with “I see red,” just as zombies are thought to do. I responded to this in what I think is the usual way to respond to skeptical worries. That is, I acknowledge that there is a sense in which such skeptical scenarios are conceivable (though maybe not exactly as the conceiver supposes), but there are still reasons for not getting swept up in skepticism. For example I agree with the “lessons” from fading, dancing, and absent qualia cases: we would be detached from our conscious experiences in an unreasonable way if this were happening. The laws of physics don’t give us any reason to suppose that there are radical differences between similar things (like you and me), though if we discovered an important brain area missing or damaged then I suppose we could be led to the conclusion that some member of the population lacked conscious experience. But why should we take this seriously now? I know I am conscious from my own first-person point of view, and unless we endorse a radical skepticism, science should start from the view that report is a reliable(ish) guide to what is going on in a subject’s mind.

Another issue focused on our claim that animal consciousness may be different from human conscious experience. If you really need the concept ‘fear’ in order to feel afraid, and if there is a good case to be made that animals don’t have our concept of fear, then their experience would be very different from ours. That by itself is not such a bad thing. I take it that it is common sense that animal experience is not exactly like human experience. But it seems as though our view is committed to the idea that animals cannot have anything like the human experience of fear, or other emotions. Joe seemed to be ok with this but I objected. It is true that animals don’t have language like humans do and so are not able to form the rich and detailed kinds of concepts and schemas that humans do, but that does not mean that they lack the concept of fear altogether. I think it is plausible that animals have some limited concepts, and if they are able to form concepts as basic as danger (present) and harm then they may have something that approaches human fear (or a basic version of it). A lot of this depends on your specific views about concepts.

Related to this, and brought up by Kate Pendoley, was the issue of whether there can be emotional experiences that we only later learn to describe with a word. I suggested that the answer may be yes, but that even so we will describe the emotion in terms of its relations to other known emotions: ‘It is more like being afraid than feeling nausea’ and the like. This is related to my background view about a kind of ‘quality space’ for the mental attitudes.

Afterwards, over drinks, I had a discussion with Ned Block about the higher-order theory and the empirical evidence for the role of the prefrontal cortex in conscious experience. Ned has been hailing the recent Brascamp et al. paper (nice video available here) as evidence against prefrontal theories. In that paper they showed that if you take away report and attention (by making the two stimuli barely distinguishable) then there is a loss of the prefrontal fMRI activation. I defended the response that fMRI is too crude a measure to take this null result too seriously. This is the line argued in this recent paper by Brian Odgaard, Bob Knight, and Hakwan, Should a few null findings falsify prefrontal theories of consciousness? Null results are ambiguous between the falsifying interpretation and the signal just being missed by a crude tool. As Odgaard et al. argue, if we use more invasive measures like single-cell recording or ECoG then we would find prefrontal activity. In particular the Mante et al. paper referred to in Odgaard et al. is a pretty convincing demonstration that there is information decodable from prefrontal areas that would be missed by an fMRI. As they say in the linked-to paper,

There are numerous single- and multi- unit recording studies in non-human primates, clearly demonstrating that specific perceptual decisions are represented in PFC (Kim and Shadlen, 1999; Mante et al., 2013; Rigotti et al., 2013). Overall, these studies are compatible with the view that PFC plays a key role in forming perceptual decisions (Heekeren et al., 2004; Philiastides et al., 2011; Szczepanski and Knight, 2014) via ‘reading out’ perceptual information from sensory cortices. Importantly, such decisions are central parts of the perceptual process itself (Green and Swets, 1966; Ratcliff, 1978); they are not ‘post-perceptual’ cognitive decisions. These mechanisms contribute to the subjective percept itself (de Lafuente and Romo, 2006), and have been linked to specific perceptual illusions (Jazayeri and Movshon, 2007).

In addition to this Ned accused us of begging the question in favor of the higher-order theory. In particular he thought that there really was no conscious experience in the Rare Charles Bonnet cases and that our appeal to Rahnev was just question begging.

Needless to say I disagree with this and there is a lot to say about these particular points, but I will have to come back to these issues later. Before I have to run, and just for the record, I should make it clear that, while I have always been drawn to some kind of higher-order account, I have also felt the pull of first-order theories. I am in general reluctant to endorse any view completely, but I guess I would have to say that my strongest allegiance is to the type-type identity theory. Ultimately I would like it to be the case that consciousness and mind are identical to brain states. I see the higher-order theory as compatible with the identity theory but I am also sympathetic to other versions (for full-full disclosure, there is even a tiny (tiny) part of me that thinks functionalism isn’t as bad as dualism (which itself isn’t *that* bad)).

Why, then, do I spend so much time defending the higher-order theory? When I was still an undergraduate student I thought that the higher-order thought theory of consciousness was obviously false. After studying it for a while and thinking more carefully about it I revised my credence to ‘not obviously false’. That is, I defended it against objections because I thought they dismissed the theory unduly quickly.

Over time, and largely because of empirical reasons, I have updated my credence from ‘not obviously false’ to ‘possibly true’ and this is where I am at now. I have become more confident that the theory is empirically and conceptually adequate but I do not by any means think that there is a decisive case for the higher-order theory.

Dispatches from the Ivory Tower

In celebration of my ten years in the blogosphere I have been compiling some of my past posts into thematic meta-posts. The first of these listed my posts on the higher-order thought theory of consciousness. Continuing in this theme, below are links to posts I have done over the past ten years reporting on talks/conferences/classes I have attended. I wrote these mostly so that I would not forget about these sessions but they may be interesting to others as well. Sadly, there are several things I have been to in the last year or so that I have not had the time to sit down and write about…ah well, maybe some day!

  1. 09/05/07 Kripke
    • Notes on Kripke’s discussion of existence as a predicate and fiction
  2. 09/05/2007 Devitt
  3. 09/05 Devitt II
  4. 09/19/07 -Devitt on Meaning
    • Notes on Devitt’s class on semantics
  5. Flamming LIPS!
  6. Back to the Grind & Meta-Metaethics
  7. Day Two of the Yale/UConn Conference
  8. Peter Singer on Climate Change and Ethics
    • Notes on Singer’s talk at LaGuardia
  9. Where Am I?
    • Reflections on my talk at the American Philosophical Association talk in 2008
  10. Fodor on Natural Selection
    • Reflections on the Society of Philosophy and Psychology meeting June 2008
  11. Kripke’s Argument Against 4-Dimensionalism
    • Based on a class given at the Graduate Center
  12. Reflections on Zoombies and Shombies Or: After the Showdown at the APA
    • Reflections on my session at the American Philosophical Association in 2009
  13. Kripke on the Structure of Possible Worlds
    • Notes on a talk given at the Graduate Center in September 2009
  14. Unconscious Trait Inferences
    • Notes on social psychologist James Uleman’s talk at the CUNY Cogsci Speaker Series September 2009
  15. Attributing Mental States
    • Notes on James Dow’s talk at the CUNY Cogsci Speaker Series September 2009
  16. Busy Bees Busily Buzzing ‘Bout
  17. Shombies & Illuminati
  18. A Couple More Thoughts on Shombies and Illuminati
    • Some reflections after Kati Balog’s presentation at the NYU philosophy of mind discussion group in November 2009
  19. Attention and Mental Paint
    • Notes on Ned Block’s session at the Mind and Language Seminar in January 2010
  20. HOT Damn it’s a HO Down-Showdown
    • Notes on David Rosenthal’s session at the NYU Mind and Language Seminar in March 2010
  21. The Identity Theory in 2-D
    • Some thoughts in response to the Online Consciousness Conference in February 2010
  22. Part-Time Zombies
    • Reflections on Michael Pauen’s Cogsci talk at CUNY in March of 2010
  23. The Singularity, Again
    • Reflections on David Chalmers’ talk at the NYU Mind and Language seminar in April of 2010
  24. The New New Dualism
  25. Dream a Little Dream
    • Reflections on Miguel Angel Sebastian’s cogsci talk in July of 2010
  26. Explaining Consciousness & Its Consequences
    • Reflections on my talk at the CUNY Cog Sci Speaker Series August 2010
  27. Levine on the Phenomenology of Thought
    • Reflections on Levine’s talk at the Graduate Center in September 2010
  28. Swamp Thing About Mary
    • Reflections on Pete Mandik’s Cogsci talk at CUNY in October 2010
  29. Burge on the Origins of Perception
    • Reflections on a workshop on the predicative structure of experience sponsored by the New York Consciousness Project in October of 2010
  30. Phenomenally HOT
    • Reflections on the first session of Ned Block and David Carmel’s seminar on Conceptual and Empirical Issues about Perception, Attention and Consciousness at NYU January 2011
  31. Some Thoughts About Color
  32. Stazicker on Attention and Mental Paint
  33. Sid Kouider on Partial Awareness
    • A few notes about Sid Kouider’s recent presentation at the CUNY CogSci Colloquium in October 2011
  34. The 2D Argument Against Non-Materialism
    • Reflections on my Tucson Talk in April 2012
  35. Peter Godfrey-Smith on Evolution And Memory
    • Notes from the CUNY Cog Sci Speaker Series in September 2012
  36. The Nature of Phenomenal Consciousness
    • Reflections on my talk at the Graduate Center in September 2012
  37. Giulio Tononi on Consciousness as Integrated Information
    • Notes from the inaugural lecture of the new NYU Center for Mind and Brain by Giulio Tononi
  38. Mental Qualities 02/07/13: Cognitive Phenomenology
  39. Mental Qualities 02/21/13: Phenomenal Concepts
    • Notes/Reflections from David Rosenthal’s class in 2013
  40. The Geometrical Structure of Space and Time
    • Reflections on a session of Tim Maudlin’s course I sat in on in February 2014
  41. Towards some Reflections on the Tucson Conferences
    • Reflections on my presentations at the Tucson conferences
  42. Existentialism is a Transhumanism
    • Reflections on the NEH Seminar in Transhumanism and Technohumanism at LaGuardia I co-directed in 2015-2016

Thinking about Higher-Order Thought Theories of Consciousness

I have on occasion been accused of being a “Higher-Order Theorist” and I suppose I will have to plead guilty to that at this point! I have spent a lot of time thinking, talking, and writing about the higher-order thought theory of consciousness. A lot of that thinking occurred here at Philosophy Sucks! and so I have gathered links to the posts I have written over the past 10 years exploring various aspects of the higher-order thought theory of consciousness (some which have been incorporated into various publications of mine but others haven’t).

Altogether I counted 51 posts, which is about 10% of my total posts!

  1. Explaining What It’s Like
  2. Do Thoughts Make Us Conscious of Things?
  3. A Tale of Two T’s
  4. Two Concepts of Transitive Consciousness
  5. Kripke, Consciousness, and the ‘Corn
  6. As ‘Corny as I want to Be
  7. HOT Fun in the Summertime 1
  8. HOT Fun in the Summertime 2
  9. Gary and Jerry
  10. On Hallucinating Pain
  11. Consciousness, Relational Properties, and Higher-Order Theories
  12. Consciousness is not a Relational Property
  13. Varieties of Higher-Order Zombie
  14. Empirical Support for the Higher-Order Theory of Consciousness
  15. The Function of Consciousness in Higher-Order Theories
  16. That’s not an Argument
  17. The Introspective HOT Zombie Problem
  18. Is There Such a Thing as a Neurophilosophical Theory of Consciousness?
  19. Implementing the Transitivity Principle
  20. Priming and Change Blindness
  21. Priming, Change Blindness, and the Function of Consciousness
  22. Unconscious Change Detection, Priming, and the Function of Consciousness
  23. HOT Fun in the Wintertime?
  24. Rosenthal’s Objection
  25. Pain Asymbolia and Higher-Order Theories of Consciousness
  26. There’s Something about Jerry
  27. HOT (Still) Implies PAM
  28. HOT Theories of Consciousness & Unconscious Gricean Intentions
  29. The Higher-Order Response to the Zombie Argument
  30. HOT Imagination
  31. HOT Byrne
  32. Consciousness, Consciousness, Consciousness!
  33. HOT Dam it’s a HO Down-Showdown
  34. Unconscious Introspection and Higher-Order Theories of Consciousness
  35. HOT Qualia Realism
  36. HOT Block
  37. More HOTer, More Better
  38. Higher-Order Mental Pointing
  39. The New New Dualism
  40. Dream a little dream
  41. Phenomenally HOT
  42. Same-Order Theories of Consciousness and the Failure of Phenomenal Intimacy
  43. Explaining Consciousness and its Consequences
  44. Cognitive Access: The Only Game in Town 
  45. Explaining Cartesian Consciousness
  46. The Overflow Cup Runneth Over
  47. The Nature of Phenomenal Consciousness
  48. Introspection, Acquaintance, and Higher-Order Representations
  49. Kozuch on Lau and Brown
  50. Gottlieb on Presentational Character and Higher-Order Thought Theories of Consciousness
  51. Seager on the Empirical Case for Higher-Order Theories of Consciousness

Seager on the Empirical Case for Higher-Order Theories of Consciousness

In the recent second edition of William Seager’s book Theories of Consciousness: An Introduction and Assessment he addresses some of my work on the higher-order theory. I haven’t yet read the entire book but he seems generally very skeptical of higher-order theories, which is fine. Overall the argument he presents is interesting and it allows me to clarify a few things.

It is clear from the beginning that he is interpreting the higher-order theory in the standard relational way. This is made especially clear when he says that the basic claim of higher-order theory can be put as follows:

A mental state is conscious if and only if it is the target of a suitable higher-order thought (page 94)

This is certainly the way that most people interpret the theory and is the main reason I adopted ‘HOROR’ theory as a name for the kind of view I thought was the natural interpretation of Rosenthal’s work. I seem to remember a time when I thought this was ‘the correct’ way to think about Rosenthal’s work but I have since come to believe that it is not as cut and dried as that.

This is why I have given up on Rosenthal exegesis and just pointed out that there are two differing ways to interpret the theory. One is the relational kind of view summed up above. The other is the non-relational view, which I have argued allows us to capture key insights of the first-order theories. On this alternative interpretation the first-order state is not ‘made’ phenomenally conscious by the higher-order state. Rather, the higher-order state just is phenomenal consciousness. Simply having the appropriate higher-order state is what being phenomenally conscious consists in; there is nothing more to it than that. This is the way I interpret the higher-order theory.

Seager comes close to recognizing this when he says (on page 94),

Denial of (CS) [the claim that “if S is conscious then S is in (or has) at least one conscious state”] offers a clear escape hatch for HOT theory. Contrast that clarity with this alternative characterization of the issue ‘[c]onscious states are states we are conscious of ourselves as being in, whether we are actually in them’ (Rosenthal 2002 p 415). Here Rosenthal appears to endorse the existence of a conscious state which is not the target of a higher-order thought, contrary to HOT theory itself. If so then HOT theory is not the full account of the nature of conscious states and it is time to move on to other theories. I submit that it is better for HOT theorists to reject (CS) and allow for creatures to be conscious in certain ways in the absence of an associated conscious mental state.

The quote from Rosenthal is an accurate one and it does summarize his views. If one interprets it my way, as basically saying that the higher-order state is the phenomenally conscious state, then we do have a conscious state that is not the target of a higher-order state (or at least which need not be). This is because the higher-order state is phenomenally conscious but not because of a further higher-order state. It is because being phenomenally conscious consists in being aware of yourself in the way the higher-order theory requires. As I have argued, in several places, this does not require that we give up the higher-order theory or adopt a ‘same-order theory’. HOROR theory is the higher-order thought theory correctly interpreted.

It thus turns out that phenomenal consciousness is not the same thing as ‘state consciousness’ as it is usually defined on the traditional higher-order theory. That property involves being the target of the higher-order state. This is something that, on my view, reduces to the causal connections between higher-order states, and their conceptual contents, and the first-order states. This will amount to a causal theory of reference for higher-order states. They refer to the first-order states which cause them in the right way. The states to which they refer are what I call the ‘targets’ of the higher-order states. So, for me the targeting relation is causal, but for Rosenthal and others more influenced by Quine it essentially amounts to describing. Thus for Rosenthal the target of the relevant higher-order state will be the first-order state which ‘fits the description’ in the higher-order content. I suppose I could live with either of these ultimately but I do think you need to say something about this on the higher-order account. At any rate on my view being the target of the higher-order state tells us which state we are aware of and the content of the higher-order state tells us the way in which we are aware of it. The two typically occur together but if I had to call one the phenomenally conscious state it would be the higher-order state.

Seager goes on to say in the next paragraph,

One might try to make a virtue of necessity here and seek for confirmation of the false HOT scenario. There have been some recent attempts to marshall empirical evidence for consciousness in the absence of lower-level states but with the presence of characteristic higher-order thoughts, thus showing that the latter are sufficient to generate consciousness (see Lau and Rosenthal 2011; Lau and Brown forthcoming; Brown 2015). The strategy of these efforts is clear: Find the neural correlates of higher-order thoughts posited by HOT theory, test subjects on tasks which sometimes elicit consciousness and sometimes do not (e.g. present them with an image for a very short time and ask them to report on what they saw), and, ideally, observe that no lower-order states occur even in the case where subjects report seeing something. Needless to say, it is a difficult strategy to follow. (page 95)

I would quibble with the way that things are put here but overall I agree with it. The quibbles come from the characterization of the strategy. What Lau and I were arguing was that we want to find cases where the first-order state is either absent or degraded, or otherwise less rich than the conscious experiences of subjects. So we would be happy just with a mismatch between the first-order and higher-order cases. Expecting the ideal total absence of first-order states is maybe too high a bar. This is why in his work Lau aims to produce cases where task performance is matched but subjective reports differ. The primary goal is to show that conscious experience outstrips what is represented at the first-order level. It is a difficult strategy to follow, but all we can do is use the tools we have to try to test the various theories of consciousness.

Seager then goes on to focus on the case of the rare form of Charles Bonnet syndrome. In these rare cases subjects report very vivid visual hallucinations even though there is extensive damage to the primary visual cortex. Seager briefly considers Miguel Sebastian’s objection based on dreaming but then objects that

…a deeper problem undercuts the empirical case, tentative though it is, for HOT theory and the empty HOT scenario. This is a confusion about the nature of the lower-order and higher-order cognitive states at issue. ‘Lower-order’ does not mean ‘early’ and ‘higher-order’ does not mean ‘later’ in the brain’s processing of information. Higher-order refers specifically to thoughts about mental states as such; lower-order states are not about thoughts as such but are about the world as presented to the subject (including the subject’s body).

There is little reason to think that lower-order states, properly conceived, should be implemented in low-level or entry-level sensory systems. It is not likely that an isolated occipital lobe would generate visually conscious states.

Nor is it unlikely that lower-order states, states, that is, which represent the world and the body occur in ‘higher’ brain regions such as the dorsolateral prefrontal cortex. It would be astounding if that brain region were devoted to higher-order thoughts about mental states as such. (page 96)

I largely agree with the points being made here but I do not think that Lau and I were confused about this. The first thing I would say is that we are pretty explicit that we adopt the usage that we think the typical first-order theorist does (and especially Ned Block) and that we include areas outside the occipital lobe “that are known to contain high number of neurons explicitly coding for visual objects (e.g. fusiform face area)”  as first-order areas (see footnote 7 in the paper).

In the second instance we talked about three empirical cases in the paper and each was used for a slightly different purpose. When people discuss this paper, though, they typically focus on one out of the three. Here is how we summed up the cases in the paper:

To sum up, there are three kinds of Empirical Cases – Rare Charles Bonnet Cases (i.e. Charles Bonnet cases that result specifically from damage to the primary visual cortex), Inattentional Inflation (i.e. the results of Rahnev et al, in press and in review) and Peripheral Vision (introspective evidence from everyday life). The three cases serve slightly different purposes. The Rare Charles Bonnet Cases highlight the possibility of vivid conscious experience in the absence of primary visual cortex. If we take the primary visual cortex as the neural structure necessary for first-order representations, this is a straightforward case of conscious experience without first-order representations. In Inattentional Inflation, the putative first-order representations are not missing under the lack of attention, but they are not strong enough to account for the “inflated” level of reported subjective perception, in that both behavioral estimates of the signal-to-noise ratio of processing and brain imaging data show that there was no difference in overall quality or capacity in the first-order perceptual signal, which does not concern only the primary visual cortex but also other relevant visual areas. Finally, Peripheral Vision gives introspective evidence that conscious experience may not faithfully reflect the level of details supported by first-order visual processing. Though this does not depend on precise laboratory measures, it gives an intuitive argument that is not constrained by specific experimental details.

So I don’t think Seager’s criticism of us as being confused about this is fair.

In addition, in recent work with Joe LeDoux we endorse the second claim made by Seager. We explicitly argue that the ‘lower-order’ states we are interested in will occur in working memory and likely even dorsolateral prefrontal cortex.

But even if I think Seager is wrong to accuse us of being insensitive or confused about this issue I do think he goes on to present an interesting argument. He goes on to say,

The problem can be illustrated by the easy way HOT (or HOT-like) theorists pass over this crucial distinction. Consider these remarks from Richard Brown:

Anyone who has had experience with wine will know that acquiring a new word will sometimes allow one to make finer-grained distinctions in the experience that one has. One interpretation of what is going on here is that learning the new word results in one’s having a new concept and the application of this concept allows one to represent one’s mental life in a more fine-grained way. This results in more phenomenal properties in one’s experience…that amounts to the claim that one represents one’s mental life as instantiating different mental qualities.

Those unsympathetic to HOT theory will balk at this description. What is acquired is an enhanced ability to perceive or appreciate the wine in this case, not the experience of the wine (the experience itself does not seem to have any distinctive perceivable properties). After training the taster has new lower-order states which better characterize the wine, not new higher-order states aimed at and mentally characterizing the experience of tasting the wine.

Since there is no reason to restrict lower-order states to relatively peripheral sensory systems, it will be very hard to make out an empirical case for HOT theory and the empty HOT scenario in the way suggested. (pages 96-97)

The quote he offers here is from the HOROR paper and so it is interesting to see that the proposed solution, that the higher-order state is phenomenally conscious and that this is not giving up on the higher-order theory, is neglected.

Before going on I should say that I am pretty much sympathetic to the point being made here. I think there is a first-order account of what is going on. I also tend to think that this is ultimately an empirical issue. If there were a way to test this that would be great but I am not sure we have the capacity to do so yet. But my main point in the paper was not to offer this as a phenomenon that the first-order theorist couldn’t explain. What I was intending to do was to argue that the higher-order interpretation is one consistent interpretation of this phenomenon. It fits naturally with the theory and shows that there is nothing absurd in the basic tenet of the HOROR theory that phenomenal consciousness really is just a kind of higher-order thought, with conceptual content.

As I read Rosenthal he does not think the first-order account is plausible. For Rosenthal we are explicitly focusing on our experiences in these kinds of cases. One takes a drink of the wine and focuses on the taste of the wine. This may be done even after one has swallowed the wine. The same is true for the auditory cases. It does seem plausible that in these cases I am focused on my experience, not on the wine (it is the experience of the wine of course). But if the general kind of theory he advocates is correct then one will still come to appreciate the wine itself. When I have the new fine-grained higher-order thoughts they will attribute to me finer-grained first-order states, and these will be described in terms of the properties I experience the wine as having. They will thus make me consciously aware of the wine and its qualities, but they do so by making me aware of the first-order states. The first-order alternative at least seems to be at a disadvantage here because it seems that on their view learning the new word produces new first-order qualities, as opposed to making me aware of the qualities which were already there (as on the higher-order view). I think there is some evidence that we can have ‘top-down’ activity producing/modifying lower-order states, so I ultimately think this is an empirical issue. At the very least I think we can say that this argument shows that the higher-order theory makes a clear, empirically testable prediction, and, like the empty higher-order state claim itself, the more implausible the prediction the more of a victory it is when it is not falsified.

At any rate, abstracting from all of this, Seager presents an interesting argument. If I am reading it correctly the claim seems to be that the empirical case for the higher-order theory is going to be undercut because first-order theories are not committed to the claim that first-order states are to be found in early sensory areas; they might even be found in places like the dlPFC. If so, then even if there were a difference in activation there, as compared with early sensory areas, this by itself would not be evidence for a higher-order theory because those may be first-order states.

The way I tried to get around this kind of worry (in my Brain and its States paper) was by taking D prime to be a measure of the first-order information which is being represented. This was justified, I thought, because the first- or lower-order states are thought by us to largely drive the task performance. D prime gives us a measure of how well the subjects perform the task (by comparing the rate of hits to the rate of false alarms) and so it seems natural to suppose it gives a measure of what the first-order states are representing. The bias in judgment can be measured by C (the criterion) in signal detection theory and this can roughly be treated as a measure of the confidence of the subjects. So, instead of looking for direct anatomical correlates we can look for matched D prime scores while there is a difference in subjective report. This is exactly what Lau and his lab have been able to show in many different cases. In addition, when there is fMRI data it shows no significant difference in any first-order areas while there is a difference in the prefrontal cortex. Is this due to residual first-order states in ‘higher-order’ areas? Maybe, but if so they would be accounted for in the measure of D prime. And that would not explain why subjects report a difference in visibility, or confidence, or whatever. Because of this I do not think the empirical case has been much undermined by Seager.
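Just to illustrate the logic of matched D prime with differing bias, here is a toy calculation. This is the standard textbook signal detection computation, not code from any of the papers discussed, and the hit/false-alarm rates are made up purely for illustration:

```python
# A minimal sketch of the signal detection quantities discussed above,
# using only Python's standard library. D prime (sensitivity) and C
# (the criterion, a measure of response bias) are both computed from
# the hit rate and the false alarm rate.
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return (d', c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)           # sensitivity
    criterion = -(z(hit_rate) + z(fa_rate)) / 2  # response bias
    return d_prime, criterion

# Two hypothetical conditions: sensitivity (D prime) is matched,
# but subjects report "seen" more liberally in the second.
d1, c1 = sdt_measures(0.69, 0.31)  # roughly neutral criterion
d2, c2 = sdt_measures(0.84, 0.50)  # liberal criterion, same d'
print(f"condition 1: d'={d1:.2f}, c={c1:.2f}")
print(f"condition 2: d'={d2:.2f}, c={c2:.2f}")
```

The point of the two conditions is the dissociation described above: task performance (D prime) is matched while the criterion shifts, which is the pattern Lau's lab looks for when subjective reports differ despite equal first-order information.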