No Euthyphro Dilemma for Higher-order Theories

I just came across Daniel Stoljar’s forthcoming paper A Euthyphro Dilemma for Higher-order theories. In it he tries to present a kind of dilemma for the higher-order thought theory but I find his reasoning highly suspect.

He assumes throughout that the higher-order theory is offering a definition of ‘consciousness,’ which is not exactly right. At least as I understand the theory it is an empirical conjecture about the nature of phenomenal consciousness and so not in the business of offering a definition. However, if we mean by definition something like what Socrates is seeking, viz., the thing which all conscious states have in common in virtue of which they count as conscious states, then there is a sense in which the higher-order view is after a definition, so I will go along with him on this.

The basic thrust of the paper is that we can ask two questions, one is ‘are we aware of ourselves as being in the state because the state is conscious?’ and the other is ‘is the state conscious because we are aware of ourselves as being in it?’ Obviously the first ‘horn’ is not going to be taken as it effectively assumes that the higher-order theory is in fact false. The second ‘horn’ is the one higher-order theorists will take. So, what is the problem with it? Here is what Stoljar says:

Alternatively, if you say the second, that the state is conscious because you believe you are in it, you need to deal with the possibility of being in the state and yet failing to believe that you are. On the higher-order thought theory, the state is in that case no longer conscious. But as before that is questionable. Suppose you are so consumed by the fox that you completely forget (and so have no beliefs about) what you are doing, at least for a short interval. On the face of it, you remain conscious of the fox, and so your state of perceiving the fox remains conscious. If so, it can’t be the case that the state is conscious because you believe that you are in it. After all, you do not believe this, having temporarily forgotten completely what you are doing.

I am not sure how ‘on the face of it’ is supposed to work! It seems as though he is just assuming that the theory is false and then saying ‘ahah! The theory could be false!’ Even if we interpret him charitably it seems like he is assuming that the higher-order states in question would be like conscious beliefs. Calling the higher-order thoughts beliefs is a bit of a misnomer since I take beliefs to be dispositions to have occurrent assertoric thoughts. But as long as one means by ‘belief’ something like an occurrent thought then we can go along with this as well. If one is ‘so consumed by the fox’ that one forgets (consciously) what one is doing it does not follow that one has no unconscious thoughts about oneself.

Stoljar recognizes this and goes on to say:

Friends of the theory may insist that you do hold the belief in question. Maybe the belief is not so demanding. Or maybe it is suppressed or inarticulate, not the sort of belief that you could formulate in words if asked. Maybe, but it doesn’t matter. For even if you do believe you are in the state of perceiving the fox, it doesn’t follow that this state is conscious because you believe this. Further, even if you do believe this, it remains as true as ever that, if you didn’t, the state of perceiving would nevertheless be conscious. After all, even if you didn’t believe that you are in the state of perceiving the fox, you would still focus on the fox, and so be conscious of it, as much as before.

I find this passage to be extremely puzzling and I am not sure how to interpret it. There are arguments given for the higher-order theory and this does not address any of them. Further, there is no justification given for the final claim, that even if one did not have the relevant higher-order thought one would still be (phenomenally) conscious of the fox in the same way. What reason is there to accept this? It is just assumed by fiat. So there is no dilemma for higher-order theories here. There is just someone with differing intuitions about what conscious states are.

Stoljar goes on to consider a version of the view that is closer to what is actually defended by Rosenthal. He says:

Rosenthal says you must believe that you are in the state in a way that is non-perceptual and non-inferential (Rosenthal 2005).

This is incorrect. What Rosenthal says is that the relevant higher-order state must be arrived at in a way that does not subjectively seem to be inferential. That is compatible with its actually being the product of inference. But ok, subtle points aside, what is the issue? He goes on to say:

But even this is not sufficient. Suppose again you are in S and an amazing and unlikely thing happens. Before you even open Linguistic Inquiry, you get banged on the head and freakishly come to believe that you are in S. In this case, three things are true: you are in S, you believe you are in S, and you came to believe this in a way that is neither perceptual nor inferential. Even so it does not follow that S is conscious; on the contrary, it remains as unconscious as it was before.

But again what reason is there to think this? If one is in a higher-order state to the effect that one is in S and this is arrived at in a way that subjectively seems to be non-inferential then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.

Stoljar also briefly discusses the argument from empty higher-order thoughts, saying:

It is worth noting that many proponents of the higher-order theory insist on a different response to this objection. They say the belief can be empty but that the state that is conscious exists not as such but only according to the belief, rather as certain things may exist not as such but only according to the National Inquirer. I won’t attempt to discuss this idea here, since it is extensively discussed elsewhere; see, e.g., (Rosenthal 2011, Weisberg 2011, Berger 2014, Brown 2015, Gottlieb 2020). But it is worth noting that interpreting the view this way has the consequence that it is no longer a definition of a conscious state in the way that it is normally taken to be, and as I have taken it to be throughout this discussion. After all, a definition of a conscious state either is or entails something of the form ‘x is a conscious state if and only if x is…’. This entails in turn that the state that is conscious must turn up on the right-hand side of the definition. But if you say that something is a conscious state if and only if you believe such and such, and if the belief in question does not entail the existence of the relevant state, then the state does not turn up as it should on the right-hand side; hence you have not defined anything.

But again, this is incorrect. According to Rosenthal the state which turns up on the right-hand side is the state you represent yourself as being in; whether or not one is actually in that state is irrelevant!

There is a lot more to say about these issues, and other issues in Stoljar’s paper but I have to help get the kids their lunch!

…And the Conscious State is…

Not too long ago Jake Berger and I presented a paper we are working on at the NYU philosophy of mind discussion session. There was a lot of very interesting discussion and there are a couple of themes I plan on writing about (if I ever get the chance; I am teaching four classes in our short six-week winter semester and it is a bit much).

One very interesting objection that came up, and was discussed in email afterwards, was whether HOT theory has the resources to say which first-order state is the conscious state. Ned Block raised this objection in the following way. Suppose I have two qualitative first-order states that are, say, slightly different shades of red. When these states are unconscious there is nothing that it is like for the subject to be in them (ex hypothesi). Now suppose I have an appropriate higher-order thought to the effect that I am seeing red (but not some particular shade of red). The content of the higher-order thought does not distinguish between the two first-order states so there is no good reason to think that one of them is conscious and the other is not. Yet common sense seems to indicate that one of them could be conscious and the other non-conscious, so there is a problem for higher-order thought theory.

The basic idea behind the objection is that there could be two first-order states that are somewhat similar in some way, and there could be a fact of the matter about which of the two first-order states is conscious while there is a higher-order thought that does not distinguish between the two states. David’s views about intentional content tend toward descriptivism and so he thinks that the way in which a higher-order thought refers to its target first-order state is via describing it. I tend to have more sympathy with causal/historical accounts of intentional content (I even wrote about this back in 2007: Two Concepts of Transitive Consciousness) than David does, but I take it that in this kind of case he thinks these kinds of considerations will answer Block’s challenge.

But stepping back from the descriptivism vs. causal theories of reference for a second, I think this objection helps to bring out the differences between the way in which David thinks about higher-order thought theory and the way that I tend to think about it.

David has presented the higher-order thought theory as a theory of conscious states. It is presented as giving an answer to the following question:

  • How can the very same first-order state occur consciously and also non-consciously?

The difference between these two cases is that when the state is conscious it is accompanied by a higher-order thought to the effect that one is currently in the state. Putting things this way makes Block’s challenge look pressing. We want to know which first-order state is conscious!

I tend to think of the higher-order thought theory as a theory of phenomenal consciousness. It makes the claim that phenomenal consciousness consists in having the appropriate higher-order thought. By phenomenal consciousness I mean that there is something that it is like for the organism in question. I want to distinguish phenomenal consciousness from state consciousness. A state is state-conscious when it is the target of an appropriate higher-order awareness. A state is phenomenally conscious when there is something that it is like for one to be in the state. A lot of confusion is caused because people use ‘conscious state’ for both of these notions. A state of which I am aware is naturally called a conscious state but so too is a state which there is something that it is like to be in.

Block’s challenge thus has two different interpretations. On one he is asking how the higher-order awareness refers to its target state. That is, he wants to know which first-order state I am aware of in his case. On the other interpretation he is asking which first-order state is there something that it is like for the subject to be in. The way I understand Rosenthal’s view is that he wants to give the same answer to both questions. The target of the higher-order state is the one that is ‘picked out’ by the higher-order state. And what it is like for the subject to be in that target first-order state consists in there being the right kind of higher-order awareness. Having the appropriate higher-order state is all there is to there being something that it is like to be in the first-order state.

I tend to think that maybe we want to give different answers to these two challenges. Regardless of which first-order state is targeted by the higher-order awareness the state which there is something that it is like for the subject to be in is the higher-order state itself. This higher-order state makes one aware of being in a first-order state, and that is just what phenomenal consciousness is. Thus it will seem to you as though you are in a first-order state (it will seem to you as though you are seeing red when you consciously see red). For that reason I think it is natural to say that the higher-order state is itself phenomenally conscious (by which I mean it is the state which there is something that it is like to be in). I agree that we intuitively think it is the first-order states which are phenomenally conscious but I don’t think that carries much weight when we get sufficiently far into theorizing.

While I agree that it does sound strange to say that the first-order state is not phenomenally conscious I think this is somewhat mitigated by the fact that we can none the less say that the first-order state is a conscious state when it is targeted by the appropriate higher-order awareness. This is because all there is to being a conscious state, as I use the term here, is that the state is targeted by an appropriate higher-order awareness. The advantage to putting things in this way is that it makes it clear what the higher-order theory is a theory of and that the objection from Block is clearly assuming that first-order states must be phenomenally conscious.

Theories of Perception and Higher-Order Theories of Consciousness: An Analogy

I recently came across a draft of a post that I thought I had actually posted a while ago…on re-reading it I don’t think I entirely agree with the way I put things back then, but I still kind of like it.

———————————————————-

When one looks at philosophical theories of perception one can see three broad classes of theoretical approaches. These are sometimes known as ‘relationalism’ and ‘representationalism’ (and ‘disjunctivism’). According to relationalism (sometimes known as naive realism) perception is a relation between the perceiver and the object they perceive. So when I see a red apple, on this view, there is the apple and the redness of the apple, and I come to be related to those things in the right way and that counts as perceiving. Often a ‘window’ analogy is invoked. Perception is like a window through which we can look out into the world and in so doing come to be acquainted with the ways that the objects in the world are. Representationalism on the other hand holds that perception involves, well, representing the world to be some way or other, and this may diverge from the way the world is outside of perception.

I think a similar kind of debate has been occurring within the differing camps of higher-order theories of consciousness. In this debate the first-order state, which represents properties, objects, and events in the physical environment of the animal, takes the place of the physical object in the debates about perception. If one takes that perspective then one can see that we have versions of relationalism and representationalism in higher-order theories. Relationalists take the first-order state, and its properties, to be revealed in the act of becoming aware of it. Representationalists think that we represent the object as having various properties and that the experiences we have when we dream or hallucinate are literally the same ones we are aware of in ordinary experience. This is the famous argument from hallucination.

I think that the misrepresentation argument against higher-order theories of consciousness is actually akin to the argument from hallucination, and shows roughly the same thing, viz. that the relationalist version of higher-order theory is not in a position to explain what it is that is in common between “veridical” higher-order states and empty higher-order states. As long as one accepts that these cases are phenomenologically the same, and some versions of higher-order theory commit you to that claim, then it seems to me that you must say that we are aware of the same thing in each case. In the perception debate representationalists tend to say that what we are aware of in each case are properties. So take my experience of a red ripe tomato and my “perfect” hallucination as of a red ripe tomato. In one case I am aware of an actual object, the tomato, and in the other case I am not aware of any object (it is a hallucination). But in both cases I am aware of the redness of the tomato and the roundness of it, etc.; in the good case these properties are instantiated in the tomato and in the bad case they are uninstantiated, but they are there in both cases. The representationalist can thus explain why the two cases are phenomenologically the same: in each case we represent the same properties as being present.

I think the representational version of higher-order theories of consciousness has to similarly commit to what it is that is in common between veridical higher-order states and empty ones which are none the less phenomenologically indistinguishable. In one case we are aware of a first-order mental state (the one the higher-order state is about) and in the other case we are not (the state we represent ourselves as being in is one we are not actually in, thus the higher-order state is empty). So it must be the properties of the mental states that we are aware of in both cases. So if I am consciously seeing a red ripe tomato then I am in a first-order state which represents the tomato’s redness and roundness, etc., and I am representing that these properties are present and that there is a tomato present, etc. (this state can occur unconsciously but we are considering its conscious occurrence). To consciously experience the redness of the tomato I need to have a higher-order state representing me as seeing a tomato. And what this means is that I have a higher-order state representing myself as being in a first-order visual state with such-and-such properties. The ‘such-and-such properties’ bit is filled in by one’s theory of what kinds of properties first-order mental states employ to represent properties in the environment. Suppose that, like Rosenthal, one thinks they do so by having a kind of qualitative (i.e. non-conceptual, non-intentional) property that represents these properties. On Rosenthal’s view he posits ‘mental red’ as the way in which we represent the physical property objects have when they are red. He calls this red* and says that red* represents physical red in a distinctive non-conceptual, non-intentional way.

This is not a necessary feature of higher-order theories but it gives us a way to talk about the issues in a definite way. So the upshot of this discussion is that it is these properties which are common between veridical and hallucinatory higher-order states. When one has a conscious experience of seeing a red ripe tomato but there is no first-order visual representation of the tomato or its redness, etc., one represents oneself as being in first-order states which represent the redness and roundness of the tomato; one is aware of the same properties one would be aware of in the veridical case, but these properties are uninstantiated.


Block’s Response to Lau and Brown on Inattentional Inflation

Ned was nice enough to point out that the proofs of his response to us are available online. I want to thank him for his engagement but there is a lot I don’t agree with. I want to say something about each section but first I wanted to address his claim that the argument from Inattentional Inflation is question begging. He is wrong about that.

He says,

Apparently, their argument is this:

  1. The first-order states were about the same in strength as evidenced by the equal performance on discriminating the gratings;
  2. But as reflected in the differing visibility judgments, the unattended case was higher in consciousness;
  3. To explain the higher degree of consciousness in the unattended case we cannot appeal to a first-order difference since there is no such difference (see premise 1). So the only available explanation has to appeal to the higher-order difference in judgments of visibility.

He then argues that the only reason we would have for accepting premise two of the above argument was a prior commitment to the higher-order thought theory, which is clearly question begging.

First I would object to the characterization of our argument. Premise 2 should not say that one case was higher in consciousness but rather that there were phenomenological differences between the two cases. If there is a difference in what it is like for someone when we have reason to think that there is no difference in their first-order states, then we have reason to think that phenomenology is not fully determined by first-order activity. Block seems very confused by this but isn’t there an obvious difference between clearly seeing something presented to you and just catching a quick glimpse of something or other presented near threshold?

I think that ultimately his argument in his reply to Inattentional Inflation (II) is that since we have two models that both predict the same pattern of results we cannot use the pattern of results as evidence for one model over the other. The two models are

  • (A) a first-order view where difference in task performance is indicative of no difference in conscious experience and difference in report is indicative of cognitive effects without necessarily affecting phenomenology.
  • (B) a higher-order view where difference in task performance is not indicative that conscious experience is the same and difference in report is indicative of an effect on phenomenology.

The question then comes down to which of these two models we should prefer.

In giving our answer to this Block edited a quote from us without indicating that in the text. We say “if a combined increase in the frequency of saying “yes I see the target” and higher visibility ratings is not good evidence that phenomenology has changed, what else can count?” and he quotes us as just saying if “higher visibility rating is not good evidence…” totally ignoring that we explicitly said it is the combination of both that we are relying on. This is misleading!

It is both of these that lead us to think that there really is a difference between the two cases and that leads us to think (B) is the right interpretation. They say they see it more often and also rate it as more visible even though they are not doing a better job of detecting the stimulus. It has nothing to do with the fact that we are willing to defend a higher-order approach to consciousness.
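One way to see why this combination matters is in signal-detection terms. (This framing is something I am importing here for illustration, not an analysis taken from Block’s reply or from our paper, and every number below is invented.) The pattern at issue is what you would get if sensitivity were matched across conditions while the detection criterion were more liberal in the unattended condition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000        # trials per condition (illustrative)
d_prime = 1.5      # same sensitivity in BOTH conditions, by construction

def simulate(criterion):
    # Internal responses on target-present and blank trials
    signal = rng.normal(d_prime, 1.0, n)
    noise = rng.normal(0.0, 1.0, n)
    hits = np.mean(signal > criterion)         # "yes, I see it" on target trials
    false_alarms = np.mean(noise > criterion)  # "yes" on blank trials
    return hits, false_alarms

# Attended condition: conservative criterion; unattended: liberal criterion
h_att, fa_att = simulate(criterion=1.2)
h_un, fa_un = simulate(criterion=0.5)

# Discrimination capacity is identical by construction, yet the
# liberal-criterion condition says "yes" more often on every trial type.
print(h_att, fa_att)
print(h_un, fa_un)
```

By construction the two conditions are equally sensitive to the stimulus, yet the liberal-criterion condition reports seeing the target more often on both target and blank trials. That is the matched-performance, inflated-report pattern that the argument turns on, and which the two models above interpret differently.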

It is too bad that Lau et al do not collect anecdotes from participants but I think just from our ordinary everyday experience we have some cases of inattentional inflation. Sometimes as I am sitting at my computer writing something I will think that I saw the little red icon in the right corner of the screen that alerts me to an email in my inbox. Sometimes I will check and it will indeed be there. Other times I check and there is no red marker. But it sure did seem like there was one there just before I looked! The idea is that something like this is going on in the experimental conditions. I predict that if asked subjects would be surprised to find out that (some of) their false alarms were indeed false.

Block goes on to attribute to me “in conversation” the claim that training and reward did not influence the results. It is funny because we say it in the paper! But I did emphasize this at the pub after LeDoux and I gave a talk at the NYU philosophy of mind discussion group. Anyway, in response to that Block says that it would nullify the original paper’s finding that this is an effect of judgment. But that is silly because our claim was that since there is reason to think there is a difference in phenomenology, and that the relevant psychological/neurological difference was a difference in HO representation, there is reason to think that the HO state explains the difference in phenomenology.

Overall, then, I think it is really unfair to say that this argument is question begging. It does depend on there being an actual phenomenal difference when task performance is the same, but we think we have good reasons to believe that which are independent of the higher-order view.

Consciousness Science & The Emperor’s Arrival

Things have been hectic around here because I have been teaching 4 classes (4 preps) in our short 6-week winter session. It is almost over, just in time for our Spring semester to start! Even so February has been nice with a couple of publications coming out.

The first is Opportunities and Challenges for a Maturing Science of Consciousness. I was very happy to see this piece come out in Nature Human Behaviour. Matthias Michel, Steve Fleming, and Hakwan Lau did a great job of co-ordinating the 50+ co-authors (open access viewable pdf here). As someone who was around as an undergraduate towards the beginning of the current enthusiasm for the science of consciousness it was quite an honor to be included in this project!

In addition to that Blockheads! Essays on Ned Block’s Philosophy of Mind and Consciousness is out! This book has a lot of interesting papers (and replies from Ned) and I am really looking forward to reading it.



Hakwan Lau and I wrote our contribution back in 2011-2012 and a lot has happened in the seven years since then! Of course I had to read Ned’s response to our paper first and I will have a lot to say in response (we actually have some things to say about it in our new paper together with Joe LeDoux) but for now I am just happy it is out!

Gennaro on Higher-Order Theories

I was asked to review the Bloomsbury Companion to the Philosophy of Consciousness and had some things to say about the chapter on higher-order theories of consciousness by Rocco Gennaro that I could not fit into a paragraph or two so I am extending them here.


In the fourth paper of this second section Rocco Gennaro gives us his interpretation of “Higher-Order Theories of Consciousness”. Higher-order theories of consciousness claim that consciousness as we ordinarily experience it requires a kind of inner awareness, an awareness of our own mental life. To consciously experience the red of a tomato is to be aware of oneself as seeing a red object. Gennaro offers a survey of the traditional higher-order accounts but anyone reading this chapter who was new to the area would get a very biased account of the lay of the land. Specifically there are three things that are misleading about Gennaro’s overview.  The first is how he presents the theory itself. The second is how he responds to the classic misrepresentation objection to higher-order thought theories of consciousness. And the third is in presenting the case for whether or not the prefrontal cortex is a possible neural realizer of the relevant higher-order thoughts.

Gennaro interprets the higher-order theory as what I have called the ‘relational view’. As he says on page 156,

Conscious mental states arise when two unconscious mental states are related in a certain specific way, namely that one of them (the [higher-order representation]) is directed at the other ([mental state]).

This makes it clear that on his way of doing things it is necessary that there be two states, with one directed at the other, and that these two states together ‘give rise’ to a (phenomenally) conscious mental state. Rosenthal and those who follow him interpret the higher-order thought theory as what I have called the ‘non-relational view’. On the non-relational view consciousness consists in having the relevant higher-order state. There is some discussion of this distinction in Pete Mandik’s chapter at the end of the book (under the heading of ‘cognitive approaches to phenomenal consciousness’) but if one just read Gennaro’s chapter on higher-order theory one would be misled about the other approach.

This comes out clearly in Gennaro’s discussion of the ‘mismatch’ objection. A familiar objection to higher-order theories is that they allow the possibility of differing contents in higher-order and lower-order states. If one sees a red object but has a higher-order thought of the right kind that represents that one as seeing a green object, what is it like for the subject? The non-relational view answers that it is like seeing green even though one will behave as though one is seeing red. Gennaro disagrees and says that there must be a partial or complete match between the concepts in the HOT and the first-order state (or the concepts in the higher-order state must be more fine-grained than in the lower-order state or vice versa) or there is no conscious experience at all. He considers cases like associative agnosia, where someone can see a whistle and consciously see the silver color of it and its shape, can draw it really well, etc, but doesn’t know that it is a whistle. They just can’t identify what it is based on how it looks (though they can identify a whistle by its sound). Gennaro holds that the right way to interpret this is that the subject has a higher-order thought that represents the first-order representation of the whistle incompletely. It represents that one is seeing a silver object that has such and such a shape. But it does not represent that one is seeing a whistle (p 156). He argues that in a case of associative agnosia there is a partial match between the HO and FO state and that results in a conscious experience that lacks meaning.

First it is strange to be talking in terms of ‘matching’ between contents. What determines whether there is a match? Gennaro talks of the ‘faculty of the understanding’ ‘operating on the data of the sensibility’ by ‘applying higher-order thoughts’, and of the higher-order state ‘registering’ the content of the first-order state, but it is not clear what these things really mean. Second he makes the assumption that one consciously experiences the whistle as a whistle, or that high-level concepts figure in the phenomenology of a subject. This is a controversial claim and even if it is true (or one thinks that it is) one should recognize that this is not a required part of the higher-order view. On the way Rosenthal has set the theory up one has higher-order thoughts of the appropriate kind about sensory qualities and their relations to each other but one does not have concepts like ‘whistle’ in the consciousness-making higher-order thoughts. One will then come to judge/make an inference that one is seeing a whistle which will result in a belief that one is seeing that whistle, but this belief will be a first-order belief (that is, a belief which is not about something mental; in this case it is about the whistle).

Gennaro says that these kinds of cases support the claim that there must be some kind of match between first-order and higher-order states but it is not clear that they do. What he has argued for is the claim that the content of the higher-order state determines what it is like for the subject. What reason do we have to think that the match between first-order and higher-order state is playing a role? In other words, what reason do we have to think that the same would not be the case when the first-order state represented red and the higher-order state represented that one was seeing green, as the non-relational view holds?

His sole criticism of the non-relational view comes when he says,

but the problem with this view is that somehow the [higher-order thought] alone is what matters. Doesn’t this defeat the purpose of [higher-order thought] theory which is supposed to explain state consciousness in terms of a relation between two states? Moreover, according to the theory the [lower-order] state is supposed to be conscious when one has an unconscious HOT” (p 155; italics in the original).

This is a really bad objection to the non-relational version of the higher-order thought theory. The first part merely asserts that there is no non-relational version of the higher-order thought theory. The second part is something that Rosenthal accepts. The lower-order state is conscious when one has an appropriate higher-order state because that is what that property consists in. What it is for a first-order state to have the property of being conscious, for Rosenthal, is for one to have an appropriate higher-order thought which attributes that first-order state to oneself.

In addition, Gennaro goes on to criticize the recent speculation by higher-order theorists that the prefrontal cortex is crucially involved in producing conscious experience. It is of course an open empirical question whether the prefrontal cortex is required for conscious experience and, if so, whether that is because it instantiates the relevant kind of higher-order awareness. However, Gennaro’s arguments are extremely weak and do nothing to cast doubt on this empirical hypothesis. He first appeals to work by Rafi Malach showing that there is decreased PFC activity when subjects are absorbed in watching a film. However, he does not note that Rosenthal and Lau have responded to this. He then appeals to the fact that PFC activation is seen only when a report is required. This has also been recently addressed (by Lau). Finally, he appeals to lesion studies suggesting that there is no change in conscious experience when the PFC is lesioned. However, there is considerable controversy over the correct interpretation of these results, and Gennaro merely appeals to second- and third-hand literature reviews (see the recent debate in the Journal of Neuroscience between Lau and colleagues and Koch and colleagues).

Consciousness, Higher-Order Theories of

I have been asked to write an entry on higher-order theories of consciousness for the Routledge Encyclopedia of Philosophy, which apparently has never had an entry on this! Below is a very (very) rough draft of the entry so far. It really isn’t much more than a first draft and obviously needs a lot of work, but it will give you some idea of the direction I am heading. Any feedback/criticism would be most welcome!

 

  1. Introduction

Higher-order theories of consciousness take a variety of forms, but they are united by the claim that consciousness crucially involves some kind of inner awareness of one’s own mind. Though there are clear historical precedents and inspirations in the work of Aristotle, Descartes, Locke, and Kant, it is not clear which version (if any) of higher-order theory these historical figures held. These thinkers do seem committed to the idea that consciousness requires some kind of inner awareness, but higher-order theories were most clearly formulated in contemporary philosophy of mind. This entry will focus on contemporary developments.

  2. The Higher-Order Approach to Consciousness

When giving a theory of consciousness one must first delineate the target phenomenon, especially when pursuing something as ambiguous as ‘consciousness’.

We say of creatures that they are conscious or unconscious, that they are awake or asleep, etc. This has been called creature consciousness (Rosenthal). It can be contrasted with what is often called state consciousness, which marks the contrast between a particular mental state being conscious versus unconscious (as in subliminal perception).

Phenomenal consciousness captures the subjective ‘what it is like’ component of consciousness. When we taste chocolate, see red, or experience pain, hunger, or anger, there is something that it is like for us to have those experiences. The specific way that it is for us to have those experiences consists in various phenomenal properties (Chalmers).

Higher-order theories are often cast as theories of state consciousness. That is, they are often aimed at explaining what the difference is between a state which is conscious and a state which is unconscious. The higher-order strategy is to appeal to the inner awareness that we have of our own mental lives. A conscious state, on this approach, consists in my being aware of myself as being in that state.

Some higher-order theorists go so far as to deny that phenomenal consciousness exists (Rosenthal). However, there is a natural way to connect these two notions of consciousness. When one is in a mental state that one is in no way at all aware of being in, there is nothing that it is like for one. For example, when subliminally presented with a red strawberry, so that one denies seeing it, it is natural to say that there is nothing it is like for one to see red. It is also natural to say that the state which represents the strawberry and its redness is unconscious. The converse of this is that when there is something that it is like for one to see the red strawberry, one is in some way aware of oneself as being in the state that represents it. Thus when a state is conscious there is something that it is like for one to be in that state. This is the way in which these terms will be used in this entry.

Construed in this way, higher-order theories of consciousness aim to explain phenomenal consciousness, which, on this usage, is the same as explaining state consciousness. Traditionally we recognize two ways of becoming aware of things in our environment: by perceiving and by thinking. First-order theories argue that phenomenal consciousness can be understood by appeal to awareness of the world. Higher-order theories argue that these first-order states are not enough: in addition to an awareness of things, properties, and facts about the world, we must also have an awareness of our outer-directed awareness. This inner awareness is higher-order in that it is an awareness of something mental rather than of something in the environment or the animal’s body.

  3. Higher-Order Thought Theories

Classical higher-order theories often appealed to inner sense or inner perception as a way to capture inner awareness (Armstrong; Lycan). But this kind of view has faced difficulties which have rendered it all but obsolete. First, we have no reason to posit higher-order mental qualities (Rosenthal). In addition, we have not discovered any kind of inner sense (Lycan and Sauret).

Since we can also become aware of things by having appropriate thoughts about them, higher-order thought theories appeal to intentional, thought-like states to explain the way in which we are aware of our mental lives.

Perhaps the earliest explicit version of this kind of theory is that of David Rosenthal. On his view we become conscious of our first-order mental states by having a thought to the effect that we are occurrently in those states. This thought must have assertoric force and indicate that the relevant mental qualities are currently present.

Higher-order thought theories themselves come in many different varieties, each positing a different structure. What unites them is the postulation of two levels of content in the mind. The first level of content represents the environment; the second, higher-order level represents the first.

One model of the relation between these, which I will call the Relational Model (RM), is as follows. One starts with an unconscious mental state and then one adds a higher-order representation of that state, which results in the first-order state becoming conscious. The consciousness of the first-order state is explained, on this model, by the relation (the awareness relation) that holds between the first-order state and the higher-order state. The first-order state is conscious because you are aware of it. On this way of thinking the higher-order state is a distinct mental representation.

Some have felt that this is unsatisfactory because my awareness of non-mental items like rocks does not result in the rocks becoming conscious (Goldman). RM theorists have responded that theirs is a theory of mental state consciousness and so does not apply to rocks: on RM, only mental states are candidates for being made conscious in the first place. Whatever the merits of this response, there is an additional well-known objection based on the possibility of misrepresentation. Since RM claims that there are two distinct states, one may misrepresent the other. So, if one has a first-order state representing a red tomato in the environment but a higher-order state that represents one as seeing a green tomato, what is it like for the individual in question (Lycan)? According to RM it is the first-order representation of red that is conscious, but it is also the case that the higher-order state determines what it is like for you. This suggests that there are deep problems with RM (Block).

Because of this some have moved to what I will call the Joint-Determination Model (JDM). On this model the first-order state is postulated not to be a distinct mental state but rather to be part of the conscious state itself. JDM posits that there is one state with two contents: part of the content is first-order and part is higher-order. JDM comes in different varieties (Kriegel, Gennaro, Lau). One major difference between these models is whether the higher-order content itself employs conceptual content (Kriegel, Lau). Some versions, which I will call Same-Order Models (SOM), claim that the higher-order content is itself conceptual and then seek to rule out misrepresentation worries by putting restrictions on the kind of higher-order content that results in a conscious mental state. Gennaro is the most vigorous defender of this kind of view. On his account a conscious mental state results only when there is a (full or partial) conceptual match between first-order and higher-order states, or when the first-order content is more specific than the higher-order content, or when the higher-order content is more specific than the first-order content, or when the higher-order concepts can combine to match the first-order representations (2012, p. 179). All of these provisos are arrived at so as to block the claim that there can be a conscious mental state in cases of mismatched content between higher-order and first-order states. However, they seem ad hoc. When the cases are examined in detail it seems straightforwardly the case that the higher-order content determines what it is like for one. Why wouldn’t it be that way for cases of radical misrepresentation as well?

Other versions of JDM, which I will call Split-Level Models (SLM), deny that the higher-order state is itself conceptual in this way (Lau, Lau and Brown). On these versions the higher-order state is some kind of ‘mere’ pointer, which points to the relevant first-order state. The content of the conscious state is given by the content of the first-order state, but that it is a conscious experience at all is due to the higher-order state. In its most recent iteration the higher-order state ‘toggles’ between three settings, indicating that the first-order state is veridical, held in working memory, or just noise. SLM is distinct from the other versions of JDM because of what the theory claims happens in radical misrepresentation. On SOM, when one has the higher-order representation and no first-order target at all, there is no conscious experience at all. On SLM one will have some kind of conscious experience, but it will not be specific. That is to say, on SLM the higher-order state will indicate that one is veridically perceiving something, but if one has no relevant first-order state then there will be no content to the experience other than that one is veridically perceiving something. When one goes to report what it is, one will fail.

This extravagant disjunctive theory has been resisted by those who endorse what I will call the Non-Relational Model (NRM). NRM rejects the claim that the first-order state is made conscious by the higher-order state (Rosenthal, Brown). On NRM it is the higher-order state itself that accounts for conscious experience. There is some disagreement among those who endorse this model as to which state is the conscious state. Rosenthal has suggested that it is the notional state that becomes conscious (Rosenthal, Weisberg). Berger has suggested that it is the individual that becomes conscious and not the state at all (Berger). Brown has suggested that it is the higher-order state itself that is phenomenally conscious (Brown).

  4. Still Further Varieties of Higher-Order Theory

In addition to these kinds of theories there are non-traditional ways to account for the inner awareness that many think is a crucial part of phenomenal consciousness.

On the one hand are those theories that explicitly seek to find some non-traditional form of inner awareness. On the other hand are those that deny this and yet end up appealing to something like inner awareness.

Lycan has recently argued that his version of higher-order perception is really a version of the attention hypothesis. In his paper with Wesley Sauret he argues that attention is one of the ways in which we can become aware of things. On this view attention makes us aware of our mental states, but it does so not by representing the states in question. They appeal to analogies like a funnel or sieve: a funnel directs something, a fluid say, towards a target, but not by representing what is being directed. As these authors recognize, work remains to be done to explain exactly what the relation is; they suggest that it may be some kind of acquaintance.

In a similar vein, other theorists have adopted some kind of ‘inner acquaintance’ view (Hellie). Hellie presents a version of higher-order acquaintance as a non-intentional relation of awareness to one’s first-order qualitative states. Chalmers has also endorsed a non-reductive, non-physical version of higher-order acquaintance. On Chalmers’ view, to be aware of x is also, by the very nature of phenomenal awareness, to be acquainted with one’s awareness of x (Chalmers). This may be a (non-reductive, non-physical) version of SOM above.

There have also been philosophers who have sought to implement inner awareness via a quotational model (Coleman, Picciuto). On Coleman’s model one quotes a quality and thereby becomes conscious of it. This view requires that the mental quality is already primitively red and fundamental (Coleman endorses panqualityism). The quotation of that red quality makes it a phenomenally conscious experience. On Picciuto’s view one forms a phenomenal concept of the relevant mental quality. As Picciuto formulates it, the mental quality does not have an intrinsic redness to it but becomes qualitatively red once one quotes it.

Finally there are those who seek radically non-traditional ways. For example, Ned Block has agreed that some kind of inner awareness is necessary for phenomenally conscious experiences (Block). He denies that this kind of inner awareness is any kind of cognitive awareness. He has suggested that it may be a deflationary kind of awareness: much as I walk my own walk or smile my own smiles, so too am I aware of my own phenomenally conscious states. This kind of deflationary move seems to count every mental state as phenomenally conscious. On the other hand Block has suggested that some kind of same-order awareness may do the trick (i.e. a version of SOM). However, it is unclear how this notion of non-cognitive awareness differs from any of the models canvassed above. Perhaps Block will ultimately settle on something like JDM, but if so the relevant notion of awareness would seem to be cognitive after all. Or perhaps he will ultimately settle on something like acquaintance, but then that needs to be spelled out.

Gottlieb and D’Aloisio-Montilla on Brown on Phenomenological Overflow

Last year I started to try to take note of papers that engage with my work in some way (previous posts here, here, here, here, here, here, and here). The hope was to get some thoughts down as a reference point for future paper writing. So far not much in that department has been happening; with a 3 year old and a 1 month old it is tough to find time to write (understatement!), but I am hoping I can “normalize” my schedule in the next few weeks and try to get some projects off of the back burner. At any rate I have belatedly noticed a couple of papers that came out and thought I would quickly jot down some notes.

The first paper is one by Joseph Gottlieb and came out in Philosophical Studies in October of 2017. It is called The Collapse Argument and makes the argument that all of the currently available mentalistic first-order theories of consciousness turn out to really be versions of the higher-order theory of consciousness. I don’t know Joseph IRL (haha) but we have emailed about his papers several times, though I usually get back to him too late for it to matter on account of the 16 classes a year I have been teaching since 2015 (for anyone who cares: I am contractually obligated to teach 9 a year and in addition I teach another 7 as an adjunct (the maximum allowed by my contract)…sadly this is what is required in order for my family to live in New York!) and I have blogged about his work here before (linked to above), but I really, really like this paper of his. First, I obviously agree with his conclusion and it is nice to see some discussion of this issue. I took some loose steps in this direction myself in the talk I gave at the Graduate Center’s Cognitive Science Speaker Series back in 2015. I thought about writing it up but then had my first son and then found out about Joseph’s paper, which is better than what I could have come up with anyway! I suppose the only place we might disagree is that I think this applies to Block’s first-order theory as well.

But even though I really like the paper there is a bit I would quibble about (but not very much). Gottlieb seems to take seriously my argument that higher-order theories are in principle compatible with phenomenological overflow but I am not sure I agree with how he puts it. He says,

As Richard Brown (2014) points out, HO theorists don’t need to claim that we are aware of our conscious states in all their respects. I might be aware that I am seeing letters (a fairly generic property) but not the identity of every letter I am seeing. In other words, I can be unaware of some of the information represented by the first- order state without the state itself being unconscious (ibid). What happens, then, is: I am phenomenally conscious of the entire 3 X  4 array, with representations of the identities of all the letters available prior to cuing. But only a small number (usually around four) ever get through, accessed by working memory. That’s overflow, and perfectly consistent with HO theory.

In the paper he is citing I was trying to make the point that the higher-order theories which deny overflow do not thereby also commit themselves to the existence of unconscious *states* which are doing heavy lifting. If the states are targeted by the appropriate higher-order representation then those states are conscious. Yet one may not represent all of the properties of the state and so, even though the state is conscious, there is information encoded in the state which you are not aware of (and so is unconscious). That unconscious information (that is to say, that aspect of the conscious state) is (presumably) what you come to be aware of when you get the cue in the relevant experiments. So it is a bit strange to see this part of the paper cited as supporting overflow (though I do think the position is compatible with overflow, I wasn’t thinking of it in this way). But I think I see his point. On the higher-order view it will be true to say that one has a phenomenally conscious experience of all of the letters and the details but only accesses a few (even though what it is like for one may not have all of the details, which is really what I think the overflow people mean to be saying).

This point, though, is I think the key difference between higher-order theories and Global Workspace theories (which is what Block is really targeting with his argument). The basic idea behind the higher-order approach is this. When one is presented with the stimulus all or most of the details of the stimulus are encoded in first-order visual states (that is, states which represent the details of the visual scene). Let’s call the sum-total representational state S. S represents all (or most) of the letters and their specific identities. One can have S without being aware that one is in S. In this case S is unconscious. Now suppose that one comes to have a (suitable) higher-order awareness that one is in S. According to the higher-order theory of consciousness one thereby comes to have a phenomenally conscious experience of S and becomes consciously aware of what S represents. But since one’s higher-order awareness is (on the theory) a cognitive thought-like state, it will describe its target. Thus one can be aware of S in different ways. Suppose that one is aware of S merely as a clock-like formation of rectangles. Then what it is like for one will be like seeing a clock-like formation of rectangles. Being aware of S seems to keep S online, and as one is cued one may come to have a different higher-order awareness of S. One may become aware of some of the details already encoded in S. One was already aware of them, in a generic way, but now one comes to be aware of the same details just in more detail. Put more in terms of the higher-order theory, one’s higher-order thought(s) come to have a different content than they previously did. The first higher-order state represented you as merely seeing a bunch of rectangles and now you have a state that represents you as seeing a bunch of rectangles where the five-o’clock position is occupied by a horizontal bar (or whatever).
Notice that in this way of thinking about the case there are no unconscious states (except the higher-order ones). S is conscious throughout (just in different respects) and it will be true that subjects consciously see all of the letters (just not all of the details).

I want to keep this in mind as I turn to the second paper, but before I do I should note that I also like Gottlieb’s paper because it actually references this blog! I think this may be the first time my personal blog has been cited in a philosophy journal! I will have more to say about that at some point but for now: cool!

The second paper is by Nicholas D’Aloisio-Montilla and came out in Ratio in December 2017. It is called A Brief Argument for Consciousness without Access. This paper is very interesting and I am glad I became aware of it and of D’Aloisio-Montilla’s work in general. He is trying to develop a case for phenomenological overflow based on empirical work on aphantasics. These are people who report lacking the ability to form mental imagery. I have to admit that I think of myself this way (with the exception of auditory imagery) so I find this very interesting. But at any rate the basic point seems to be that there is no correlation between one’s ability to form mental imagery (as measured in various ways) and one’s ability to perform the Sperling-like tasks under discussion in the overflow debate. His basic argument is that if you deny phenomenological overflow then you must think that unconscious representations are the basis of subjects’ abilities. Further, if that is the case then it must be because subjects form a (delayed) mental image of the original (unconscious) representation. But there is evidence that subjects don’t form mental images and so evidence that we should not deny overflow.

I disagree with the conclusion but it is nice to see this very interesting argument and I hope it gets some attention. Even so, I think there is some mis-characterization of my view related to what we have just been talking about in Gottlieb’s paper. D’Aloisio-Montilla begins by setting the problem up in the following way,

The reports of subjects [in Sperling-like tasks] imply that their phenomenology (i.e. conscious experience) of the grid is rich enough to include the identities of letters that are not reported (Block, 2011, p.1; Landman et al., 2003; cf. Phillips, 2011b). As Sperling (1960, p.1) notes, they ‘enigmatically insist that they have seen more than they can … report afterwards’. Introspection therefore suggests that subjects consciously perceive almost all 12 items of the grid, even if they are limited to accessing the contents of just one row (Block 2011; Carruthers, 2015). The ‘overflow’ argument uses this phenomenon as evidence in favor of the claim that the capacity of consciousness outstrips that of access. Overflow theorists maintain that almost all items of the grid are consciously represented by perceptual and iconic representations (D’Aloisio-Montilla, 2017; Block, 1995, 2007, 2011, 2014; Bronfman et al., 2014; for further discussion, see Burge, 2007; Dretske, 2006; Tye, 2006).

This is a nice statement of the overflow argument and the claim that it is the specific identities of the items of the grid which are consciously experienced, but this way of framing the argument begs the question against the higher-order interpretation. The reports in question do not imply rich phenomenology because, as we have just discussed, subjects are correct that they have consciously seen all of the letters even if they are wrong that they consciously experienced the details. Because of this the higher-order no-overflow theorist can accept that there is no correlation between mental imagery ability and Sperling-like task performance, and for pretty much the same reason the first-order theorist does: because there is a persisting conscious experience.

D’Aloisio-Montilla then goes on to give two objections to his interpretation of my account. He puts it this way,

A final way out for the no-overflow theorist might be to allow for a limited phenomenology of the cued item to occur without visual imagery (Brown, 2012, 2014; Carruthers, 2015). Brown (2012, p. 3) suggests that subjects can form a ‘generic’ experience of the memory array’s items while the array is visible, since attention can be thinly distributed to bring fragments of almost all items to both phenomenal and access consciousness. Phenomenology, for example, might include the fact that ‘there is a formation of rectangles in front of me’ without specifying the orientation of each rectangle (Block, 2014). However, there are still a number of problems with an appeal to generic phenomenology. First, subjects report no shift in the precision of their conscious experience when they are cued to a subset of items that they subsequently access (Block, 2007; Block, 2011).

First, I would point out that my goal has always been to show that the higher-order theory of consciousness is a.) compatible with the existence of overflow but also b.) compatible with no-overflow views, while giving a different account of this than Global Workspace Theories (or other working memory-based views). So I am not necessarily a ‘no-overflow theorist’, though I am someone who thinks that i.) overflow has not been established but rather assumed to exist and ii.) even if there is overflow it is mostly an argument against a particular version of the Global Workspace theory of consciousness, not against cognitive theories of consciousness in general.

But ok, what about his actual argument? I hope it is clear from what we have said above that one would not expect subjects to report ‘a shift in precision’ of their phenomenology. One has a conscious experience (generic or vague in certain respects) but in so doing one helps to maintain the first-order (detailed) state. When you get the cue you focus on the aspect of the state which you had only generically been aware of (by coming to have a higher-order awareness with a different content), but what it is like for you is just like seeming to see all of the details and then focusing in on some of them. No change in precision. But even so, these appeals to subjects’ reports are all a bit suspect. I use the Sperling stimulus in my classes every semester as a demo of iconic memory and an illustration of how philosophical issues connect to empirical ones, and my students seem to be mixed on whether they think they “see all of the letters”. Granted, we only do 10-20 trials in the classroom and not in the lab (in Sperling they did thousands of trials) and these are super informal reports made orally in the classroom…but I still think there is an issue here. I have long wanted there to be some experimental philosophy done on this question. It would be nice to see someone replicate Sperling’s results but also include some qualitative comments from subjects about their experience. I almost tried to get this going with Wesley Buckwalter years ago but it didn’t go through. I still think someone should do this and that the results would be useful in this debate.

D’Aloisio-Montilla goes on to say,

Second, subjects are still capable of generating a ‘specific’ image – that is, a visual image with specific content – when the cue is presented. Assuming that the cued item is generically conscious on the cue’s onset, imagery would necessarily be implicated in maintaining any persisting consciousness of the cued item (whether gist-like or specific) throughout the blank interval. Thus, we can still expect to see a correlation between imagery abilities and task performance, because subjects can generate either (1) a visual image with specific phenomenology, or (2) a visual image with generic phenomenology (Phillips, 2011a; Brown, 2014). In any case, subjects who generate a specific phenomenology of the cued item should perform better than those who rely solely on a gist-like experience, and so Brown’s interpretation is also called into question.

But again this seems to miss the point of the kind of no-overflow account the higher-order thought theory of consciousness delivers. It is not committed to mental imagery as a solution. Subjects have a persisting conscious experience, which may be less detailed than it seems to them to be.

Sheesh, that is a lot and I am sure there is a lot more to say about it, but nap time is over and I have to go and play Dinosaur now.

Papa don’t Teach (again!)


The Brown Boys

2018 is off to an eventful start in the Brown household. My wife and I have just welcomed our newborn son Caden (pictured to the right with his older brother Ryland and myself) and I will soon be going on parental leave until the end of April. For various reasons I had to finish the last two weeks of the short Winter semester after Caden was born (difficult!). That is all wrapped up now and there is just one thing left to do before officially clocking out.

Today I will be co-teaching a class with Joseph LeDoux at NYU. Joe is teaching a course on The Emotional Brain and he asked me to come in to discuss issues related to our recent paper. I initially recorded the presentation below to get a feel for how long it was (I went a bit overboard, I think) but I figured once it was done I would post it. The animations didn’t work out (I used PowerPoint instead of Keynote), I lost some of the pictures, and I was heavily rushed and sleep-deprived (plus I seem to talk very slowly when I listen back to it), but at any rate any feedback is appreciated. Since this was to be presented to a neuroscience class I tried to emphasize some of the points made recently by Hakwan Lau at his blog.

Ian Phillips on Simple Seeing

A couple of weeks ago I attended Ian Phillips’ CogSci talk at CUNY. Things have been hectic but I wanted to get down a couple of notes before I forget.

He began by reviewing change blindness and inattentional blindness. In both of these phenomena subjects sometimes fail to notice (or report) changes that occur right in front of their faces. These cases can be interpreted in two distinct ways. On one interpretation one is conscious only of what one is able to report on, or attend to. So if there is a doorway in the background that is flickering in and out of existence as one scans the two pictures looking for a difference, and when asked one says that one sees no difference between the two pictures, then one does not consciously experience the doorway or its absence. This is often dubbed the ‘sparse’ view, and it is interpreted as the claim that conscious perception contains a lot less detail than we naively assume.

Fred Dretske was a well known defender of a view which distinguishes two components of seeing. There is what he called 'epistemic seeing' which, when a subject sees that p, "ascribes visually based knowledge (and so a belief) to [the subject]". This was opposed to 'simple seeing' which "requires no knowledge or belief about the object seen" (all quoted material is from Phillips' handout). This 'simple seeing' is phenomenally conscious but the subject fails to know that they have that conscious experience.

This debate is well known and has been around for a while. In the form I am familiar with, it is a debate between first-order and higher-order theories of consciousness. If one is able to have a phenomenally conscious experience in the absence of any kind of belief about that state, then the higher-order thought theory, on which a higher-order cognitive state about the first-order state is required for conscious perception to occur, is false. The response developed by Rosenthal, and one that I find pretty plausible, is that in change blindness cases the subject may be consciously experiencing the changing element but not conceptualizing it as the thing which is changing. This, to me, is just a higher-order version of the kinds of claims that Dretske is making, which is to say that this is not a 'sparse' view. Conscious perception can be as rich and detailed as one likes, and this does not require 'simple seeing'. Of course, the higher-order view is also compatible with the claim that conscious experience is sparse, but that is another story.

At any rate, Phillips was not concerned with this debate. He was more concerned with the arguments that Dretske gave for simple seeing. He went through three of Dretske's arguments and argued that each one had an easy rejoinder from the sparse camp (or the higher-order camp). The first he called 'conditions': when someone looks at (say) two pictures for 3-5 minutes, scanning every detail to see if there is any difference between them, we would ordinarily say that they have seen everything in the two pictures. I mean, they were looking right at it and their eyes are not defective! The problem with this line of argument is that it does not rule out the claim that they unconsciously saw the objects in question. The next argument, from blocking, meets the same objection. Dretske claims that if you are looking for your friend and no one is standing in front of them blocking them from your sight, then we can say that you did see your friend even if you deny it. The third argument involves the claim that when searching the crowd for your friend you saw that no one was naked. But this meets a similar objection to the previous two arguments. One could easily not have (consciously) seen one's friend and just inferred that since you didn't see anyone naked your friend was not naked either.

Phillips then went on to offer a different way of interpreting simple seeing based on signal detection theory. The basic intuition for simple seeing, as Phillips sees it, lies in the idea that the visual system delivers information to us, and then there is what we do with the information. The basic metaphor was a letter being delivered. The delivery of the letter (the placing of it into the mailbox) is one thing; you getting the letter and understanding its contents is another. Simple seeing can then be thought of as the informative part, and the cognitive noticing, attending, higher-order thought, etc., can be thought of as a second independent stage. Signal detection theory, on his view, offers a way to capture this distinction.

Signal detection theory starts by treating the subject as an information channel. It then goes on to quantify this, usually by having the subject perform a yes/no task and then looking at how many times they got it right (hits) versus how many times they got it wrong (false alarms). False alarms, specifically, involve the subject saying they see something but being wrong about it, because there was no visual stimulus. This is distinguished from 'misses', where there was a target but the subject did not report it. The 'sensitivity to the world' is called d', pronounced "d prime". On top of this another value is computed, called 'c'. c, for criterion, is thought of as measuring a bias in the subject's responses and is typically computed from the average of the (transformed) hit and false alarm rates. One can think of the criterion as giving you a sense of how 'liberal' or 'conservative' the subject's responses are. If they say they saw something all the time then they seemingly have a very liberal criterion for determining whether they saw something (that is to say, they are biased towards saying 'yes I saw it' and are presumably mistaking noise for a signal). If they never say they saw it then they are very conservative (they are biased towards saying 'no I didn't see it'). This gives us a sense of how much of the noise in the system the subject treats as actually carrying information.
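For readers who like to see the arithmetic, here is a minimal sketch of the standard d' and c computations under the usual equal-variance Gaussian model, using only the Python standard library. The hit and false-alarm counts are invented for illustration; nothing here comes from Phillips' talk.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF (the "z-transform")

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from a yes/no detection experiment."""
    hit_rate = hits / (hits + misses)                             # P("yes" | target present)
    fa_rate = false_alarms / (false_alarms + correct_rejections)  # P("yes" | target absent)
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity to the world
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: negative = liberal, positive = conservative
    return d_prime, criterion

# A fairly sensitive but somewhat liberal observer: many hits, but also many false alarms.
d, c = sdt_measures(hits=80, misses=20, false_alarms=30, correct_rejections=70)
print(round(d, 2), round(c, 2))  # prints: 1.37 -0.16
```

The slightly negative criterion here captures the 'liberal' bias described above: this observer says 'yes I saw it' more readily than a neutral observer would, at the cost of extra false alarms.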

The suggestion made by Phillips was that this distinction could be used to save Dretske's view if one took d' to track simple seeing and c to track the subject's knowledge. He then went on to talk about empirical cases. The first involved memory across saccades and came from Hollingworth and Henderson, Accurate Visual Memory for Previously Attended Objects in Natural Scenes, the second from Mitroff and Levin, Nothing Compares 2 Views: Change Blindness can occur despite preserved access to the changed information, and the third from Ward and Scholl, Inattentional blindness reflects limitations on perception, not memory. Each of these can be taken to suggest that there is "evidence of significant underlying sensitivity in [change blindness] and [inattentional blindness]".

He concluded by talking about blindsight as a possible objection. Dretske wanted to avoid treating blindsight as a case of simple seeing (that is, of there being phenomenal consciousness that the subject was unaware, in any cognitive sense, of having). Dretske proposed that what was missing was the availability of the relevant information to act as a justifying reason for the subject's actions. Phillips then went on to suggest various responses to this line of argument. Perhaps blindsight subjects who do not act on the relevant information (say, by not grabbing the glass of water in the area of their scotoma) are having the relevant visual experience but are simply unwilling to move (though how would we distinguish this from their not having the relevant visual experience?). Perhaps blindsight patients can be thought of as adjusting their criterion, and so as choosing the interval with the strongest response, and if so this can be thought of as reason responsive. Finally, perhaps, even though they are guessing, they really can be thought of as knowing that the stimulus is there?

In discussion afterwards I asked whether he thought this line of argument was susceptible to the same criticism he had leveled against Dretske's original arguments. One could interpret d' as tracking conscious visual processing that the subject doesn't know about, or one could interpret it as tracking the amount of information represented by the subject's mental states independently (at least to some extent) of what the subject was consciously experiencing. So, one might think, d' is good, so the subject represents information about the stimulus that is able to guide their behavior; but that may be going on while the subject is conscious of some of it but not all of it, or of different aspects of it, etc. So there is no real reason to think of d' as tracking simple (i.e. unconceptualized, unnoticed, uncategorized, etc.) content that is conscious as opposed to non-conscious. He responded that he did not think that this constituted an argument. Rather, he was trying to offer a model that captured what he took to be Dretske's basic intuition, which was that there was the information represented by the visual system, which was conscious, and then there was the way that we were aware of that information. This view was sometimes cast as unscientific, and he thought of the signal detection material as providing a framework that, if interpreted in the way he suggested, could capture, and thus make scientifically acceptable, something like what Dretske (and other first-order theorists) want.

There was a lot of good discussion, much of which I don't remember, but I do remember Ned Block asking about Phillips' response to cases like the famous Dretske example of a wall, painted a certain color, with a piece of wallpaper in one spot. The little square of wallpaper has been painted and so is the same color as the wall. If one is looking at the wall and doesn't see that there is a piece of wallpaper there, does one see (in the simple seeing kind of way) the wallpaper? Phillips seemed to be saying we do (but don't know it), and Block asked whether it wasn't the case that when we see something we represent it visually; Phillips responded by saying that on the kind of view he was suggesting that wasn't the case. Block didn't follow up and didn't come out after, so I didn't get the chance to pursue that interesting exchange.

Afterwards I pressed him on the issue I raised. I wondered what he thought about the kinds of cases, discussed by Hakwan Lau (and myself), where d' is matched but subjects give differing answers to questions like 'how confident are you that you saw it?' or 'rate the visibility of the thing seen'. In those cases we have, due to matched d', the same information content (worldly sensitivity), and yet one subject says they are guessing while the other says they are confident they saw it (or one rates its visibility lower while the other rates it higher, that is, as more visible). Taking this seriously seems to suggest that there is a difference in what it is like for these subjects (a difference in phenomenal consciousness) while there is no difference in what they represent about the world (so at the first-order level). The difference in what it is like for them seems to track the way in which they are aware of the first-order information (as tracked by their visibility/confidence ratings). If so, then this suggests that d' doesn't track phenomenal consciousness. Phillips responded by suggesting that there may be a way to talk about simple seeing involving differences in what it is like for the subject, but didn't elaborate.
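The matched-d' point can be made concrete with a toy computation (my own illustration, not from the talk, with invented numbers): under the same equal-variance Gaussian model as before, two observers with identical sensitivity but different criteria will report 'yes I saw it' at very different rates, even though the first-order information is the same.

```python
from statistics import NormalDist

phi = NormalDist().cdf    # standard normal CDF
z = NormalDist().inv_cdf  # its inverse

def rates(d_prime, criterion):
    """Hit and false-alarm rates implied by a given d' and criterion c."""
    k = criterion + d_prime / 2        # absolute position of the decision cut
    return phi(d_prime - k), phi(-k)   # (hit rate, false-alarm rate)

for c in (0.0, 0.8):  # a neutral observer and a conservative one
    h, fa = rates(d_prime=1.5, criterion=c)
    recovered = z(h) - z(fa)           # d' recovered back from the rates
    print(f"c={c}: hits={h:.2f}, false alarms={fa:.2f}, d'={recovered:.2f}")
```

Both observers come out with d' = 1.5, but the conservative one says 'yes' far less often; if the conservative observer also reports guessing, the difference between them lies in how they are aware of the same first-order information, not in the information itself.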

I am still not sure how he responds to the argument Hakwan and I have given. If there is differing conscious experience with the same first-order states in each case, then the difference in conscious experience can only be captured (or is best captured) by some kind of difference in our (higher-order) awareness of those first-order states.

In addition, now that I have thought about it a bit, I wonder how he would respond to Hakwan's argument (stemming more from his own version of higher-order thought theory) that the setting of the criterion, in Phillips' appeal to it in blindsight cases, depends on a higher-order process and so amounts to a cognitive state having a constitutive role in determining how the first-order state is experienced. This suggests that an 'austere' notion of simple seeing, on which no cognitive states are involved in phenomenal consciousness, is harder to find than Phillips originally thought.