Ian Phillips on Simple Seeing

A couple of weeks ago I attended Ian Phillips’ CogSci talk at CUNY. Things have been hectic but I wanted to get down a couple of notes before I forget.

He began by reviewing change blindness and inattentional blindness. In both of these phenomena subjects sometimes fail to recognize (or report) changes that occur right in front of their faces. These cases can be interpreted in two distinct ways. On one interpretation one is conscious only of what one is able to report on, or attend to. So if there is a doorway in the background that is flickering in and out of existence as one searches the two pictures looking for a difference, and when asked one says that one sees no difference between the two pictures, then one does not consciously experience the doorway or its absence. This is often dubbed the ‘sparse’ view and it is interpreted as the claim that conscious perception contains a lot less detail than we naively assume.

Fred Dretske was a well known defender of a view which distinguishes two components of seeing. There is what he called ‘epistemic seeing’ which, when a subject sees that p, “ascribes visually based knowledge (and so a belief) to [the subject]”. This was opposed to ‘simple seeing’ which “requires no knowledge or belief about the object seen” (all quoted material is from Phillips’ handout). This ‘simple seeing’ is phenomenally conscious but the subject fails to know that they have that conscious experience.

This debate is well known and has been around for a while. In the form I am familiar with, it is a debate between first-order and higher-order theories of consciousness. If one is able to have a phenomenally conscious experience in the absence of any kind of belief about that state then the higher-order thought theory, on which consciousness requires a kind of higher-order cognitive state about the first-order state for conscious perception to occur, is false. The response developed by Rosenthal, and that I find pretty plausible, is that in change blindness cases the subject may be consciously experiencing the changing element but not conceptualize it as the thing which is changing. This, to me, is just a higher-order version of the kinds of claims that Dretske is making, which is to say that this is not a ‘sparse’ view. Conscious perception can be as rich and detailed as one likes and this does not require ‘simple seeing’. Of course, the higher-order view is also compatible with the claim that conscious experience is sparse but that is another story.

At any rate, Phillips was not concerned with this debate. He was more concerned with the arguments that Dretske gave for simple seeing. He went through three of Dretske’s arguments and argued that each one had an easy rejoinder from the sparse camp (or the higher-order camp). The first he called ‘conditions’ and involved the claim that when someone looks at (say) two pictures for 3-5 minutes, scanning every detail to see if there is any difference between them, we would ordinarily say that they have seen everything in the two pictures. I mean, they were looking right at it and their eyes are not defective! The problem with this line of argument is that it does not rule out the claim that they unconsciously saw the objects in question. The next argument, from blocking, meets the same objection. Dretske claims that if you are looking for your friend and no-one is standing in front of them blocking them from your sight, then we can say that you did see your friend even if you deny it. The third argument involved the claim that when searching the crowd for your friend you saw that no-one was naked. But this meets a similar objection to the previous two arguments. One could easily not have (consciously) seen one’s friend and just inferred that since you didn’t see anyone naked your friend was not naked either.

Phillips then went on to offer a different way of interpreting simple seeing based on signal detection theory. The basic intuition for simple seeing, as Phillips sees it, lies in the idea that the visual system delivers information to us and then there is what we do with the information. The basic metaphor was a letter being delivered. The delivery of the letter (the placing of it into the mailbox) is one thing; you getting the letter and understanding the contents is another. Simple seeing can then be thought of as the informative part, and the cognitive noticing, attending, higher-order thought, etc., can be thought of as a second, independent stage. Signal detection theory, on his view, offers a way to capture this distinction.

Signal detection theory starts by treating the subject as an information channel. It then quantifies this, usually by having the subject perform a yes/no detection task and counting how often they correctly report a target (hits) versus how often they report a target when there wasn’t one (false alarms). A false alarm, specifically, involves the subject saying they see something but being wrong about it, because there was no visual stimulus. This is distinguished from ‘misses’, where there was a target but the subject did not report it. The subject’s sensitivity to the world is called d’, pronounced “d prime”. On top of this there is another value which is computed, called ‘c’. c, for criterion, is thought of as measuring a bias in the subject’s responses and is typically computed from the hit and false-alarm rates. One can think of the criterion as giving you a sense of how ‘liberal’ or ‘conservative’ the subject’s responding is. If they say they saw something nearly all the time then they seemingly have a very liberal criterion for determining whether they saw something (that is to say, they are biased towards saying ‘yes I saw it’ and are presumably mistaking noise for a signal). If they almost never say they saw it then they are very conservative (they are biased towards saying ‘no I didn’t see it’). This gives us a sense of how much of the noise in the system the subject treats as actually carrying information.
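To make the two quantities concrete, here is a minimal sketch of the standard equal-variance computation of d’ and c from yes/no trial counts. The trial numbers are made up for illustration; real analyses also correct for hit or false-alarm rates of exactly 0 or 1, which is omitted here.

```python
from statistics import NormalDist

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from yes/no trial counts,
    under the standard equal-variance Gaussian model."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)      # sensitivity to the world
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # bias: negative = liberal, positive = conservative
    return d_prime, c

# A fairly liberal observer: many hits, but quite a few false alarms too.
d, c = dprime_and_c(hits=90, misses=10, false_alarms=40, correct_rejections=60)
```

Here d comes out well above zero (some real sensitivity) while c comes out negative (a bias towards saying ‘yes I saw it’).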

The suggestion made by Phillips was that this distinction could be used to save Dretske’s view if one took d’ to track simple seeing and c to track the subject’s knowledge. He then went on to talk about empirical cases. The first involved memory across saccades and came from Hollingworth and Henderson, Accurate Visual Memory for Previously Attended Objects in Natural Scenes; the second from Mitroff and Levin, Nothing Compares 2 Views: Change Blindness Can Occur Despite Preserved Access to the Changed Information; and the third from Ward and Scholl, Inattentional Blindness Reflects Limitations on Perception, Not Memory. Each of these can be taken to suggest that there is “evidence of significant underlying sensitivity in [change blindness] and [inattentional blindness]”.

He concluded by talking about blindsight as a possible objection. Dretske wanted to avoid treating blindsight as a case of simple seeing (that is, of there being phenomenal consciousness that the subject was unaware, in any cognitive sense, of having). Dretske proposed that what was missing was the availability of the relevant information to act as a justifying reason for the subject’s actions. Phillips then went on to suggest various responses to this line of argument. Perhaps blindsight subjects who do not act on the relevant information (say by not grabbing the glass of water in the area of their scotoma) are having the relevant visual experience but are simply unwilling to move (though how would we distinguish this from their not having the relevant visual experience?). Perhaps blindsight patients can be thought of as adjusting their criterion, and so as choosing the interval with the strongest response, and if so this can be thought of as reason-responsive. Finally, perhaps, even though they are guessing, they really can be thought of as knowing that the stimulus is there?

In discussion afterwards I asked whether he thought this line of argument was susceptible to the same criticism he had leveled against Dretske’s original arguments. One could interpret d’ as tracking conscious visual processing that the subject doesn’t know about, or one could interpret it as tracking the amount of information represented by the subject’s mental states independently (at least to some extent) of what the subject was consciously experiencing. So, one might think, if d’ is good then the subject represents information about the stimulus that is able to guide their behavior, but that may be going on while the subject is conscious of some of it but not all of it, or of different aspects of it, etc. So there is no real reason to think of d’ as tracking simple (i.e. unconceptualized, unnoticed, uncategorized, etc.) content that is conscious as opposed to non-conscious. He responded that he did not think that this constituted an argument. Rather he was trying to offer a model that captured what he took to be Dretske’s basic intuition, which was that there was the information represented by the visual system, which was conscious, and then there was the way that we were aware of that information. This view was sometimes cast as unscientific and he thought of the signal detection material as providing a framework that, if interpreted in the way he suggested, could capture, and thus make scientifically acceptable, something like what Dretske (and other first-order theorists) want.

There was a lot of good discussion, a lot of which I don’t remember, but I do remember Ned Block asking about Phillips’ response to cases like the famous Dretske example of a wall, painted a certain color, having a piece of wallpaper in one spot. The little square of wallpaper has been painted and so is the same color as the wall. If one is looking at the wall and doesn’t see that there is a piece of wallpaper there, does one see (in the simple seeing kind of way) the wallpaper? Phillips seemed to be saying we did (but didn’t know it), and Block asked whether it wasn’t the case that when we see something we represent it visually; Phillips responded by saying that on the kind of view he was suggesting that wasn’t the case. Block didn’t follow up and didn’t come out afterwards so I didn’t get the chance to pursue that interesting exchange.

Afterwards I pressed him on the issue I raised. I wondered what he thought about the kinds of cases, discussed by Hakwan Lau (and myself), where d’ is matched but subjects give differing answers to questions like ‘how confident are you that you saw it?’ or ‘rate the visibility of the thing seen’. In those cases we have, due to matched d’, the same information content (worldly sensitivity), and yet one subject says they are guessing while the other says they are confident they saw it (or one rates its visibility lower while the other rates it higher, i.e. as more visible). Taking this seriously seems to suggest that there is a difference in what it is like for these subjects (a difference in phenomenal consciousness) while there is no difference in what they represent about the world (so no difference at the first-order level). The difference in what it is like for them seems to track the way in which they are aware of the first-order information (as tracked by their visibility/confidence ratings). If so then this suggests that d’ doesn’t track phenomenal consciousness. Phillips responded by suggesting that there may be a way to talk about simple seeing involving differences in what it is like for the subject but didn’t elaborate.
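The matched-d’ point can be made concrete in the same signal detection framework: two observers with identical sensitivity but different criteria give very different ‘yes’ (and, plausibly, confidence) profiles. Here is a toy sketch, assuming the standard equal-variance model with the signal and noise distributions centered at ±d’/2; the particular numbers are illustrative only.

```python
from statistics import NormalDist

norm = NormalDist()

def report_rates(d_prime, c):
    """Hit and false-alarm rates for an equal-variance observer with
    sensitivity d_prime and criterion c (measured from the midpoint)."""
    hit_rate = 1 - norm.cdf(c - d_prime / 2)  # P("yes" | signal present)
    fa_rate = 1 - norm.cdf(c + d_prime / 2)   # P("yes" | signal absent)
    return hit_rate, fa_rate

# Same sensitivity (d' = 1.5), different criteria:
liberal = report_rates(1.5, c=0.0)       # reports seeing things readily
conservative = report_rates(1.5, c=1.0)  # same d', rarely reports seeing
```

Recomputing d’ from either observer’s hit and false-alarm rates recovers the same 1.5, even though the conservative observer says ‘yes’ far less often. That is the sense in which the criterion, not d’, tracks how the subject reports (and perhaps how they are aware of) the very same first-order information.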

I still am not sure how he responds to the argument Hakwan and I have given. If there is differing conscious experience with the same first-order states in each case then the difference in conscious experience can only be captured (or is best captured) by some kind of difference in our (higher-order) awareness of those first-order states.

In addition, now that I have thought about it a bit, I wonder how he would respond to Hakwan’s argument (stemming more from his own version of higher-order thought theory) that the setting of the criterion, which Phillips appeals to in the blindsight cases, depends on a higher-order process and so amounts to a cognitive state having a constitutive role in determining how the first-order state is experienced. This suggests that an ‘austere’ notion of simple seeing on which no cognitive states are involved in phenomenal consciousness is harder to find than Phillips originally thought.

Gottlieb on Presentational Character and Higher-Order Thought Theories of Consciousness

In his paper, Presentational Character and Higher-Order Thoughts, which came out in 2015 in the Journal of Consciousness Studies, Gottlieb presents a general argument against the higher-order theory of consciousness which invokes some of my work as support. His basic idea is that conscious experience has what he calls presentational character, where this is something like the immediate directness with which we experience things in the world.

Nailing down this idea is a bit tricky but we don’t need to be too precise to get the puzzle he wants. He puts it this way in the paper,

Focus on the visual case. Then, fix the concept ‘presentational character’ in purely comparative terms, between visual experiences and occurrent thoughts: ‘presentational character’ picks out that phenomenological quality, whatever it is, that marks the difference between what it is like to be aware of an object O by having an occurrent thought about O and what it is like to be aware of an object O by having a visual experience of O. That is the phenomena I am claiming to be incompatible with the traditional HOT-theoretic explanation of consciousness. And so long as one concedes there is such a difference between thinking about O and visually experiencing O, we should have enough of a fix on our phenomenon of interest.

Whether or not you agree that presentational character, as Gottlieb defines it, is a separate, distinct, component of our overall phenomenology there is clearly a difference between consciously seeing red (a visual experience) and consciously thinking about red (a cognitive experience). If the higher-order theory of consciousness were not able to explain what this difference amounted to we would have to admit a serious deficit in the theory.

But why should we think that the higher-order theory has any problem with this? Gottlieb presents his official argument as follows:

S1 If HOT is true, m* (the HOT) entirely fixes the phenomenal character of experience.

S2 HOTs are thoughts.

S3 Presentational character is a type of phenomenal character.

S4 Thoughts as such do not have presentational character.

S5 HOTs do not have presentational character.

S6 If HOTs do not have presentational character, no experience (on HOT) has presentational character.

P1 If HOT is true, no experience has presentational character.

The rest of the paper goes on to defend the argument from various moves a higher-order theorist may make but I would immediately object to premise S4. There are some thoughts, in particular a specific kind of higher-order thought, which will have presentational character. Or at least these thoughts will be able to explain the difference that Gottlieb claims can’t be explained.

Gottlieb is aware that this is the most contentious premise of his argument. This is where he appeals to the work that I have done trying to connect the cognitive phenomenology debate to the higher-order thought theory of consciousness (this is the topic of some of my earliest posts here at Philosophy Sucks!). In particular he says,

Richard Brown and Pete Mandik (2013) have argued that if HOT is true, we can have (first-order, non-introspected) thoughts with proprietary phenomenology. Suppose one first has a suitable HOT about one’s first-order pain sensation. Here, the pain will become conscious. Yet now suppose one has a suitable HOT about one’s thought that the Eiffel Tower is tall. As Brown and Mandik point out, if we deny cognitive phenomenology, one will then need to say that though the thought is conscious, there is nothing that it is like for this creature to consciously think the thought. But this would be—by the edicts of HOT itself—absurd; after all, the two higher-order states are in every relevant respect the same.

I agree that this is what we say about the traditional higher-order theory (where we take the first-order state to be made conscious by the higher-order state) but I would prefer to put this by saying that if we are talking about phenomenal consciousness (as opposed to mere-state-consciousness) then it would be the higher-order state that was conscious, but other than that this is our basic point. How does it help Gottlieb’s case?

The argument is complicated but it seems to go like this. If we accept the conclusion of the argument from Brown and Mandik then conscious thoughts and visual experiences both have phenomenology, and they have different kinds of phenomenology (i.e. cognitive phenomenology is proprietary). In particular, cognitive phenomenology does not have presentational character. Whatever the phenomenology of thinking is, it is not like seeing the thing in front of you! But now consider the case where you are seeing something red and you introspect that conscious experience. When one introspects, on the traditional higher-order view, one comes to have a third-order thought about the second-order thought. So, in effect, the second-order thought becomes conscious. But we already said that cognitive phenomenology is not the kind of thing that results in presentational character, so when the second-order thought becomes conscious we should be aware of it *as a thought* and so *as the kind of thing which lacks presentational character*; but that would mean that introspection is incompatible with presentational character.

I have had similar issues with Rosenthal’s account of introspection so I am glad that Gottlieb is drawing attention to this issue. I have also explored his recommended solution of having the first-order state contribute something to the content of the higher-order state (here, and in my work with Hakwan).

I also have a talk and a draft of a paper devoted to exploring alternative accounts of introspection from the higher-order perspective. I put it up on Academia.edu but that was before I fully realized that I am not much of a fan of the way they are developing it. In fact, I forgot my login info and was locked out of seeing the paper myself for about a week! Someday I aim to revisit it. But one thing that I point out in that paper is that Rosenthal seems to talk about introspection in a very different way. Here is what he says in one relevant passage,

We sometimes have thoughts about our experiences, thoughts that sometimes characterize the experiences as the sort that visually represent red physical objects.  And to have a thought about an experience as visually representing a red object is to have a thought about the experience as representing that object qualitatively, that is, by way of its having some mental quality and it is the having of just such thoughts that make one introspectively conscious of one’s experience, (CM p. 119)

This paragraph has often been in my thoughts when I think about introspection on the higher-order theory. But it has become clear to me that a lot depends on what you mean by ‘thoughts about our experiences’.

Here is what I say in the earlier mentioned draft,

…In [Rosenthal’s Trends in Cognitive Science] paper with Lau where they respond to Rafi Malach, they characterize the introspective third-order thought as having the content ‘I am having this representation that I am seeing this red object’. I think it is interesting that they do not characterize it as having content like ‘I am having this thought that I am seeing red’. On their account we represent the second-order thought as being the kind of state that represents me as seeing physical red, and we do so in a way that does not characterize it as a thought. One reason for this may be that if, as we have seen, the highest-order thought determines what it is like for you, then if I am having a third-order thought with the content ‘I am having this thought that I am seeing red’ what it will be like for me is like having a thought. But this is arguably not what happens in canonical cases of introspection (Gottlieb forthcoming makes a similar objection). Rosenthal himself in his earlier paper argued that when we introspect we are having thoughts about our experiences and that we characterize them as being the kind that qualitatively represents blue things. This is a strange way to characterize a thought.

So I agree that there seems to be a problem here for the higher-order theory but I would not construe it as a problem with the theory’s ability to explain presentational character. I think it can do that just fine. Rather what it suggests is that we should look for a different account of introspection.

When Rosenthal talks specifically about introspection he is talking about the very rare case where one ‘quote-unquote’ brackets the external world and considers one’s experience as such. So, in looking at a table I may consciously perceive it but I am focused on the table (and this translates to the claim that the concepts I employ in the higher-order thought are about the worldly properties). When I introspect I ‘bracket’ the table in the world and take my experience itself as the object of my inner awareness. The intuitive idea that Rosenthal wants to capture is that when we have conscious experience we are aware of our first-order states (as describing properties in the world) and in deliberate attentive introspection we are aware of our awareness of the first-order state. The higher-order state is unconscious and when we become aware of our awareness we make that state conscious, but, on his view, we do so in a way so as not to notice that it is a thought.

But part of me wonders about this. Don’t some people take introspection to be a matter of having a belief about one’s own experience? If so then a conscious higher-order thought would fit the bill. So there may be a notion of introspection that a third-order thought may account for. But we might also want a notion of introspection that was more directly related to focusing on what it is like for the subject. When I focus on the redness of my conscious experience it doesn’t seem as though I am having a conscious thought about the redness. It seems like I am focused on the particular nature of my conscious experience. We might describe that with something like ‘I am seeing red’ and that may sound like a conscious higher-order thought, but we are here talking about being aware of the conscious experience itself. So, to capture this, I would suggest that in both cases we are aware of our first-order states. In non-introspective consciousness we are aware of the first-order state as presenting something external to us. In introspective consciousness we are aware of the first-order state as a mental state, as being a visual experience, or a seeing, etc.

I am inclined to see these two kinds of thoughts as ‘being at the same level’ in the sense that they are both thoughts about the first-order states but which have very different contents. And this amounts to the claim that they employ different kinds of concepts. But these ideas are still very much in development. Any thoughts (of whatever order) appreciated!

Peter Godfrey-Smith on Evolution And Memory

On Friday I attended the first session of the CUNY Cognitive Science Speaker Series. The talk seemed to me to be based largely on this paper. I only have a few moments but I thought I would jot down the gist of the talk while it is fresh in my mind.

Godfrey-Smith wanted to take the ‘sender-receiver’ model of communication developed by David Lewis and apply it to debates about memory. On the Lewis model we have a sender that has access to a part of the world that the receiver does not, a sign that is passed between them, and a receiver that is able to take the sign and produce an action. Godfrey-Smith’s guiding idea is that when you have this kind of set up in the psychology of an organism and the signaling takes place over time, then you have memory.
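The structure of the model is simple enough to sketch in a few lines. The particular states, signs, and actions below are my own illustrative choices, not anything from the talk; the point is just the three-part architecture, and the way that delaying the sign turns signaling into something memory-like.

```python
# A minimal sketch of Lewis's sender-receiver setup (the states, signs,
# and actions here are made up for illustration).

def sender(world_state):
    """The sender has access to a part of the world and emits a sign."""
    return {"predator": "alarm", "food": "chirp"}[world_state]

def receiver(sign):
    """The receiver sees only the sign and produces an action."""
    return {"alarm": "hide", "chirp": "approach"}[sign]

# When the same structure sits inside one organism and the sign is stored
# and read later, the signaling looks like memory:
stored_sign = sender("predator")  # 'written' at time t
action = receiver(stored_sign)    # 'read' at time t+1
```

On Godfrey-Smith’s guiding idea, the interesting cases for memory are exactly those where the sender and receiver are stages of one psychological system separated in time.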

One of the main points he wanted to make in the first half of his talk was that the idea of episodic memory as ‘constructive’ does not show that being truth-preserving is not one of the main functions of episodic memory. He was aiming to oppose a group of scientists working on memory who hold what he called the ‘future first’ hypothesis about episodic memory. Roughly speaking the idea is this. Our ability to imagine future events and their outcomes is crucial for us and gives us an evolutionary advantage over those that cannot do it. What we have found out is that the neural areas that underlie our ability to do this are also largely the same ones involved in our ability to remember our past experiences. The ‘future first’ hypothesis is the idea that our ability to remember our own past experiences is simply a by-product of our ability to imagine future events and their outcomes. This is supposed to be further supported by the fact that episodic memory is thought of as ‘constructive’ in the sense that it is often wrong about the details of past experiences and tends to ‘construct’ memories along the most likely scenarios. Godfrey-Smith argued that if we think of memory in terms of the sender-receiver model then we should not immediately expect that the constructive nature of episodic memory means that it is not truth-tracking. It is perfectly conceivable that the senders produce truth-tracking representations and that the receivers, who may be a bit smarter, ‘improvise’ from there. We can then go on and ask just how much deviation from the facts there is in the sender’s signal.

In the second half of his talk he went on to discuss the controversy in cognitive science over whether there is any kind of reader in the brain. That is, is there anything in the brain which is akin to the ‘head’ in a Turing machine, something which interprets whatever message has been sent by the sender? He argued that the dominant view in the sciences is that there is no such reader. Godfrey-Smith went on to argue that there must be a reader, but that there is no sender. DNA, for instance, on this view is not an instance of information being sent. It is information that happens to be able to be read (as a result of natural selection), but it has not been ‘written’ because, to put it crudely, nothing had the purpose of sending the message. In discussion he wanted to back away from talk of intention and purpose as ‘shorthand’ for the longer answer, but I couldn’t make out what the longer answer was supposed to be.

Zombies vs Shombies

Richard Marshall, a writer for 3am Magazine, has been interviewing philosophers. After interviewing a long list of distinguished philosophers, including Peter Carruthers, Josh Knobe, Brian Leiter, Alex Rosenberg, Eric Schwitzgebel, Jason Stanley, Alfred Mele, Graham Priest, Kit Fine, Patricia Churchland, Eric Olson, Michael Lynch, Pete Mandik, Eddy Nahmias, J.C. Beall, Sarah Sawyer, Gila Sher, Cecile Fabre, Christine Korsgaard, among others, they seem to be scraping the bottom of the barrel, since they just published my interview. I had a great time engaging in some Existential Psychoanalysis of myself!

NyCC Video

Thanks to everyone who came out to the Parkside Lounge last night! It was a weird and wonderful night! For those of you who couldn’t make it here is some video recorded by Jennifer on my iPhone set to our version of Freddie Freeloader…We’ll be back @ the Parkside April 26th and May 31st…Let me know if you are in town!


Busy Bees Busily Buzzing ‘Bout

This last week was a very busy one. As you may have noticed from the side bar the call for papers for Consciousness Online is now out…spread the word!

Tuesday I attended a talk/discussion of a paper by Ned Block on attention and direct realism. Direct realism, roughly, is the view that when one has a veridical experience, say of the subway train coming into the station, the phenomenology of one’s experience is determined by, or is constituted by, the properties that the object actually has. So on this view when one sees the subway one is somehow directly in contact with the physical object. This is contrasted with the view that one’s phenomenology is instead determined by, or constituted by, some kind of mental representation that is perhaps caused by a physical object but which represents the physical object with a set of mental properties.

Block was arguing that direct realists can’t explain a certain fact about attention. His argument revolved around an interesting phenomenon discovered in attention research. If one is staring at a fixed spot and, while doing that, focuses one’s attention on one of three interlocked circles (something that is hard but can be done), one sees the circle one is attending to as brighter than the others. With practice one is able to move one’s attention around the three circles and light them up as one goes. Given this, some researchers took two gratings, one of which was slightly dimmer than the other, and found that when the subjects attended to the fixation spot they could tell which of the two gratings was actually brighter. But when, while still fixating the same spot, they shifted their attention to the dimmer patch, they judged it to be the same brightness as the other patch; that is, the two patches looked equally luminous. Block’s argument was then that the direct realist did not have any objective thing in the figure that they could point to in order to explain the difference in phenomenology. The figures stayed the same. Nor did they have any principled reason to say that one of the two perceptions was illusory and the other veridical.

On Friday I went to Jeremy Grey’s talk at the CUNY Cog Sci Speaker Series. He was presenting data on how intelligence, as measured by standard psychological measures, correlates with self-control. He was arguing for what he called the Individual Differences view. He started with a famous and intriguing study that found a correlation between self-control in 6-year-olds and their subsequent performance on the SATs. Kids were given the following two options: they could either take a marshmallow that was sitting on the table in front of them right now, or they could wait until the experimenter returned and have two marshmallows. The experimenter then left and the children were videoed. Some of the kids were able to wait for the two-marshmallow reward while others gave in immediately and ate the one that they had in front of them. What was surprising was that 12 years later, when they took their SATs, the ones who did best were those that had waited longest for the two-marshmallow reward. That is, the longer they were able to resist the marshmallow in front of them and wait for the return of the experimenter (some made it, others didn’t), the higher their SAT scores were. Grey did a series of studies on adults to test the relationship between intelligence and self-control and he found that there was indeed such a relationship. There were, however, some people who scored high on the standardized tests but also scored high on impulsivity tests (that is, they would be classified as high intelligence and low self-control). The even more surprising thing was that if you factored in a certain kind of genetic variation which results in a variation in dopamine receptors, one saw that the outliers had this variation while those who conformed to the model did not.
He also pointed to a study which suggested that pre-school children who participated in daily self-control exercises improved their performance on standardized IQ tests and so there is room for optimism that one is not stuck at one’s current IQ/self-control level.

Attributing Mental States

Friday I attended James Dow’s talk at the CUNY CogSci Speaker Series. He was concerned with answering the question of how people are able to ascribe various mental states to themselves. In particular he was interested in critiquing the account offered by Bermudez and developing an alternative account inspired by P.F. Strawson.

The standard account has it that we first come to see that we have certain mental states, like beliefs and pains, and that these result in various behaviors (utterances as well as other behavior). We then notice that other people engage in these kinds of behaviors and reason by analogy: since they exhibit behaviors like the ones I exhibit when I have, say, a pain, these people must also be in pain. Thus the standard account has it that we start by ascribing mental states to ourselves (I am in pain) and then use that ability to ascribe mental states to others (Doug is in pain).

Bermudez criticizes the standard account using a posteriori evidence from developmental psychology, in particular data on the phenomenon of joint attention. In joint attention you have two observers each attending to some object, say a piece of fruit, and each aware that the other is attending to the same object. Bermudez argues that in order to be able to do this (and infants do it as early as 9 months old) the child must be representing the mother as seeing the object and attending to it. This, in turn, must mean that the child represents the mother under a psychological sortal; that is, as seeing x and attending to x. This, together with other evidence suggesting that the child does not at this point attribute mental states to itself, shows that the standard account can’t be right. Bermudez then argues that the best explanation of what is going on here is that the ability to attribute mental states to others constitutes the ability to attribute mental states to oneself.

Dow wanted to criticize Bermudez for using an a posteriori argument to establish something like logical dependence. Whatever that turns out to mean, Dow’s basic concern was to develop the Strawsonian alternative and to argue that none of Bermudez’s arguments decides between his account and the new alternative. In short, the Strawsonian alternative postulates that the child simply has the ability to pick out other persons. ‘Person’ here is used in the Strawsonian sense, as something which has both mental and physical attributes. Dow claims that in representing the mother as a person the child is not representing the mother under any kind of sortal; the child simply attends to the eye and where it is focused. This Strawsonian view is what Dow called a ‘no-priority’ view, in that it holds that there is a logical dependence (whatever that is) between self- and other-ascriptions (at the very least this seems to mean that in order to have the one ability one must also have the other, and vice versa), but that neither one develops before the other.

We were promised a transcendental argument that was supposed to establish this, but we ran out of time.

Over drinks I had an interesting discussion with Josh Dulberger, who was proposing a novel take on the simulation theory/theory theory debate. Traditionally these are thought to be opposed, but Josh suggested that they need not be. He thought that simulation might be used to generate data for the theory one employs of other people and their mental states. This is an interesting idea. He then suggested that if one thought this, one might be able to argue that the function of consciousness lies in enriching the data that one gets. Intuitively, the idea is that consciousness gives one better access to one’s own mental states and so boosts the amount of data that one has for one’s theory, thus making the theory richer. He thought this was nice since one could adopt David Rosenthal’s higher-order thought theory of consciousness and then argue that Rosenthal is wrong that consciousness has no function, using his own theory against him, so to speak.

But there is a problem here. On Rosenthal’s account there must have been a time when there were people who were able to infer what mental states they were in from observing their own behavior (which includes verbal utterances). Since these people are not able to have higher-order thoughts in a way that seems unmediated by inference, they do not yet have conscious mental states. As they get better and better at attributing these kinds of states to themselves, they get to the point at which these attributions no longer seem to be mediated by inference, at which point they come to have conscious mental states. At the point just before they have conscious mental states, their access to their unconscious mental states is just as good as it will be when they do have conscious mental states. They will have what seems to them to be a different kind of access to their mental states, but they really just have the same access as before (only now it seems to them to be immediate and non-inferential).
If this is right then Rosenthal’s account is not committed to Dulberger’s claim that consciousness produces more data. Our Rosenthalian ancestors have all of the same data that we do even though all of their mental states are unconscious.

On a side note, Daniel Shargel pointed out an interesting difficulty for Rosenthal in this story. The Rosenthalian ancestors do not have any conscious mental states. At some point they acquire the appropriate concepts, which enables them to have higher-order thoughts attributing (theoretical) mental states to themselves. On Rosenthal’s account, the fact that these higher-order thoughts are mediated by inference means that they do not result in the target states becoming conscious. It is only once the higher-order thoughts are seemingly unmediated by inference that we get consciousness. But Shargel argues that when these Rosenthalians have their very first higher-order thought, whether mediated by inference or not, it will not seem to them to be so mediated, since nothing seems any way to them (all of their mental states are unconscious). Shargel suggested that Rosenthal’s response to this was that the inference needs to be the product of some internal mental state. Since the Rosenthalians always make inferences based on external perceptions, the higher-order thoughts they have are not of the right kind. I wonder if there is some other response he could give, but this is already too long!