Consciousness Afoot!

The first paper in the special issue of Consciousness and Cognition that I am guest editing is now available online. Congratulations Pete! The complete special issue will have re-written papers from the second online consciousness conference, new commentary on the papers, and authors’ responses. The issue should be finished sometime in 2011. In the meantime accepted papers will appear online first, so more to come!

The call for papers for the online consciousness conference is now closed and I am working on the program. Things are moving quickly and I expect to have it put together sometime in the next couple of weeks.

The Conceivability of Quality Inversion

This morning I stumbled upon David Rosenthal’s forthcoming Philosophical Issues paper “How to Think About Mental Qualities” (available on his website). In this paper he argues that inverted qualia are inconceivable. He says,

Quality spaces cannot be symmetrical around any axis without making it impossible to distinguish qualities on one side of the axis from qualities on the other. And [our higher-order awareness] makes one aware of mental qualities in respect of their relative position in the relevant quality space. So that lack of symmetry carries over to the way the [higher order states] make us aware of mental qualities. Undetectable quality inversion is accordingly no more possible for conscious than for non-conscious mental qualities.

The first part of the argument is supposed to block the usual move of claiming that the quality space is conceivably symmetrical, and to do so for principled reasons about the way perception works. Any quality space complicated enough to do justice to human discriminatory capabilities will have to be asymmetric. This means that any inversion will be detectable from the third person, and so quality inversion without detection is inconceivable. The intuition that it is conceivable is a reflection of one’s already accepting what Rosenthal calls a “consciousness-based” theory of the mental qualities. I am deeply sympathetic to this line of argument, but I think that there is a problem with Rosenthal’s reasoning in the above passage.
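To make the point vivid, here is a toy sketch (entirely my own invention; the qualities and distances are made up and are not from Rosenthal’s paper) of why an asymmetric quality space leaves no room for an undetectable inversion. An undetectable inversion would be a non-identity relabelling of the qualities that preserves every similarity relation; in an asymmetric space, a brute-force check finds none.

```python
from itertools import permutations

# Hypothetical quality space: pairwise "distances" between four qualities.
# The distances are chosen so that the space is asymmetric: each quality
# stands in a unique pattern of similarity relations to the others.
qualities = ["red*", "yellow*", "green*", "blue*"]
dist = {
    ("red*", "yellow*"): 1, ("red*", "green*"): 3, ("red*", "blue*"): 6,
    ("yellow*", "green*"): 2, ("yellow*", "blue*"): 5,
    ("green*", "blue*"): 3,
}

def d(a, b):
    """Symmetric lookup of the distance between two qualities."""
    if a == b:
        return 0
    return dist.get((a, b)) or dist[(b, a)]

# An "undetectable inversion" would be a non-identity permutation of the
# qualities that leaves every pairwise distance unchanged.
undetectable = []
for p in permutations(qualities):
    if p == tuple(qualities):
        continue  # the identity mapping is not an inversion
    swap = dict(zip(qualities, p))
    if all(d(a, b) == d(swap[a], swap[b]) for a in qualities for b in qualities):
        undetectable.append(p)

print(undetectable)  # [] -- every inversion alters some similarity relation
```

In a symmetric space (say, qualities evenly spaced around a circle) this list would be non-empty, which is exactly why the asymmetry claim is doing the work in the argument.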

Suppose that we grant the point about the asymmetric quality space, and that the content of the higher-order awareness is also asymmetric. Suppose that in both you and me, when we are presented with an orange stimulus, the same first-order sensory quality is produced, and that this quality is more like the quality that represents yellow and red than it is like the one that represents green and blue, etc. But now suppose that in you there are higher-order thoughts that accurately reflect these similarities and differences. So in you, when the orange quality is conscious, there is a higher-order state that picks it out as more similar to the yellow* and red* qualities than it is to the green* and blue* qualities. All along, though, we may suppose, in me the ‘opposite’ is happening. When I have a first-order quality that represents orange, my higher-order awareness picks it out as the one that is more like green* and blue* than it is like yellow* and red*. But suppose also that I have the same perceptual thoughts that you do. So, I believe that there is something orange near me when I have orange* qualities, and when that thought is conscious I consciously believe that something orange is near me. But what it is like for me in this situation will be like what it is for you when you see blue. So we have different conscious experiences that are not detectable from a third-person standpoint. We may suppose even that when I introspect, my third-order thoughts “get it right”. So, when I introspect my higher-order awareness that represents the quality as more like green* than yellow*, I am aware of it as a state that represents the quality as more like yellow* than green*. Thus my introspective reports will exactly match yours. I will also match you in all of my discriminatory abilities.

Granted, this is detectable in principle, since we are stipulated to have different higher-order states, which could ultimately be detected by some very advanced neuroscience; but it is nearly undetectable, and that is all we need to ground the intuition. In short, it takes a lot of work, even on a purely physicalist conception of the mind like quality-space and higher-order theory, to show that these scenarios are not ideally conceivable. Whether they are or not, it is important to see that they seem conceivable and that this allows us to get a common-sense handle on what we are talking about.

Burge on the Origins of Perception

Saturday I attended a workshop on the predicative structure of experience sponsored by the New York Consciousness Project, which is in turn sponsored by the New York Institute of Philosophy. The speakers were Tyler Burge and Mark Johnston, with commentary by Alex Byrne and Adam Pautz respectively. I may write a separate post on Johnston’s talk, but here I want to say something about Burge’s talk.

The first thing that Burge wants to do is to clarify the notion of representation in the claim that perception is representational. For a state to be representational is for veridicality conditions to be an ineliminable part of the scientific explanation of the formation of the state. Thus representational states, in this sense, are not states that merely co-vary with something in the world. For instance, the level of mercury in a simple thermometer causally co-varies with the temperature, but Burge wants to deny that the mercury level in the thermometer represents the temperature in his preferred sense. This is because the scientific explanation of how the mercury level came to be such-and-such proceeds “from the inside,” so to speak, and does not need to bring in notions like true or false. He admitted that we could, if we want, adopt a certain stance towards the state and call it representational. But there is still something unique about the kind of states that psychologists are interested in. The central task of perceptual theories, for Burge, is that of discovering the conditions under which we correctly, or accurately, represent the world and those under which we fall into mistakes, i.e. illusions. The idea of “getting it right” does not enter into the explanation of why the mercury is at a certain level in the thermometer. To illustrate this idea Burge kept returning to the point that the explanation of the mercury level being thus-and-so as opposed to such-and-such would be the same whether or not we started with the proximal stimulation. The basic idea seemed to be this: when we calibrate a mercury thermometer we take the contraption and put it in ice as it is melting (i.e. just-melted ice water) and wait for the mercury to stabilize. We then do the same for boiling water, assign 0 to the first and 100 to the second, and divide the rest into 100 equal parts.
Does it really make sense to say that the thermometer got the temperature right in the first step? Can we make sense of the notion that it got it wrong? Wherever it settles we call 0, so how could it be mistaken? Or, to put it slightly differently, how could we make sense of the notion of it being under some illusion? These kinds of considerations don’t even seem to apply. Now, as already said, we can adopt this sort of talk if we want to, but if it is really the case that nothing is lost when we stop talking that way, then it is just a stylistic thing. When we talk about representational states in perception, by contrast, we are immediately confronted with truth-value talk. And Burge wagered that psychology as a science would not give up this notion.
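The calibration point can be put in a few lines of code (a sketch of my own, with made-up column heights): two-point calibration just fixes a linear map, and nothing in the procedure leaves room for the thermometer “getting it wrong” at the anchor points; wherever the mercury settles in the ice water simply is 0 by stipulation.

```python
def calibrate(height_at_freezing: float, height_at_boiling: float):
    """Return a function mapping mercury column height (mm) to degrees Celsius.

    The two anchor readings fully determine the map; there is no further
    question of whether the anchors themselves are "accurate".
    """
    span = height_at_boiling - height_at_freezing
    def temperature(height_mm: float) -> float:
        # Linear interpolation between the two stipulated anchor points.
        return 100.0 * (height_mm - height_at_freezing) / span
    return temperature

# Hypothetical column heights, for illustration only.
read = calibrate(height_at_freezing=20.0, height_at_boiling=120.0)
print(read(20.0))   # 0.0  -- by definition, not by "getting it right"
print(read(70.0))   # 50.0
```

The explanation of any reading bottoms out in the physics of mercury expansion plus this stipulated map; no notion of veridicality ever enters, which is Burge’s contrast with perceptual states.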

For my own part I find this distinction quite plausible but I don’t see why it then follows that no causal or teleological theory can work. The special category isn’t representation, it is mental representation.

But back to Burge. The second thing that he wanted to clarify was the notion of perception. He first distinguished between mere sensory registration, which is just statistical co-variation, and perception. For a state to be a perceptual state it must be in the business of objectifying the world. That is, it is in the business of offering a solution to the underdetermination problem. The classic example is the construction/recovery of a 3-D image from the 2-D image projected onto the retina. There are an infinite number of ways that the brain could generate a 3-D image from that information, but of course the perceptual systems are in the business of ignoring most of those. This fact is then used in the explanation of various visual illusions. The mark of perceptual systems is perceptual constancy. So, consider color. We human beings are pretty good at telling the actual color of a thing in a variety of lighting conditions. That is, our perceptual systems somehow take a range of inputs and treat them as the same. The same is true for length, and so on. He distinguishes perception from any kind of mere sensory registration and seemed to think that olfaction and taste were non-perceptual senses. The reason that he gave was that there are no smell constancies. We don’t seem equipped to track the same smell under a bunch of different environmental conditions.

Finally, and perhaps the most interesting part of the discussion, he defended his claim that talk of perceptual states as representational, and of perceptual processes as computational, does not commit us to a purely syntactic view of how it is implemented. By way of illustration, consider the way that we talk about the logical category of being a predicate. We can give a purely syntactic description, but the level of explanation that matters is the level at which semantic information plays a role in individuating the state. This is somewhat like a point that has always bothered me. In logic we pretend that we are dealing with purely syntactic rules, but they are useless unless they are individuated semantically. But anyway, Burge’s point was that he thought we would not even be able to get to the point where we could individuate perceptual states in a purely syntactic way, unlike the logic case where we can (so he thought). He speculated that the reason so many people think that you must do computational psychology purely syntactically is an antecedent position on the mind/body problem. Because people start with the assumption that this stuff must ultimately be physical, they conclude that we must appeal only to physical properties (viz. syntax). But Burge objected that we should do psychology autonomously and then see what the mind-body problem looks like afterwards. There is nothing in computational theory that would force us to opt for a purely syntactic theory of computation, and so, Burge claimed, there is no reason that people who accept the language of thought hypothesis, even for perception, are committed to these computations being done on the basis of purely syntactic properties of the computata (if that is a word).

Now all of this is only by way of clarifying what perceptual representation is (!), and after this is done he goes on to talk about the structure of perceptual representations. This is getting long, so I will make it short now and perhaps come back to it later for a fuller account. The basic idea seemed to be that perceptions must be composed of a general attributive part and a part that indicates some particular thing (a singular-reference part). So, for example, to perceive that one object is to the left of another object is to have a state that represents the general relation ‘to the left of’ as being instantiated by these two particular objects. In both its general, attributive aspect and its singular aspect, perception is always trying to demonstrate some particular. It can fail to do that, but it is always trying to do so. He seemed to want to model this on demonstratives. As I said, there is much more to be talked about, including his view that the difference between conscious and unconscious perceptions might lie in some aspect of the state’s mode of presentation, but it is getting late and this is already too long!

Swamp Thing About Mary

On Friday I attended Pete Mandik’s Cogsci talk at CUNY. The talk was excellent: entertaining and with a spirited discussion of many interesting topics. This paper represents one part of a two-part project that Pete is pursuing that involves Swamp Mary. One part employs Swamp Mary against dualists while the other employs her in an in-house dispute between what Pete calls ‘gappy’ and ‘non-gappy’ physicalism. Friday’s talk was the latter and is based on his paper Swamp Mary Semantics: A Case for Physicalism without Gaps.

Pete’s distinction is recognizable as meant to capture type-a and type-b physicalism. More generally it is the distinction between a priori and a posteriori physicalism. Or at least one may get that idea from the way Pete characterizes the distinction. He says that the issue is over whether what we can call “classic Mary,” in her pre-release condition, is in a position to know what it is like to see red. Pete claims that the gappy physicalist will deny that she is while the non-gappy physicalist will not. Swamp Mary is then presented as a challenge to the gappy physicalist. In short the challenge goes as follows. Swamp Mary is a physical duplicate of post-release Mary, the one who has seen red and so knows what it is like to see red, but Swamp Mary has never herself seen red, since she is a swamp being who, we can stipulate, has had no experiences at all. She is a duplicate of a being that has had the experience of seeing red but hasn’t herself had the experience. Nonetheless, Pete urges, it is natural to say that Swamp Mary knows what it is like to see red. This is natural since post-release Mary knows what it is like to see red and Swamp Mary is a physical duplicate of post-release Mary. But since it is also natural to say that Swamp Mary has never seen red, or had an occurrent red quale, it is now mysterious why it is supposed to be the case that pre-release Mary doesn’t know what it is like to see red. If Swamp Mary can know what it is like to see red without ever having a red quale, then it ought to be the case that pre-release Mary can also know what it is like to see red without having seen it. Thus, Pete concludes, physicalists should be non-gappy physicalists and hold that Mary can know what it is like to see red in her pre-release state.

Pete then goes through various responses that might be made by the type-b physicalist and tries to show that they all have problems. He does this by examining four different psychosemantic theories, which he also claims to be exhaustive, and arguing that they cannot provide the relevant explanation (i.e. of the difference between pre-release Mary’s and Swamp Mary’s knowledge of what it is like to see red). The four categories of psychosemantic theory are: 1. Quotation, 2. Actual Cause, 3. Nomological, 4. Descriptive-homomorphic/isomorphic. Briefly, 1 is supposed to capture any kind of self-presentational view about phenomenal concepts, like that of Chalmers or perhaps Block. 2 is supposed to capture any kind of teleological or causal theory of content, while 3 is supposed to capture anything that resembles Fodor’s psychosemantics (think ‘asymmetric dependence’ here). Finally, 4 is supposed to capture any kind of conceptual-role theory. The problems for each of these, to make a long story short, are that 1 and 2 cannot explain how Swamp Mary does know what it is like to see red, while 3 and 4 end up having trouble explaining why pre-release Mary doesn’t know what it is like to see red.

There is a lot of detail that I am leaving out, but in general Pete is trying to construct an argument against a central claim of the type-b physicalist. This is the intuition, shared by many, that one cannot really have the full concept RED without having the red quale present in one’s consciousness. Or, to put it more common-sensically, one cannot know what it is like to see red unless one (a) has had a red experience and (b) has the ability to think, or otherwise identify, that the experience is red while one is having it. Pete has the intuition that Swamp Mary knows what it is like to see red and yet neither (a) nor (b) seems to be met, and he therefore concludes that the type-b physicalist is wrong. I suggested in discussion that Pete was running over the distinction between ‘knowing what it is like’ in the generic sense and ‘knowing what it is like FOR one’. To know what it is like for one to see red requires that one have, or be able to recall, a red experience and be able to say, so to speak, to one’s self, ‘this is what it is like to see red’. To know what it is like in the generic sense is to know what it is like to see red in the way that pre-release Mary is typically thought of by the type-b physicalist. She can know a lot about what it is like to see red. She can know that it is more like seeing something pink than it is like seeing blue, and all other kinds of facts. But intuitively she doesn’t know what it is like for her to see red. Once we have this distinction in mind, we no longer have a problem with Swamp Mary. Swamp Mary knows what it is like in the generic sense, in just the same way as pre-release Mary, but she does not know what it is like for her to have the experience, again just like pre-release Mary. To put this in different terminology, both pre-release Mary and Swamp Mary lack the “pure” phenomenal concept, though they have many others.
On the other hand, if we think that Swamp Mary does know what it is like in this sense, it will be because she has an ability to call up the red experience and identify it, whereas classic Mary cannot do this.

Finally, does saying this really make one a gappy/type-b physicalist? I claim that it doesn’t. I agree that, in the way the Mary thought experiment is usually set up, we arrive at the conclusion that pre-release Mary cannot know what it is like for her to see red. This is because she lacks the appropriate concept, the pure phenomenal concept. In order to acquire this concept she needs to have the experience. But once she does, there is no longer any gap between the physical and the phenomenal. Once Mary has the pure phenomenal concept she is able to know what it is like to see red in the complete way necessary to make deductions from physical facts to phenomenal facts without relying on the pure phenomenal concept (or introspection) for justification of any step in the deduction. Thus, in order to really count as a gappy physicalist in Pete’s sense, we would have to hold that even once Mary had this concept she would be unable to know what it is like for her to see red just on the basis of the physical facts. This is much less plausible to me.

After the talk, at the bar, we started to talk about the role that intuitions should play in philosophy. This discussion was started by thinking about the experience principle. Why does anyone think that it is true that in order to fully have the concept of red one must have seen red, or had a red experience? I claimed that we have prima facie evidence for this claim from the fact that it is intuitively obvious to many philosophers. I agree, basically, with Michael Devitt’s view that intuitions, especially those of experts, should count as defeasible evidence. Thus, we can have a priori justification for believing something but not a priori knowledge in the traditional sense. The mere fact that most philosophers think that Mary couldn’t know what it was like for her to see red from within her room should count as evidence, not for epiphenomenalism or dualism, but rather for the experience principle. That is what is really being tested there. Now, I agree that this is defeasible evidence. We could come to have reason to reject the idea. I *think* I can at least negatively conceive of a situation where pre-release Mary comes to know what it is like to see red without seeing it first. She will know that when people see red they are in a certain brain state. She will also know that people talk about knowing this state in a special first-person way and that they can only do this when in the relevant brain state. She might then conclude that to know what it is like to see red in this sense requires that she be in this brain state. Since she is not allowed to have the stimulus that will produce the brain state, she must find some other way to produce it. She might then realize that when one imagines seeing red one goes into something like the brain state that one is in when one actually sees red. Could she come to token this brain state without the stimulus?
Well, obviously she could rig up some kind of brain stimulation to activate the area and produce the experience. But is it absurd to think that she could do this without brain stimulation? That is, could she imagine what red looked like without ever having seen it? I am not sure… my intuitions tend to go with the experience principle, but I can’t see anything contradictory about the scenario just described…

There is more to say about all of this but this is getting too long already!

cfp: 3rd Online Consciousness Conference

Please post and distribute widely; apologies for cross-posting.

I am pleased to announce the call for papers for the third online consciousness conference. Invited speakers include,

Kathleen Akins, Simon Fraser University

Paul Churchland, University of California San Diego

Stevan Harnad, University of Southampton

Jesse Prinz, The Graduate Center, CUNY

Papers in any area of consciousness studies are welcome, though the conference has as its theme this year ‘Neurophilosophy and the Philosophy of Neuroscience’. Selected papers from the conference relating to this theme, pending outside review, will be published in the annual special issue of Synthese, “Neuroscience and its Philosophy”. Because of this, contributions that are unpublished elsewhere and related to the theme are preferred, though exceptions can be made.

Papers should be roughly 3,000-4,000 words, and presentations, should the presenter choose to make one, should be about 20 minutes (though longer papers/presentations are acceptable). Submissions, suitable for blind review, should be sent to consciousnessonline@gmail.com by January 5th 2011. Those interested in being referees or commentators should also contact me. Authors of accepted papers are urged to make, or have made, some kind of audio/visual presentation (e.g. a narrated powerpoint or video of the talk), though this is not required in order to present.

For more information visit the conference website: http://consciousnessonline.wordpress.com

Explaining Consciousness & Its Consequences

Yesterday I presented Explaining Consciousness and its Consequences at the CUNY Cognitive Science Speaker Series, which was a lot of fun, with a very fruitful discussion. I have a narrated powerpoint rehearsal of the talk, which interested readers can find at the end of this post, but here I want to discuss some of the things that came up in yesterday’s discussion.

The core of the puzzle that I am pressing lies in asking why conscious thoughts are not like anything for the creature that enjoys them. My basic claim is that if one started with the theory of phenomenal consciousness and qualitative character, came to understand and accept it, but hadn’t yet thought about conscious thoughts, one would expect the theory to predict cognitive phenomenology. Granted, it wouldn’t be like the phenomenology of our sensations –seeing blue consciously is very different from consciously thinking that there is something blue in front of one– but why is it so different that in one case there is nothing that it is like whatsoever while in the other case there is something that it is like for the creature? The only difference between the contents of HOTs about qualitative states and HOTs about intentional states is that one employs concepts of mental qualities whereas the other employs concepts of thoughts and their intentional contents. Yet in one case conscious phenomenology –which is to say that there is something that it is like for the creature to have those conscious mental states– is produced in all its glory, while in the other case nothing happens. As far as the creature is concerned, it is a zombie when it has conscious thoughts. But what could account for this very dramatic difference? It looks like we haven’t really explained what phenomenal consciousness is; all we have done is relocate the problem to the content of the higher-order thought. This is because no answer can be given to my question except “that’s how phenomenal concepts work,” and so we have admitted that the concepts are special.

Now one thing that came up in the discussion, raised by David Pereplyotchik, was what I meant by ‘special’ in the above. David P. suggested that qualitative properties may be distinctive without being special. I agree that they are distinctive, and that is the reason that thinking that p and seeing blue are different. We move from distinctive to special when we deny that conscious thoughts have a phenomenology because we can’t explain why they don’t.

One detail that came out was that the way I formulated the HOTs and their contents was misleading. Instead of “I think I see blue*”, the HOT has the content “I am in a blue* state”.

At some point David said that when he had a conscious thought, what it was like for him was like feeling he was about to say the sentence which would express the thought. So when one thinks that there is something blue in front of one, what it is like for that creature is like feeling that one is about to say “there is something blue in front of me”. When I said ‘aha, so there is something that it is like for you to have a conscious mental state’, he responded, “what does that mean?” This challenge to my use of the phrase “what it’s like for one” was a main theme of the discussion. A lot of the time I ask whether there is something that it is like for one to have a conscious thought and, if not, why not, but David objected that the phrase is multiply ambiguous and is used to confuse the issue more than anything else. One way this came out was in his challenging me to explain what was at stake. What difference is made if we say that there is something that it is like for one to have a conscious thought, and what is lost if we deny it? I responded that it is obvious what the reference of the phrase ‘what it is like for one’ is: it is the thing that would be missing in the zombie world. David responded that the zombie world is impossible, which I agree with at the end of a long theoretical journey, but we can still intuitively make sense of the zombie world, even if only seemingly. That is, even if it is the case that zombies are inconceivable, we still know what it would mean for there to be zombies, and that still helps us home in on what the explanatory problem is. I take it that the whole point of the ambitious higher-order theory is that it tries to explain how this property, the one we single out via the phrase ‘what it is like for one’ and the zombie and Mary cases, could be a perfectly respectable natural property.
So what is at stake is whether or not I really am like a zombie when I have a conscious thought and what that means for the higher-order thought theory. If we cannot account for the difference between intentional conscious states and qualitative conscious states then we have not explained anything.

David’s main response to my argument seemed to be to appeal to the different ways in which the concepts that figure in our HOTs are acquired. In the case of the qualitative states, we acquire the concepts that figure in our HOTs roughly by noticing that our sensations misrepresent things in the world. So, if I mistakenly see some surface as red and then come to find out that it isn’t red but is, say, under a red light and is really white, this will cause me to have a thought to the effect that the sensation is inaccurate, and this requires that I have the concept of the mental quality that the state has. In the case of intentional states the story is different. We are to imagine a creature that has concepts for intentional states but only applies them on the basis of third-person behavior. This creature will have higher-order thoughts, but they will be mediated by inference and will not seem subjectively unmediated. Eventually this creature will get to the point where it can apply these concepts to itself automatically, at which point it will have conscious thoughts. This difference is offered as a way of saying what distinguishes the concepts that figure in HOTs about qualitative states from those that figure in HOTs about intentional states. It amounts to an elaboration of David Pereplyotchik’s earlier suggestion that the qualitative properties are distinctive without being mysterious: they are distinctive in the way the concepts are acquired. But, as before, how can this be an answer to the question I pose? I grant, for the sake of argument, that there is this difference. What seems to me to follow from it is what I said before, namely that the phenomenology of thought and the phenomenology of sensation are not the same… but this should be obvious already. So, the claim is not that having a conscious thought should be like seeing blue for me, or feel like a conscious pain for me, only that it should be like something for me.
Basically then, my response is that this will make a difference in what it is like for the creature, but it doesn’t explain so drastic a difference as the complete absence of anything that it is like for one in one of the two cases.

Another way I like to put the argument is in terms of mental appearances. David Rosenthal often says that what it is like for one is a matter of mental appearances, at which point I argue that the HOT is what determines the mental appearances, and so in the case of thinking that p it should appear to me as though I am thinking that p. In response to this David said that while it is the case that phenomenology is a matter of mental appearances, it might not be the case that all mental appearances are phenomenological. At this point I have the same response as before, viz. what reason do we have to think that there are these two kinds of appearances? It looks like one is just inserting this into the theory by fiat to solve an unexpected problem. There is no theoretical machinery which explains why we have this disparity. When we ask why applying starred concepts results in the appearance of qualitative phenomenology while applying intentional concepts does not result in intentional phenomenology, we are simply told that this is the way phenomenology works. It is as mysterious as ever.

At the close of the talk I touched briefly on Ned Block’s recent paper “The Higher-Order Theory is Defunct”, which raises a new objection to the higher-order theory based on the consequences of explaining consciousness as outlined here. The problem that Ned sees is that when one has an empty HOT one has an episode of phenomenal consciousness that is real but that is not the result of a higher-order thought. David’s response seems to be to fall back on his denial that there are ever actually cases of empty higher-order thoughts. I brought up Anton’s syndrome, and David responded that in Anton’s syndrome we don’t have any evidence that patients actually have visual phenomenology. They don’t want to admit that they are blind, but when we ask them to tell us what they see, they can’t. If there are never empty higher-order thoughts then Block’s problem goes away.

My response to this problem is to identify the property of p-consciousness with the higher-order thought while still identifying the conscious mental state as the target of the HOT, but at that point we adjourned to Brendan’s for some beer and further discussion.

During the discussion at Brendan’s we talked a little bit about my suggestion that we develop a homomorphism theory of the mental attitudes. David and Myrto wanted to know how many similarities there were between sensory homomorphisms and the mental attitudes. In the sensory case we build up the quality space by presenting pairs of stimuli and noting what kinds of discriminations the creature can make. What we end up doing is constructing the quality space from these kinds of discriminatory abilities. So, what kind of discriminations would happen in the mental attitude case? I suggested that maybe we could present pairs of sentences and ask subjects whether they expressed the same thought or different thoughts. Dan wanted to know what the dimensions of the quality space for mental attitudes would be. I suggested that one would be degree of conviction, so that whether one doubts something, believes it firmly, or just barely believes it will be one dimension of difference, but I have yet to think of any others. This has always been a project I hope to get to at some point…right now it’s just a pretty picture in my head…

Ok well I feel like I have been writing this all day so I am going to stop…


Dream a Little Dream

One of the other issues that came up at Miguel’s cogsci talk was that of the empirical testability of the HOT theory. Miguel suggested that we might have the following argument against HOT. Experimental evidence suggests very strongly that the dorsolateral prefrontal cortex is likely to be the home of HOTs. David has said several times that if we did not find activity in the DLPFC when we had evidence that there were conscious mental states, this would be very bad for the HOT theory. So if we think that we have conscious mental states in our dreams, and we accept the evidence that shows that the DLPFC is deactivated during REM sleep, this would seem to count as evidence against the HOT theory. David seemed to think that there were basically two plausible responses to this argument. One could deny that there are conscious mental states during dreaming, or one could argue that the HOTs have a summer home that we haven’t found yet. A lot of the discussion centered on whether or not we have any evidence that dreams are conscious in the way we think they are. David argued that we don’t; Miguel, that we do.

David’s argument seemed to me to be the following. The evidence we have that dreams are conscious consists of the reports that people make when they are awake and remembering the dream. But it is equally consistent with this evidence that the dreams were all unconscious and only seem to be conscious when we reflect on them in the morning. Miguel seemed to think that it was obvious that dreams were conscious. I suggested that perhaps the kind of work that Eric does on dreams suggests that our naive views about dreams are wrong. Pete suggested that we had good experimental evidence that dreams were conscious from the kind of studies where subjects are given instructions of the sort that if they see a flashing object in the dream they should clap five times. During REM sleep subjects can then be seen to make clapping motions. During the discussion the phenomenon of lucid dreaming came up, and David reported that in lucid dreaming the DLPFC is active, so lucid dreams count as conscious mental states. But is it clear that the clapping counts as a report in the relevant sense? This activity could be the result of unconscious dreams just as well as the result of conscious dreams. In David’s terminology we can ask whether the clapping is an expression of the subjects’ mental states or whether it is a report. If it truly counts as a report and there is no activity in the DLPFC, then David’s view would be in trouble.

This got me to thinking: how could we devise an actual empirical test of these kinds of issues? Hakwan suggested an interesting conceptual approach earlier, which led me to think about binocular rivalry. If you could have subjects in a scanner looking at stimuli that are known to induce binocular rivalry, without having the subjects do any kind of reporting, we could then look at the DLPFC and see if the activity there reliably correlates with the conscious percept. A quick search on this led me to this article, which seems to get results that line up with the HOT theory very nicely, though it uses scalp EEG and a button push, which is a confound…