Prefrontal Cortex, Consciousness, and…the Central Sulcus?

The question of whether the prefrontal cortex (PFC) is crucially involved in conscious experience is one that I have been interested in for quite a while. The issue has flared up again recently, especially with defenders of the Integrated Information Theory of Consciousness advancing an anti-PFC account of consciousness (as in Christof Koch’s piece in Nature). I have talked about IIT before (here, here, and here) and I won’t revisit it, but I did want to address one issue in Koch’s recent piece. He says,

A second source of insights are neurological patients from the first half of the 20th century. Surgeons sometimes had to excise a large belt of prefrontal cortex to remove tumors or to ameliorate epileptic seizures. What is remarkable is how unremarkable these patients appeared. The loss of a portion of the frontal lobe did have certain deleterious effects: the patients developed a lack of inhibition of inappropriate emotions or actions, motor deficits, or uncontrollable repetition of specific action or words. Following the operation, however, their personality and IQ improved, and they went on to live for many more years, with no evidence that the drastic removal of frontal tissue significantly affected their conscious experience. Conversely, removal of even small regions of the posterior cortex, where the hot zone resides, can lead to a loss of entire classes of conscious content: patients are unable to recognize faces or to see motion, color or space.

So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content?

The assertion that loss of the prefrontal cortex does not affect conscious experience is often leveled against theories that invoke activity in the prefrontal cortex as a crucial element of conscious experience (like the Global Workspace Theory and the higher-order theory of consciousness in its neuronal interpretation by Hakwan Lau and Joe LeDoux (which I am happy to have helped out a bit in developing)). But this assertion is mistaken, or at least is subject to important empirical objections. Koch does not say which cases he has in mind (and he does not include any references in the Nature paper) but we can get some ideas from a recent exchange in the Journal of Neuroscience.

One case in particular is often cited as evidence that consciousness survives extensive damage to the frontal lobe. In their recent paper Odegaard, Knight, and Lau have argued that this is incorrect. Below is figure 1 from their paper.

Figure 1a from Odegaard, Knight, and Lau

This is the brain of Patient A, who was reportedly the first patient to undergo bilateral frontal lobectomy. In it the central sulcus is labeled in red along with Brodmann’s areas 4, 6, 9, and 46. Labeled in this way, it is clear that an extensive amount of (the right) prefrontal cortex is intact (basically everything anterior to area 6 would be preserved PFC). If that is right then this was hardly a complete bilateral lobectomy! There is more than enough preserved PFC to account for the preserved conscious experience of Patient A.

Boly et al have a companion piece in the Journal of Neuroscience as well as a response to the Odegaard paper (Odegaard et al responded to Boly as well and made these same points). Below is figure R1C from the response by Boly et al.

Figure R1C from response by Melanie Boly, Marcello Massimini, Naotsugu Tsuchiya, Bradley R. Postle, Christof Koch, and Giulio Tononi

Close attention to figure R1C shows that Boly et al have placed the central sulcus in a different location than Odegaard et al did. In the Odegaard et al paper the central sulcus is marked behind where the white ‘3, 1, 2’ numbers occur in the Boly et al image. If Boly et al are correct then, as they assert, pretty much the entire prefrontal cortex was removed in the case of Patient A, and if that is the case then of course there is strong evidence that there can be conscious experience in the absence of prefrontal activity.

So here we have some experts in neuroscience, among them Robert T. Knight and Christof Koch, disagreeing about the location of the central sulcus in the Journal of Neuroscience. As someone who cares about neuroscience and consciousness (and has to teach it to undergraduates) this is distressing! And as someone who is not an expert on neurophysiology I tend to go with Knight (surprised? he is on my side, after all!), but even if you are not convinced you should at least be convinced of one thing: it is not clear that there is evidence from “neurological patients in the first half of the 20th century” which suggests that the prefrontal cortex is not crucially involved in conscious experience. What is clear is that it seems a bit odd to keep insisting that there is while ignoring the empirical arguments of experts in the field.

On a different note, I thought it was interesting that Koch made this point.

IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious—even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.

This is a topic for another day but I would have thought you could have integrated information in a simulated system.

Mary, Subliminal Priming, and Phenomenological Overflow

Consider Mary, the super-scientist of Knowledge Argument fame. She has never seen red and yet knows everything there is to know about the physical nature of red and the brain processing related to color experience. Now, as a twist, suppose we show her red subliminally (say with backward masking or something). She sees a red fire hydrant and yet denies that she saw anything except the mask. Yet we can show that she is primed by this exposure (quicker, say, to subsequently identify a fire truck than a duck). Does she learn what it is like to see red from this? Does she know what it is like to see red and yet not know that she knows this?

It seems to me that views which accept phenomenological overflow, and allow that there is phenomenal consciousness in the absence of any kind of cognitive access, have to say that the subliminal exposure to red does let Mary learn what it is like for her to see red (without her knowing that she has learned this). But this seems very odd to me, and thus it strikes me as a kind of a priori consideration that suggests there is no overflow.

Of course I have had about 8 hours of sleep in the last week so maybe I am missing something?

 

Papa don’t Teach (again!)


The Brown Boys

2018 is off to an eventful start in the Brown household. My wife and I have just welcomed our newborn son Caden (pictured with older brother Ryland and myself to the right) and I will soon be going on parental leave until the end of April. For various reasons I had to finish the last two weeks of the short Winter semester after Caden was born (difficult!). That is all wrapped up now and there is just one thing left to do before officially clocking out.

Today I will be co-teaching a class with Joseph LeDoux at NYU. Joe is teaching a course on The Emotional Brain and he asked me to come in to discuss issues related to our recent paper. I initially recorded the presentation below to get a feel for how long it would run (I went a bit overboard I think) but I figured once it was done I would post it. The animations didn’t work out (I used PowerPoint instead of Keynote), I lost some of the pictures, and I was heavily rushed and sleep-deprived (plus I seem to be talking very slowly when I listen back to it), but at any rate any feedback is appreciated. Since this was to be presented to a neuroscience class I tried to emphasize some of the points made recently by Hakwan Lau at his blog.

Ian Phillips on Simple Seeing

A couple of weeks ago I attended Ian Phillips’ CogSci talk at CUNY. Things have been hectic but I wanted to get down a couple of notes before I forget.

He began by reviewing change blindness and inattentional blindness. In both of these phenomena subjects sometimes fail to recognize (or report) changes that occur right in front of their faces. These cases can be interpreted in two distinct ways. On one interpretation one is conscious only of what one is able to report on, or attend to. So if there is a doorway in the background that is flickering in and out of existence as one searches the two pictures looking for a difference, and when asked one says that one sees no difference between the two pictures, then one does not consciously experience the doorway or its absence. This is often dubbed the ‘sparse’ view and it is interpreted as the claim that conscious perception contains a lot less detail than we naively assume.

Fred Dretske was a well known defender of a view which distinguishes two components of seeing. There is what he called ‘epistemic seeing’ which, when a subject sees that p, “ascribes visually based knowledge (and so a belief) to [the subject]”. This was opposed to ‘simple seeing’ which “requires no knowledge or belief about the object seen” (all quoted material is from Phillips’ handout). This ‘simple seeing’ is phenomenally conscious but the subject fails to know that they have that conscious experience.

This debate is well known and has been around for a while. In the form I am familiar with, it is a debate between first-order and higher-order theories of consciousness. If one is able to have a phenomenally conscious experience in the absence of any kind of belief about that state then the higher-order thought theory, on which conscious perception requires a kind of higher-order cognitive state about the first-order state, is false. The response developed by Rosenthal, and which I find pretty plausible, is that in change blindness cases the subject may be consciously experiencing the changing element but not conceptualizing it as the thing which is changing. This, to me, is just a higher-order version of the kinds of claims that Dretske is making, which is to say that this is not a ‘sparse’ view. Conscious perception can be as rich and detailed as one likes and this does not require ‘simple seeing’. Of course, the higher-order view is also compatible with the claim that conscious experience is sparse but that is another story.

At any rate, Phillips was not concerned with this debate. He was more concerned with the arguments that Dretske gave for simple seeing. He went through three of Dretske’s arguments and argued that each one had an easy rejoinder from the sparse camp (or the higher-order camp). The first he called ‘conditions’ and involved the claim that when someone looks at (say) two pictures for 3-5 minutes, scanning every detail to see if there is any difference between them, we would ordinarily say that they have seen everything in the two pictures. I mean, they were looking right at it and their eyes are not defective! The problem with this line of argument is that it does not rule out the claim that they unconsciously saw the objects in question. The next argument, from blocking, meets the same objection. Dretske claims that if you are looking for your friend and no one is standing in front of them blocking them from your sight, then we can say that you did see your friend even if you deny it. The third argument involved the claim that when searching the crowd for your friend you saw that no one was naked. But this meets a similar objection to the previous two arguments. One could easily not have (consciously) seen one’s friend and just inferred that, since you didn’t see anyone naked, your friend was not naked either.

Phillips then went on to offer a different way of interpreting simple seeing based on signal detection theory. The basic intuition for simple seeing, as Phillips sees it, lies in the idea that the visual system delivers information to us and then there is what we do with that information. The basic metaphor was a letter being delivered. The delivery of the letter (the placing of it into the mailbox) is one thing; your getting the letter and understanding its contents is another. Simple seeing can then be thought of as the informative part, and the cognitive noticing, attending, higher-order thought, etc., can be thought of as a second, independent stage. Signal detection theory, on his view, offers a way to capture this distinction.

Signal detection theory starts by treating the subject as an information channel. This is then quantified, usually by having the subject perform a yes/no detection task and looking at how often they correctly report a stimulus that is present (hits) versus how often they report one that is not there (false alarms). False alarms, specifically, involve the subject saying they see something but being wrong about it, because there was no visual stimulus. This is distinguished from ‘misses’, where there was a target but the subject did not report it. The subject’s sensitivity to the world is called d’, pronounced “d prime”. On top of this another value, c, is computed. c, for criterion, is thought of as measuring a bias in the subject’s responses and is typically computed as minus the average of the z-transformed hit and false-alarm rates. One can think of the criterion as giving you a sense of how ‘liberal’ or ‘conservative’ the subject’s responding is. If they say they saw something nearly all the time then they have a very liberal criterion for determining whether they saw something (that is to say, they are biased towards saying ‘yes I saw it’ and are presumably mistaking noise for signal). If they almost never say they saw it then they are very conservative (they are biased towards saying ‘no I didn’t see it’). This gives us a sense of how much of the noise in the system the subject treats as actually carrying information.
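To make the two quantities concrete, here is a minimal sketch (my own illustration, not anything from Phillips’ handout) of the standard equal-variance computation of d’ and c from hit and false-alarm rates; the example numbers are made up.

```python
# Standard equal-variance signal detection computations (illustrative only).
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Return (d', c) given hit and false-alarm rates."""
    z_hit = norm.ppf(hit_rate)      # z-transform of hit rate
    z_fa = norm.ppf(fa_rate)       # z-transform of false-alarm rate
    d_prime = z_hit - z_fa         # sensitivity to the world
    c = -0.5 * (z_hit + z_fa)      # criterion: 0 = unbiased,
                                   # negative = liberal, positive = conservative
    return d_prime, c

# A subject with 80% hits and 20% false alarms: sensitive and roughly unbiased.
print(sdt_measures(0.80, 0.20))    # approx (1.68, 0.0)
# Similar sensitivity but a liberal criterion: says 'yes' far more often.
print(sdt_measures(0.95, 0.50))    # approx (1.64, -0.82)
```

The second call shows how two observers can have roughly the same sensitivity while one is far more willing to say ‘yes I saw it’, which is exactly the separation between information and response that Phillips wants to exploit.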

The suggestion made by Phillips was that this distinction could be used to save Dretske’s view if one took d’ to track simple seeing and c to track the subject’s knowledge. He then went on to talk about empirical cases. The first involved memory across saccades and came from Hollingworth and Henderson, Accurate Visual Memory for Previously Attended Objects in Natural Scenes, the second from Mitroff and Levin, Nothing Compares 2 Views: Change blindness can occur despite preserved access to the changed information, and the third from Ward and Scholl, Inattentional blindness reflects limitations on perception, not memory. Each of these can be taken to suggest that there is “evidence of significant underlying sensitivity in [change blindness] and [inattentional blindness]”.

He concluded by talking about blindsight as a possible objection. Dretske wanted to avoid treating blindsight as a case of simple seeing (that is, of there being phenomenal consciousness that the subject is unaware, in any cognitive sense, of having). Dretske proposed that what was missing was the availability of the relevant information to act as a justifying reason for the subject’s actions. Phillips then went on to suggest various responses to this line of argument. Perhaps blindsight subjects who do not act on the relevant information (say by not grabbing the glass of water in the area of their scotoma) are having the relevant visual experience but are simply unwilling to move (how would we distinguish this from their not having the relevant visual experience?). Perhaps blindsight patients can be thought of as adjusting their criterion, and so as choosing the interval with the strongest response, and if so this can be thought of as reason responsive. Finally, perhaps, even though they are guessing, they really can be thought of as knowing that the stimulus is there?

In discussion afterwards I asked whether he thought this line of argument was susceptible to the same criticism he had leveled against Dretske’s original arguments. One could interpret d’ as tracking conscious visual processing that the subject doesn’t know about, or one could interpret it as tracking the amount of information represented by the subject’s mental states independently (at least to some extent) of what the subject was consciously experiencing. So, one might think, the d’ is good, so the subject represents information about the stimulus that is able to guide their behavior, but that may be going on while the subject is conscious of some of it but not all of it, or of different aspects of it, etc. So there is no real reason to think of d’ as tracking simple (i.e. unconceptualized, unnoticed, uncategorized, etc.) content that is conscious as opposed to non-conscious. He responded that he did not take himself to be offering an argument. Rather he was trying to offer a model that captured what he took to be Dretske’s basic intuition, which was that there is the information represented by the visual system, which is conscious, and then there is the way that we are aware of that information. This view is sometimes cast as unscientific, and he thought of the signal detection material as providing a framework that, if interpreted in the way he suggested, could capture, and thus make scientifically acceptable, something like what Dretske (and other first-order theorists) want.

There was a lot of good discussion, a lot of which I don’t remember, but I do remember Ned Block asking about Phillips’ response to cases like the famous Dretske example of a wall, painted a certain color, with a piece of wallpaper in one spot. The little square of wallpaper has been painted and so is the same color as the wall. If one is looking at the wall and doesn’t see that there is a piece of wallpaper there, does one see (in the simple seeing kind of way) the wallpaper? Phillips seemed to be saying we do (but don’t know it), and Block asked whether it wasn’t the case that when we see something we represent it visually, and Phillips responded by saying that on the kind of view he was suggesting that wasn’t the case. Block didn’t follow up and didn’t come out afterwards, so I didn’t get the chance to pursue that interesting exchange.

Afterwards I pressed him on the issue I raised. I wondered what he thought about the kinds of cases, discussed by Hakwan Lau (and myself), where d’ is matched but subjects give differing answers to questions like ‘how confident are you that you saw it?’ or ‘rate the visibility of the thing seen’. In those cases we have, due to matched d’, the same information content (worldly sensitivity) and yet one subject says they are guessing while the other says they are confident they saw it (or one rates its visibility lower while the other rates it higher, so as more visible). Taking this seriously seems to suggest that there is a difference in what it is like for these subjects (a difference in phenomenal consciousness) while there is no difference in what they represent about the world (so at the first-order level). The difference in what it is like for them seems to track the way in which they are aware of the first-order information (as tracked by their visibility/confidence ratings). If so then this suggests that d’ doesn’t track phenomenal consciousness. Phillips responded by suggesting that there may be a way to talk about simple seeing involving differences in what it is like for the subject but didn’t elaborate.

I am still not sure how he responds to the argument Hakwan and I have given. If there is differing conscious experience with the same first-order states in each case then the difference in conscious experience can only be captured (or is best captured) by some kind of difference in our (higher-order) awareness of those first-order states.
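To show the shape of the argument, here is a toy simulation (entirely made-up numbers, not the actual experiments Hakwan and I have in mind) of two observers with identical first-order sensitivity and identical yes/no performance whose confidence criteria differ, so that one mostly reports guessing while the other mostly reports confidently seeing the stimulus.

```python
# Toy illustration: matched d' (same evidence distributions, same detection
# criterion) but different confidence criteria yield very different reports.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
d_prime = 1.5                          # identical sensitivity for both observers
detection_criterion = d_prime / 2      # identical yes/no criterion

signal = rng.normal(d_prime, 1.0, n)   # internal evidence on stimulus-present trials
noise = rng.normal(0.0, 1.0, n)        # internal evidence on stimulus-absent trials

for label, confidence_criterion in [("Observer A (conservative confidence)", 2.5),
                                    ("Observer B (liberal confidence)", 1.2)]:
    hits = (signal > detection_criterion).mean()
    false_alarms = (noise > detection_criterion).mean()
    high_confidence = (signal > confidence_criterion).mean()
    print(f"{label}: hits={hits:.2f}, false alarms={false_alarms:.2f}, "
          f"high-confidence 'seen' reports={high_confidence:.2f}")
```

The first-order performance (and hence d’) is the same for both observers by construction; only the proportion of trials reported with high confidence differs, and that is exactly the dissociation the argument trades on.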

In addition, now that I have thought about it a bit, I wonder how he would respond to Hakwan’s argument (stemming more from his own version of higher-order thought theory) that the setting of the criterion, which Phillips appeals to in the blindsight cases, depends on a higher-order process and so amounts to a cognitive state having a constitutive role in determining how the first-order state is experienced. This suggests that an ‘austere’ notion of simple seeing, in which no cognitive states are involved in phenomenal consciousness, is harder to find than Phillips originally thought.

Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons) but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as a practically convincing case. This point is made especially clearly by David Chalmers in his paper The Singularity: A Philosophical Analysis (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’),

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (that I know of) but it is often taken to involve in some way producing a functional isomorph to the human brain. Thus partial uploading would be the partial production of a functional isomorph to the human brain. In particular we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading as Chalmers or others describe it, but there are people who seem to be doing things that look a bit like partial uploading. First one might think of cochlear implants. What we can do now is impressive but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerve in (what we hope are) appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the role of the computational device has anything to do with constituting the conscious experience, or with being part of the subject’s mind. It looks to me like these are akin to fancy glasses. They causally interact with the systems that produce consciousness but do not show that the mind can be replaced by a silicon computer.

The artificial hippocampus gives us another nice test case. While still in its early development, it certainly seems like a real possibility that the next generation of people with memory problems may have neural prosthetics as an option (there is even a startup trying to make it happen, and here is a nice video of Theodore Berger presenting the main experimental work).

What we can do now is fundamentally limited by our lack of understanding about what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. They record from an intact hippocampus (in rats) while the animal performs some memory task and then have a computer analyze and predict what the output of the hippocampus would have been. When compared to the actual output of hippocampal cells the prediction is pretty good, and the hope is that they can then use this to stimulate post-hippocampal neurons as they would have been stimulated if the hippocampus were intact. This has been done as proof of principle in rats (not in real time) and now in monkeys, in real time, and in the prefrontal cortex as well!

The monkey work was really interesting. They had the animal perform a task which involved viewing a picture and then waiting through a delay period. After the delay period the animal is shown many pictures and has to pick out the one it saw before (this is one version of a delayed match to sample task). While the animals were doing this they recorded the activity of cells in the prefrontal cortex (specifically layers 2/3 and 5). When they introduced a drug into the region which was known to impair performance on this kind of task, the animal’s performance was very poor (as expected), but if they stimulated the animal’s brain in the way that their computer program predicted the deactivated region would have responded (specifically, they stimulated the layer 5 neurons, via the same electrode they previously used to record, in the way that the model predicted they would have been driven by layer 2/3), the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory for the animal’. He then shows that if you do this with an animal that has an intact brain it does better than it did before. This can be used to enhance the performance of a neurotypical brain!

They say they are doing human trials but I haven’t heard anything about that. Even so, this is impressive in that they used it successfully in rats for long-term memory in the hippocampus and also in monkeys for working memory in the prefrontal cortex. In both cases they seem to get the same result. It starts to look like it is hard to deny that the computer is ‘forming’ the memory and transmitting it for storage. So something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case, where we have a system causally interacting with the brain but it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experiences. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph), but if we get the same pattern of results for working memory in humans are we heading towards Chalmers’ acceptance scenario?

Eliminative Non-Materialism

It struck me today that all of the eliminativists about the mind are physicalists (or materialists), and a quick Google search didn’t reveal any eliminativist dualists out there. But why is that?

I can see why a particular kind of dualist would reject eliminativism. If one held that the mind was transparent to itself in a strong way, then the existence of beliefs and other mental states could be known directly via the first-person method of introspection. But does that exhaust the possibilities? Suppose one thought that there was a robust correlation (or even causation) between the brain and the mind. Then one would expect a robust NCC for every conscious state (assuming a law-like connection, or at least correlation, between brain states and mental states).

To give us a model to work with, let’s assume that there is a correlation between functional states of the brain and consciousness such that whenever certain functional states are realized this guarantees (given our laws of physics, etc.) that a certain (non-physical) state of consciousness is also instantiated. Now suppose that we have a pretty good functional definition of what the functional correlate of a given mental state should be. That is, suppose we have worked out in a fair amount of detail what kinds of functional states we expect to be correlated with the conscious mental states posited by folk psychology. Now further suppose that when we advance far enough in our neuroscience we see that there are no such states realized in the brain, or that the states are somewhat like what we thought but vary in some dramatic way from what we had worked out folk-psychologically.

At that point it seems we would have two options. One thing we could do is maintain that there is, after all, no law-like correlation between brain states and mental states. There is a belief or a red quale, say, but it is somehow instantiated in a way independent of the neural workings. This seems like a bad option. The second option would be to abandon folk psychology and say that the non-physical states of mind are better captured by what the correlates are suggesting. The newly posited non-physical states might be so different from the original folk-psychological posits that we might be tempted to say that the originally postulated states don’t exist. Wouldn’t we then have arrived at an eliminative non-materialism?

As a corollary, doesn’t this possibility suggest that there aren’t any truly a priori truths knowable from introspection?

LeDoux and Brown on Higher-Order Theories and Emotional Consciousness

On Monday May 1st Joe LeDoux and I presented our paper at the NYU philosophy of mind discussion group. This was the second time that I have presented there (the first was with Hakwan (back in 2011!)). It was a lot of fun and there was some really interesting discussion of our paper.

There were a lot of inter-related points and objections that came out of the discussion, but here I will focus on just a few themes that stood out to Joe and me afterwards. I haven’t yet had the chance to talk with him extensively about this, so this is just my take on the discussion.

One of the issues centered on our postulation that there are three levels of content in emotional consciousness. On the ‘traditional’ higher-order theory there is the postulation of two distinct states. One is ‘first-order’, meaning that the state represents something in the world (the animal’s body counts as being in the world in this sense). A higher-order mental state is one that has higher-order content, meaning that it represents a mental state as opposed to some worldly, non-mental thing. It is often assumed that the first-order state will have some basic, some might even say ‘non-representational’ or non-conceptual, kind of content. We do not deny that there are states like this, but we suggested that we needed to ‘go up a level’, so to speak.

Before delving into this I will say that I view this as an additional element in the theory. The basic idea of HOROR theory is just that the higher-order state is the phenomenally conscious state (because that is what phenomenal consciousness is). I am pretty sure that the idea of the lower-order state being itself a higher-order state is Joe’s idea, but to be fair I am not 100% sure. The idea was that the information coming in from the senses needs to be assembled in working memory in such a way as to allow the animal to connect memories, engage schemas, etc. We coined the term ‘lower-order’ to take the place of ‘first-order’. For us a lower-order state is just one that is the target of a higher-order representation. Thus, the traditional first-order states would count as lower-order on our view, but so would additional higher-order states that are re-represented at a higher level.

Thus on the view we defended the lower-order states are not first-order states. These states represent first-order states and thus are higher-order in nature. When you see an apple, for example, there must be a lot of first-order representations of the apple, but these must be put together in working memory, resulting in a higher-order state which is an awareness of these first-order states. That higher-order representation is the ‘ground floor’ representation for our view. It is itself not conscious but it results in the animal behaving in appropriate ways. At this lower-order level we would characterize the content as something like ‘(I am) seeing an apple’. That is, there is an awareness of the first-order states and a characterization of those states as being a seeing of red, but there is no explicit representation of the self. There is an implicit referring to the self, by which we mean that these states are attributed to the creature who has them but not in any explicit way. This is why we think of this state as just an awareness of the first-order activity (plus a characterization of it). At the third level we have a representation of this lower-order state (which is itself a higher-order state in that it represents first-order states).

Now, again, I do not really view this three-layer approach as essential to the HOROR theory. I think HOROR theory is perfectly compatible with the claim that it is first-order states that count as the targets. But I do think there is an interesting issue at stake here, namely what role exactly the ‘I’ in ‘I am seeing a red apple’ is playing, and also whether first-order states can be enough to play the role of lower-order states. Doesn’t the visual activity related to the apple need to be connected to concepts of red and apple? If so then there needs to be higher-order activity that is itself not conscious.

Another issue focused on our methodological challenge to using animals in consciousness research. Speaking for myself, I certainly think that animals are conscious, but since they cannot verbally report, and as long as we truly believe that the cognitive unconscious is as robust as is widely held, we cannot rule out that animal behavior is produced by non-conscious processes. What this suggests is that we need to be cautious when we infer from an animal’s behavior that its cause is a phenomenally conscious mental state. Of course that could be what is going on, but how do we establish that? It cannot be the default assumption as long as we accept the claims about the cognitive unconscious. Thus we do not claim that animals do or do not have conscious experience, but rather that the science of consciousness is best pursued in humans (for now at least). For me this is related to what I think of as the biggest confound in all of consciousness science, and that is the confound of behavior. If an animal can perform a task then it is assumed this is because its mental states are conscious. But if this kind of task can be performed unconsciously then behavior by itself cannot guarantee consciousness.

One objection to this claim (sadly I forgot who made it…maybe they’ll remind me in the comments?) was that maybe verbal responses themselves are non-conscious. I asked whether the kind of view that Dennett has, where there is just some sub-personal mechanism which results in an utterance of “I am seeing red” and this is all there is to the conscious experience of seeing red, counts as the kind of view the objector had in mind. The response was that, no, they had in mind that maybe the subjects are zombies with no conscious experience at all and yet are able to answer the question “what do you see?” with “I see red,” just like zombies are thought to do. I responded to this with what I think is the usual way to respond to skeptical worries. That is, I acknowledge that there is a sense in which such skeptical scenarios are conceivable (though maybe not exactly as the conceiver supposes), but there are still reasons for not getting swept up in skepticism. For example, I agree with the “lessons” from fading, dancing, and absent qualia cases that we would be, in an unreasonable way, detached from our conscious experiences if this were happening. The laws of physics don’t give us any reason to suppose that there are radical differences between similar things (like you and me), though if we discovered an important brain area missing or damaged then I suppose we could be led to the conclusion that some member of the population lacked conscious experience. But why should we take this seriously now? I know I am conscious from my own first-person point of view, and unless we endorse a radical skepticism, science should start from the view that report is a reliable(ish) guide to what is going on in a subject’s mind.

Another issue focused on our claim that animal consciousness may be different from human conscious experience. If you really need the concept ‘fear’ in order to feel afraid and if there is a good case to be made that animals don’t have our concept of fear then their experience would be very different from ours. That by itself is not such a bad thing. I take it that it is common sense that animal experience is not exactly like human experience. But it seems as though our view is committed to the idea that animals cannot have anything like the human experience of fear, or other emotions. Joe seemed to be ok with this but I objected. It is true that animals don’t have language like humans do and so are not able to form the rich and detailed kinds of concepts and schemas that humans do but that does not mean that they lack the concept of fear at all. I think it is plausible to think that animals have some limited concepts and if they are able to form concepts as basic as danger (present) and harm then they may have something that approaches human fear (or a basic version of it). A lot of this depends on your specific views about concepts.

Related to this, and brought up by Kate Pendoley, was the issue of whether there can be emotional experiences that we only later learn to describe with a word. I suggested that the answer may be yes, but that even so we will describe the emotion in terms of its relations to other known emotions: ‘it is more like being afraid than feeling nausea’ and the like. This is related to my background view about a kind of ‘quality space’ for the mental attitudes.

Afterwards, over drinks, I had a discussion with Ned Block about the higher-order theory and the empirical evidence for the role of the prefrontal cortex in conscious experience. Ned has been hailing the recent Brascamp et al paper (nice video available here) as evidence against prefrontal theories. In that paper they showed that if you take away report and attention (by making the two stimuli barely distinguishable) then the prefrontal fMRI activation disappears. I defended the response that fMRI is too crude a measure to take this null result too seriously. This is what I take to be the line argued in the recent paper by Brian Odegaard, Bob Knight, and Hakwan, Should a few null findings falsify prefrontal theories of consciousness? Null results are ambiguous between the falsifying interpretation and the activity just being missed by a crude tool. As Odegaard et al argue, if we use more invasive measures like single-cell recording or ECoG then we would find prefrontal activity. In particular, the Mante et al paper referred to in Odegaard et al is a pretty convincing demonstration that there is information decodable from prefrontal areas that would be missed by fMRI. As they say in the linked paper,

There are numerous single- and multi- unit recording studies in non-human primates, clearly demonstrating that specific perceptual decisions are represented in PFC (Kim and Shadlen, 1999; Mante et al., 2013; Rigotti et al., 2013). Overall, these studies are compatible with the view that PFC plays a key role in forming perceptual decisions (Heekeren et al., 2004; Philiastides et al., 2011; Szczepanski and Knight, 2014) via ‘reading out’ perceptual information from sensory cortices. Importantly, such decisions are central parts of the perceptual process itself (Green and Swets, 1966; Ratcliff, 1978); they are not ‘post-perceptual’ cognitive decisions. These mechanisms contribute to the subjective percept itself (de Lafuente and Romo, 2006), and have been linked to specific perceptual illusions (Jazayeri and Movshon, 2007).

In addition to this, Ned accused us of begging the question in favor of the higher-order theory. In particular he thought that there really was no conscious experience in the rare Charles Bonnet cases and that our appeal to Rahnev was just question begging.

Needless to say I disagree with this, and there is a lot to say about these particular points, but I will have to come back to these issues later. Before I have to run, and just for the record, I should make it clear that, while I have always been drawn to some kind of higher-order account, I have also felt the pull of first-order theories. I am in general reluctant to endorse any view completely, but I guess I would have to say that my strongest allegiance is to the type-type identity theory. Ultimately I would like it to be the case that consciousness and mind are identical to brain states and/or processes. I see the higher-order theory as compatible with the identity theory but I am also sympathetic to other versions (for full disclosure, there is even a tiny (tiny) part of me that thinks functionalism isn’t as bad as dualism (which itself isn’t *that* bad)).

Why, then, do I spend so much time defending the higher-order theory? When I was still an undergraduate student I thought that the higher-order thought theory of consciousness was obviously false. After studying it for a while and thinking more carefully about it I revised my credence to ‘not obviously false’. That is, I defended it against objections because I thought those objections dismissed the theory unduly quickly.

Over time, and largely for empirical reasons, I have updated my credence from ‘not obviously false’ to ‘possibly true’, and this is where I am at now. I have become more confident that the theory is empirically and conceptually adequate, but I do not by any means think that there is a decisive case for the higher-order theory.