Consciousness Science & The Emperor’s Arrival

Things have been hectic around here because I have been teaching 4 classes (4 preps) in our short 6-week winter session. It is almost over, just in time for our Spring semester to start! Even so, February has been nice, with a couple of publications coming out.

The first is Opportunities and Challenges for a Maturing Science of Consciousness. I was very happy to see this piece come out in Nature Human Behaviour. Matthias Michel, Steve Fleming, and Hakwan Lau did a great job of coordinating the 50+ co-authors (open-access viewable pdf here). As someone who was around as an undergraduate towards the beginning of the current enthusiasm for the science of consciousness, it was quite an honor to be included in this project!

In addition to that, Blockheads! Essays on Ned Block’s Philosophy of Mind and Consciousness is out! This book has a lot of interesting papers (and replies from Ned) and I am really looking forward to reading it.


Hakwan Lau and I wrote our contribution back in 2011-2012, and a lot has happened in the seven years since then! Of course I had to read Ned’s response to our paper first, and I will have a lot to say in response (we actually have some things to say about it in our new paper together with Joe LeDoux), but for now I am just happy it is out!

Gennaro on Higher-Order Theories

I was asked to review the Bloomsbury Companion to the Philosophy of Consciousness and had some things to say about the chapter on higher-order theories of consciousness by Rocco Gennaro that I could not fit into a paragraph or two, so I am extending them here.


In the fourth paper of this second section Rocco Gennaro gives us his interpretation of “Higher-Order Theories of Consciousness”. Higher-order theories of consciousness claim that consciousness as we ordinarily experience it requires a kind of inner awareness, an awareness of our own mental life. To consciously experience the red of a tomato is to be aware of oneself as seeing a red object. Gennaro offers a survey of the traditional higher-order accounts, but anyone new to the area who read this chapter would get a very biased account of the lay of the land. Specifically, there are three things that are misleading about Gennaro’s overview. The first is how he presents the theory itself. The second is how he responds to the classic misrepresentation objection to higher-order thought theories of consciousness. And the third is how he presents the case for whether or not the prefrontal cortex is a possible neural realizer of the relevant higher-order thoughts.

Gennaro interprets the higher-order theory as what I have called the ‘relational view’. As he says on page 156,

Conscious mental states arise when two unconscious mental states are related in a certain specific way, namely that one of them (the [higher-order representation]) is directed at the other ([mental state]).

This makes it clear that on his way of doing things it is necessary that there be two states, with one directed at the other, and that these two states together ‘give rise’ to a (phenomenally) conscious mental state. Rosenthal and those who follow him interpret the higher-order thought theory as what I have called the ‘non-relational view’. On the non-relational view consciousness consists in having the relevant higher-order state. There is some discussion of this distinction in Pete Mandik’s chapter at the end of the book (under the heading of ‘cognitive approaches to phenomenal consciousness’), but if one just read Gennaro’s chapter on higher-order theory one would be misled about the other approach.

This comes out clearly in Gennaro’s discussion of the ‘mismatch’ objection. A familiar objection to higher-order theories is that they allow the possibility of differing contents in higher-order and lower-order states. If one sees a red object but has a higher-order thought of the right kind that represents one as seeing a green object, what is it like for the subject? The non-relational view answers that it is like seeing green, even though one will behave as though one is seeing red. Gennaro disagrees and says that there must be a partial or complete match between the concepts in the HOT and the first-order state (or the concepts in the higher-order state must be more fine-grained than in the lower-order state, or vice versa) or there is no conscious experience at all. He considers cases like associative agnosia, where someone can see a whistle and consciously see its silver color and its shape, can draw it really well, etc., but doesn’t know that it is a whistle. They just can’t identify what it is based on how it looks (though they can identify a whistle by its sound). Gennaro holds that the right way to interpret this is that the subject has a higher-order thought that represents the first-order representation of the whistle incompletely. It represents that one is seeing a silver object that has such and such a shape. But it does not represent that one is seeing a whistle (p. 156). He argues that in a case of associative agnosia there is a partial match between the HO and FO state and that results in a conscious experience that lacks meaning.

First, it is strange to be talking in terms of ‘matching’ between contents. What determines whether there is a match? Gennaro talks of the ‘faculty of the understanding,’ of its ‘operating on the data of the sensibility’ by ‘applying higher-order thoughts’, and of the higher-order state ‘registering’ the content of the first-order state, but it is not clear what these things really mean. Second, he makes the assumption that one consciously experiences the whistle as a whistle, or that high-level concepts figure in the phenomenology of a subject. This is a controversial claim and even if it is true (or one thinks that it is) one should recognize that this is not a required part of the higher-order view. On the way Rosenthal has set the theory up one has higher-order thoughts of the appropriate kind about sensory qualities and their relations to each other, but one does not have concepts like ‘whistle’ in the consciousness-making higher-order thoughts. One will then come to judge/make an inference that one is seeing a whistle, which will result in a belief that one is seeing that whistle, but this belief will be a first-order belief (that is, a belief which is not about something mental; in this case it is about the whistle).

Gennaro says that these kinds of cases support the claim that there must be some kind of match between first-order and higher-order states, but it is not clear that they really do. What he has argued for is the claim that the content of the higher-order state determines what it is like for the subject. What reason do we have to think that the match between first-order and higher-order state is playing a role? In other words, what reason do we have to think that the same would not be the case when the first-order state represented red and the higher-order state that one was seeing green, as the non-relational view holds?

His sole criticism of the non-relational view comes when he says,

but the problem with this view is that somehow the [higher-order thought] alone is what matters. Doesn’t this defeat the purpose of [higher-order thought] theory which is supposed to explain state consciousness in terms of a relation between two states? Moreover, according to the theory the [lower-order] state is supposed to be conscious when one has an unconscious HOT” (p. 155; italics in the original).

This is a really bad objection to the non-relational version of the higher-order thought theory. The first part merely asserts that there is no non-relational version of the higher-order thought theory. The second part is something that Rosenthal accepts. The lower-order state is conscious when one has an appropriate higher-order state because that is what that property consists in. What it is for a first-order state to have the property of being conscious, for Rosenthal, is for one to have an appropriate higher-order thought which attributes that first-order state to oneself.

In addition, Gennaro goes on to criticize the recent speculation by higher-order theorists that the prefrontal cortex is crucially involved in producing conscious experience. It is of course an open empirical question whether the prefrontal cortex is required for conscious experience and, if so, whether it is because it instantiates the relevant kind of higher-order awareness. However, Gennaro’s arguments are extremely weak and do nothing to cast doubt on this empirical hypothesis. He first appeals to work by Rafi Malach showing that there is decreased PFC activity when subjects are absorbed in watching a film. However, he does not note that Rosenthal and Lau responded to this. He then appeals to the fact that PFC activation is seen only when a report is required. This has also been recently addressed (by Lau). Finally, he appeals to lesion studies suggesting that there is no change in conscious experience when the PFC is lesioned. However, there is considerable controversy over the correct interpretation of these results, and Gennaro merely appeals to second- and third-hand literature reviews (see the recent debate in the Journal of Neuroscience between Lau and colleagues and Koch and colleagues).

Consciousness, Higher-Order Theories of

I have been asked to write an entry on higher-order theories of consciousness for the Routledge Encyclopedia of Philosophy, which apparently has never had an entry on this! Below is a very (very) rough draft of the entry so far. It really isn’t much more than a first draft and obviously needs a lot of work, but it will give you some idea of the direction I am heading. Any feedback/criticism would be most welcome!

 

  1. Introduction

Higher-order theories of consciousness take a variety of forms, but they are united by the claim that consciousness crucially involves some kind of inner awareness of one’s own mind. Though there are clear historical precedents and inspirations in the work of Aristotle, Descartes, Locke, and Kant, it is not clear which version (if any) of higher-order theory these historical figures held. There seems among these thinkers to be a commitment to the idea that consciousness requires some kind of inner awareness, but higher-order theories were most clearly formulated in contemporary philosophy of mind. This entry will focus on contemporary developments.

  2. The Higher-Order Approach to Consciousness

When giving a theory of consciousness one must first delineate what the target phenomenon is supposed to be, especially when pursuing something as ambiguous as consciousness.

We say of creatures that they are conscious or unconscious, that they are awake or asleep, etc. This has been called creature consciousness (Rosenthal). This can be contrasted with what is often called state consciousness, which marks the contrast between a particular mental state being conscious versus unconscious (as in subliminal perception).

Phenomenal consciousness captures the subjective ‘what it is like’ component of consciousness. When we taste chocolate, see red, experience pain, hunger, or anger, there is something that it is like for us to have those experiences. The specific way that it is for us to have those experiences consists in various phenomenal properties (Chalmers).

Higher-order theories are often cast as theories of state consciousness. That is, higher-order theories are often aimed at explaining the difference between a state which is conscious and a state which is unconscious. The higher-order strategy is to appeal to the inner awareness that we have of our own mental lives. A conscious state, on this approach, consists in my being aware of myself as being in that state.

Some higher-order theorists go so far as to deny that phenomenal consciousness exists (Rosenthal). However, there is a natural way to connect these two notions of consciousness. When one is in a mental state that one is in no way at all aware of being in, there is nothing that it is like for one. For example, when subliminally presented with a red strawberry, so that one denies seeing it, it is natural to say that it is not like seeing red for one. It is also natural to say that the state which represents the strawberry and its redness is unconscious. The converse of this is that when there is something that it is like for one to see the red strawberry one is in some way aware of oneself as being in the state that represents it. Thus when a state is conscious there is something that it is like for one to be in that state. This is the way in which these terms will be used in this entry.

Construed in this way, higher-order theories of consciousness aim to explain phenomenal consciousness, which is the same as trying to explain state consciousness. Traditionally we recognize two ways in which we can become aware of things in our environment: by perceiving and by thinking. First-order theories argue that phenomenal consciousness can be understood by appeal to awareness of the world. Higher-order theories argue that these first-order states are not enough: in addition to an awareness of things, properties, and facts about the world, we must also have an awareness of our outer-directed awareness. This inner awareness is higher-order in that it is an awareness of something that is mental rather than in the environment or the animal’s body.

  3. Higher-Order Thought Theories

Classical higher-order theories often appealed to inner sense or inner perception as a way to capture inner awareness (Armstrong; Lycan). But this kind of view has faced difficulties which have rendered it all but obsolete. First, we do not have any reason to posit higher-order mental qualities (Rosenthal). In addition, we have not discovered any kind of inner sense (Lycan and Sauret).

Since we can also be aware of things by having the appropriate thoughts, higher-order thought theories appeal to intentional, thought-like states to explain the way in which we are aware of our mental lives.

Perhaps the earliest explicit version of this kind of theory is that of David Rosenthal. On his view we become conscious of our first-order mental states via having a thought to the effect that we are occurrently in those states. This thought must have assertoric force and indicate that the relevant mental qualities are currently present.

Higher-order thought theories themselves come in many different varieties, each positing a different structure. What unites them is the postulation that there are two levels of content in the mind. The first level of content represents the environment; the second, higher-order level represents the first.

One model of the relation between these, which I will call the Relational Model (RM), is as follows. One starts with an unconscious mental state and then one adds a higher-order representation of that state, which results in the first-order state becoming conscious. The consciousness of the first-order state is explained, on this model, by the relation (the awareness relation) that holds between the first-order state and the higher-order state. The first-order state is conscious because you are aware of it. On this way of thinking the higher-order state is a distinct mental representation.

Some have felt that this is unsatisfactory because my awareness of non-mental items like rocks does not result in the rocks becoming conscious (Goldman). RM theorists have responded that theirs is a theory of mental state consciousness and so does not include rocks: on RM, only a mental state is a candidate for becoming conscious in the first place. Whatever the merits of this response, there is an additional, well-known objection based on the possibility of misrepresentation. Since RM claims that there are two distinct states, one may misrepresent the other. So, if one is representing that there is a red tomato in the environment but then has a higher-order state that represents one as seeing a green tomato, what is it like for the individual in question? (Lycan) According to RM it is the first-order representation of red that is conscious, but it is also the case that the higher-order state determines what it is like for you. This suggests that there are deep problems with RM (Block).

Because of this some have moved to what I will call the Joint-Determination Model (JDM). On this model the first-order state is postulated not to be a distinct mental state but rather to be part of the conscious state itself. JDM posits that there is one state with two contents. Part of the content is first-order and part of the content is higher-order. JDM comes in different varieties (Kriegel, Gennaro, Lau). One major difference between these models is whether the higher-order state itself employs conceptual content (Kriegel, Lau). Some versions, which I will call Same-Order Models (SOM), claim that the higher-order content is itself conceptual and then seek to rule out misrepresentation worries by putting restrictions on the kind of higher-order content that results in a conscious mental state. Gennaro is the most vigorous defender of this kind of view. On his account a conscious mental state results only when there is a (full or partial) conceptual match between first-order and higher-order states, or when the first-order content is more specific than the higher-order content, or when the higher-order content is more specific than the first-order content, or when the higher-order concepts can combine to match the first-order representations (2012, p. 179). All of these provisos are arrived at so as to block the claim that there can be a conscious mental state in cases of mismatched content between higher-order and first-order states. However, they seem ad hoc. When examining the cases presented in detail it seems straightforwardly the case that the higher-order content determines what it is like for one. Why wouldn’t it be that way for cases of radical misrepresentation as well?

Other versions of JDM, which I will call Split-Level Models (SLM), deny that the higher-order state is itself conceptual in this way (Lau, Lau and Brown). On these versions the higher-order state is some kind of ‘mere’ pointer, which points to the relevant first-order state. The content of the conscious state is given by the content of the first-order state, but that it is a conscious experience at all is given by the higher-order state. In its most recent iteration the higher-order state ‘toggles’ between three settings, indicating that the first-order state is veridical, held in working memory, or just noise. SLM is distinct from the other versions of JDM because of what the theory claims happens in radical misrepresentation. On SOM, when one just has the higher-order representation and no first-order target at all, there is no conscious experience at all. On SLM one will have some kind of conscious experience, but it will not be specific. That is to say, on SLM the higher-order state will indicate that one is veridically perceiving something, but if one has no relevant first-order state then there will be no content to the experience other than that one is veridically perceiving something. When one goes to report what it is, one will fail.

This extravagant disjunctive theory has been resisted by those who endorse what I will call the Non-Relational Model (NRM). NRM rejects the claim that the first-order state is made conscious by the higher-order state (Rosenthal, Brown). On NRM it is the higher-order state itself that accounts for conscious experience. There is some disagreement among those who endorse this model as to which state is the conscious state. Rosenthal has suggested that it is the notional state that becomes conscious (Rosenthal, Weisberg). Berger has suggested that it is the individual that becomes conscious and not the state at all (Berger). Brown has suggested that it is the higher-order state itself that is phenomenally conscious (Brown).

  4. Still Further Varieties of Higher-Order Theory

In addition to these kinds of theories there are non-traditional ways to account for the inner awareness that many think is a crucial part of phenomenal consciousness.

On the one hand are those theories that explicitly seek to find some non-traditional form of inner awareness. On the other hand are those that deny this and yet end up appealing to something like inner awareness.

Lycan has recently argued that his version of higher-order perception is really a version of the attention hypothesis. In his paper with Wesley Sauret he argues that attention is one of the ways in which we can become aware of things. On this view attention makes us aware of our mental states, but it does so not by representing the states in question. They appeal to analogies like a funnel or sieve. A funnel directs something, a fluid say, towards a target, but not by representing what is being directed. As these authors recognize, work remains to be done to explain what exactly the relation is; they suggest that it may be some kind of acquaintance.

In a similar vein other theorists have adopted some kind of ‘inner acquaintance’ view (Hellie). Hellie presents a version of higher-order acquaintance as a non-intentional relation of awareness to one’s first-order qualitative states. Chalmers has also endorsed a non-reductive, non-physical version of higher-order acquaintance. On Chalmers’s view to be aware of x is also, by the very nature of phenomenal awareness, to be acquainted with one’s awareness of x (Chalmers). This may be a (non-reductive, non-physical) version of SOM above.

There have also been philosophers who have sought to implement inner awareness via a quotational model (Coleman, Picciuto). On Coleman’s model one quotes a quality and thereby becomes conscious of it. This view requires that the mental quality is already primitively red and is fundamental (Coleman endorses panqualityism). The quotation of that red quality makes it a phenomenally conscious experience. On Picciuto’s view one forms a phenomenal concept of the relevant mental quality. As Picciuto formulates it, the mental quality does not have an intrinsic redness to it but becomes qualitatively red once one quotes it.

Finally there are those who seek radically non-traditional ways. For example, Ned Block has agreed that some kind of inner awareness is necessary for phenomenally conscious experiences (Block). He denies that this kind of inner awareness is any kind of cognitive awareness. He has suggested that it may be a deflationary kind of awareness. Much as I walk my own walk or smile my own smiles, so too I am aware of my own phenomenally conscious states. This kind of deflationary move seems to include every mental state as phenomenally conscious. On the other hand, Block has suggested that some kind of same-order awareness may do the trick (i.e. a version of SOM). However, it is unclear how this notion of non-cognitive awareness differs from any of the models canvassed above. Perhaps Block will ultimately settle on something like JDM, but if so the relevant notion of awareness will seem to be cognitive after all. Or perhaps he will ultimately settle on something like acquaintance, but then that needs to be spelled out.

Coming up on Consciousness Live!

I haven’t been very good at posting anything here lately (5 classes, two kids, and trying to write a couple of papers sucks up a lot of time!!) but I have been keeping up with the discussions on Consciousness Live! Here are some of the upcoming discussions planned.

R. Scott Bakker

Michael Silberstein

Nicholas D’Aloisio-Montilla

Keith Frankish

Also, in case you missed it, check out my discussions with Adriana Renero (on introspection) and Monica Gagliano (on plant cognition).

 

Consciousness Live!

*UPDATE: check out the Consciousness Live! page for all of the details*

————————————————————-

I have recently started a new YouTube series I have been calling Consciousness this Month. My original idea was to pick a theme and record some discussions about it. So far I have six “episodes” (and one bonus discussion) and some exciting things lined up for upcoming months…It has been hard sticking with the theme idea because of issues scheduling discussions, and I have been using Google Hangouts On Air to livestream the discussions, so maybe I should have called it Consciousness Live…is it too late to change it? Not sure, but I am sure I have some exciting guests lined up. Because of a mishap recording a conversation with Ruth Millikan (I announced it well in advance and then we had technical issues recording) I won’t announce when these guests will be joining me, but upcoming guests include:

Philip Goff

Carlos Montemayor

Jumana Morciglio

Michael Rodriguez

Javier Gomez-Lavin

Adriana Renero

Romina Padro

I may end up writing something here about the various discussions I have had, but if you want to keep up with what’s going on, subscribe to my YouTube Channel or follow me on Twitter. And, if you can think of someone that may be interested in talking about consciousness/mind with me, then I am probably interested in talking to them! Feel free to suggest people I should contact.

Review of The Consciousness Instinct by Michael Gazzaniga

Summer is here and I have finally started on my summer reading list. First up was Michael Gazzaniga’s new book The Consciousness Instinct. Gazzaniga is of course well known for his work on split-brain patients and for helping to found the discipline of cognitive neuroscience. I was very excited to read the book, but after having done so I am very disappointed. There are some interesting ideas in the book, but overall it does not strike me as a serious contribution to the study of consciousness.

The book begins with the standard potted history of the mind-body problem, with Descartes invoked as the primary villain. It was Descartes who initiated the brain-is-a-machine ethos, and Gazzaniga thinks that is a mistake. This part of the book was well written but could be found almost anywhere. He then goes completely off the rails and invokes quantum mechanics as a non-mechanical foundation for solving the mind-body problem. In particular he invokes the notion of complementarity as his solution. According to him, quantum mechanics tells us that a physical system can be in two different states at once (p. 175). So the brain can be a mechanical system and also a mind at the same time. No problem.

I am of course no expert on quantum mechanics (though I have put in a fair amount of time trying to figure it out). But as far as I understand it this is a gross misuse of the idea of complementarity. Quantum mechanics does not say that a physical system can be in two contradictory states at the same time! Rather, what it says is that the state of the system *before measurement* cannot be described by classical concepts like ‘wave’ or ‘particle’, yet once a measurement is made (and depending on the type of measurement we make) we will find that it does have one of these properties (and had we done the measurement differently we would have found that it had the other property). How, then, should we think of poor Schrödinger’s cat? Isn’t the poor cat both dead and alive (as Gazzaniga says on p. 181)? Not as I understand it! When we have a vector, represented by |A>, and we add it to another vector |B>, then, yes, we do get a new vector that represents the state the system has entered. But saying that (1/√2)|Alive> + (1/√2)|Dead> represents the cat’s state before measurement doesn’t mean it is both dead AND alive; it means that when we measure it, it will EITHER be dead OR alive (with probabilities of 1/2 given by squaring the 1/√2 amplitudes).
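To make the arithmetic explicit, here is a minimal sketch of the textbook Born-rule reading of the cat state (this is just the standard formalism, not anything from Gazzaniga’s book):

```python
import math

# Equal-superposition cat state: (1/sqrt(2))|Alive> + (1/sqrt(2))|Dead>
amp_alive = 1 / math.sqrt(2)
amp_dead = 1 / math.sqrt(2)

# Born rule: the probability of each measurement outcome is the
# squared magnitude of its amplitude -- not the amplitude itself,
# and not a claim that the cat has both properties at once.
p_alive = abs(amp_alive) ** 2
p_dead = abs(amp_dead) ** 2

print(round(p_alive, 3), round(p_dead, 3))  # 0.5 0.5

# The outcome probabilities sum to 1: on measurement we find the cat
# EITHER alive OR dead, each with probability 1/2.
assert math.isclose(p_alive + p_dead, 1.0)
```

Note that nothing in the formalism above attributes both properties to the cat before measurement; it only fixes the statistics of what we will find when we look.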

But what about before we measure it? What state is the cat in then? As far as I understand it, quantum mechanics (i.e. the mathematical formalism) is silent on that question, but the reply I prefer is that the cat has no determinate properties before the measurement.

But all of this is highly controversial and does not help us at all with the mind-body problem! Suppose, as Gazzaniga assumes, that the mind and brain are two irreducible, complementary descriptions of a single system; then we would only be able to know (i.e. measure) one of them at a time, at the expense of the other. But that is manifestly not the situation. We can measure our own brain activity even as we are having conscious experiences produced by/identical with that neural activity. No complementarity required.

I am leaving out a lot of the details, and as I said some of his views are interesting, but what is it about consciousness that drives people to these kinds of extreme intellectual gyrations? Why do people trust their intuitions so much that they are ready to jettison all of the progress made by psychology and neuroscience as wasted time?

Consciousness Online -10 Years Later

It was way back in May of 2008 that I finally decided to try to organize an online consciousness conference. Ten years later, the conference material from the resulting five online conferences is for the most part still there. There is the inevitable link rot that creeps in, and I try to keep up with it, but in some cases it is unavoidable. The first conference was hit the hardest because back then I used Google Video, which went under (and somehow I lost all of the videos I had there), and hosted papers and related material on a server I can’t access anymore (I also let the custom URL lapse a while back and it is now the original http://consciousnessonline.wordpress.com). A lot survives, though, and I have been glad to see people linking to it in their courses, and some of the discussion (all of which is still there) has been cited in scholarly papers!

I actually had the idea for a consciousness-specific online conference in the summer of 2007, and bothered a few people about trying to get something like this going over the next year or so (I vaguely remember pitching the idea of a Kripke & Consciousness online conference to the early Kripke Center in 2007). My experience at the Tucson consciousness conference in April of 2008 finally goaded me to act. I had just recently started blogging (happy 11th birthday to this blog, by the way!) and seen the Online Philosophy Conference, so I thought that was an ideal format (but with more video). People warned me that it was too much work and that the previous online philosophy conferences had not really succeeded. I thought if it was kept small it could be done, and since I was tired of waiting for someone else to do it, I set out to do it myself. I spent the summer getting ready, announced the conference in August of 2008, and held the conference in February of 2009 (papers published in April of 2010)…a lot of work but also a lot of fun!

I organized the last one in 2013 and the final special issue I edited as a result of that came out in 2015. All in all that is seven years I invested into that project! It was a shame that I had to stop because I really enjoyed working on it and was trying to grow it into something but as I was coming up for tenure it was communicated to me that my scholarly work was excellent and that I needed to focus on contributing something to the college. In other words, another conference, and another publication was not going to help me get tenure (this is how I interpreted it anyway). So I turned my focus to organizing things at LaGuardia and I just could not do both with my teaching load (5/4 plus extra classes). I was awarded tenure in the fall of 2015 and I briefly thought about trying to revive it but by then I had kids! Plus, I have been happy to see the Brains Blog, and their Minds Online Conference (and now Neural Mechanisms Online) spring up to fill the void (by the way, here is the excellent special session for CO5 organized by John Schwenkler).

One thing I have learned is that it is possible for one person/a small group of people to make an impact, but what we really need is to ‘institutionalize’ online conferences, by which I mean have them sponsored by professional organizations like the APA or the Association for the Scientific Study of Consciousness (or the Tucson Center for Consciousness Studies, etc.), but it is not clear how to do that without money entering the picture (I footed the small bill for any costs related to running Consciousness Online and everyone else worked for free!).

The thing I am most proud of is that the conferences all resulted in publications (4 journal issues and one book). My basic idea was to have the conference itself count as part of the review process. The papers were usually rewritten after discussion and then sent out for a more traditional review before finally being published (and so the result was not just a conference proceedings but a new paper sharpened by the conference (remember I was still an idealistic graduate student at the time!)). I was very lucky to have the general editor from the Journal of Consciousness Studies initially approach me about editing a special issue and I ran with it from there. A quick check of Google Scholar shows that the six resulting papers from the first conference have (mostly) done pretty well since being published in 2010.

Prefrontal Cortex, Consciousness, and…the Central Sulcus?

The question of whether the prefrontal cortex (PFC) is crucially involved in conscious experience is one that I have been interested in for quite a while. The issue has flared up again recently, especially with defenders of the Integrated Information Theory of Consciousness advancing an anti-PFC account of consciousness (as in Christof Koch’s piece in Nature). I have talked about IIT before (here, here, and here) and I won’t revisit it but I did want to address one issue in Koch’s recent piece. He says,

A second source of insights are neurological patients from the first half of the 20th century. Surgeons sometimes had to excise a large belt of prefrontal cortex to remove tumors or to ameliorate epileptic seizures. What is remarkable is how unremarkable these patients appeared. The loss of a portion of the frontal lobe did have certain deleterious effects: the patients developed a lack of inhibition of inappropriate emotions or actions, motor deficits, or uncontrollable repetition of specific action or words. Following the operation, however, their personality and IQ improved, and they went on to live for many more years, with no evidence that the drastic removal of frontal tissue significantly affected their conscious experience. Conversely, removal of even small regions of the posterior cortex, where the hot zone resides, can lead to a loss of entire classes of conscious content: patients are unable to recognize faces or to see motion, color or space.

So it appears that the sights, sounds and other sensations of life as we experience it are generated by regions within the posterior cortex. As far as we can tell, almost all conscious experiences have their origin there. What is the crucial difference between these posterior regions and much of the prefrontal cortex, which does not directly contribute to subjective content?

The assertion that loss of the prefrontal cortex does not affect conscious experience is one that is often leveled at theories that invoke activity in the prefrontal cortex as a crucial element of conscious experience (like the Global Workspace Theory and the higher-order theory of consciousness in its neuronal interpretation by Hakwan Lau and Joe LeDoux (which I am happy to have helped out a bit in developing)). But this claim is mistaken, or at least subject to important empirical objections. Koch does not say which cases he has in mind (and he does not include any references in the Nature paper) but we can get some ideas from a recent exchange in the Journal of Neuroscience.

One case in particular is often cited as evidence that consciousness survives extensive damage to the frontal lobe. In their recent paper Odegaard, Knight, and Lau have argued that this is incorrect. Below is figure 1 from their paper.

Figure 1a from Odegaard, Knight, and Lau

This is the brain of Patient A, who was reportedly the first patient to undergo bilateral frontal lobectomy. In it the central sulcus is labeled in red along with Brodmann’s areas 4, 6, 9, and 46. Labeled in this way it is clear that there is an extensive amount of (the right) prefrontal cortex that is intact (basically everything anterior to area 6 would be preserved PFC). If that is right then this was hardly a complete bilateral lobectomy! There is more than enough preserved PFC to account for the preserved conscious experience of Patient A.

Boly et al have a companion piece in the Journal of Neuroscience and a response to the Odegaard paper (Odegaard et al responded to Boly as well and made these same points). Below is figure R1C from the response by Boly et al.

Figure R1C from response by Melanie Boly, Marcello Massimini, Naotsugu Tsuchiya, Bradley R. Postle, Christof Koch, and Giulio Tononi

Close attention to figure R1C shows that Boly et al have placed the central sulcus in a different location than Odegaard et al did. In the Odegaard et al paper they mark the central sulcus behind where the 3,1,2 white numbers occur in the Boly et al image. If Boly et al were correct then, as they assert, pretty much the entire prefrontal cortex is removed in the case of patient A, and if that is the case then of course there is strong evidence that there can be conscious experience in the absence of prefrontal activity.

So here we have some experts in neuroscience, among them Robert T. Knight and Christof Koch, disagreeing about the location of the central sulcus in the Journal of Neuroscience. As someone who cares about neuroscience and consciousness (and has to teach it to undergraduates) this is distressing! And as someone who is not an expert on neurophysiology I tend to go with Knight (surprised? he is on my side, after all!) but even if you are not convinced you should at least be convinced of one thing: it is not clear that there is evidence from “neurological patients in the first half of the 20th century” which suggests that the prefrontal cortex is not crucially involved in conscious experience. What is clear is that it seems a bit odd to keep insisting that there is while ignoring the empirical arguments of experts in the field.

On a different note, I thought it was interesting that Koch made this point.

IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious—even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.

This is a topic for another day but I would have thought you could have integrated information in a simulated system.

Mary, Subliminal Priming, and Phenomenological Overflow

Consider Mary, the super-scientist of Knowledge Argument fame. She has never seen red and yet knows everything there is to know about the physical nature of red and the brain processing related to color experience. Now, as a twist, suppose we show her red subliminally (say with backward masking or something). She sees a red fire hydrant and yet denies that she saw anything except the mask (say). Yet we can say that she is primed from this exposure (say, quicker to subsequently identify a fire truck than a duck, or something). Does she learn what it is like to see red from this? Does she know what it is like to see red and yet not know that she knows this?

It seems to me that views which accept phenomenological overflow, and allow that there is phenomenal consciousness in the absence of any kind of cognitive access, have to say that the subliminal exposure to red does let Mary learn what it is like for her to see red (without her knowing that she has learned this). But this seems very odd to me and thus it seems to me that this is a kind of a priori consideration that suggests there is no overflow.

Of course I have had about 8 hours of sleep in the last week so maybe I am missing something?


Gottlieb and D’Aloisio-Montilla on Brown on Phenomenological Overflow

Last year I started to try to take note of papers that engage with my work in some way (previous posts here, here, here, here, here, here, and here). The hope was to get some thoughts down as a reference point for future paper writing. So far not much in that department has been happening; with a 3 year old and a 1 month old it is tough to find time to write (understatement!) but I am hoping I can “normalize” my schedule in the next few weeks and try to get some projects off of the back burner. At any rate I have belatedly noticed a couple of papers that came out and thought I would quickly jot down some notes.

The first paper is one by Joseph Gottlieb and came out in Philosophical Studies in October of 2017. It is called The Collapse Argument and makes the argument that all of the currently available mentalistic first-order theories of consciousness turn out to really be versions of the higher-order theory of consciousness. I don’t know Joseph IRL (haha) but we have emailed about his papers several times, though I usually get back to him too late for it to matter on account of the 16 classes a year I have been teaching since 2015 (for anyone who cares: I am contractually obligated to teach 9 a year and in addition I teach another 7 as an adjunct (the maximum allowed by my contract)…sadly this is what is required in order for my family to live in New York!) and I have blogged about his work here before (linked to above) but I really, really like this paper of his. First, I obviously agree with his conclusion and it is nice to see some discussion of this issue. I took some loose steps in this direction myself in the talk I gave at the Graduate Center’s Cognitive Science Speaker Series back in 2015. I thought about writing it up but then had my first son and then found out about Joseph’s paper, which is better than what I could have come up with anyway! I suppose the only place we might disagree is that I think this applies to Block’s first-order theory as well.

But even though I really like the paper there is a bit I would quibble about (but not very much). Gottlieb seems to take seriously my argument that higher-order theories are in principle compatible with phenomenological overflow but I am not sure I agree with how he puts it. He says,

As Richard Brown (2014) points out, HO theorists don’t need to claim that we are aware of our conscious states in all their respects. I might be aware that I am seeing letters (a fairly generic property) but not the identity of every letter I am seeing. In other words, I can be unaware of some of the information represented by the first- order state without the state itself being unconscious (ibid). What happens, then, is: I am phenomenally conscious of the entire 3 X  4 array, with representations of the identities of all the letters available prior to cuing. But only a small number (usually around four) ever get through, accessed by working memory. That’s overflow, and perfectly consistent with HO theory.

In the paper he is citing I was trying to make the point that the higher-order theories which deny overflow do not thereby also commit themselves to the existence of unconscious *states* which are doing heavy lifting. If the states are targeted by the appropriate higher-order representation then those states are conscious. Yet one may not represent all of the properties of the state and so, even though the state is conscious, there is information encoded in the state which you are not aware of (and so is unconscious). That unconscious information (that is to say, that aspect of the conscious state) is (presumably) what you come to be aware of when you get the cue in the relevant experiments. So it is a bit strange to see this part of the paper cited as supporting overflow (though I do think the position is compatible with overflow I wasn’t thinking of it in this way). But I think I see his point. On the higher-order view it will be true to say that one has a phenomenally conscious experience of all of the letters and the details but only access a few (even though what it is like for one may not have all of the details, which is really what I think the overflow people mean to be saying).

This point, though, is I think the key difference between higher-order theories and Global Workspace theories (which is what Block is really targeting with his argument). The basic idea behind the higher-order approach is this. When one is presented with the stimulus all or most of the details of the stimulus are encoded in first-order visual states (that is, states which represent the details of the visual scene). Let’s call the sum-total representational state S. S represents all (or most) of the letters and their specific identities. One can have S without being aware that one is in S. In this case S is unconscious. Now suppose that one comes to have a (suitable) higher-order awareness that one is in S. According to the higher-order theory of consciousness one thereby comes to have a phenomenally conscious experience of S and becomes consciously aware of what S represents. But since one’s higher-order awareness is (on the theory) a cognitive thought-like state, it will describe its target. Thus one can be aware of S in different ways. Suppose that one is aware of S merely as a clock-like formation of rectangles. Then what it is like for one will be like seeing a clock-like formation of rectangles. Being aware of S seems to keep S online and as one is cued one may come to have a different higher-order awareness of S. One may become aware of some of the details already encoded in S. One was already aware of them, in a generic way, but now one comes to be aware of the same details in a more fine-grained way. Put more in terms of the higher-order theory, one’s higher-order thought(s) come to have a different content than they previously did. The first higher-order state represented you as merely seeing a bunch of rectangles and now you have a state that represents you as seeing a bunch of rectangles where the five-o’clock position is occupied by a horizontal bar (or whatever).
Notice that in this way of thinking about the case there are no unconscious states (except the higher-order ones). S is conscious throughout (just in different respects) and it will be true that subjects consciously see all of the letters (just not all of the details).

I want to keep this in mind as I turn to the second paper but before we do I also like Gottlieb’s paper because it actually references this blog! I think this may be the first time my personal blog has been cited in a philosophy journal! I will have more to say about that at some point but for now: cool!

The second paper is by Nicholas D’Aloisio-Montilla and came out in Ratio in December 2017. It is called A Brief Argument for Consciousness without Access. This paper is very interesting and I am glad I became aware of it and D’Aloisio-Montilla’s work in general. He is trying to develop a case for phenomenological overflow based on empirical work on aphantasics. These are people who report lacking the ability to form mental imagery. I have to admit that I think of myself this way (with the exception of auditory imagery) so I find this very interesting. But at any rate the basic point seems to be that there is no correlation between one’s ability to form mental imagery (as measured in various ways) and one’s ability to perform the Sperling-like tasks under discussion in the overflow debate. His basic argument is that if you deny phenomenological overflow then you must think that unconscious representations are the basis of subjects’ abilities. Further, if that is the case then it must be because subjects form a (delayed) mental image of the original (unconscious) representation. But there is evidence that subjects don’t form mental images and so evidence that we should not deny overflow.

I disagree with the conclusion but it is nice to see this very interesting argument and I hope it gets some attention. Even so, I think there is some mis-characterization of my view related to what we have just been talking about in Gottlieb’s paper. D’Aloisio-Montilla begins by setting the problem up in the following way,

The reports of subjects [in Sperling-like tasks] imply that their phenomenology (i.e. conscious experience) of the grid is rich enough to include the identities of letters that are not reported (Block, 2011, p.1; Landman et al., 2003; cf. Phillips, 2011b). As Sperling (1960, p.1) notes, they ‘enigmatically insist that they have seen more than they can … report afterwards’. Introspection therefore suggests that subjects consciously perceive almost all 12 items of the grid, even if they are limited to accessing the contents of just one row (Block 2011; Carruthers, 2015). The ‘overflow’ argument uses this phenomenon as evidence in favor of the claim that the capacity of consciousness outstrips that of access. Overflow theorists maintain that almost all items of the grid are consciously represented by perceptual and iconic representations (D’Aloisio-Montilla, 2017; Block, 1995, 2007, 2011, 2014; Bronfman et al., 2014; for further discussion, see Burge, 2007; Dretske, 2006; Tye, 2006).

This is a nice statement of the overflow argument and the claim that it is the specific identities of the items of the grid which are consciously experienced, but this way of framing the argument begs the question against the higher-order interpretation. The reports in question do not imply rich phenomenology because, as we have just discussed, subjects are correct that they have consciously seen all of the letters even if they are wrong that they consciously experienced the details. Because of this the higher-order no-overflow theorist can accept that there is no correlation between mental imagery ability and Sperling-like task performance and for pretty much the same reasons that the first-order theorist does: because there is a persisting conscious experience.

D’Aloisio-Montilla then goes on to give two objections to his interpretation of my account. He puts it this way,

A final way out for the no-overflow theorist might be to allow for a limited phenomenology of the cued item to occur without visual imagery (Brown, 2012, 2014; Carruthers, 2015). Brown (2012, p. 3) suggests that subjects can form a ‘generic’ experience of the memory array’s items while the array is visible, since attention can be thinly distributed to bring fragments of almost all items to both phenomenal and access consciousness. Phenomenology, for example, might include the fact that ‘there is a formation of rectangles in front of me’ without specifying the orientation of each rectangle (Block, 2014). However, there are still a number of problems with an appeal to generic phenomenology. First, subjects report no shift in the precision of their conscious experience when they are cued to a subset of items that they subsequently access (Block, 2007; Block, 2011).

First, I would point out that my goal has always been to show that the higher-order theory of consciousness is both a.) compatible with the existence of overflow but also b.) compatible with no-overflow views and gives a different account of this than Global Workspace Theories (or other working memory-based views). So I am not necessarily a ‘no-overflow theorist’ though I am someone who thinks that i.) overflow has not been established but assumed to exist and ii.) even if there is overflow it is mostly an argument against a particular version of the Global Workspace theory of consciousness, not generally against cognitive theories of consciousness.

But ok, what about his actual argument? I hope it is clear from what we have said above that one would not expect subjects to report ‘a shift in precision’ of their phenomenology. You have a conscious experience (generic or vague in certain respects) but in so doing you help to maintain the first-order (detailed) state. When you get the cue you focus on the aspect of the state which you had only generically been aware of (by coming to have a higher-order awareness with a different content), but what it is like for you is just like seeming to see all of the details and then focusing in on some of them. No change in precision. But even so these appeals to the subjects’ reports are all a bit suspect. I use the Sperling stimulus in my classes every semester as a demo of iconic memory and an illustration of how philosophical issues connect to empirical ones and my students seem to be mixed on whether they think they “see all of the letters”. Granted we only do 10-20 trials in the classroom and not in the lab (in Sperling they did thousands of trials) and these are super informal reports made orally in the classroom…but I still think there is an issue here. I have long wanted there to be some experimental philosophy done on this question. It would be nice to see someone replicate Sperling’s results but also include some qualitative comments from subjects about their experience. I almost tried to get this going with Wesley Buckwalter years ago but it didn’t go through. I still think someone should do this and that the results would be useful in this debate.
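For readers who haven’t run the demo themselves, here is a minimal sketch of the quantitative logic behind Sperling’s partial-report paradigm, the datum the whole overflow debate turns on. The function names are my own for illustration, not from any particular paper; the numbers just encode the standard inference that cued-row performance estimates availability across the whole array.

```python
import random
import string

def make_array(rows=3, cols=4, rng=None):
    """Build a Sperling-style grid of distinct consonants (3 x 4 by default)."""
    rng = rng or random.Random()
    consonants = [c for c in string.ascii_uppercase if c not in "AEIOU"]
    letters = rng.sample(consonants, rows * cols)
    return [letters[r * cols:(r + 1) * cols] for r in range(rows)]

def availability_estimate(correct_in_cued_row, rows=3):
    """Sperling's inference: since the cue arrives after the array is gone,
    subjects cannot know in advance which row to attend. Reporting k letters
    from a randomly cued row therefore suggests roughly k letters were
    available from EACH row, i.e. k * rows letters in total."""
    return correct_in_cued_row * rows

# Subjects typically report about 3 of 4 letters from any cued row, implying
# around 9 of the 12 letters were available, even though whole report caps
# out at roughly 4 items. That gap is what "overflow" theorists point to.
```

The philosophical dispute, as discussed above, is not over this arithmetic but over whether the estimated availability reflects rich *conscious* detail or detail encoded in a first-order state one is only generically aware of.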

D’Aloisio-Montilla goes on to say,

Second, subjects are still capable of generating a ‘specific’ image – that is, a visual image with specific content – when the cue is presented. Assuming that the cued item is generically conscious on the cue’s onset, imagery would necessarily be implicated in maintaining any persisting consciousness of the cued item (whether gist-like or specific) throughout the blank interval. Thus, we can still expect to see a correlation between imagery abilities and task performance, because subjects can generate either (1) a visual image with specific phenomenology, or (2) a visual image with generic phenomenology (Phillips, 2011a; Brown, 2014). In any case, subjects who generate a specific phenomenology of the cued item should perform better than those who rely solely on a gist-like experience, and so Brown’s interpretation is also called into question.

But again this seems to miss the point of the kind of no-overflow account the higher-order thought theory of consciousness delivers. It is not committed to mental imagery as a solution. Subjects have a persisting conscious experience which may be less detailed than it seems to them to be.

Sheesh, that is a lot and I am sure there is a lot more to say about it but nap time is over and I have to go and play Dinosaur now.