Consciousness as Representing One’s Mind: Rethinking the Higher-Order Thought Theory of Consciousness

Word on the street is that I am allegedly writing a book on consciousness…woe betide us, this is certainly the final indicator that we are in the most absurd of simulations! At any rate, I don’t have a contract or anything but there is ‘some interest’ from a press in my completing a draft, which is cool I guess (I can think of at least 4 people who would actually read a book like this!). To help motivate me I have decided to put up chapters here as I draft them. My plan is to officially start this summer, since I seem to be only teaching one class in my summer session (I usually do 3 so it feels light). I don’t have anything I want to share yet, but below is the plan of attack (i.e., the proposal for the book). Stay tuned!

  1. Brief Description – The cognitive neuroscience of consciousness is establishing itself as a viable branch of neuroscience. Currently the field is at the point where there are a variety of theories about the nature of consciousness on offer, and we would like to compare the predictions that the various theories make in order to narrow down the plausible candidates. This makes it especially important to develop the candidate theories in enough detail that they could meaningfully confront the tribunal of experience (i.e., be subject to possible empirical falsification). Higher-order theories of consciousness have enjoyed some attention in philosophy and have very recently been taken seriously by neuroscientists aiming to empirically test the theory. However, because of the way that the higher-order approach has been presented, a version of the theory (which I think of as one of the most promising versions) has been overlooked. This book will develop and defend this Higher-Order Representation of a Representation (HOROR) theory of phenomenal consciousness. My goal, as stated above, will be to develop the theory in enough detail that it can be empirically compared to other versions of the higher-order approach. Once this is done we can take stock of recent neuroscience and see that the HOROR theory, like a few other theories of consciousness, is consistent with current neuroscience. We will also be able to see what kind of neuroscientific evidence would falsify this kind of theory. Thus, whether one is sympathetic to the higher-order approach or not, this project will help to clarify what would count as empirical evidence for or against it. If one is interested in trying to show that the theory is wrong, or in vindicating it, one needs to look at versions of the theory that are taken seriously by people who hold these kinds of theories.
  2. Outline –       

I. Traditional HOT is mistaken

The science of consciousness is at a point where we would like to narrow the range of theories that are serious contenders. The field has seen a number of theories presented, from various disciplines, but as of now we have not seen empirical evidence that the proponents of the various theories cannot interpret in a way favorable to, or at least compatible with, their own theory. Theories should be developed in enough detail that we can see what predictions they make and how we could falsify them. Higher-order theories have enjoyed some support recently, but as of yet it is not clear what would count as falsifying the theory. As they are generally presented they (a) aim to explain the difference between a conscious mental state and an unconscious mental state and (b) this explanation takes the form of positing two mental states, one of which is directed at the other. Each of the above claims is problematic. The problem with (a) is that it begs the question as to whether there are any unconscious mental states, which is an empirical question, and (b) obscures the distinction between relational and non-relational higher-order views. I want to present and defend a version of the non-relational view I call the Higher-Order Representation of a Representation (HOROR) theory. This seems to me to be a promising version of the theory, and as of yet it has not been developed in detail, or distinguished from other versions of the theory, because of the way the debate has been set up.

II. Starting Over: The HOROR Theory of Phenomenal Consciousness 

Higher-order theories of consciousness appeal to inner awareness as part of the explanation of phenomenal consciousness. ‘Inner’ means higher-order, an awareness of my own mental life (‘first-order’ thus means awareness of something which isn’t mental). ‘Awareness’ at a commonsense level means sensing, perceiving, or thinking that something is present -these are all representational states. ‘Phenomenal consciousness’ means: what it is like to be a creature or what it is like to be in a mental state. So appealing to inner awareness amounts to appealing to ‘higher-order representations of representations’ -HORORs- as part of the explanation of phenomenal consciousness. I take a theory of consciousness to primarily be a theory of phenomenal consciousness. Panpsychism, Global Workspace Theory, whatever theory one may have, if it is a theory of consciousness, it is a theory of phenomenal consciousness. This makes the theory I will defend not a version of illusionism. HOROR theory is an empirical conjecture about the nature of phenomenal consciousness.

III. Relational v. Non-Relational higher-order theories

Relational theories are familiar from the traditional approach. Non-relational theories deny the traditional account and instead hold that the relevant higher-order representation is itself enough to account for phenomenal consciousness. Non-relational higher-order theories can be understood to be versions of representationalism, but from the higher-order point of view. Representationalism about consciousness, as I will defend it, holds that for any phenomenal experience we have there is some representational content such that having the phenomenal experience consists in having the representational content. Non-relational theories hold that the right kind of content is a higher-order representation with the content ‘I am aware of (perceptible) red in a distinctly visual way’. Having that state is all by itself enough to account for phenomenal consciousness. Higher-order theories can be distinguished by the kind of content they posit at the higher-order level as well as by the proposed relation, if any, between the higher-order representations and their targets, the states they represent. 

IV. Two Kinds of Relational Theory: Joint-Determination and Split-Level

Joint-determination views hold that both the higher-order and lower-order state contribute to phenomenal consciousness. Split-level views hold that the higher-order state is a mere pointer that points to a first-order state, which then contributes to phenomenal consciousness. The difference between these theories is that on Joint-Determination views just having the higher-order state without its target will result in an atypical experience. On Split-Level views the first-order state and its content, once it has been pointed to, completely determine what it is like for one, but without the higher-order pointer the content remains unconscious and so not experienced. Traditional objections to higher-order thought theories (like those from Dretske on change blindness, or the problem of the rock) apply mostly to relational theories. In addition it is not clear that these theories can offer an explanation of consciousness; they may have to settle for fitting the data. Some find these objections serious enough to disregard these kinds of theories, but nature may just foil our desire for understanding and explanation. Relational theories make empirical predictions, and though I would prefer non-relational theories, we should base our credence on the data, not philosophical objections.

V. Two kinds of non-relational theory: HOT and HOROR

Traditional HOT (THOT) theory is non-relational but nonetheless maintains that the first-order state is the conscious state. HOROR theory holds that the higher-order state is itself the phenomenally conscious state -the state that there is something it is like to be in. The THOT theory holds that the higher-order state ‘engenders’ phenomenal consciousness, but that is difficult to make sense of, and the ‘traditional’ objections to higher-order theories can be seen to be problems only for the relational view. For example, ‘the problem of the rock’, which asks why thinking about a rock doesn’t make the rock conscious when it does make my mental state conscious, clearly assumes the relational account of the higher-order theory. Which version of non-relational theory is preferable? HOROR theory is more plausible for several reasons to be discussed.

VI. HOROR theory and Current Neuroscience

There is much debate about the neural correlates of consciousness, and there have been some attempts to use empirical work to challenge higher-order theories. One area where this has occurred is with Ned Block’s argument from phenomenological overflow. If consciousness ‘overflows’ what we can report on at any given moment, is that a problem for higher-order theories? Do we have any reason to believe that there is phenomenological overflow in this sense? HOROR theory is compatible with either view in this area. We could have a ‘Rich’ HOROR theory on which the rich contents of the higher-order states overflow what we can report or what is in working memory. We could also have a ‘Sparse’ HOROR theory on which the contents of HORORs are sparse. Because of this there are versions of HOROR theory that are compatible with either way you interpret the findings from Sperling, Landman, etc. Part of the problem is that we haven’t really seen scientists explicitly try to falsify versions of non-relational theories, and so we need to get clear on what kind of predictions the theory actually makes. The same issue arises with other previous empirical attacks on the higher-order approach (like Dretske’s change blindness argument, and Ned Block’s response to Lau and Brown’s use of Rahnev et al as evidence for HOROR).

VII. Empirical Predictions: A Study in Misrepresentation

Misrepresentation occurs when a higher-order state misrepresents a first-order state. Radical misrepresentation occurs when the first-order state is missing entirely. Misrepresentation can be seen as an empirical prediction of higher-order theories. Split-level views predict there will be a first-order explanation of misrepresentation. Joint-determination views predict there will be a ‘partial’ experience. Non-relational views predict that conscious experience follows the higher-order representation (and could be sparse or rich depending on the contents of the HORORs). THOT has to say that a non-existent first-order state is conscious! One can make sense of this with certain views about intentionality and representation, but those views are very controversial. HOROR theory says misrepresentation is evidence that it is the higher-order representation which is phenomenally conscious. Radical misrepresentation is a case which shows that the state that there is something it is like for one to be in is the HOROR itself. The overall lesson from thinking through cases of misrepresentation is that we should postulate that there are two kinds of content in the relevant HORORs: a descriptive content and a pointer/teleological/causal-historical content. The descriptive content accounts for phenomenology while the other content accounts for which first-order state is picked out. This pointer kind of content will have a functional role (keeping a represented state online, sending it to the global workspace, etc.). This puts HOROR theory in between relational views on one end, with solely pointer content, and THOT on the other, with solely descriptive content. Thus different versions of relational and non-relational higher-order theories can be tested by looking at/for cases of misrepresentation.

VIII. Implementing the Theory in the Brain

The theories as presented so far are psychological theories. By themselves they make no predictions about how these various kinds of states are implemented in the brain. We have various proposals about how the implementation should go. Starting at the first-order level we might ask, ‘where are the first-order states, the targets?’ We have several candidates: recurrent processing, mid-level contents, globally broadcast contents, etc. Some of these candidates are in ‘sensory’ areas; others may be in frontal cortex (there is evidence that representations we would think of as first-order are in the prefrontal cortex at least sometimes). So it is not yet clear where the first-order states are. What about higher-order states? Lau argues that they overlap with metacognition and so implicate prefrontal cortex. LeDoux argues that the anatomy suggests different circuits in prefrontal cortex with different jobs. Gennaro suggests that we look for self-consciousness and focus more on parietal cortex. Cleeremans argues they could be anywhere in cortex. So it is not yet clear where the higher-order states are either. This doesn’t mean we can’t try to empirically test these theories. We should formulate different versions in as much detail as we can and test them (like the prefrontally-implemented Rich version of HOROR theory Templeton is testing). To really test these theories, though, we need a hypothetical ‘brain clamp’ -something that allows us to hold the activity of the first-order representations constant while we vary the higher-order content (and vice versa).

IX. Animals, Infants, and Robots

The discussion so far has centered for the most part on adult human beings and sought to develop a possible account of the kind of conscious experience we enjoy. But can these ideas help us answer questions about whether animals are conscious? Can we know if infants and newborns are conscious? Is artificial consciousness possible? And what does the HOROR theory predict about these kinds of cases? Can we use animal models to test the HOROR theory? These questions may be somewhat more speculative and less connected directly to the issue of empirical testing, but the higher-order approach has been thought to have a certain position on them. My own view is that animals and infants are conscious and that *maybe* we could have artificial consciousness, but that as of right now we don’t have any really strong evidence that this is the case. However, the ideas presented here would at least give us some idea of how we might -at least in principle- be able to empirically test the HOROR theory’s predictions in animals and infants.

Season Four of Consciousness Live! Continues

It has been a very busy spring and I have been having a great time discussing all things consciousness, having already done 13 discussions! For various reasons I will only be able to do two discussions a month over the summer. Below is the schedule through September.

I also have people who have agreed to be guests but haven’t scheduled a date. Check back here for updates! Or follow me on twitter!

I also have ideas for season five but let’s leave that for another day!

Sensory Qualities, the Meta-Problem of Consciousness, and the Relocation Story

I have been so swamped lately with teaching, research and Consciousness Live! that I haven’t been able to do much else, but I have had a couple of blog posts kicking around in my head that I wanted to get to. I’ll try to jot them down when I get the chance.

Chalmers’ Meta-Problem of Consciousness is at this point well known. The central issue there is: why do we think there is a Hard Problem of Consciousness? Chalmers takes the main physicalist response to be some kind of Illusionism. The source of the ‘problem judgements’ (i.e., about Mary and zombies, and inverts, etc.) is some kind of introspective illusion. Consciousness seems introspectively to have certain properties that it does not in fact have. When I first read Dave’s paper I suggested an alternative account in terms of our (tacitly) having a bad theory of what phenomenal consciousness is. An account like this can be seen in the work of David Rosenthal (which is why I was surprised his commentary on Chalmers’ paper did not bring up these issues directly).

The account basically proceeds as follows. We begin with the common sense fact that experience seems to present objects in the environment as having properties like color. These properties seem to peskily resist mathematization and so in the modern period they are moved into the head. However, they are moved into the head as we consciously experience them. Thus we arrive at the idea that we have this simple phenomenal property because that is how the physical object seemed to be when we consciously experienced seeing it. But now when we come to theorize about this simple property we find that there is not much to say. It seems simple, or primitive, because we are thinking of it as we first encountered it in experience. We thus arrive at a view where consciousness is itself ‘built into’ the mental qualities and that the only way to know about these mental qualities is via introspecting our first-personal experience.

This re-location-story-based explanation of the problem judgements (as involving a bad theoretical conception of what consciousness is) seems to me different from the one involving introspective error. If we don’t see phenomenal consciousness as some primitive property built into every mental quality, then we can try to construct independent theories of each. On the one hand we construct a theory of the mental qualities (independently of whether they are conscious) and on the other hand we can construct a theory of phenomenal consciousness.

Since we have separated phenomenal consciousness from mental quality we can see that phenomenal consciousness just is an awareness of mental qualities. That in turn suggests that we look for an account of that kind of awareness. Perhaps it is a cognitive kind of higher-order awareness, or some kind of deflationary first-order awareness, or maybe even some kind of first-order acquaintance.

When I floated this idea to Dave his response was that we would encounter the very same problem once we tried to explain our awareness of mental qualities and so this isn’t really a solution to the meta-problem. After all, the whole thing started because of phenomenal consciousness! There is a sense in which I agree with this but also a sense in which I don’t. I don’t agree with it because the problem seems different now. If we really can separate mental qualities from phenomenal consciousness and give independent accounts of each then we can construct theories and evaluate them. Are inverts possible according to the theory? What about zombies? Suppose it turns out that we could construct a plausible theory on which they weren’t?

True, some would find these theories implausible but now we can ask: is the reason they find it implausible because of an implicit acceptance of an alternative theory of what phenomenal consciousness is? So, instead of a theory of consciousness having to explain why people find consciousness puzzling, I see the right strategy as one where we explain why people find consciousness puzzling by attributing to them a (possibly implicitly-held) bad theory of consciousness.

We solve the meta-problem the same way we solve the ‘regular’ Hard Problem on this view, which is by getting people to think of consciousness differently (not in the sense of thinking of it as an illusion but in the sense of coming to hold a different theory about what it is).

I am not sure I 100% agree with this response to the Meta-Problem, but it is one that I haven’t seen explicitly explored and I think it deserves some attention!

Consciousness Live! Season 4!

I am very excited about my upcoming guests on Consciousness Live! There are possibly more in the works, so stay tuned, either by checking back here or by following me on Twitter for updates @onemorebrown

All times listed in Eastern Standard Time.

To be scheduled. Check back for updates!

2021 here we come!

2020 was a rough year and I am ready to put it in the rear view! Luckily for me and my family we were able to carry on online and the year went by for the most part like normal.

I taught 17 classes in 2020, almost all (13) of them completely online! That’s up from last year. I had taught online before, so with the addition of Zoom I explored synchronous online classes. I have been at CUNY for 17 years (four at Brooklyn College and 13 at LaGuardia), and I’ve never seen anything like this!

Since we were all trapped at home for quite a while I redoubled my efforts on Consciousness Live! having 28 conversations over the course of the year. I learn a lot from these discussions and I have been very glad to be able to have them…I have plans to do more in 2021, though likely not at the same volume!

I feel like I got a fair bit of writing done but only managed to get two papers to see the light of day. One with Jake Berger for a book edited by Josh Weisberg in honor of David Rosenthal. The other with Joe LeDoux was a short comment on a paper by Graziano et al.

My blogging was light. I have been attending a lot of talks and also a seminar with Romina Padro and Saul Kripke at the Graduate Center on the Adoption Problem and the Epistemology of Logic. I have been meaning to write a series of blog posts about the class but have not been able to get to it.

Coming up on Consciousness Live!

It has been an exhausting year, but one of the things that has kept me going is having some great conversations with an amazingly diverse group of people who share my love of all things consciousness. I have already done 19 this year (more than in either of the previous two years!) but I have at least nine more coming up to round out the year! Check back here for updates or follow me on twitter @onemorebrown

Introspection and the Content of Higher-Order Thoughts

I am finally getting around to working on a paper on higher-order thought theory and introspection. I started thinking about this back in 2015 and presented an early version of it at the CUNY Cognitive Science Speaker Series (the draft of the paper is, sadly, at a site which I do not use anymore). That was just a month before my son was born and I don’t think I had it quite nailed down. I put it on the back burner and then got caught up doing all kinds of other things.

But I’m back on it now, and am working with Adriana Renero. I am excited about this project because I don’t think that this aspect of the theory has been given enough attention. The basic idea that I had was that the traditional model of introspection offered by higher-order theorists -that one has a conscious higher-order state- needed to be supplemented. It seems to me that when one has an ordinary conscious experience of blue one is representing the first-order state as presenting a property of the physical world and when one introspects one represents the first-order state as presenting a property of one’s mind. In ordinary conscious experience it seems to me like I am being presented with objects which have colors, or make sounds, etc but when I introspect I seem to be presented with properties of my own experience. Thus conscious experience and introspection of this sort both rely on second-order thoughts that represent the relevant first-order states. Both of these second-order thoughts deploy a concept of the relevant mental quality: one concept attributes it to the physical object and the other attributes it to one’s own experience.

Rosenthal seems to agree with this, for example saying

When one sees a red tomato consciously but unreflectively, one conceptualizes the quality one is aware of as a property of the tomato. So that is how one is conscious of that quality. One thinks of the quality differently when one’s attention has shifted from the tomato to one’s experience of it. One then reconceptualizes the quality one is aware of as a property of the experience: one then becomes conscious of that quality as the qualitative aspect of an experience in virtue of which that experience represents a red tomato

Consciousness and Mind page 121

However, the way he fleshes out this distinction is in terms of *conscious* thoughts. So, when one is conceptualizing the mental quality as a property of the tomato, on his view this amounts to one having, in addition to the higher-order state which renders one conscious, conscious thoughts about the tomato. When one ‘reconceptualizes’ it as a property of experience, one’s higher-order state is itself conscious. Thus the difference for him is a matter of what one’s conscious thoughts are representing. But this doesn’t seem to me to do the trick.

The reason for this is that it seems to me that this would still be the case even if I had no conscious thoughts about the object I am perceiving. Suppose I am consciously perceiving a blue box and yet I am not consciously thinking about the blue box. In such a case it still seems to me that my conscious experience presents the blueness of the box as a property of the box itself. To bring this out even more we can consider the case of an animal that does not have any conscious thoughts, say a squirrel. Our squirrel may nonetheless have conscious experiences, and it seems to me strange to think that the squirrel’s experience does not present the blueness of the box as a property of the box.

Another issue here is that the relevant higher-order thought is the same throughout on Rosenthal’s account. So it must conceptualize the blueness of the box as a property of my experience the entire time. So why think that I ‘reconceptualize’ it when I have a conscious higher-order thought?

The same seems true for the case of introspection. If I am introspecting my experience of the box then it seems to me that the blueness is a property of the experience even if I am not having any conscious thoughts about the mental blue quality. I am not denying that I ever consciously think about my experience, only that this is required for introspection.

So what, on my view, is the content of these higher-order states? My current thinking is that in the case of typical conscious experience one has a higher-order thought with the content ‘I am seeing blue’ and when one introspects one has a higher-order state with the content ‘I am in a blue* state’ or ‘I am experiencing mental blue’. Of course to see blue is just to be in a blue* state and these two intentional contents are different ways of saying the same thing but they still seem to me to result in different experiences.

I am still thinking through this and any feedback would be appreciated!

No Euthyphro Dilemma for Higher-order Theories

I just came across Daniel Stoljar’s forthcoming paper A Euthyphro Dilemma for Higher-order theories. In it he tries to present a kind of dilemma for the higher-order thought theory but I find his reasoning highly suspect.

He assumes throughout that the higher-order theory is offering a definition of ‘consciousness,’ which is not exactly right. At least as I understand the theory, it is an empirical conjecture about the nature of phenomenal consciousness and so not in the business of offering a definition. However, if we mean by definition something like what Socrates is seeking, viz., the thing which all conscious states have in common in virtue of which they count as conscious states, then there is a sense in which the higher-order view is after a definition, so I will go along with him on this.

The basic thrust of the paper is that we can ask two questions: one is ‘are we aware of ourselves as being in the state because the state is conscious?’ and the other is ‘is the state conscious because we are aware of ourselves as being in it?’ Obviously the first ‘horn’ is not going to be taken, as it effectively assumes that the higher-order theory is in fact false. The second ‘horn’ is the one higher-order theorists will take. So, what is the problem with it? Here is what Stoljar says:

Alternatively, if you say the second, that the state is conscious because you believe you are in it, you need to deal with the possibility of being in the state and yet failing to believe that you are. On the higher-order thought theory, the state is in that case no longer conscious. But as before that is questionable. Suppose you are so consumed by the fox that you completely forget (and so have no beliefs about) what you are doing, at least for a short interval. On the face of it, you remain conscious of the fox, and so your state of perceiving the fox remains conscious. If so, it can’t be the case that the state is conscious because you believe that you are in it. After all, you do not believe this, having temporarily forgotten completely what you are doing.

I am not sure how ‘on the face of it’ is supposed to work! It seems as though he is just assuming that the theory is false and then saying ‘aha! The theory could be false!’ Even if we interpret him charitably, it seems like he is assuming that the higher-order states in question would be like conscious beliefs. Calling the higher-order thoughts beliefs is a bit of a misnomer, since I take beliefs to be dispositions to have occurrent assertoric thoughts. But as long as one means by ‘belief’ something like an occurrent thought, then we can go along with this as well. If one is ‘so consumed by the fox’ that one forgets (consciously) what one is doing, it does not follow that one has no unconscious thoughts about oneself.

Stoljar recognizes this and goes on to say:

Friends of the theory may insist that you do hold the belief in question. Maybe the belief is not so demanding. Or maybe it is suppressed or inarticulate, not the sort of belief that you could formulate in words if asked. Maybe, but it doesn’t matter. For even if you do believe you are in the state of perceiving the fox, it doesn’t follow that this state is conscious because you believe this. Further, even if you do believe this, it remains as true as ever that, if you didn’t, the state of perceiving would nevertheless be conscious. After all, even if you didn’t believe that you are in the state of perceiving the fox, you would still focus on the fox, and so be conscious of it, as much as before.

I find this passage to be extremely puzzling and I am not sure how to interpret it. There are arguments given for the higher-order theory and this does not address any of them. Further, there is no justification given for the final claim, that even if one did not have the relevant higher-order thought one would still be (phenomenally) conscious of the fox in the same way. What reason is there to accept this? It is just assumed by fiat. So there is no dilemma for higher-order theories here. There is just someone with differing intuitions about what conscious states are.

Stoljar goes on to consider a version of the view that is closer to what is actually defended by Rosenthal. He says:

Rosenthal says you must believe that you are in the state in a way that is non-perceptual and non-inferential (Rosenthal 2005).

This is incorrect. What Rosenthal says is that the relevant higher-order state must be arrived at in a way that does not subjectively seem to be inferential. That is compatible with its actually being the product of inference. But okay, subtle points aside, what is the issue? He goes on to say:

But even this is not sufficient. Suppose again you are in S and an amazing and unlikely thing happens. Before you even open Linguistic Inquiry, you get banged on the head and freakishly come to believe that you are in S. In this case, three things are true: you are in S, you believe you are in S, and you came to believe this in a way that is neither perceptual nor inferential. Even so it does not follow that S is conscious; on the contrary, it remains as unconscious as it was before.

But again, what reason is there to think this? If one is in a higher-order state to the effect that one is in S, and this is arrived at in a way that subjectively seems to be non-inferential, then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.

Stoljar also briefly discusses the argument from empty higher-order thoughts, saying:

It is worth noting that many proponents of the higher-order theory insist on a different response to this objection. They say the belief can be empty but that the state that is conscious exists not as such but only according to the belief, rather as certain things may exist not as such but only according to the National Inquirer. I won’t attempt to discuss this idea here, since it is extensively discussed elsewhere; see, e.g., (Rosenthal 2011, Weisberg 2011, Berger 2014, Brown 2015, Gottlieb 2020). But it is worth noting that interpreting the view this way has the consequence that it is no longer a definition of a conscious state in the way that it is normally taken to be, and as I have taken it to be throughout this discussion. After all, a definition of a conscious state either is or entails something of the form ‘x is a conscious state if and only if x is…’. This entails in turn that the state that is conscious must turn up on the right-hand side of the definition. But if you say that something is a conscious state if and only if you believe such and such, and if the belief in question does not entail the existence of the relevant state, then the state does not turn up as it should on the right-hand side; hence you have not defined anything.

But again, this is incorrect. According to Rosenthal, the state which turns up on the right-hand side is the state you represent yourself as being in; whether or not one is actually in that state is irrelevant!

There is a lot more to say about these issues, and other issues in Stoljar’s paper but I have to help get the kids their lunch!

Shombies vs. Zombies vs. Anti-Zombies and Popular Sessions from the Online Consciousness Conference

Ten years ago, way back in February 2010, the 2nd online consciousness conference would have been just starting and the papers from the first conference were coming out in the Journal of Consciousness Studies.

Even though I would change some things if I could, I am still very happy with my paper Deprioritizing the A Priori Arguments Against Physicalism. I think it is especially cool that this paper is cited by both the Stanford Encyclopedia of Philosophy’s entry on Zombies and the Wikipedia entry on Philosophical Zombies. In addition, I have yet to see a good response to the argument I developed there. David Chalmers assimilates the objection to a ‘meta-modal’ objection involving conceiving that physicalism is true (or that necessarily (P → Q) is possibly true). I went to Tucson in 2012 to talk about this and we talked about it a bit here (and I wrote up a version here), but I have never seen a real response to the actual argument.

If the best response, as the SEP and Dave’s 2D argument against Materialism paper/chapter suggest (though to be fair they are talking about conceiving that physicalism is true, which is not what I am talking about), is that they find shombies inconceivable, then they have revealed that the a priori arguments should be deprioritized (that’s always been my point). I find zombies inconceivable and they find shombies inconceivable. How can we tell who is doing it right? These thought experiments can give an individual who finds the first premise plausible (the conceivability of zombies/shombies) some reason to think that their view (physicalism, dualism, whatever) is rational to hold, but they cannot be used as a way to show that some metaphysical view about the mind/consciousness is actually true. In this sense they are sort of like the ‘victorious’ Ontological Argument of Plantinga.

I would also say that I am more convinced than ever that shombies are not Frankish’s Anti-Zombies. In fact, given Keith’s views on illusionism, I am pretty sure he is committed to the claim that shombies, as I envision them, must be inconceivable (or not possible).

Oh yeah, this was supposed to be a post about the Online Consciousness Conference 🙂 Below are links to the most-viewed sessions from the five conferences, as well as to the most-commented-on sessions.

Most viewed sessions

Most commented on sessions