Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind, where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons), but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced, I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as a practically convincing case. This point is made especially clearly by David Chalmers in his paper The Singularity: A Philosophical Analysis (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’),

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (that I know of) but it is often taken to involve in some way producing a functional isomorph to the human brain. Thus partial uploading would be the partial production of a functional isomorph to the human brain. In particular we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading as Chalmers and others describe it, but there are people who seem to be doing things that look a bit like partial uploading. First one might think of cochlear implants. What we can do now is impressive, but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerve in (what we hope are) appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the role of the computational device has anything to do with constituting the conscious experience, or with being part of the subject’s mind. It looks to me like these are akin to fancy glasses. They causally interact with the systems that produce consciousness but do not show that the mind can be replaced by a silicon computer.

The case of the artificial hippocampus gives us another nice test case. While still in its early development it certainly seems like it is a real possibility that the next generation of people with memory problems may have neural prosthetics as an option (there is even a startup trying to make it happen and here is a nice video of Theodore Berger presenting the main experimental work).

What we can do now is fundamentally limited by our lack of understanding of what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. They record from an intact hippocampus (in rats) while the animal performs some memory task and then have a computer analyze and predict what the output of the hippocampus would have been. When compared to the actual output of hippocampal cells the prediction is pretty good, and the hope is that they can then use this to stimulate post-hippocampal neurons as they would have been stimulated if the hippocampus were intact. This has been done as proof of principle in rats (not in real time) and now in monkeys, in real time, and in the prefrontal cortex as well!

The monkey work was really interesting. They had the animal perform a task which involved viewing a picture and then waiting through a delay period. After the delay period the animal is shown many pictures and has to pick out the one it saw before (this is one version of a delayed match-to-sample task). While the animals were doing this the researchers recorded activity of cells in the prefrontal cortex (specifically layers 2/3 and 5). When they introduced a drug into the region which was known to impair performance on this kind of task, the animal’s performance was very poor (as expected). But if they stimulated the layer 5 neurons (via the same electrode they previously used to record) in the way that their model predicted they would have been driven by layer 2/3, the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory for the animal’. He then shows that if you do this with an animal that has an intact brain, it does better than it did before. This can be used to enhance the performance of a neurotypical brain!
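Strictly as a sketch of the logic (the real work by Berger and colleagues uses a nonlinear multi-input multi-output model fit to spike trains; all the names and numbers below are invented for illustration), the record-model-stimulate loop looks something like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: per-trial spike counts from the "input" cells
# (layer 2/3) and the "output" cells (layer 5) they normally drive.
n_trials, n_in, n_out = 200, 8, 4
X = rng.poisson(5.0, size=(n_trials, n_in)).astype(float)
true_W = rng.normal(0.0, 0.3, size=(n_in, n_out))
Y = X @ true_W + rng.normal(0.0, 0.1, size=(n_trials, n_out))

# Step 1: "record from the intact circuit" and fit the input-to-output
# mapping (here a toy linear model in place of the nonlinear MIMO model).
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: the circuit is now impaired. Given fresh input-layer activity,
# predict what the output cells would have done and use that prediction
# as the stimulation pattern delivered through the electrode.
x_new = rng.poisson(5.0, size=(1, n_in)).astype(float)
stim_pattern = x_new @ W_hat
print(stim_pattern.shape)  # one predicted firing level per output cell
```

The philosophical question in the text is then whether the device running `W_hat` merely causally supports the biological memory system or partly constitutes it.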

They say they are doing human trials but I haven’t heard anything about that. Even so, this is impressive in that they used the technique successfully in rats for long-term memory in the hippocampus and in monkeys for working memory in the prefrontal cortex. In both cases they seem to get the same result. It starts to look hard to deny that the computer is ‘forming’ the memory and transmitting it for storage. So something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case: a system causally interacting with the brain, where it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experiences. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph). But if we get the same pattern of results for working memory in humans, are we heading towards Chalmers’ acceptance scenario?


Consciousness and Category Theory

In the comments on the previous post I was alerted, by Matthias Michel, to a couple of papers that I had not yet read. The first was a paper in Neuroscience Research which came out in 2016:

And the second was a paper in Philosophy Compass that came out in March 2017:

After reading these I realized that I had heard an early version of this stuff when I was part of a plenary session with Tsuchiya in Tucson back in April of 2016. The title of his talk is the same as the title of the Philosophy Compass paper and some of the same ideas are floated. I had intended to write something about this after my talk but I apparently didn’t get to it (yet?). I am in the midst of battling a potty-training toddler, so it may not be anytime soon, but I did want to get out a few (inchoate) reactions to these papers now that I have read them.

Both of these papers were very interesting. The first was interesting because it is the first time I have seen proponents of IIT acknowledge that they need to examine their ‘axioms’ more carefully. Are these axioms self-evident? Not to many people! Might there be alternate formulations? Yes! At the very least there should be some discussion of higher-order awareness (or awareness at all). There ideally should be an axiom like:

Awareness: Consciousness is for one. If one is in no way aware of oneself as being in a mental state then one is not consciously in that mental state.

Of course they don’t want to add anything like this because as it stands the theory clearly assumes (without argument) that higher-order theories of consciousness are false. This is a problem that will not go away for IIT. But I’ll come back to that (by the way, the first ‘axiom’ of IIT sometimes seems to me to suggest a higher-order interpretation so one might assimilate this to an unpacking of the first axiom).

The central, and very interesting, idea of these papers is that category theory can help IIT address the hard problem (and some of the issues I raised in the previous post). There are a lot of mathematical details that are not relevant (yet), but the basic idea is that category theory lets us look at the structure a mathematical object has and compare it to the structure of other mathematical objects. They want to exploit this by making a category out of the integrated information cause-effect space and one for qualia, and then use category theory to examine how similar these two categories are.

First, can qualia form a category? They address this issue in the first paper but (to use a low-hanging pun) this looks like a category mistake. Qualia are not mathematical objects. I suppose you could form the set of qualia and that would be a mathematical (i.e. abstract) object. But if you show that this structure overlaps with IIT, have you shown anything about qualia themselves? Only if the structure captured in this category exhausts the nature of qualia, but that is highly controversial! My guess is that there will be many categories that we could construct that would have some functors to both the category of qualia and the category of IIT structures. So, take the category of the set of Munsell color chips (not the experience of them, the actual chips). Won’t they stand in relations to each other that can be mapped onto the IIT domain in pretty much exactly the same way as the set of qualia!? If so, is IIT then Naive Realism? That is a joke, but the point is that one would not want to claim that this shows that IIT is a theory of color chips. All we have shown is that there is a similar structure common to two mathematical structures that at first seemed unrelated. That is interesting, but I don’t see how it can help us.
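The worry can be put in a few lines of code. Everything below is made up for illustration: two tiny ‘categories’ given as objects plus arrows, one labelled with qualia terms and one with color-chip terms, and a structure-preserving map between them that exists no matter what the objects intrinsically are.

```python
# Two toy "categories": a set of objects plus a set of arrows (here, a
# bare "resembles" relation). All labels are hypothetical.
qualia = {"red", "orange", "blue"}
qualia_arrows = {("red", "orange")}  # red experience resembles orange experience

chips = {"chip_5R", "chip_5YR", "chip_5B"}  # the physical chips, not experiences
chip_arrows = {("chip_5R", "chip_5YR")}

# A candidate structure-preserving map between them (a functor in miniature).
F = {"red": "chip_5R", "orange": "chip_5YR", "blue": "chip_5B"}

def preserves_arrows(mapping, src_arrows, dst_arrows):
    """True if every arrow in the source maps onto an arrow in the target."""
    return all((mapping[a], mapping[b]) in dst_arrows for a, b in src_arrows)

print(preserves_arrows(F, qualia_arrows, chip_arrows))  # True
```

Since the check only ever sees the arrow pattern, any domain with the same pattern (chips, qualia, IIT concepts) passes equally well, which is exactly the point about structure not exhausting nature.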

To their credit they recognize that this is a bit controversial and here is what they say about the issue:

In the narrow sense, a quale refers to a particular content of consciousness, which can be compared or characterized as a particular aspect of one moment of experience or a quale in the broad sense (Balduzzi and Tononi, 2009; Kanai and Tsuchiya, 2012). Can category theory consider any qualia we experience as objects or arrows? Some qualia in the narrow sense are straightforward to consider as objects: a quale for a particular object or its particular aspect, such as color. There are, however, some aspects of experience that are apparently difficult to consider as objects. For example, we can experience a distance between the two cups, which is a relationship between the objects but itself has no physical object form. Such abstract conscious perception can be naturally regarded as a relationship between objects: an arrow. Further, there are some types of qualia that seem to emerge out of many parts, such as a face. A whole face is perceived as something more than a collection of its constituent parts; there is something special about a whole face. Psychological and neuroscientific studies of faces point to configural processing, that is, a web of spatial relationship among the constituent parts of a face is critical in perception of a whole face (Maurer et al., 2002). In category theory, a complicated object, like a quale for a face, can be considered as an object that contains many arrows. Considered this way, any quale in the narrow sense can be considered as either an object, an arrow, or an object or arrow that contains any combinations of them.

But even if this is OK with you (and you set aside questions about whether ‘to the right of’ can be an arrow in category theory (will it obey the axiom of composition?)), what goes into the qualia category? They seem to assume that (at least some of) it is non-controversial, but that isn’t so clear to me. Even so, what about Nagel’s bat? In order to use this procedure we would have to already know what kinds of qualities, conscious experiences, the bat had in order to form the category. But we have no idea what kinds of ‘objects’ and ‘arrows’ to populate that category with! That was kinda Nagel’s point!

To hammer this point home, recall the logic gates that serve as simple illustrations of IIT. How are we to use this approach on them? We know what IIT says and so we can form that category without any problems. But what goes into the category of ‘qualia’ for the logic gate system? We have no idea. In response to a question about Scott Aaronson’s objection, Tsuchiya says that the expander grid may have a huge conscious field but would not have any visual experience. But what justifies this assertion?

They conclude their paper with the following remarks:

We proposed the three steps to apply the category theory approach in consciousness studies. First, we need to characterize our own phenomenological experience with detailed and structured descriptions to the extent to accept the domain of qualia as a category.

This may prove to be a difficult task, and not just for reasons having to do with higher-order awareness. Phenomenology is tricky stuff and it is notoriously hard to get people to agree on it (N.B. this is an understatement!); since that is the case, this general strategy seems doomed.


Another frustrating assertion with minimal evidence comes in the second paper linked to above and it has to do with the No-Report paradigm.

No-report paradigms have implied that certain parts of the brain areas, such as the prefrontal areas, may not be related to consciousness, but more to do with the act of the reports (Koch, Massimini, Boly, & Tononi, 2016).

If one buys this then one will see the IIT irreducible ‘concepts’ as corresponding to phenomenally conscious states, but if instead one thinks that these results are overrated then one will see these irreducible IIT ‘concepts’ as picking out mental representations that may or may not be conscious. Thus we cannot extrapolate from the results of IIT until the debate with higher-order theories is resolved.

And that cannot happen until the proponents of IIT actually address the empirical case for higher-order theories. This is something that they have been very reluctant to do, and when they discuss other theories of consciousness they studiously avoid any mention of higher-order theories. Higher-order theories need to be taken as seriously as Global Workspace, local re-entry, and other theories one finds in neuroscience, and for the same reason: because there is significant (though not decisive) evidence in favor of the theory.

But OK, what about the limited claim that we could in principle know whether the bat’s phenomenology was more like our seeing or our hearing? If we could generate the relevant categories for human conscious visual experience and auditory experience, and then generate the IIT category for the bat’s echolocation, we could compare them and see whether it resembles our visual or our auditory category. According to Tsuchiya, if we found that it resembled the IIT category for our auditory experiences (instead of our visual ones), or vice versa, then we would have some evidence that the bat experienced the world in the same way we do.

But this seems to me to be a fundamental misunderstanding of Nagel’s point. His point was that there is no reason to expect that the bat’s experience would be anything like our seeing or our hearing. To know what it is like for the bat requires that we take up the bat’s point of view (according to Nagel). It is not clear that this proposal addresses that issue at all! Even if we found that the bat’s brain integrated information in the way our brain integrates auditory information, which results in the conscious experience of hearing for us, even if (stress on the ‘if’) we discovered that, why should we think that the bat’s experience was just like our experience of hearing? The point that Nagel wanted to make was that conscious experience seems somehow essentially bound up with the idea of subjectivity, of being accessible only from one’s own point of view. This is entirely missed in the proposal by Tsuchiya et al.

Integrated Information Theory doesn’t Address the Hard Problem

Just in case you are not aware, Hakwan Lau has started a blog, In Consciousness we Trust, where he is blogging his work on his upcoming book on consciousness. He has lately been taking aim at the Integrated Information Theory of Consciousness and has a nice (I think updated) version of his talk (mentioned previously here) in his post How to make IIT (and other Theories of Consciousness) Respectable. I have some small quibbles with some of what he says but overall we agree on a lot (surprised? 😉). At any rate, I was led to this paper by Sasai, Boly, Mensen, and Tononi arguing that they have achieved a “functional split brain” in an intact subject. This is very interesting, and I enjoyed the paper a lot, but right at the beginning it has this troublesome set of sentences:

A remarkable finding in neuroscience is that after the two cerebral hemispheres are disconnected to reduce epileptic seizures through the surgical sectioning of around 200 million connections, patients continue to behave in a largely normal manner (1). Just as remarkably, subsequent experiments have shown that after the split-brain operation, two separate streams of consciousness coexist within a single brain, one per hemisphere (2, 3). For example, in many such studies, each hemisphere can successfully perform various cognitive tasks, including binary decisions (4) or visual attentional search (5), independent of the other, as well as report on what it experiences. Intriguingly, anatomical split brains can even perform better than controls in some dual-task conditions (6, 7).

Really?!?! Experiments have shown this? I was surprised to read such a bold statement of a rather questionable assumption. In the first place I think it is important to note that the disconnected hemisphere in these patients does not verbally report on what it ‘experiences’. I have argued that these kinds of (anatomical) split brains may have just one stream of consciousness (associated with the hemisphere capable of verbally reporting) and that the other ‘mute’ hemisphere is processing information non-consciously.

This is one of the problems that I personally have with the approach that IIT takes. They start with ‘axioms’ which are really (question-begging) assumptions about the way that consciousness is, and they tout this as a major advance in consciousness research because it takes the Hard Problem seriously. But does it? As they put it,

The reason why some neural mechanisms, but not others, should be associated with consciousness has been called ‘the hard problem’ because it seems to defy the possibility of a scientific explanation. In this Opinion article, we provide an overview of the integrated information theory (IIT) of consciousness, which has been developed over the past few years. IIT addresses the hard problem in a new way. It does not start from the brain and ask how it could give rise to experience; instead, it starts from the essential phenomenal properties of experience, or axioms, and infers postulates about the characteristics that are required of its physical substrate.

But this inversion doesn’t serve to address the Hard Problem (by the way, I agree with the way they formulate it, for the most part). I agree that the Hard Problem is one of trying to explain why a given neural activation is associated with a certain conscious experience rather than another one, or none at all. And I even agree that in order to address this problem we need a theory of what consciousness is, but IIT isn’t that kind of theory. And this is because of the ‘fundamental identity claim’ of IIT that an experience is identical to a conceptual structure, where ‘experience’ means phenomenally conscious experience and ‘conceptual structure’ is a technical term of Integrated Information Theory.

This is a postulated identity, and they do want to try to test it, but even if it were successfully confirmed would it really offer us an explanation of why the experiences are associated with a particular brain activity? To see that the answer is no, consider their own example from Figure 1 of their paper (‘From Consciousness to Physical Substrate’) and what they say about it.

They begin,

The true physical substrate of the depicted experience (seeing one’s hands on the piano) and the associated conceptual structure are highly complex. To allow a complete analysis of conceptual structures, the physical substrate illustrated here was chosen to be extremely simple1,2: four logic gates (labelled A, B, C and D, where A is a Majority (MAJ) gate, B is an OR gate, and C and D are AND gates; the straight arrows indicate connections among the logic gates, the curved arrows indicate self-connections) are shown in a particular state (ON or OFF).

So far so good. We have a simplified cause-effect structure in order to make the claim clear.

The analysis of this system, performed according to the postulates of IIT, identifies a conceptual structure supported by a complex constituted of the elements A, B and C in their current ON states. The borders of the complex, which include elements A, B, and C but exclude element D, are indicated by the green circle. According to IIT, such a complex would be a physical substrate of consciousness

So, when A=B=C=1 (i.e. on) in this system it is having a conscious experience (!), as they say,

The fundamental identity postulated by IIT claims that the set of concepts and their relations that compose the conceptual structure are identical to the quality of the experience. This is how the experience feels — what it is like to be the complex ABC in its current state 111. The intrinsic irreducibility of the entire conceptual structure (Φmax, a non-negative number) reflects how much consciousness there is (the quantity of the experience). The irreducibility of each concept (φmax) reflects how much each phenomenal distinction exists within the experience. Different experiences correspond to different conceptual structures.

Ok then. Here we have a simple system that is having a conscious experience, ex hypothesi, and we know everything about this system. We know that it has the concepts specified by IIT, but what is its conscious experience like? What is it like to be this simple system of 4 logic gates when its elements A, B, and C are on? We aren’t told, and there doesn’t seem to be any way to figure it out based on IIT. It seems to me that there should be no conscious experience associated with this activity, so it is easy to ‘conceive of a physical duplicate of this system with no conscious experience’…is this a zombie system? That is tongue in cheek, but I guess that IIT proponents will need to say that since the identity is necessary I can’t really conceive of it (or that I can but it is not really possible). Can’t we conceive of two of these systems with inverted conscious experiences (same conceptual structures)? Why or why not? I can’t see anything in IIT that would help to answer these questions.
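To make vivid how completely specified the system is, the four-gate network can be written out in full. The wiring below is my own assumption for illustration (the actual connectivity is given in their Figure 1), but the point survives any choice of wiring: the entire physics fits in a dozen lines, and nothing in it tells us what, if anything, the experience of being ABC in state 111 is like.

```python
# Four binary gates in the style of the IIT example: A is a majority
# (MAJ) gate, B an OR gate, C and D AND gates. The input wiring here is
# assumed for illustration, not taken from the paper's figure.
def step(state):
    a, b, c, d = state["A"], state["B"], state["C"], state["D"]
    return {
        "A": int(b + c + d >= 2),  # MAJ over its (assumed) inputs
        "B": int(a or c),          # OR
        "C": int(a and b),         # AND
        "D": int(b and c),         # AND
    }

# The state under discussion: A, B and C on, D off.
state = {"A": 1, "B": 1, "C": 1, "D": 0}
print(step(state))  # the next state is fully determined by the wiring
```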

If IIT is attempting to provide a solution to the Hard Problem of Consciousness then it should allow us to know what the conscious experience of this system is like, but it seems like it could be having any experience, or none (how difficult would it then be to extend this to Nagel’s bat!?!?). There are some who might object that this is asking too much. Isn’t this more like Ned Block’s “Harder Problem” than Chalmers’ Hard Problem? Here I suppose that I disagree with the overly narrow way of putting the Hard Problem. It isn’t merely a question of why this brain state is associated with a particular phenomenal quality rather than none at all; it is why it is associated with any phenomenal quality at all that is the Hard Problem. Sure, brain states are one kind of physical state and so the problem arises there, but more generally the Hard Problem is answering the question of why any physical state is associated with any qualitative state at all, instead of another or none at all.

IIT, and Tononi in particular, seem committed to giving us an answer. For instance, in his Scholarpedia article on IIT Tononi says,

IIT employs the postulates to derive, for any particular system of elements in a state, whether it has consciousness, how much, and of which kind.

But how do we do this for the 4 logic gates?

How do we do it in our own case?


Integrated Information Theory is not a Theory of Consciousness

The Integrated Information Theory of Consciousness has been garnering some attention lately. There was even a very high profile piece in Nature. Having just listened to Hakwan Lau’s talk on this (available at this conference website) I thought I would write down a couple of reactions.

Like everyone else who is interested in consciousness, I have been interested in the Integrated Information Theory. I attended a talk by Tononi back in 2012 (and wrote about it here) and I also attended a workshop at NYU on it back in 2015. I had always meant to write something about it (John Horgan did here) and thought I would do so now. I wish I had written about this sooner, but to be completely honest I found out about the Paris attacks as I was leaving the workshop and it shook me up enough to distract me from blogging.

I had a couple of take-aways from that workshop and these have really influenced how I have thought about IIT. I suppose I would sum it up by saying that IIT doesn’t look like a theory of consciousness. In the first place it purports to be a theory of phenomenal consciousness, what it is like for one to have a conscious experience, but it starts from the phenomenon of fading into a dreamless sleep. This makes it look like the main phenomenon is creature consciousness. Is IIT trying to give an account of the transition(s) from sleeping to wakefulness (and vice versa)? This is where ‘levels of consciousness’ talk seems most at home. Is being in hypnagogic reverie ‘in between’ sleeping and wakefulness? Probably yes, but does that translate to phenomenal consciousness being graded? There it seems less clear. You either have phenomenal consciousness or you do not (pace Dennett). It is the contents of consciousness that can be graded, distorted, etc. So right from the beginning it seems to me to be off on the wrong foot: if one is looking to explain consciousness, one should begin not with the comparison between waking and dreamless sleep but with the comparison between conscious (i.e. reported) and unconscious (denied) states.

Another of the main ideas that came out of the workshop (again, for me) was that the ‘axioms’ of IIT seem to encode assumptions about conscious experience that are controversial. For example, is some kind of higher-order awareness necessary (and/or sufficient) for conscious experience? The axioms are silent on this, seeming to suggest that the answer is no, but a lot of people seem to think that there is a kind of higher-order awareness that is manifest in our phenomenology (old examples like Aristotle, newer ones like Brentano, and even newer ones like Uriah Kriegel). So could we have another version of IIT that adds an axiom about consciousness requiring higher-order awareness? Can this axiom be mathematized? Or could we interpret the first axiom (i.e. consciousness exists from *my* perspective) as implying higher-order awareness?

The current defenders of IIT clearly have a first-order theory of consciousness in mind when they discuss Sperling. They say in their Nature Reviews Neuroscience paper,

In short, the information that specifies an experience is much larger than the purported limited capacity of consciousness

But there is no argument for this other than that IIT predicts it! Doesn’t it seem the least bit fishy that a theory that starts off with axioms that encode first-order assumptions about consciousness ends up ‘predicting’ first-order readings of controversial experiments? There is nothing in IIT that seems to indicate that we should not instead say that the Sperling distinctions encoded in the integrated information are unconscious and what is conscious is just what the subjects report.

Thus it seems to me that IIT is best interpreted as giving an account of mental content. This mental content may be conscious but it may also be unconscious. To resolve this debate we need to go back to the usual debate between first-order and higher-order theories of consciousness. IIT seems to have added nothing to this debate and we would need to resolve it in the usual way (by argument, appeal to phenomenology, and experimental evidence).

Finally, another of the main ideas to come out of the workshop, for me, was that IIT can be interpreted differently from the metaphysical point of view as well. Is IIT physicalist or dualist? Well, it seems you could have a version of it that went either way. You could, as David Chalmers seems to incline towards, view IIT as giving you a handle on what the physical correlates of consciousness might be, and then posit, in addition, a fundamental law of nature connecting states of physically integrated information with conscious states. This is clearly not the way that Tononi wants the theory to be developed, but it is a consistent way to develop it. On the other hand one might end up with a physicalist version of IIT, identifying consciousness with the physical implementation of the integrated information. Or you could, like Tononi, claim that consciousness is identical to the ‘conceptual structure’ which exists over and above the parts which make it up (conceptual structures are irreducible to their physical parts for Tononi). So which one of these is the real IIT? Well, there is Tononi’s IIT and then there might be Chalmers’ IIT, etc.

This is not even to mention the problems others have pointed out: that it is hard to know what to make of a grid being ‘more conscious’ than a typical human, or which of the many (many) different ways of formulating phi is correct, or whether it is even possible to measure phi in humans at all. Even if one wasn’t worried by any of that, it still seems that IIT leaves open all of the most important questions about the ultimate nature of consciousness.


Eliminative Non-Materialism

It struck me today that all of the eliminativists about the mind are physicalists (or materialists), and a quick Google search didn’t reveal any eliminativist dualists out there. But why is that?

I can see why a particular kind of dualist would reject eliminativism. If one held that the mind was transparent to itself in a strong way then the existence of beliefs and other mental states can be known directly via the first-person method of introspection. But does that exhaust the possibilities? Suppose one thought that there was a robust correlation (or even causation) between the brain and mind. Then one would expect a robust NCC for every conscious state (assuming a law-like connection or at least correlation between the brain and mental states).

To give us a model to work with, let’s assume that there is a correlation between functional states of the brain and consciousness such that whenever certain functional states are realized, that guarantees (given our laws of physics, etc.) that a certain (non-physical) state of consciousness is also instantiated. Now suppose that we have a pretty good functional definition of what the functional correlate of a given mental state should be. That is, suppose we have worked out in a fair amount of detail what kinds of functional states we expect to be correlated with the conscious mental states posited by folk psychology. Now further suppose that when we advanced far enough in our neuroscience we saw that there were no such states realized in the brain, or that the states were somewhat like what we thought but varied in some dramatic way from what we had worked out folk-psychologically.

At that point it seems we would have two options. One thing we could do is maintain that there is, after all, no law-like correlation between brain states and mental states. There is a belief, or a red quale, say, but it is somehow instantiated independently of the neural workings. This seems like a bad option. The second option would be to abandon folk psychology and say that the non-physical states of the mind are better captured by what the correlates are suggesting. The newly posited non-physical states might be so different from the original folk-psychological postulates that we might be tempted to say that the originally postulated states don’t exist. Wouldn’t we then have arrived at an eliminative non-materialism?

As a corollary, doesn’t this possibility suggest that there aren’t any truly a priori truths knowable from introspection?

LeDoux and Brown on Higher-Order Theories and Emotional Consciousness

On Monday May 1st Joe LeDoux and I presented our paper at the NYU philosophy of mind discussion group. This was the second time that I have presented there (the first was with Hakwan (back in 2011!)). It was a lot of fun and there was some really interesting discussion of our paper.

There were a lot of inter-related points/objections that came out of the discussion, but here I will focus on just a few themes that stood out to Joe and me afterwards. I haven’t yet had the chance to talk with him extensively about this, so this is just my take on the discussion.

One of the issues centered on our postulation that there are three levels of content in emotional consciousness. On the ‘traditional’ higher-order theory there is the postulation of two distinct states. One is ‘first-order’, where this means that the state represents something in the world (the animal’s body counts as being in the world in this sense). A higher-order mental state is one that has higher-order content, where this means that it represents a mental state as opposed to some worldly, non-mental thing. It is often assumed that the first-order state will have some basic, some might even say ‘non-representational’ or non-conceptual, kind of content. We do not deny that there are states like these, but we suggested that we needed to ‘go up a level’, so to speak.

Before delving into this I will say that I view this as an additional element in the theory. The basic idea of HOROR theory is just that the higher-order state is the phenomenally conscious state (because that is what phenomenal consciousness is). I am pretty sure that the idea of the lower-order state being itself a higher-order state is Joe’s, but to be fair I am not 100% sure. The idea was that the information coming in from the senses needs to be assembled in working memory in such a way as to allow the animal to connect memories, engage schemas, etc. We coined the term ‘lower-order’ to take the place of ‘first-order’. For us a lower-order state is just one that is the target of a higher-order representation. Thus, the traditional first-order states would count as lower-order on our view, but so would additional higher-order states that were re-represented at a higher level.

Thus, on the view we defended, the lower-order states are not first-order states. These states represent first-order states and thus are higher-order in nature. When you see an apple, for example, there must be a lot of first-order representations of the apple, but these must be put together in working memory, resulting in a higher-order state which is an awareness of these first-order states. That higher-order representation is the ‘ground floor’ representation for our view. It is itself not conscious, but it results in the animal behaving in appropriate ways. At this lower-order level we would characterize the content as something like ‘(I am) seeing an apple’. That is, there is an awareness of the first-order states and a characterization of those states as being a seeing of red, but there is no explicit representation of the self. There is an implicit reference to the self, by which we mean that these states are attributed to the creature who has them, but not in any explicit way. This is why we think of this state as just an awareness of the first-order activity (plus a characterization of it). At the third level we have a representation of this lower-order state (which is itself a higher-order state in that it represents first-order states).

Now, again, I do not really view this three-layer approach as essential to the HOROR theory. I think HOROR theory is perfectly compatible with the claim that it is first-order states that count as the targets. But I do think there is an interesting issue at stake here, namely what role exactly the ‘I’ in ‘I am seeing a red apple’ is playing, and also whether first-order states can be enough to play the role of lower-order states. Doesn’t the visual activity related to the apple need to be connected to concepts of red and apple? If so, then there needs to be higher-order activity that is itself not conscious.

Another issue focused on our methodological challenge to using animals in consciousness research. Speaking for myself, I certainly think that animals are conscious, but since they cannot verbally report, and as long as we truly believe that the cognitive unconscious is as robust as is widely held, we cannot rule out that animal behavior is produced by non-conscious processes. What this suggests is that we need to be cautious when we infer from an animal’s behavior that its cause is a phenomenally conscious mental state. Of course that could be what is going on, but how do we establish that? It cannot be the default assumption as long as we accept the claims about the cognitive unconscious. Thus we do not claim that animals do or do not have conscious experience, but rather that the science of consciousness is best pursued in humans (for now at least). For me this is related to what I think of as the biggest confound in all of consciousness science: the confound of behavior. If an animal can perform a task, it is assumed this is because its mental states are conscious. But if the task can be performed unconsciously, then behavior by itself cannot guarantee consciousness.

One objection to this claim (sadly I forgot who made it…maybe they’ll remind me in the comments?) was that maybe verbal responses themselves are non-conscious. I asked whether the objector had in mind the kind of view that Dennett holds, where there is just some sub-personal mechanism which results in an utterance of “I am seeing red” and this is all there is to the conscious experience of seeing red. The response was that, no, they had in mind that maybe the subjects are zombies with no conscious experience at all and yet are able to answer the question “what do you see?” with “I see red,” just as zombies are thought to do. I responded to this with what I think is the usual way to respond to skeptical worries. That is, I acknowledge that there is a sense in which such skeptical scenarios are conceivable (though maybe not exactly as the conceiver supposes), but there are still reasons for not getting swept up in skepticism. For example, I agree with the “lessons” from fading, dancing, and absent qualia cases that we would be detached from our conscious experiences in an unreasonable way if this were happening. The laws of physics don’t give us any reason to suppose that there are radical differences between similar things (like you and me), though if we discovered an important brain area missing or damaged then I suppose we could be led to the conclusion that some member of the population lacked conscious experience. But why should we take this seriously now? I know I am conscious from my own first-person point of view, and unless we endorse a radical skepticism, science should start from the view that report is a reliable(ish) guide to what is going on in a subject’s mind.

Another issue focused on our claim that animal consciousness may be different from human conscious experience. If you really need the concept ‘fear’ in order to feel afraid, and if there is a good case to be made that animals don’t have our concept of fear, then their experience would be very different from ours. That by itself is not such a bad thing. I take it to be common sense that animal experience is not exactly like human experience. But it seems as though our view is committed to the idea that animals cannot have anything like the human experience of fear, or other emotions. Joe seemed to be OK with this, but I objected. It is true that animals don’t have language like humans do, and so are not able to form the rich and detailed kinds of concepts and schemas that humans do, but that does not mean that they lack the concept of fear altogether. I think it is plausible that animals have some limited concepts, and if they are able to form concepts as basic as danger (present) and harm, then they may have something that approaches human fear (or a basic version of it). A lot of this depends on your specific views about concepts.

Related to this, and brought up by Kate Pendoley, was the issue of whether there can be emotional experiences that we only later learn to describe with a word. I suggested that the answer may be yes, but that even so we will describe the emotion in terms of its relations to other known emotions: ‘it is more like being afraid than feeling nausea’, and the like. This is related to my background view about a kind of ‘quality space’ for the mental attitudes.

Afterwards, over drinks, I had a discussion with Ned Block about the higher-order theory and the empirical evidence for the role of the prefrontal cortex in conscious experience. Ned has been hailing the recent Brascamp et al paper (nice video available here) as evidence against prefrontal theories. In that paper they showed that if you take away report and attention (by making the two stimuli barely distinguishable), then there is a loss of the prefrontal fMRI activation. I defended the response that fMRI is too crude a measure to take this null result too seriously. This is the line argued in a recent paper by Brian Odegaard, Bob Knight, and Hakwan, Should a few null findings falsify prefrontal theories of conscious experience? Null results are ambiguous between the falsifying interpretation and the effect simply being missed by a crude tool. As Odegaard et al argue, if we used more invasive measures like single-cell recording or ECoG, we would find prefrontal activity. In particular, the Mante et al paper referred to in Odegaard et al is a pretty convincing demonstration that there is information decodable from prefrontal areas that would be missed by fMRI. As they say in the linked-to paper:

There are numerous single- and multi- unit recording studies in non-human primates, clearly demonstrating that specific perceptual decisions are represented in PFC (Kim and Shadlen, 1999; Mante et al., 2013; Rigotti et al., 2013). Overall, these studies are compatible with the view that PFC plays a key role in forming perceptual decisions (Heekeren et al., 2004; Philiastides et al., 2011; Szczepanski and Knight, 2014) via ‘reading out’ perceptual information from sensory cortices. Importantly, such decisions are central parts of the perceptual process itself (Green and Swets, 1966; Ratcliff, 1978); they are not ‘post-perceptual’ cognitive decisions. These mechanisms contribute to the subjective percept itself (de Lafuente and Romo, 2006), and have been linked to specific perceptual illusions (Jazayeri and Movshon, 2007).
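The crude-measure point can be illustrated with a toy simulation (my own sketch, not anything from the Odegaard et al paper): imagine a population of neurons whose condition preferences cancel out in aggregate. A coarse, fMRI-like signal that averages over the population carries no condition information, while a linear read-out of the full activity pattern decodes the condition nearly perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Each simulated neuron has a random signed preference for condition A vs B.
# Forcing the preferences to sum to zero means the net population response
# is the same in both conditions -- a stand-in for mixed selectivity.
prefs = rng.standard_normal(n_neurons)
prefs -= prefs.mean()

labels = rng.integers(0, 2, n_trials)          # condition A = 0, B = 1
signal = np.outer(2 * labels - 1, prefs)       # +prefs on B trials, -prefs on A
data = signal + rng.standard_normal((n_trials, n_neurons))  # add unit noise

# Coarse measure: average activity over all neurons per trial (fMRI-like).
coarse = data.mean(axis=1)
coarse_acc = np.mean((coarse > 0) == labels.astype(bool))

# Fine measure: project each trial onto the preference axis, playing the
# role of a trained linear decoder applied to single-unit recordings.
fine = data @ prefs
fine_acc = np.mean((fine > 0) == labels.astype(bool))

# The coarse signal hovers around chance (0.5) while pattern decoding
# is near-perfect, even though both read the very same activity.
print(f"coarse accuracy: {coarse_acc:.2f}, pattern accuracy: {fine_acc:.2f}")
```

The point of the sketch is only that a null result from a spatially averaged signal is compatible with robust information being present at a finer grain, which is the ambiguity Odegaard et al press.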

In addition to this, Ned accused us of begging the question in favor of the higher-order theory. In particular, he thought that there really was no conscious experience in the rare Charles Bonnet cases and that our appeal to Rahnev was just question begging.

Needless to say, I disagree, and there is a lot to say about these particular points, but I will have to come back to these issues later. Before I run, and just for the record, I should make it clear that, while I have always been drawn to some kind of higher-order account, I have also felt the pull of first-order theories. I am in general reluctant to endorse any view completely, but I guess I would have to say that my strongest allegiance is to the type-type identity theory. Ultimately I would like it to be the case that consciousness and the mind are identical to states of the brain. I see the higher-order theory as compatible with the identity theory, but I am also sympathetic to other versions (for full-full disclosure, there is even a tiny (tiny) part of me that thinks functionalism isn’t as bad as dualism (which itself isn’t *that* bad)).

Why, then, do I spend so much time defending the higher-order theory? When I was still an undergraduate student I thought that the higher-order thought theory of consciousness was obviously false. After studying it for a while and thinking more carefully about it, I revised my credence to ‘not obviously false’. That is, I defended it against objections because I thought they dismissed the theory unduly quickly.

Over time, and largely for empirical reasons, I have updated my credence from ‘not obviously false’ to ‘possibly true’, and this is where I am now. I have become more confident that the theory is empirically and conceptually adequate, but I do not by any means think that there is a decisive case for the higher-order theory.

Dispatches from the Ivory Tower

In celebration of my ten years in the blogosphere I have been compiling some of my past posts into thematic meta-posts. The first of these listed my posts on the higher-order thought theory of consciousness. Continuing in this theme, below are links to posts I have done over the past ten years reporting on talks/conferences/classes I have attended. I wrote these mostly so that I would not forget about these sessions, but they may be interesting to others as well. Sadly, there are several things I have been to in the last year or so that I have not had the time to sit down and write about…ah well, maybe some day!

  1. 09/05/07 Kripke
    • Notes on Kripke’s discussion of existence as a predicate and fiction
  2. 09/05/2007 Devitt
  3. 09/05 Devitt II
  4. 09/19/07 Devitt on Meaning
    • Notes on Devitt’s class on semantics
  5. Flamming LIPS!
  6. Back to the Grind & Meta-Metaethics
  7. Day Two of the Yale/UConn Conference
  8. Peter Singer on Climate Change and Ethics
    • Notes on Singer’s talk at LaGuardia
  9. Where Am I?
    • Reflections on my talk at the American Philosophical Association talk in 2008
  10. Fodor on Natural Selection
    • Reflections on the Society of Philosophy and Psychology meeting June 2008
  11. Kripke’s Argument Against 4-Dimensionalism
    • Based on a class given at the Graduate Center
  12. Reflections on Zoombies and Shombies Or: After the Showdown at the APA
    • Reflections on my session at the American Philosophical Association in 2009
  13. Kripke on the Structure of Possible Worlds
    • Notes on a talk given at the Graduate Center in September 2009
  14. Unconscious Trait Inferences
    • Notes on social psychologist James Uleman’s talk at the CUNY Cogsci Speaker Series September 2009
  15. Attributing Mental States
    • Notes on James Dow’s talk at the CUNY Cogsci Speaker Series September 2009
  16. Busy Bees Busily Buzzing ‘Bout
  17. Shombies & Illuminati
  18. A Couple More Thoughts on Shombies and Illuminati
    • Some reflections after Kati Balog’s presentation at the NYU philosophy of mind discussion group in November 2009
  19. Attention and Mental Paint
    • Notes on Ned Block’s session at the Mind and Language Seminar in January 2010
  20. HOT Damn it’s a HO Down-Showdown
    • Notes on David Rosenthal’s session at the NYU Mind and Language Seminar in March 2010
  21. The Identity Theory in 2-D
    • Some thoughts in response to the Online Consciousness Conference in February 2010
  22. Part-Time Zombies
    • Reflections on Michael Pauen’s Cogsci talk at CUNY in March of 2010
  23. The Singularity, Again
    • Reflections on David Chalmers’ talk at the NYU Mind and Language seminar in April of 2010
  24. The New New Dualism
  25. Dream a Little Dream
    • Reflections on Miguel Angel Sebastian’s cogsci talk in July of 2010
  26. Explaining Consciousness & Its Consequences
    • Reflections on my talk at the CUNY Cog Sci Speaker Series August 2010
  27. Levine on the Phenomenology of Thought
    • Reflections on Levine’s talk at the Graduate Center in September 2010
  28. Swamp Thing About Mary
    • Reflections on Pete Mandik’s Cogsci talk at CUNY in October 2010
  29. Burge on the Origins of Perception
    • Reflections on a workshop on the predicative structure of experience sponsored by the New York Consciousness Project in October of 2010
  30. Phenomenally HOT
    • Reflections on the first session of Ned Block and David Carmel’s seminar on Conceptual and Empirical Issues about Perception, Attention and Consciousness at NYU January 2011
  31. Some Thoughts About Color
  32. Stazicker on Attention and Mental Paint
  33. Sid Kouider on Partial Awareness
    • a few notes about Sid Kouider’s recent presentation at the CUNY CogSci Colloquium in October 2011
  34. The 2D Argument Against Non-Materialism
    • Reflections on my Tucson Talk in April 2012
  35. Peter Godfrey-Smith on Evolution And Memory
    • Notes from the CUNY Cog Sci Speaker Series in September 2012
  36. The Nature of Phenomenal Consciousness
    • Reflections on my talk at the Graduate Center in September 2012
  37. Giulio Tononi on Consciousness as Integrated Information
    • Notes from the inaugural lecture of the new NYU Center for Mind and Brain by Giulio Tononi
  38. Mental Qualities 02/07/13: Cognitive Phenomenology
  39. Mental Qualities 02/21/13: Phenomenal Concepts
    • Notes/Reflections from David Rosenthal’s class in 2013
  40. The Geometrical Structure of Space and Time
    • Reflections on a session of Tim Maudlin’s course I sat in on in February 2014
  41. Towards some Reflections on the Tucson Conferences
    • Reflections on my presentations at the Tucson conferences
  42. Existentialism is a Transhumanism
    • Reflections on the NEH Seminar in Transhumanism and Technohumanism at LaGuardia I co-directed in 2015-2016