You Must be Joking

A few years ago I had the terrible idea of taking classic jokes and “translating” them into philosophical lingo. Some work has been done in this area on lightbulb jokes but there are so many other kinds of jokes. Some are pretty obvious…like

  • Yo mama is so fat, when she sits around the house she sits AROUND the house; in all possible worlds
  • Yo mama is so dumb she has the B relation of taking more than an hour to watch 60 minutes

Some are just plain silly,

  • Yo mama is so fat she is the truthmaker for ‘your mama is fat’
  • If you mow your lawn and find the nonbeing of four cars…you might be a philosopher
  • If you go to a psychology conference hoping to meet women…you might be a philosopher
  • If someone asks you to fill out a form and you think of Plato…you might be a philosopher
  • If you think “it depends on what the meaning of ‘is’ is” actually was a good defense…you might be a philosopher

Some are just plain ridiculous as in

  • Yo mama is so dumb she thinks the transcendental deduction is a tax break for club kids
  • Yo mama is so dumb she thinks the T-schema was the code name for the Boston Tea Party

Others?

On an unrelated note, thanks to Netflix I just rewatched Return of the Living Dead II and I realized that whenever I am asked the name of the blog that I contribute to I should say Braaaaaiiiinnnnnsssss!

The Unintelligibility of Substance Dualism

Over at Siris Brandon offers some interesting criticism of my argument against substance dualism. He distinguishes two senses in which we may say that a theory is viable. In one sense we simply mean to be asking what reasons someone might have for believing in that kind of thing; a viable theory in this sense is one which there is reason to believe. In another sense we may be asking not what the reasons are to believe it but instead what the thing in question is in the first place; a viable theory in this sense is one that can tell us what the thing is. Brandon then shows that this distinction corresponds to a distinction between things that are a problem for a theory and things that are a problem within the theory. He goes on to argue that my complaint is not a problem for the theory that there are immaterial substances but is rather a problem within the theory of immaterial substances itself, and so should be answered by more research into immaterial substance and not with a dismissal of the theory.

The picture that Brandon seems to have is this. We decide whether or not there are good theoretical/common sense reasons to believe that there are immaterial substances and if we decide that there are we then try to construct a theory of what they are. Naturally in doing so we do not know very much about the immaterial substances and so one of the projects of the theory is to say more about what they are. Given this it is a mistake to think that our lack of understanding about what immaterial substances are is any reason to think that they don’t exist.

I completely agree with the spirit of Brandon’s comments but I do not agree with his conclusions. First, to where I agree. We clearly must recognize the kind of distinction that Brandon draws. And while I disagree that there are any real reasons or evidence for immaterial substances I agree that if there were, or if one thought there were, one should then go on to try and give a theoretical account of what they are.

Let us be generous and grant that there are reasons to think that some kind of substance dualism is true. When we then ask what an immaterial substance is we get told that it is the immaterial substrate of thinking and consciousness and that it is not located in space-time as we know it. David Chalmers has offered one way of making sense of this in terms of the Matrix, and I won’t rehash it here, but it seems clear that this kind of move makes the immaterial substance material outside of the Matrix and so isn’t really a threat. What else can we do? At this point we have no further ideas. All we can say is that it is an X we know not what which underlies thinking and consciousness. If the theory never progresses past this point then we may start to think that it is in trouble.

So, to take Brandon’s example of evolution in biology, people had proposed evolution-ish accounts as far back as Democritus, who seems to have held that life as we know it was built up over time from simpler parts, but this was not the theory of evolution because he did not have the right mechanism (natural selection). If the theory of evolution had stayed at the level of “evolution is whatever it is that underlies speciation and isn’t God doing it” no one would care about it. So too, if the best that substance dualism can do is to say that an “immaterial substance is whatever it is that underlies consciousness and thinking and isn’t physical” it seems uninteresting. One might think this shouldn’t be a problem because lots of theories have been like that in the past (gravity seems to be a notable one), but the problem is that it has been this way since its inception and not one step forward has been taken in 3,000 years. The most significant advance, if one were to call it that, has been the post-Humean nonchalance toward the issue of physical/non-physical causation. If all there is to causation is constant conjunction, and the non-physical events are constantly conjoined with the physical ones, then voila! mind-body problem (dis)solved!!

The upshot then is that fleshing out the theory will ultimately shed some light on the reasons for believing it. If we seem in principle unable to advance in specifying what an immaterial substance is, and we have physicalist alternatives that are relatively well understood, substance dualism starts to look impossible and we seem to lose our reason to believe it, which will in turn cause us to re-evaluate the reasons we used to have for believing it.

More HOTter, More Better

In an earlier post I outlined the case for qualia realism from the higher-order perspective as I see it. Dave Chalmers worried that one of the moves was too quick. The move in question is the move from concepts making a difference to phenomenal experience to their determining phenomenal experience. Basically the line I was pushing was that if it is the case that applying concepts changes our phenomenal experience then “perhaps it is not too crazy to think that applying concepts is what results in phenomenal feel in the first place,” but Dave is right that there is a lot more that needs to be said.

As I also said, I think that a crucial step in securing this premise in the argument is showing that there can be unconscious states with qualitative character which are not like anything for the creature that has them. If we established that then we would have evidence that it is solely applying concepts that constitutes phenomenal consciousness. There is another line of argument which might show this as well, given by David Rosenthal in a few different places (see page 155 in Consciousness and Mind for a representative example). Basically it is a subtraction argument. Take some phenomenally conscious experience, like listening to music. We already agree that applying new concepts will change the character of the experience. So, if I were to learn what a bass clarinet was, then Herbie Hancock’s Chameleon will sound different to me when I listen to it. Now suppose that we subtract this concept. My experience will change. More specifically it will lack the bass clarinetiness that my experience had when I applied that concept. Now we can continue subtracting out concepts one by one without altering the first-order state in any way. Since subtracting a concept produces a phenomenal experience that lacks precisely the element corresponding to the concept, we can conclude that subtracting these concepts will produce phenomenal consciousness that is sparser and sparser. What are we to say when we have reached the point where there is just one concept characterizing the first-order state? Suppose that we are at the point where we are only applying the concept SOUND to the experience. Phenomenally it will be like hearing a sound for me, but not any particular sound. Now suppose we subtract that concept. What will it be like for the creature?

The higher-order theorist says that at that point it is no longer like anything for the creature. The other side says that there is still something that it is like (though it may not be like anything for the creature), but what argument could show this? What reason is there for thinking that there is anything phenomenal left over?
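The stepwise structure of the subtraction argument can be caricatured in a few lines of Python. This is purely a toy model of my own, not anything Rosenthal offers; the concept names and the function are illustrative only, and the model simply encodes the higher-order reading on which phenomenal character is fixed by the concepts applied:

```python
# Toy model of the subtraction argument. On the higher-order reading,
# phenomenal character is fixed by the concepts applied in the
# higher-order state; the first-order state itself never changes.

def phenomenal_character(applied_concepts):
    """Return what it is like for the creature, or None when there
    is nothing it is like (i.e. no concepts are applied)."""
    if not applied_concepts:
        return None  # not like anything for the creature
    return frozenset(applied_concepts)

# Listening to Chameleon after learning what a bass clarinet is:
concepts = ["SOUND", "MUSIC", "BASS_CLARINET"]

# Subtract concepts one by one; each step yields a sparser character,
# and the final subtraction of SOUND leaves nothing phenomenal at all.
while concepts:
    concepts.pop()
    print(phenomenal_character(concepts))
```

The disputed question is, of course, whether the last line of output is the right verdict: the higher-order theorist says yes, the other side insists something phenomenal remains.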

Summa Contra Plantinga

I recently reread Alvin Plantinga’s paper Against Materialism and needless to say I am less than impressed. Plantinga presents two “arguments” against materialism each of which is utterly ridiculous.

The first is what he calls the replacement argument (sic). It is possible, Plantinga tells us, that one could have one’s body replaced while one continues to exist; therefore one is not one’s body. Of course the obvious problem with this argument is that it at best shows that I am not identical to a particular body, but it does not show that minds are not physical, for it does not show that the mind exists without any body whatsoever. To show that Plantinga needs to appeal to disembodiment, and he doesn’t.

It is also clearly possible that one could have one’s immaterial substance replaced and continue to exist; thus one is not an immaterial substance. This is because there is nothing contradictory in supposing that materialism is true and what this shows, as I have argued at length before, is that these a priori arguments are of no use to us at this point.

Now Plantinga, to his credit, realizes that these kinds of intuitions are ultimately question begging, so his second argument appeals to an alleged impossibility, which turns out to be none other than the problem of intentionality. The argument turns on our ability to ‘just see’ that it is impossible that a physical thing can think. Just as the number 7 cannot weigh 5 pounds, neither can a brain think. Never mind computers and naturalized theories of content; those couldn’t be belief contents. Oh, I see…wait, I don’t.

But of course the real problem here is that it is even more mysterious how an immaterial substance could think. Plantinga spends some time in the paper responding to Van Inwagen’s argument along these lines. Plantinga focuses on Van Inwagen’s claim that we can’t imagine an immaterial substance. The response should be obvious: we can’t imagine lots of stuff (like what a number looks like) but that doesn’t show that they are impossible. Van Inwagen’s second swipe at immaterial substances is that we cannot see how an underlying reality that is immaterial can give rise to thinking any more than we can see how an underlying physical reality can. Plantinga’s response to this is to claim that the soul is a simple and has thinking as an essential attribute in much the same way as an electron is said to be simple and have its charge essentially.

But all of this seems to me to miss the fundamental point that Van Inwagen wants to make. The very concept of an immaterial substance is unintelligible. Attempts to make them intelligible render them into ordinary physical substances at the next level up, so to speak. And it is of course out of the question to simply say that an immaterial substance is perfectly intelligible since they are just minds (as Plantinga seems to do). It is obvious that there is thinking but it is not at all obvious that an immaterial substance could think. What would that even mean?  The upshot then is that substance dualism is not a viable theory.

Two Concepts of Transitive Consciousness

In celebration of my three years in the Blogosphere I will be reposting some of my earlier posts that I am particularly fond of. This piece was originally published May 10th, 2007.
——————————-

In his youthful exuberance Rosenthal argued that for a first-order state to count as a conscious state the first-order state had to cause the higher-order state to occur. But he has come to explicitly reject this causal requirement. He now talks about the higher-order thought ‘accompanying’ the target state. It need not have any causal connection to the first-order state at all. What this amounts to is that there are at least two different ways of thinking about the relation between the first-order state and the higher-order state, depending on whether you think intentionality is at bottom a matter of description, functional role, and holism or a matter of word-world relations, causation, and compositionality. This leads us to what I have called Q-higher-order thoughts and K-higher-order thoughts.

A K-higher-order thought is a higher-order thought that is caused by its target state and so picks it out in something like a causally complex, demonstrative way. Something like ‘I am, myself, in (dthat) red state.’ In order to count some first-order state as a conscious state it has to be the cause of the higher-order state that targets it. On the other hand a Q-higher-order thought need not be caused by the state that it represents in order to be about it and for us to be conscious of it. It picks out the target state purely by description. The Q-higher-order thought characterizes the first-order state in terms of its resemblances and differences to and from other sensory states like it. Something like ‘I, myself, am in a state that is more like pink than it is like blue and more like orange than it is like green…and so on’. So which of these should we prefer? I have been arguing here, and in response to Pete over at Brain Hammer, that this kind of higher-order theory allows us to answer the ubiquitous objection from the so-called empty higher-order thought, and more recently, that it gives us a nice response to Pete’s Unicorn argument against higher-order theories. These, I think, are already powerful reasons to think this is the right way to cast the theory, but one may wonder what else speaks in its favor.

Rosenthal gives two very quick arguments against his former K-higher-order view, both in a footnote that he added in 2005 (p56). His first argument is that requiring the causal connection between the first-order state and the higher-order state in order for the first-order state to count as a conscious state is theoretically unmotivated. The idea behind this is that the transitivity principle requires only that one be conscious of being in the first-order state; it seems to be silent on what actually causes you to become so conscious. However the causal antecedents of the higher-order state will seem to matter very much if one is influenced by the Grice-Kripke-Fodor picture of the mind. So the claim that the causal requirement is theoretically unmotivated by the transitivity principle is more a revealing fact about Rosenthal than about the higher-order theory. A causal theory of reference is itself powerfully motivated, and if it turns out to be correct, then we had better have a higher-order theory that incorporates it (that is, if we want to have a higher-order theory in the first place).

The footnote continues by pointing out that one reason why the idea that the first-order states causes the higher-order state is so intuitive is because it is a way of saving the Cartesian insight that there is an intimate connection between mental states and consciousness. If first-order states are in the business of causing higher-order states about themselves we could easily explain why so many philosophers have thought that being conscious is essential to being a mental state. It also explains why we are conscious of our mental states in an immediate, non-inferential way, which is required by higher-order theories. This looks like some kind of theoretical motivation, so what is it that he finds so problematic?

He argues that if we require that the first-order state cause the higher-order state in order for it to be a conscious state, we end up having to say that being conscious is the ‘normal condition’ for mental states. The reason that we do not want to say that being conscious is the normal condition for mental states is that it obscures the important fact that they may occur unconsciously, and that seems like a pretty normal condition for mental states to be in as well. If the normal condition of a mental state were to be conscious, then we would end up with unconscious mental states only if some special causal mechanism intervened in the normal procedure. But this is wrong because, as we saw in the first part of the paper, the transitivity principle predicts that any state can occur unconsciously. One is not more normal than the other.

But it is natural to think that some kinds of states are more normally conscious than others. For instance it is natural to think that the sensory states, and other kinds of states that we most likely share with other animals, do normally cause higher-order states about them. Or in other words, it is natural to think that sensory states more typically occur consciously, though there are plenty of times when they do not. In the case of thoughts and other more complex forms of mental phenomena it is natural to think that they would be less likely to occur consciously, being newer, perhaps, and less in the business of day-to-day survival. And there are all kinds of stories we can tell about why that is the case and how it would be implemented in a complex system like neural representation. There may be filters, thresholds, feedback networks, both inhibitory and excitatory, and who knows what else. We can do all this without falling into the trap of thinking that the sensory states must always be conscious.

Containing Phenomenological Overflow

I am going to the Association for the Scientific Study of Consciousness meeting in Toronto to do a poster presentation of the higher-order response to Block’s phenomenological overflow argument. This is important since it is a crucial step in the argument for the naturalization of qualia. The core argument is in this video.

This shows that phenomenological overflow is no threat to the higher-order theory. Is there any reason to prefer it? I was rereading Huxley’s On the Hypothesis that Animals are Automata, and Its History and I came across this very interesting passage,

If the spinal cord is divided in the middle of the back, for example, the skin of the feet may be cut, or pinched, or burned, or wetted with vitriol, without any sensation of touch, or of pain, arising in consciousness. So far as the man is concerned, therefore, the part of the central nervous system which lies beyond the injury is cut off from consciousness. It must be admitted, that, if any one think fit to maintain that the spinal cord below the injury is conscious, but that it is cut off from any means of making its consciousness known to the other consciousness in the brain, there is no means of driving him from his position by logic. But assuredly there is no way of proving it, and in the matter of consciousness, if anything, we may hold the rule, “De non apparentibus et de non existentibus eadem est ratio.”

As far as I can tell the Latin phrase there means something like “things that can’t be detected don’t exist,” though my Latin is rusty. If this is roughly right then Huxley seems to be making an argument similar to the one I was pushing at the Online Consciousness Conference. If the mesh argument doesn’t decide between a Blockian or a Rosenthalian view then we should decide the issue on philosophical grounds. One way of reading the Huxley passage is as a semi-verificationist move. Since there can be no empirical test of the matter we may treat it as a meaningless hypothesis. I would read this passage differently.

A state is phenomenally conscious when there is something that it is like for the creature that has the state. When there is nothing that it is like for the creature then there is no phenomenal consciousness. Thus when there is no what-it-is-likeness around we can assume that there is no phenomenal consciousness hanging about. To imagine otherwise is to imagine that there is something that it is like for me that is not like anything for me…and that sounds like a contradiction.

Importantly, none of this is to deny that unconscious pains have qualitative properties. These qualitative characters, when unconscious, do not have any phenomenal feel, but they do resemble and differ from other qualitative characters in the right ways and they have causal connections as usual. It is only when we are conscious of them that they have the phenomenology we associate with pain. True, this seems to violate our common sense thinking about pains, though there are some platitudes that cut the other way, which again illustrates that folk theory is often inconsistent.

As Aristotle recommended we must try to save as many of the most basic pre-theoretical platitudes as we can but it may be the case that some will have to go; perhaps the common sense idea that there are unconscious pains that are phenomenally conscious is one of them. The claim turns out to be either paradoxical or merely terminological.

HOT Block

In celebration of my three years in the Blogosphere I will be reposting some of my earlier posts that I am particularly fond of. This piece was originally published July 11th, 2007.
——————————-

I was recently reading Block’s forthcoming BBS paper Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience. It is an interesting paper and I am looking forward to seeing the commentary. The basic puzzle is one that I have heard him talk about before: how could we tell whether the transitivity principle is right or not? What would empirically decide whether there can be a phenomenally conscious state that we are unconscious of having? So, to take Block’s example, suppose that we have a person who is subliminally perceiving a face and there is activation in that person’s fusiform face area. Since the subject sincerely reports that they do not see a face it seems we can agree that this is the sensory state in the absence of the higher-order state.

How do we describe this situation? Do we say that the face experience is phenomenally unconscious? That there is nothing that it is like to see the face? Does it, as Rosenthal would say, have unconscious qualitative properties? Or do we say that there is something that it is like for the person to see the face but that they are unconscious of what it is like for them? The puzzle is that both theories make the same prediction about what the person will report (they don’t see a face) and so we need to find some other way to distinguish the two claims empirically. I don’t really want to talk about Block’s argument that phenomenology overflows our access to it (unless someone does want to talk about it), as all I could do is repeat the Rosenthal line that the evidence that Block presents (i.e. the change blindness stuff) isn’t good evidence because the subjects can report, as Block acknowledges, that they saw some letters or ‘a rectangle’. Rosenthal can explain this on his account in the following way. In one case we are conscious of the first-order experience as just some rows of letters or as just a rectangle, while in the other we are conscious of the experience as being a row of some specific letters or shapes. So the fact that subjects report that they have some phenomenally conscious experience, as Block rightly points out, needn’t be evidence for his claim that there is phenomenology without Awareness.

I think that if one steps far enough back from this debate one can see that it is the distinction between analytic and psycho-functionalism that is causing a lot of the local flare-ups, and that this has some bearing on the empirical testability issue and the debate with Mandik that I have been suffering through, but I will leave that for another day.

What I do want to talk about is Block’s dismissal of Rosenthal’s kind of higher-order theory. He makes it very clear that he thinks that the higher-order thought theory is not even a candidate for a serious theory of phenomenal consciousness. As I have said many times before, I do not know if the higher-order thought theory is true or not, but it is at least not obviously false. It is a well formulated theory that could turn out to be right. So what’s Block’s problem?

He makes his case at the beginning of the paper in this rather longish quote.

We may suppose that it is platitudinous that when one has a phenomenally conscious experience, one is in some way aware of having it. Let us call the fact stated by this claim – without committing ourselves on what exactly that fact is – the fact that phenomenal consciousness requires Awareness. Sometimes people say Awareness is a matter of having a state whose content is in some sense “presented” to the self or having a state that is “for me” or that comes with a sense of ownership or that has “meishness” (as I have called it; Block 1995a).

Very briefly, three classes of accounts of the relation between phenomenal consciousness and Awareness have been offered. Ernest Sosa (2002) argues that all there is to the idea that in having an experience one is necessarily aware of it is the triviality that in having an experience, one experiences one’s experience just as one smiles one’s smile or dances one’s dance. Sosa distinguishes this minimal sense in which one is automatically aware of one’s experiences from noticing one’s experiences, which is not required for phenomenally conscious experience. At the opposite extreme, David Rosenthal (2005) has pursued a cognitive account in which a phenomenally conscious state requires a higher order thought to the effect that one is in the state. That is, a token experience (one that can be located in time) is a phenomenally conscious experience only in virtue of another token state that is about the first state. (See Armstrong 1977, 1978; Carruthers 2000; Lycan 1996 for other varieties of higher order accounts.) A third view, the “Same Order” view says that the consciousness-of relation can hold between a token experience and itself. A conscious experience is reflexive in that it consists in part in an awareness of itself. (This view is discussed in Brentano 1874/1924; Burge 2006; Byrne 2004; Caston 2002; Kriegel 2005; Kriegel & Williford 2006; Levine 2001, 2006; Metzinger 2003; Ross 1961; Smith 1986).

So he is telling us here that his target in the paper is people who think that there is no phenomenology without awareness. Now we could (and should) quibble with the way that Block casts Rosenthal’s theory. For instance when he says that it is the view that a token experience that is located in time is a conscious state in virtue of a higher-order thought that is about it. That is not quite right, as I have spent a lot of time arguing (for instance, Consciousness, Relational Properties, and Higher-Order Theories, Consciousness is Not a Relational Property, and The Function of Consciousness in Higher-Order Theories). But waive that for the moment.

He goes on in the next paragraph to say,

The same order view fits both science and common sense better than the higher order view. As Tyler Burge (2006) notes, to say that one is necessarily aware of one’s phenomenally conscious states should not be taken to imply that every phenomenally conscious state is one that the subject notices or attends to or perceives or thinks about. Noticing, attending, perceiving, and thinking about are all cognitive relations that need not be involved when a phenomenal character is present to a subject. The mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field. The ratio of synapses in sensory areas to synapses in frontal areas peaks in early infancy, and likewise for relative glucose metabolism. (Gazzaniga et al. 2002, p. 642–43). Since frontal areas are likely to govern higher-order thought, low frontal activity in newborns may well indicate lack of higher-order thoughts about genuine sensory experiences.

The relevance of these points to the project of the paper is this: the fact of Awareness can be accommodated by either the same order view or the view in which Awareness is automatic, or so I will assume. So, there is no need to postulate that phenomenal consciousness requires cognitive accessibility of the phenomenally conscious state. Something worth calling “accessibility” may be intrinsic to any phenomenally conscious state, but it is not the cognitive accessibility that underlies reporting.

He is making it very clear that he thinks that he has given decisive reasons for dismissing the higher-order thought theory. Has he? Not surprisingly, I don’t think that he has. Instead he displays a curious prejudice against the higher-order thought theory.

Let us look at what he says. In the first sentence he says that the same-order view, a view like Kriegel’s, is better suited to common sense and science. What follows that remark then looks like what he takes to be common sense evidence against the higher-order thought view and in favor of the same-order view, followed by some scientific evidence that illustrates the same point. The common sense evidence, evidently, rests on our intuition that “[t]he mouse may be conscious of the cheese that the mouse sees, but that is not to say that the mouse is conscious of the visual sensations in the visual field that represent the cheese or that the mouse notices or attends to or thinks about any part of the visual field.” But that is certainly true, and neither Rosenthal, nor any other higher-order theorist, denies it! The mouse is conscious of the cheese by having a first-order sensory state that represents the cheese, so it can be conscious of the cheese without any higher-order thoughts at all.

Presumably, though, what Block means here is that the mouse can have a phenomenally conscious experience of the cheese without having a thought about its first-order mental states. But that is to simply beg the question against Rosenthal. He has a story about why you wouldn’t notice the higher-order thought were it there, and yet how we can still have some evidence that they do occur, and also a story about how the concepts that occur in the higher-order thoughts about sensory states would be easy to come by. So easy to come by, in fact, that animals could probably get them. So it is not crazy or absurd to think that the mouse might have a conscious experience of the cheese by having a higher-order thought to the effect that it is seeing cheese. So the common sense evidence against the higher-order thought theory isn’t any good.

What about the scientific evidence? The suggestion here is that there is empirical evidence that newborns have very low frontal activity and that this would mean that they do not have higher-order thoughts and so do not have any conscious experiences at all. Therefore the higher-order thought theory is at odds with scientific evidence. But there is a suppressed premise in Block’s argument. Namely, the premise that it is obvious that newborn infants do in fact have conscious experiences. Now, granted, it does seem obvious, what with all the kicking and screaming and facial gesticulation and all, but that is really just more question begging. According to Rosenthal, if it turned out that babies lack the part of the brain that we KNOW is responsible for higher-order thoughts, then he would be committed to saying that newborn infants lack phenomenally conscious states. And if we could show that that was absurd then his theory would be a bust. He would pack it in. But he challenges the claim on both counts.

First, there is some evidence that babies lack the right part of the brain for higher-order thoughts, but Rosenthal also claims that there is some evidence that they do have it, and we are not ABSOLUTELY sure about the role that the frontal cortex plays. The science is not in, or at least it is not a lock like Block thinks. Secondly, it is not an absurd claim to say that newborn infants lack phenomenally conscious experience. According to Rosenthal an unconscious pain will play all of the same roles that the conscious one does. It will cause kicking and screaming and hootin’ and a-hollerin’ and facial contortions and the whole nine. We can even say that it is a bad thing and be motivated to stop it, all the while maintaining that there is nothing that it is like for the infant to have the pain. Of course Block finds this implausible and the point of the paper is to show that this doesn’t happen, but the point is that the baby stuff does not cut against Rosenthal in the way that Block thinks. Or at least he hasn’t made it clear here why it does. So neither the common sense nor the scientific evidence merits such a quick dismissal of Rosenthal’s view.

Finally, why does Block think that this evidence is more favorable for the same-order view? Block seems to assume that the same-order view does not posit a thought-like Awareness and so is more in line with his intuition about the mouse, but, at least for people like Kriegel and Gennaro, the higher-order content is thought-like. So if Rosenthal’s view is too cognitive, then so is the same-order view. Or at least there is no reason to think otherwise. And what about the scientific evidence? Block seems to assume that since the first-order and higher-order content are part of the same state, the frontal cortex will not play a role and so the same-order view would not be affected by the experimental evidence showing that infants have low activity there. But that isn’t obvious. On Kriegel’s view, for instance, the two contents are bound together by a ‘psychologically real’ process. But this does not require that the two contents be in the same part of the brain. In fact he explicitly appeals to synchrony as a candidate for the psychologically real process and points out that it allows for binding of contents in segregated parts of the brain as one of its virtues.

So either Block’s list of positions to consider just got reduced to one (Sosa’s) or it is back up to three.