Why I Am Not a Type-Z Materialist

I have been hearing various and sundry rumors about my being a type-Q materialist or perhaps even a type-Z materialist, and I want to set the record straight. In this post I will talk about some of the things that Dave Beisecker says in his recent Zombies and the Paradox of Phenomenal Consciousness (JSC 17(3-4)).

Beisecker’s claim is that we are zombies. When Chalmers and others conceive of creatures physically just like us but lacking consciousness, they actually end up conceiving of the actual world! Zombies would be just as convinced that they were not zombies as Chalmers and I are convinced that we are not zombies, but they would be wrong. That, according to Beisecker, is our actual position. On his view, what is conceivable but not possible are the ‘supermaterialist’ qualia that Chalmers thinks he has.

One thing to note is that Beisecker denies that the mind-brain identities of his materialism are necessary. He claims that the identities are true here but not in all possible worlds. We disagree about this. As I have previously said, I accept both that identities are necessary and that we are justified in believing this.

It also seems to me that type-Z materialism is really just eliminativism about consciousness of the type-A variety. I am acquainted with my qualia in a way that makes me all but certain that I have them. I agree that if zombies were conceivable they would think that they had qualia like I do, that they would be wrong, and that they would not know they were wrong. But I think I know that I am not wrong, and so I know that I am not a zombie. This is important because in the shombie argument I conceive of merely physical creatures that nonetheless have the same conscious experience that I do; and by that I mean not what Beisecker means but the kind of consciousness that Chalmers means to be talking about. That’s what I can conceive of as being physical: real consciousness! Not zombie consciousness! Of course that’s physical.
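To put the shombie argument on the table in skeletal form (a rough schematization, with the premise labels mine; P abbreviates the complete physical truth about the world and Q the claim that there is consciousness in Chalmers’ sense):

(S1) P & Q, with nothing nonphysical, is conceivable (a shombie world).
(S2) If P & Q, with nothing nonphysical, is conceivable, then it is possible.
(S3) If P & Q, with nothing nonphysical, is possible, then consciousness is physical and dualism is false.
(SC) So, dualism is false.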

Beiseker warns against this strategy saying,

The overall lesson for materialists is that they must be careful not to engage the argument in the supermaterialist’s own preferred parlance, for when they do, a smart opponent like Chalmers (or Alter) is able to paint them into uncomfortable corners. Consider, for instance, my earlier suggestion that the problem with the conceivability argument lies in the vicinity of (P3)[If P & ~Q is possible, then materialism is false]. That is true, but only if we are prepared to adopt a zombie’s perspective or way of talking. Things look otherwise if we use terms in the way that Chalmers would prefer. For according to his manner of speaking, when type-Z materialists suggest that our own epistemic and conceptual perspective is no different from that of zombies, it looks instead like they are ‘really’ committed to the idea that P & ~Q is true of our own world. Thus it would seem that type-Z materialism must really be committed to…an implausible (‘Churchlandish’) type-E eliminativism about consciousness. But that of course isn’t quite right, for zombies (and their type-Z advocates), as opposed to eliminativists, do not and need not eschew everyday, ordinary talk about consciousness. Their project instead is to reconceive it in an appropriately materialist fashion.

Here Beisecker is making his complaint that people like Chalmers have tacitly assumed that our world (a.k.a. the actual world) is a world where qualia are superphysical. I agree with the complaint, but not for the reasons given here. First note that Beisecker says he is not an eliminativist because he doesn’t advocate getting rid of common-sense folk-psychological concepts in favor of neuroscience concepts. His talk of ‘reconceiving’ qualia talk in a materialist fashion sounds like the type-A move of defining consciousness to be physical without the elimination of folk-psychological concepts. The problem with type-Z materialism is not that zombies beg the question; it is that it fails to take account of the first-person data that I and many others have. True, a zombie would say just that, but I, and not it, know that I am right, because I have the relevant experiences (and my zombie wouldn’t, which is why it wouldn’t know, and would be wrong).
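For reference, here is the conceivability argument in the premise numbering Beisecker is using (the standard reconstruction, with P again the complete physical truth and Q a phenomenal truth):

(P1) P & ~Q is conceivable.
(P2) If P & ~Q is conceivable, then P & ~Q is metaphysically possible.
(P3) If P & ~Q is possible, then materialism is false.
(C) So, materialism is false.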

Once we have the shombie case in hand, and see that it is not a type-Z world that is being conceived, we can see why the zombie argument tacitly assumes that consciousness is nonphysical at the outset. Just as in the physicalist case, we can divide responses to the shombie argument into type-A, -B, and -C. The type-A response is to deny the conceivability of shombies; the type-B response is to admit their conceivability but deny their possibility; the type-C response is to admit that they seem conceivable but deny that they are ideally conceivable. Chalmers will not opt for the type-B response because of his views about the link between conceivability and possibility, so that leaves type-A and type-C. Type-C will more than likely seem dubious to him in the dualism case, since he thinks it is dubious in the physicalist case. That leaves type-A. But the type-A dualist just defines qualia as nonphysical. There is no other reason, for all that can be known a priori, for thinking that shombies are inconceivable.

Beisecker goes on to say:

Something similar is going on with various ‘reverse-zombie’ arguments (see Brown 2010, Frankish 2007, Stalnaker 2002, Balog 1999). Such reverse-zombie exercises strongly suggest the unsoundness of the conceivability argument, without specifying exactly where the original argument goes wrong. In that respect, they resemble Gaunilo’s rejection of the ontological argument. Following Stalnaker, Brown suggests that the weakness in the conceivability argument lies with (P1); zombies only seem to be conceivable. Balog suggests instead that the culprit is (P2), while by way of reply, Chalmers maintains that reverse-zombie arguments are not parallel to the original conceivability argument after all. They present us with scenarios that, according to Chalmers, are not directly conceivable but rather conceivable only ‘at arms length’ or in some attenuated ‘meta’ sense.

Here Beisecker presents Chalmers as a type-A dualist (which I suspect is probably right), and it is true that Kati and I disagree on where the problem in the argument lies. But that is because the response to the type-B physicalist works, or so it seems to me. At any rate, my overall argumentative strategy was to engage the dualist as much as possible, and so I wanted to grant Chalmers the connection between conceivability and possibility. Given that, and the intuitive claim that zombies and shombies are equally prima facie conceivable, it follows that only one of them is ideally conceivable and the other merely seems conceivable to us now. Chalmers thinks it is shombies that merely seem conceivable; I think it is zombies. Who’s right? At this point we need more than a priori arguments, which is why I think they should be deprioritized. If dualism turns out to be actually true then of course zombies are the ones that are ideally conceivable, while if physicalism turns out to be true then it is shombies that are ideally conceivable. Since we can’t tell right now, we must wait for further evidence. Just as type-A physicalism must be set aside, so too must type-A dualism. That leaves us with type-C.

Beisecker continues:

However, this reply [by Chalmers to the reverse-zombie arguments] has a vague whiff of the paradoxical to it, for the reverse perspective from which things are only conceivable in this attenuated ‘meta’ sense is at the same time the very materialistic scenario that the original zombie argument so stridently insists is conceivable in a much stronger sense. Once again, we come face-to-face with the remarkably ambivalent attitude proponents of the original conceivability argument adopt towards zombies. While they are conceivable, they aren’t conceivably actual.

Now to this I object! When Chalmers (there really are too many Daves out there!) says that shombies may be negatively conceivable but are not positively conceivable, he does not think of shombies as zombies! That would indeed be strange; he would then in effect be saying that zombies were not really conceivable. That is not what he is saying. He is saying that he can’t see how merely physical creatures could have consciousness like ours. Shombies are not merely zombies stipulated to have consciousness like ours!

Right after this, Beisecker goes on to say something that I agree with very much:

One might well take the whole point of reverse-zombie considerations to be that of showing that the notions of direct and meta-conceivability are themselves up for grabs. For whether or not one takes some situation to be directly conceivable or conceivable only ‘at arms length’ depends upon one’s presuppositions about the nature of the actual world.

This is for the most part exactly what I think. Whether one finds zombies or shombies to be ‘really’ conceivable depends on how one thinks the actual world is; but since we do not yet know how the actual world is, the a priori arguments do nothing but reveal where our intuitions lie (which in turn reflect the theories that we accept).

Empiricism and A Priori Justification

I sometimes get asked why I take a priori reasoning seriously; after all, empiricists should eschew such talk! Real empiricists do not engage rationalists on their own turf…in true type-Q style I should deny that there is an a priori/a posteriori or an analytic/synthetic distinction, and deny as well that talk of possible worlds is meaningful. But I don’t.

Let us define A Priori knowledge as follows:

APK =def justified, necessarily true belief

Let us define A Priori justification as follows:

APJ =def justification that is not based on experience (i.e. not based on sensing, perception, memory, or introspection)
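Put schematically (the notation is mine, not part of the definitions), for a subject S and proposition p:

APK_S(p) =def Bel_S(p) & Just_S(p) & [](p)
APJ_S(p) =def Just_S(p), where the justification does not rest on sensing, perception, memory, or introspection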

A Priori justification usually takes the form of a ‘rational seeming’, which phenomenologically is a kind of ‘seeing’ that something could or couldn’t be the case. One has an immediate intuition that the proposition couldn’t possibly be true (or false). So, for example, when I consider simple propositions like A=A, ((P -> Q) & P) -> Q, and (P v ~P) <-> ~(P & ~P), I find it unimaginable that they could be false. It is this phenomenology which leads people to argue along the lines of ++

++ APJ -> APK

Since it seems unimaginable to one that P is false (or true), one concludes that it must be true (i.e. that it is necessarily true). It is also taken to be the case that the history of philosophy has demonstrated that experience cannot teach us that something is necessary, and so APJ is the only route to APK.
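Spelled out with these definitions, the ++ inference runs something like this (again, the schematization is mine):

(1) It rationally seems to S that ~p is inconceivable. [the phenomenology]
(2) So S has APJ with respect to p. [the seeming is not based on sensing, perception, memory, or introspection]
(3) So [](p). [the ++ step]
(4) So S has APK with respect to p. [by the definition of APK, given that S believes p]

It is step (3), of course, that the empiricist story below is meant to call into question.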

Now, as an empiricist I want to deny that we have a priori knowledge, but I want to allow that there is a priori justification. In other words, I want to allow that rational seemings can provide justification even though they don’t provide (necessary) knowledge. This is because rational seemings are, according to me, ultimately themselves dependent on how the world turns out. Suppose for the sake of argument that the simple propositions above are not in fact necessarily true. Suppose that they are just extremely well confirmed empirical generalizations. That is, suppose that the regularities of our Humean world regularly, and up until now reliably, provide us with the kinds of experiences that justify instances of these propositions. Suppose further that you have organisms evolving in this environment. These organisms will likely develop systems that encapsulate these propositions. To these organisms it will seem unimaginable that these propositions could be false (or true), but they are not necessary truths (ex hypothesi), and they are ultimately justified by the organisms’ ancestors’ experiences. But these propositions are true; it’s just that they aren’t necessarily true. So one can have knowledge that has a priori justification but that is not a priori knowledge. Now, I am not here trying to give an argument for this view. I only mean to be pointing out that it is perfectly compatible with the empiricist view, and so, if one is careful, one can be an empiricist and still think that we can have knowledge on the basis of a priori reasoning.

So far I have been talking only about knowledge of how the world actually is. Nothing has been said about the way it could be. Reasoning about modality seems to me to be fundamentally rooted in our ability to imagine or conceive of various situations. Conceivability has traditionally been thought to be a guide to what is possible, bounded only by what is contradictory. That this is true is certainly conceivable (just as the empiricist version above is). We may not know that it is true, but it does seem like a possibility. So, for instance, it is almost impossible to see what it could even mean to say that [](A=A) is false…that would have to mean that there was some thing picked out by ‘A’ which was identical to itself in some conceivable situations but not identical to itself in others. That just intuitively seems contradictory! But, wait: we can have rational seemings in the absence of necessary truth. Famously, when some people offered “proofs” of the parallel postulate, these were accepted as correct until a mistake in the proof was discovered. If so, then there was a time when people had a priori justification for something which turns out to be demonstrably false. So perhaps the intuitions that justify our belief in [](A=A) and the like are also suspect. As a counterexample, David Rosenthal argues that identity statements like [](A=A) beg the question by assuming the notion of rigid designation; if one doesn’t assume rigid designation, the identity is of course not necessary. But it seems to me that the 2-D response has legs here: we can have both. Intuitions about rigidity are explained by the secondary intension and the corresponding kind of possibility; intuitions about the non-necessity of identities are explained by the primary intension and the corresponding kind of possibility. In short, then: as long as we see rational intuition as defeasible justification (defeasible in particular by experience), we can accept the a priori justification of [](A=A) in the absence of defeaters, which we have yet to find anyway.
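For reference, the standard derivation of the necessity of identity (the familiar Barcan/Kripke argument, which is just what Rosenthal’s rigid-designation worry targets) runs as follows:

(1) (x) [](x = x) [necessity of self-identity]
(2) (x)(y) [x = y -> ([](x = x) -> [](x = y))] [an instance of Leibniz’s law]
(3) So (x)(y) [x = y -> [](x = y)] [from (1) and (2)]

Rosenthal’s point can be put as the observation that step (2) is plausible only if the terms flanking the identity are read rigidly; give that up and the derivation stalls.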

To sum up, then: I think I can know a priori that, for any A, A=A, but not that [](A=A); yet even so I think that I have good justification for believing [](A=A) and []~(P & ~P), and so we have good justification for modal talk.

The Singularity, Again

Yesterday I attended Dave Chalmers’ session of the Mind and Language Seminar, where we discussed his new paper on the singularity. I had previously seen him give this talk at CUNY, and I was looking forward to the commentary from Jesse and Ned and the discussion that followed.

Jesse talked for an hour, summarizing the argument and making some objections. Two stood out to me. The first was his claim that Human extinction is more likely than the singularity (he outlined some cheery scenarios, including alien attack, global pandemic, a science experiment gone bad, and a depressed teenager with a nanonuke). Jesse’s other objection was to Dave’s argument that a functional isomorph of a conscious entity would itself be a conscious entity. Dave uses his dancing qualia/fading qualia argument here. The basic idea is that if one were to actually undergo a gradual swapping of neurons for computer chips, it seems counterintuitive to think that one’s consciousness would cease at some point, or that it would fade out. In the context of the singularity this comes up when we consider uploading our minds into a virtual environment: will the uploaded virtual entity be conscious? Dave thinks that the fading qualia/dancing qualia intuitions give us good reason to think that it will. The people who upload themselves to the virtual world will be saying things like ‘come on in; it’s fun in here! We’re all really conscious, we swear!’, so why wouldn’t we think that the uploaded entities are conscious? Jesse worried that this begs the question against someone, like him and Ned, who thinks that there is something about biology that is important for consciousness. So, yeah, the uploaded entity says that it is conscious, but of course it says that it’s conscious: we have stipulated that it is a functional isomorph! Jesse concluded that we could never know whether the functional isomorph was conscious or not. Dave’s position seemed to be that when it comes to verbal reports, and the judgments they express, we should take them at face value, unless we have some specific reason to doubt them.

During discussion I asked Dave whether this was the best that we could do. Suppose that we uploaded ourselves into the virtual world for a *free trial period* and then downloaded ourselves back into our meat brain. Suppose that we had decided that while we were uploaded we would do some serious introspection, and that after we had done this we sincerely reported remembering that we had had conscious experience while uploaded. It seems to me that this would be strong evidence that we did have conscious experience while uploaded. Now, we can’t rule out the skeptical hypothesis that we are erroneously remembering qualia that we did not have, but I suggested that this is no different from Dave’s view of our actual relationship to past qualia (as came out in our recent discussion of a similar issue). So, I cannot rule out with certainty the hypothesis that I did not have qualia five minutes ago, but my memory is the best guide I have, and the skeptical hypothesis is not enough to show that I do not know that I had qualia; so too, in the uploaded case, I should treat my memory as good evidence that I was conscious in the uploaded state. Jesse seemed to think that this still would not be enough evidence, since the system had undergone such a drastic change. He compared his position to Dennett’s on dreams. According to Dennett, we think we have conscious experiences in our dreams based on our memories of those dreams, but we are mistaken: we do not have conscious experiences in our dreams, just the beliefs about them upon waking. This amounts to a kind of disjunctivism.

I still wonder if we can’t do better. Suppose that while we are uploaded, and while we are introspecting a conscious experience, we ask ourselves if it is the same as before. That is, instead of relying on memory outside of the virtual world, we rely on our memory inside the virtual environment. Of course the zombie that Jesse imagines we would be would say that it has conscious experience and that it was introspecting, etc., but if we were really conscious while uploaded we would know it.

Ned’s comments were short and focused on the possibility that Human intelligence might be a disparate “bag of tricks” that won’t explode. A lot of the discussion centered on issues related to this, but I think that Dave’s response is sufficient here, so I won’t really rehash it…

I also became aware of this response to Dave from Massimo Pigliucci, and I want to close with just a couple of points about it. In the first place, Pigliucci demonstrates a very poor grasp of the argument that Dave presents. He says:

Chalmers’ (and other advocates of the possibility of a Singularity) argument starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near future explosion of intelligence. Too bad that, as any student of statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked a question along these lines to Chalmers in the Q&A and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)

Now, having been at the event in question, I can’t rightly recall whether Dave used the word ‘extrapolation’ or not, but I can guarantee that his argument does not depend on it. Dave is very clear that it is not extrapolation from the “successes” of current AI that grounds his belief that we will develop Human-level AI in the near-ish future. Rather, his argument is that intelligence of the Human variety was developed via evolution, a ‘blind’, dumb process. It seems reasonable to assume that we could do at least as good a job as a blind, dumb process, doesn’t it? If we can achieve this by an extendable method (for instance, artificial guided evolution) then we would be able to extend this Human-level AI to one that is superior to ours (the AI+) via a series of small increments. The AI+ would be better at designing AI, and so we would expect them to be able to produce an AI++. This is a very different argument from the simple extrapolation from the doubling of computing speed that Pigliucci lampoons. I don’t know which colleagues Pigliucci consulted, but had he asked me I could have set him straight.
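As best I can reconstruct it (this is my paraphrase of the structure of the paper, not a quotation), the argument runs:

(1) Absent defeaters, there will be Human-level AI before too long (blind evolution produced Human intelligence, and we can do at least as well, e.g. by artificial guided evolution).
(2) If there is Human-level AI produced by an extendable method, then soon after there will be AI+ (AI somewhat beyond Human level), by extending that method.
(3) If there is AI+, then soon after there will be AI++ (AI vastly beyond Human level), since AI+ is better than we are at designing AI.
(C) So, absent defeaters, there will be AI++ before too long.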

Finally, while it is certainly true that Dave is in no need of defending by me, and while I am the last person with the moral high ground in matters of personal conduct, I have to say that Pigliucci shames himself with his adolescent ad hominem abuse; that is truly behavior unbecoming academic debate. So too, it is bizarre to think that Dave is the reason philosophers have a bad rep, when in fact it is behavior like Pigliucci’s that is more the culprit. Dave is among those who represent philosophy at its best: smart, intellectually curious people thinking big and taking chances, exploring new territory, and dealing with issues that have the potential to profoundly impact Human life as we know it…all with grace and humility. You may not agree with his conclusions, or his methods, but only a fool doubts the rigor that he brings to any subject he discusses.

Higher-Order Theories of Consciousness and the Phenomenology of Belief

Next week I am heading up to SUNY Fredonia to give two talks as part of the Young Philosophers Lecture Series. Here is a rehearsal of the first talk, which is my most recent attempt to show that Rosenthal’s HOT theory is committed to cognitive phenomenology.

[Embedded video no longer available.]

Unconscious Introspection and Higher-Order Thoughts

As I noted previously, I haven’t been all that good at keeping up with the NYU Mind and Language seminar, and I am going to have to miss this one coming up because of committee meetings :(, which is too bad since this week’s speaker is Alvin Goldman. The paper is a very long defense of simulation theory, which should come as no surprise. I must confess that I have never found the debate between theory-theory and simulation theory to be very interesting, but one of the interesting things that I learned from Goldman’s paper is his view of unconscious introspection; in the appendix to the paper, where he defends this notion, he concludes by presenting a nice puzzle for those of us who like higher-order theories of consciousness.

To put it simply, Goldman argues that there is empirical evidence suggesting that in order for me to attribute a mental state, say disgust, to you, I have to internally simulate, or mirror, the state and then use that mirrored state to attribute the state to you, and that this can happen even though the mirrored state is unconscious. In order for me to introspect the mirrored mental state I would presumably token a higher-order state of the kind which would, if the higher-order theory is right, make the first-order state conscious. But we know that the state is not conscious. What are we to say about this, given that the empirical evidence is as Goldman suggests?
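Boiled down, the puzzle looks something like this (my formulation, not Goldman’s):

(1) When I attribute disgust to you, I token a mirrored disgust state M, and M is unconscious. [Goldman’s empirical claim]
(2) To attribute M, I introspect it, which involves tokening a higher-order state about M.
(3) On the higher-order theory, a suitable higher-order thought about M makes M conscious.
(4) But M is not conscious.

Something has to give.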

One thing we could do, as he notes, is to take this as an argument against the higher-order theory. Goldman does not seem to want to do this; nor do I. What are we to do, then? He glosses a couple of different suggestions but concludes by saying that he isn’t sure what the right answer to this puzzle is. On reading this it occurred to me that the puzzle could be solved by denying that the higher-order thought one has when attributing the state to others has the same content as the one that attributes the state to oneself. When one has a suitable higher-order thought to the effect that one is in a mental state, the HOT represents the state as the one that one is, oneself, actually in now; it represents that state as present. However, when one is introspecting an unconscious mental state for the purpose of attributing it to someone else, one is arguably conscious of the state not as being present but rather as the state that one is attributing. Being conscious of a mental state in this way does not make the state one is conscious of a conscious mental state. In short, we have many thoughts about first-order states that do not make those states conscious, because it is only a certain kind of thought that makes us conscious of our mental states in the appropriate way. This, it seems to me, solves the puzzle.
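Schematically, the two kinds of content I have in mind are (labels mine):

(a) Conscious-making HOT: I myself am in M now.
(b) Attributive thought: M is the state that S is in (where S is someone else).

A thought with content (b) is about M, but it does not represent M as a state one is oneself in now, and so, on the suggestion above, it does not make M conscious.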

Consciousness & the Tribunal of Experience

Here is a video of my rehearsal for a recent talk I gave to the psychology department at Columbia University as part of their Cognitive Lunch speaker series. This rehearsal led me to add a few slides and change a few things here and there, but I don’t have time to re-record it. Overall it is OK, but I think the actual talk went better…ah well…

[Embedded video no longer available.]

Download the video in Ogg format or subscribe to my podcast