Yesterday I attended Dave Chalmers’ session of the Mind and Language Seminar where we discussed his new paper on the singularity. I have previously seen him give this talk at CUNY and I was looking forward to the commentary from Jesse and Ned and the discussion that followed.
Jesse talked for an hour summarizing the argument and making some objections. Two stood out to me. The first was his claim that human extinction is more likely than the singularity (he outlined some cheery scenarios, including alien attack, global pandemic, a science experiment gone bad, and a depressed teenager with a nanonuke). Jesse’s other objection was to Dave’s argument that a functional isomorph of a conscious entity would itself be a conscious entity. Dave uses his dancing qualia/fading qualia argument here. The basic idea is that if we were to actually undergo a gradual swapping of neurons for computer chips, it seems counterintuitive to think that my consciousness would cease at some point, or that it would fade out. In the context of the singularity this comes up if we consider uploading our minds into a virtual environment; will the uploaded virtual entity be conscious? Dave thinks that the fading qualia/dancing qualia intuitions give us good reason to think that it will. The people who upload themselves to the virtual world will be saying things like ‘come on in; it’s fun in here! We’re all really conscious, we swear!’ So why wouldn’t we think that the uploaded entities are conscious? Jesse worried that this begs the question against people, like him and Ned, who think that there is something about biology that is important for consciousness. So, yeah, the uploaded entity says that it is conscious, but of course it says that it’s conscious! We have stipulated that it is a functional isomorph! Jesse concluded that we could never know whether the functional isomorph was conscious or not. Dave’s position seemed to be that when it comes to verbal reports, and the judgments they express, we should take them at face value, unless we have some specific reason to doubt them.
During discussion I asked if Dave thought this was the best that we could do. Suppose that we uploaded ourselves into the virtual world for a *free trial period* and then downloaded ourselves back into our meat brain. Suppose that we had decided that while we were uploaded we would do some serious introspection, and that after we had done this we sincerely reported remembering that we had had conscious experience while uploaded. It seems to me that this would be strong evidence that we did have conscious experience while uploaded. Now, we can’t rule out the skeptical hypothesis that we are erroneously remembering qualia that we did not have. I suggested that this is no different from Dave’s view of our actual relationship to past qualia (as came out in our recent discussion of a similar issue). So, I cannot rule out with certainty the hypothesis that I did not have qualia five minutes ago, but my memory is the best guide I have, and the skeptical hypothesis is not enough to show that I do not know that I had qualia; so too in the uploaded case I should treat my memory as good evidence that I was conscious in the uploaded state. Jesse seemed to think that this still would not be enough evidence, since the system had undergone such a drastic change. He compared his position to Dennett’s on dreams. According to Dennett, we think we have conscious experiences in our dreams based on our memories of those dreams, but we are mistaken. We do not have conscious experiences in our dreams, just the beliefs about them upon waking. This amounts to a kind of disjunctivism.
I still wonder if we can’t do better. Suppose that while we are uploaded, and while we are introspecting a conscious experience, we ask ourselves if it is the same as before. That is, instead of relying on memory outside of the virtual world, we rely on our memory inside the virtual environment. Of course the zombie that Jesse imagines we would be would say that it has conscious experience and that it is introspecting, etc., but if we were really conscious while uploaded we would know it.
Ned’s comments were short and focused on the possibility that human intelligence might be a disparate “bag of tricks” that won’t explode. A lot of the discussion focused on issues related to this, but I think that Dave’s response is sufficient here so I won’t really rehash it…
I also became aware of this response to Dave from Massimo Pigliucci, and I want to close with just a couple of points about it. In the first place, Pigliucci demonstrates a very poor grasp of the argument that Dave presents. He says,
Chalmers’ (and other advocates of the possibility of a Singularity) argument starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near future explosion of intelligence. Too bad that, as any student of statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked a question along these lines to Chalmers in the Q&A and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)
Now, having been at the event in question, I can’t rightly recall whether Dave used the word ‘extrapolation’ or not, but I can guarantee that his argument does not depend on it. Dave is very clear that it is not extrapolation from the “successes” of current AI that grounds his belief that we will develop human-level AI in the near-ish future. Rather, his argument is that intelligence of the human variety was produced by evolution, a ‘blind’ process that is dumb. It seems reasonable to assume that we could do at least as good a job as a blind, dumb process, doesn’t it? If we can achieve this by an extendable method (for instance, artificial guided evolution), then we would be able to extend this human-level AI to one that is superior to ours (the AI+) via a series of small increments. The AI+ would be better at designing AI, and so we would expect it to be able to produce an AI++. This is a very different argument from the simple extrapolation from the doubling of computing speed that Pigliucci lampoons. I don’t know which colleagues Pigliucci consulted, but had he asked me I could have set him straight.
Finally, while it is certainly true that Dave is in no need of defending from me, and while I am the last person who can claim the moral high ground in matters of personal conduct, I have to say that Pigliucci shames himself with his adolescent ad hominem abuse; that is truly behavior unbecoming of academic debate. So too it is bizarre to think that Dave is the reason philosophers have a bad rep, when in fact it is behavior like Pigliucci’s that is more the culprit. Dave is among those who represent philosophy at its best: smart, intellectually curious people thinking big and taking chances, exploring new territory and dealing with issues that have the potential to profoundly impact human life as we know it…all with grace and humility. You may not agree with his conclusions, or his methods, but only a fool doubts the rigor that he brings to any subject he discusses.
7 thoughts on “The Singularity, Again”
I appreciate the comments. I don’t think I have a poor grasp of Chalmers’ argument, I think he has a very poor argument (though not as bad as his zombie stuff).
And please, will you people develop a bit of sense of humor? A blog post is *not* an academic forum, it’s okay to joke about people to introduce a bit of levity. And if you want to get technical, my comment on his haircut was *not* an ad hominem, since I did not say that his arguments were bad *because* of his hair style…
Hi Massimo, thanks for stopping by and for the comment.
The haircut comment wasn’t what I was referring to…it was what followed.
I take the point about having a sense of humor, but the overall tone of your piece is not humorous; it is insulting. This is especially strange since Dave was clear that he was an amateur enthusiast with respect to the singularity stuff and was just reflecting on various possibilities because he had been asked to talk at the Singularity Summit and he thought there was some importance to the topic. Why is it bad for a prominent thinker to draw our attention to an important topic and offer some arguments and thoughts on the subject in the hopes of getting philosophers interested?
I agree that the argument you criticized in the paper (the one I quote above) is poor but that is not the argument that he gives…it is in fact the one he is at pains to avoid…in the paper he says (page 7):
The ‘other argument’ not mentioned here is brain emulation…but either way, neither of these is the simple extrapolation argument that you attribute to him.
He also says,
So I think it is clear that you do not get his argument right…and again, I was at the CUNY talk and I can verify that this is what he said there as well.
Now, do you think that is a poor argument?
@Massimo: I would have thought that blog posts at “Scientific Blogging,” “Psychology Today” etc qualify as academic forums, since many of the posts there are written by academics, for academics, about serious scholarly topics. They sure as hell aren’t comedy clubs, as one easily can see by having a look at the content.
As for the Chalmers argument, I’m sure there are plenty of people who agree that it’s poor. But in order to reasonably criticize the argument, you need to understand what the argument is—and you’ve clearly mischaracterized the main argument, which is not an argument from extrapolation from trends in computing power, as you claim. You’ve succeeded in refuting a ‘straw man’ argument which is indeed poor; but knocking down a straw man doesn’t suffice for refuting Chalmers’s argument. To do that, you need to argue that artificial selection, brain emulation, and any other route to AI are all so vastly unpromising (i.e. that they clearly won’t produce AI within several hundred years) that it’s reasonable to suppose that all such attempts will fail in the relevant timeframe. But it’s clear that establishing this point requires far more than simply saying “extrapolation is bad.”
Re: ad hominem, I think it’s pretty clear that you deploy some ad hominem arguments. For example, you say that “[Chalmers] reads too much science fiction, and is apparently unable to snap out of the necessary suspension of disbelief when he comes back to the real world”. In the context of your blog post, it’s hard not to construe this as an ad hominem attack. Anyway, I’m generally OK with a bit of ad hominem to liven the mood, but that doesn’t change the fact that your extrapolation criticism is weak/irrelevant. Without a strong criticism, however, the ad hominem is just so much fluff.
I’ll admit that some of your secondary criticisms are apt and interesting. But that doesn’t make your straw-manning of the primary argument any more reasonable. And unfortunately, your blunder with the primary argument and the generally vitriolic tone of your post tend to overshadow the discussion-worthy points that you do make.
Sometimes what shows up is “I am aware of being aware” or “there is awareness” or “there is a sense of subjectivity” or “there is an introspection”, or there is something it is like to be a me, or there is an intuition of an entity of awareness or a quality of such attached to all. These are all things that are said to characterize consciousness, but all these things come and go. Most of the time other things are happening. There is breakfast being eaten, the news read, the store patronized and so on, and absolutely no sense of consciousness or statement or sentiment or intuition of awareness; there is only the doing and the coming and going and so on.
So what am I supposed to think, that consciousness is always there somehow–when clearly anything connected with this supposed phenomenon comes and goes like everything else?
Does consciousness go unconscious?
How central can it be?
I do not share the intuition or apperception or whatever that says there is such a thing as consciousness and that it sticks around.
The reply: yes, but often you don’t think of your body and still it is there. My reply is that it might as well not be there. If I am at the park talking to a friend and an errant baseball comes my way and I duck, then that is what happens. I will not conclude that when I ducked I was aware of my body; no, I just ducked, that’s all. No thought of body, no awareness of body, just the ducking and avoiding the ball. If it is not explicit, if I do not say “oh, I was so aware of my body then”, how do you assume that I was so aware? How would you prove that?
If you are writing and your wife is in the basement cleaning, are you aware of your wifey even if you have absolutely no thought or sense of her, but instead are thinking about Kant and the moral imperative? No. Such is my attitude toward consciousness.
If we have a thought of generality, a general categorization, a lumping of like and like together, this too is common and is something that comes and goes in the moment like everything else. And this is what consciousness is: a convenient umbrella for many disparate memories and intuitions.
One cannot avoid generalities; it is one of the things humans do, apparently. But I am not drawn to consciousness as an organizer of things; instead I am drawn to the widest possible category, that of “whatever shows up”.
And of course this comes and goes too.
And this is the point, I think. This coming and going is what the world, in some sense, is made of.
But even this thought comes and goes, and back to square one.
I guess all I can say is, there is something, and the something outstrips any one perspective or point of view.
Chalmers is a fine philosopher, but ultimately he knows no more than anyone else about things, because we all are intimate with things; there is nothing exclusive about it.
I am not sure I follow all of this, but if you are saying that when we are unaware of some mental state, say a pain, then it is as though that mental state is not there from our point of view, then I agree (and so does David Rosenthal)…maybe I missed something in your comment?