Revisiting my Dissertation

Nine years ago I defended my dissertation, and then I promptly forgot about it. Part of the reason was that I was distracted by the Shombie Wars (believe me, I *never* expected to write a paper on zombies!) and by starting Consciousness Online, but the biggest part of the story was that I was sick of working on it. I had spent two years officially writing it, but I had had the core idea since 2002 (developing ideas from my days as an undergraduate) and had written several versions of it for various seminars. By the time I decided to pursue this as my dissertation project I had already been working on it, off and on, for four years. So after six years of reading, re-reading, writing, and re-writing I had a hard time even thinking about this material!

Looking back on it now, I think the main “result” still stands up. Just after I defended, hybrid expressivist views became popular and I thought that maybe I had been scooped (more than I already had been by Blackstone!), but no one has developed, or even seemed to notice, the kind of hybrid view I formulated and defended (i.e. one where the speech act in moral discourse involves expressing an emotion and, at the same time, the belief that the emotion is the correct one to have towards the relevant state of affairs, moral character, etc.)…though to be honest I have grown more out of touch with the literature on metaethics…so maybe there is some devastating objection I am not aware of?

At some point I may try to look into it but in the meantime below are links to the blog posts I wrote while working on the dissertation.

  1. Introducing Frigidity
  2. What Kripke Really Thinks
  3. The Meaning and Use of ‘is True’
  4. Truth, Justification, and the Quasi-Realist Way
  5. Meaning and Justification
  6. A Simple Argument for Moral Realism
  7. Emotive Realism
  8. Truth and Necessity
  9. Varieties of Rigidity
  10. Devitt on the A Priori 
  11. Meta-Metaethics and the NJRPA
  12. Emotive Realism Ch. 1
  13. Emotive Realism Ch. 2
  14. Some Moral Truths are Analytic
  15. (Finally) Responding to Roman
  16. Moral Truthmakers
  17. Empiricism as the Default Position
  18. Introducing Dr. Richard Brown

Cognitive Prosthetics and Mind Uploading

I am on record (in this old episode of Spacetime Mind where we talk to Eric Schwitzgebel) as being somewhat of a skeptic about mind uploading and artificial consciousness generally (especially for a priori reasons) but I also think this is largely an empirical matter (see this old draft of a paper that I never developed). So even though I am willing to be convinced I still have some non-minimal credence in the biological nature of consciousness and the mind generally, though in all honesty it is not as non-minimal as it used to be.

Those who are optimistic about mind uploading have often appealed to partial uploading as the practical route to a convincing case. This point is made especially clearly by David Chalmers in his paper The Singularity: A Philosophical Analysis (a selection of which is reprinted as ‘Mind Uploading: A Philosophical Analysis’):

At the very least, it seems very likely that partial uploading will convince most people that uploading preserves consciousness. Once people are confronted with friends and family who have undergone limited partial uploading and are behaving normally, few people will seriously think that they lack consciousness. And gradual extensions to full uploading will convince most people that these systems are conscious as well. Of course it remains at least a logical possibility that this process will gradually or suddenly turn everyone into zombies. But once we are confronted with partial uploads, that hypothesis will seem akin to the hypothesis that people of different ethnicities or genders are zombies.

What is partial uploading? Uploading in general is never very well defined (so far as I know), but it is often taken to involve producing, in some way, a functional isomorph of the human brain. Partial uploading would thus be the partial production of a functional isomorph of the human brain; in particular, we would have to reproduce the function of the relevant neuron(s).

At this point we are not really able to do any kind of uploading of the sort Chalmers and others describe, but there are people doing things that look a bit like partial uploading. First, consider cochlear implants. What we can do now is impressive, but it doesn’t look like uploading in any significant way. We have computers analyze incoming sound waves and then stimulate the auditory nerves in (what we hope are) appropriate ways. Even leaving aside the fact that subjects seem to report a phenomenological difference, and leaving aside how useful this is for a certain kind of auditory deficit, it is not clear that the computational device plays any role in constituting the conscious experience, or in being part of the subject’s mind. It looks to me like these are akin to fancy glasses: they causally interact with the systems that produce consciousness, but they do not show that the mind can be replaced by a silicon computer.

The case of the artificial hippocampus gives us another nice test case. While still in early development, it seems a real possibility that the next generation of people with memory problems will have neural prosthetics as an option (there is even a startup trying to make it happen, and here is a nice video of Theodore Berger presenting the main experimental work).

What we can do now is fundamentally limited by our lack of understanding of what all of the neural activity ‘means’, but even so there is impressive and suggestive evidence that something like a prosthetic hippocampus is possible. Researchers record from an intact hippocampus (in rats) while the animal performs some memory task, and then have a computer analyze the recordings and predict what the output of the hippocampus would have been. Compared against the actual output of hippocampal cells, the predictions are pretty good, and the hope is that these can then be used to stimulate post-hippocampal neurons just as they would have been stimulated if the hippocampus were intact. This has been done as a proof of principle in rats (not in real time), and now in monkeys, in real time, and in the prefrontal cortex as well!
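The prediction step described above can be caricatured in a few lines of code. To be clear, this is a toy sketch, not Berger’s actual method (his group reportedly uses a nonlinear multi-input multi-output model fit to spike trains); here a plain linear least-squares fit on made-up spike counts stands in for the real model, just to show the record-fit-predict logic:

```python
import numpy as np

# Toy sketch of the record -> fit -> predict pipeline (illustrative only).
# Fit a linear map from "input region" spike counts to "output region"
# spike counts on recorded trials, then predict the output the intact
# region would have produced -- the pattern one would stimulate downstream.

rng = np.random.default_rng(0)

n_trials, n_in, n_out = 200, 8, 4

# "Recorded" input spike counts across trials (hypothetical data).
inputs = rng.poisson(5.0, size=(n_trials, n_in)).astype(float)

# Hypothetical "true" input-output transform of the intact region, plus noise.
true_w = rng.normal(0.0, 0.5, size=(n_in, n_out))
outputs = inputs @ true_w + rng.normal(0.0, 0.1, size=(n_trials, n_out))

# The "computer analyzing the recordings": a least-squares fit.
w_hat, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# Prediction for a new trial: what the damaged region *would* have output,
# i.e. the stimulation pattern to deliver to post-hippocampal neurons.
new_input = rng.poisson(5.0, size=(1, n_in)).astype(float)
predicted_output = new_input @ w_hat
```

With enough recorded trials the fitted map recovers the underlying transform closely, which is the sense in which the prosthesis can ‘stand in’ for the region it models.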

The monkey work was really interesting. The animal performed a task which involved viewing a picture and then waiting through a delay period; after the delay the animal was shown many pictures and had to pick out the one it saw before (this is one version of a delayed match-to-sample task). While the animals were doing this, the researchers recorded the activity of cells in the prefrontal cortex (specifically layers 2/3 and 5). When they introduced into the region a drug known to impair performance on this kind of task, the animal’s performance became very poor, as expected. But if they stimulated the layer 5 neurons (via the same electrode they had previously used to record) in the way their model predicted layer 2/3 would have driven them, the animal’s performance returned to almost normal! Theodore Berger describes this as something like ‘putting the memory into memory’ for the animal. He then shows that if you do this with an animal that has an intact brain, it does better than it did before. This can be used to enhance the performance of a neuroscience-typical brain!

They say they are doing human trials, but I haven’t heard anything about that. Even so, this is impressive: the technique works in rats for long-term memory in the hippocampus and in monkeys for working memory in the prefrontal cortex, and in both cases they seem to get the same result. It starts to look hard to deny that the computer is ‘forming’ the memory and transmitting it for storage, so something cognitive has been uploaded. Those sympathetic to the biological view will have to say that this is more like the cochlear implant case: a system causally interacting with the brain, while it is the biological brain that stores the memory, recalls it, and is responsible for any phenomenology or conscious experience. It seems to me that they have to predict that in humans there will be a difference in the phenomenology that stands out to the subject (due to the silicon not being a functional isomorph), but if we get the same pattern of results for working memory in humans, are we heading towards Chalmers’ acceptance scenario?