The Curious Case of my Interview/Discussion with Ruth Millikan

I started my YouTube interview/discussion series Consciousness Live! last summer and scheduled Ruth Millikan as the second guest. We tried to livestream our conversation on July 4th, 2018, and spent hours trying to get Google Hangouts Live to work. When it didn’t, I tried to record a video call instead and failed horribly (though I did record a summary of some of the main points as I remembered them).

Ruth agreed to do the interview again, and so we tried to livestream it on Friday, June 6th, 2019, almost a year after our first attempt (in the intervening year I had done many of these with almost no problems). We couldn’t get Google Hangouts to work (again!), but I had heard that you could now record Skype calls, so we tried that. We got about 35 minutes in before the internet went out (I put the clips up here).

Amazingly, Ruth agreed to try again, and so we met the morning of Monday, June 10th. I had a fancy setup ready to go: I had our Skype call running through OBS Studio (Open Broadcaster Software) and was using that to stream live to my YouTube channel. It worked for about half an hour and then something went screwy. After that I decided to just record the Skype call the way we had ended up doing the previous Friday. The call dropped three times but we kept going. Below is an edited version of the various calls we made on Monday, June 10th.

Anyone who knows Ruth personally will not be surprised. She is well known for being generous with her time and her love of philosophical discussion. My thanks to Ruth for such an enjoyable series of conversations and I hope viewing it is almost as much fun!

Do We Live in a Westworld World??

I have not had the time to post here as often as I’d like, and I am hoping to get back into a semi-regular blogging schedule once things settle down. The hectic pace of an almost-two-year-old and teaching a 6/3-6/3 course load (18 classes a year!) has taken its toll. I have been meaning to write a post on my plenary session at The Science of Consciousness (TSC2016) conference in Tucson. And I have been working on a paper with Joe LeDoux developing a Higher-Order Theory of Emotional Consciousness that is nearing the final stages; I plan to post something about it once we are done. I am also still trying to produce a series of videos for my introductory logic class at LaGuardia and will post something on that when they are finished (hopefully before the Spring semester). So a lot is going on!

But all of that aside, I wanted to take a moment to talk about Westworld. I have not seen the original movie by Michael Crichton, but I was eagerly anticipating the new HBO series, and now that I have watched it I think it is a wonderful show with a lot of rich philosophical content. There are a lot of interesting questions about consciousness and computation brought up by the show, but I wanted to step back and note the clever way that the show introduces a new twist on some old skeptical worries. There are some mild spoilers below, but if you have seen the first episode that is all you need to follow the argument.

The basic premise of the show involves a giant park known as Westworld, populated by advanced artificial agents that serve as the backdrop for the various adventures of the park’s patrons. These artificial agents, known in the show as hosts, are very lifelike and are in fact stipulated to be indistinguishable from flesh-and-blood humans. The behavior of the hosts is for the most part scripted and under the complete control of the people who run Westworld. When the hosts interact with the ‘newcomers’, i.e. those who visit the park for recreation, they are allowed limited improvisation and mild variance from their scripted behavior, but that is all. The feature that matters for our purposes is that the hosts are programmed in such a way that whenever a newcomer mentions anything about the existence of things outside the park, they simply fail to register what the newcomer has said. If they happen to see an artifact from outside the park, like a picture, they do not register it and simply say ‘it doesn’t look like anything to me’. Finally, it is mentioned that the hosts have been given the concept of dreaming, and specifically of nightmares, in order to ensure that any weird experiences due to park maintenance can be attributed to being in a dream.

That is enough of the plot mechanics of the show to introduce the interesting new skeptical worry. How can we be sure that we are not now, at this very instant, in a Westworld World? That is, given some common assumptions, how can we rule out the possibility that our city -NYworld-, our state -CaliforniaWorld-, our country -USAworld-, indeed our planet -EarthWorld-, etc., are actually vast artificial environments run by external agents and set up for the enjoyment of ‘newcomers’ (tourists?)? It is true that I do not notice any evidence that the Earth is just an artificial environment populated with automatons. But this is consistent with my actually being an artificial agent of some sort whose internal programming, or whatever is equivalent to that, prevents me from noticing any such evidence. In the most severe form, EarthWorld might be an amusement park for an alien race: a place where they go to vacation and wreak havoc. We may have interacted with any number of alien beings and simply not have noticed that they have tentacles, four eyes, etc. We may be constructed to take their appearance to conform to normal human standards (after all, many take physics to already demonstrate that we don’t perceive reality as it is).

In a sense this is related to the Simulation Hypothesis. In that case, Bostrom and others consider the possibility that our reality is in actuality a computer simulation, like The Sims but more advanced. That is not the kind of scenario envisioned in EarthWorld. There the idea is that we have an actual physical place, the Earth, complete with physical elements, trees, animals, wind, etc., and also artificial agents, ourselves. Our role in EarthWorld may vary depending on the skeptical scenario one envisions, but one scenario is that we are highly advanced artificial agents with advanced AI and limited conscious experience (that is, we are phenomenally conscious but miss out on a large portion of what is actually happening around us). This is not a computer-simulated reality, but it is still an artificial reality of sorts; maybe more akin to Live Action Role Playing than to computer simulation (maybe Artificial Action Role Playing?).

As with most skeptical scenarios, I don’t think we have to accept the conclusion that we are indeed in such a scenario, but it is, I think, an interesting new take on the ‘we might be conscious computer programs in an artificial environment’ trope. As such, I also think that the simulation argument, if it works at all, works equally well for EarthWorld, and so if you think we might be in a simulation you should also think we might be in EarthWorld.

Zombies vs Shombies

Richard Marshall, a writer for 3am Magazine, has been interviewing philosophers. After interviewing a long list of distinguished philosophers, including Peter Carruthers, Josh Knobe, Brian Leiter, Alex Rosenberg, Eric Schwitzgebel, Jason Stanley, Alfred Mele, Graham Priest, Kit Fine, Patricia Churchland, Eric Olson, Michael Lynch, Pete Mandik, Eddy Nahmias, J.C. Beall, Sarah Sawyer, Gila Sher, Cecile Fabre, Christine Korsgaard, among others, he seems to be scraping the bottom of the barrel, since he has just published my interview. I had a great time engaging in some Existential Psychoanalysis of myself!

Clip Show ‘011

It’s that time of year again! Here are the top posts of 2011 (see last year’s clip show and the best of all time)

–Runner Up– News Flash: Philosophy Sucks!

Philosophy is unavoidable; that is part of why it sucks!

10. Epiphenomenalism and Russellian Monism

Is Russellian Monism committed to epiphenomenalism about consciousness? Dave Chalmers argues that it is not.

9. Bennett on Non-Reductive Physicalism

Karen Bennett argues that the causal exclusion argument provides an argument for physicalism and that non-reductive physicalism is not ruled out by it. I argue that she is wrong and that the causal exclusion argument does cut against non-reductive physicalism.

8. The Zombie Argument Requires Phenomenal Transparency

Chalmers argues that the zombie argument goes through even without an appeal to the claim that the primary and secondary intension of ‘consciousness’ coincide. I argue that it doesn’t. Without an appeal to transparency we cannot secure the first premise of the zombie argument.

7. The Problem of Zombie Minds

Does conceiving of zombies require that we be able to know that zombies lack consciousness? It seems like we can’t know this, so there may be a problem with conceiving of zombies. I came to be convinced that this isn’t quite right, but it’s still a good post (plus I think we can use the response here in a way that helps the physicalist who wants to say that the truth of physicalism is conceivable…more on that later, though)

6. Stazicker on Attention and Mental Paint

Can we have phenomenology that is indeterminate? James Stazicker thinks so.

5. Consciousness Studies in 1000 words (more) or less

I was asked to write a short piece highlighting some of the major figures and debates in the philosophical study of consciousness for an intro textbook. This is what I came up with

4. Cohen and Dennett’s Perfect Experiment

Dennett’s response to the overflow argument and why I think it isn’t very good

3. My Musical Autobiography

This was a big year for me in that I came into possession of some long-lost recordings of my death metal band from the 1990s, as well as some pictures. This prompted me to write up a brief autobiography of my musical ‘career’

2. You might be a Philosopher

A collection of philosophical jokes that I wrote plus some others that were prompted by mine.

1. Phenomenally HOT

Some reflections on Ned Block and Jake Berger’s response to my claim that higher-order thoughts just are phenomenal consciousness

Some Thoughts About Color

I just returned from an interdisciplinary workshop on color (More or Less: Varieties of Human Cortical Color Vision). Unfortunately I was not able to attend the conference that followed. Below are a few scattered (jet-lagged) thoughts reflecting on what happened.

The workshop began with presentations by Michael Tye and Alex Byrne on the philosophy of color. Tye went over the basic positions in the metaphysics of color, viz. realism (colors exist on the surfaces of objects), irrealism (colors exist in the mind of the perceiver), and super-duper irrealism (colors do not exist anywhere). The talks were uninteresting if you, like me, were already aware of this stuff and the arguments on each side, but they would have been useful (if that is the right word) for, say, a scientist who wasn’t.

During the discussion, Tye and various commenters were arguing about the relative costs and benefits of the various theories. Tye seemed to think that we should opt for the theory with the most benefits and the least costs. Byrne objected and memorably said “the truth has no costs”. If, for instance, color physicalism is true (colors just are physical properties of the surfaces of objects), then there are no costs in accepting that theory. As a group we may not know which theory is true, but, he went on, this is compatible with some particular philosopher, or even a scientist I suppose, knowing the truth. I am pretty sure that it was this line of argument which prompted some unnamed scientist to quip later that day that “the philosophers here are arrogant”. But at any rate, what are we to make of this debacle?

It has always seemed to me to be obvious that both realism and irrealism are true in this case. We use color words interchangeably for properties of surfaces and for the conscious color experiences we enjoy. So, when someone asks the question ‘what is red, really?’ they are asking a question which is ambiguous. ‘Red’ really is some physical property of a surface if what you are asking is ‘what is the perceptible property red?’, and it really is a property of some conscious experience if what you are asking is ‘what is the perceived property red?’ Each of these deserves to be called ‘the color red’. But as between the various ways of spelling out the former or the latter, who knows? Is perceptible red a complex or a primitive property? If primitive, is it metaphysically primitive or only nomologically primitive? My money is on complex and non-primitive because of considerations about science, but this is an open question for me.

It seems to me that the main reason for objecting to this common-sense way of thinking about the color red is theoretical concerns about transparency: the conviction that one can *never* become aware of properties of one’s conscious experience but, instead, is only able to become aware of the properties ‘out there’. I thought that some of the interesting empirical results about synesthesia presented by Noam Sagiv called this into question. Some synesthetes see the color of a given number, say, as being ‘on the number’ (associators), whereas others see the color not on the number but rather as a property of their experience of the number (projectors). Of course, getting subjects to make this distinction took training, and so no one should deny that in the first instance what we are usually aware of are the properties of objects; but with training we can become aware of properties of our experiences. This distinction also nicely illustrates the way that we use color words to apply to both kinds of things (objects and experiences).

Charles Heywood and Robert Kentridge presented interesting data on cerebral achromatopsia (CA), which is color blindness due to cortical damage rather than any deficiency in the eyes or LGN. One of their main points seemed to be to distinguish CA from blindsight for color. Cerebral achromatopsics are unable to access or use any information about the color of objects. It is not, as in blindsight, that they (seem to) lack phenomenology while remaining able to use the information to make judgements that are mostly accurate; these subjects lack any ability to access color information. Most interestingly, there was one patient who had CA but who did not notice the deficit at first. Presumably this person had all of his color phenomenology simply vanish and yet he did not seem to notice. Perhaps even more surprising was the fact that it was not until his color vision had been restored that he noticed it had been gone in the first place!

There is a lot more that happened (like Mel Goodale’s talk which was excellent) but I’ll have to think about that later!

Burge on the Origins of Perception

Saturday I attended a workshop on the predicative structure of experience sponsored by the New York Consciousness Project, which is in turn sponsored by the New York Institute of Philosophy. Speaking were Tyler Burge and Mark Johnston, with commentary by Alex Byrne and Adam Pautz respectively. I may write a separate post on Johnston’s talk, but here I want to say something about Burge’s talk.

The first thing that Burge wants to do is clarify the notion of representation in the claim that perception is representational. For a state to be representational is for veridicality conditions to be an ineliminable part of the scientific explanation of the formation of the state. Thus representational states, in this sense, are not states that merely co-vary with some thing in the world. For instance, the level of mercury in a simple thermometer causally co-varies with the temperature, but Burge wants to deny that the mercury level in the thermometer represents the temperature in his preferred sense. This is because the scientific explanation of how the mercury level came to be such-and-such proceeds “from the inside”, so to speak, and does not need to bring in notions like true or false. He admitted that we could, if we want, adopt a certain stance towards the state and call it representational. But there is still something unique about the kind of states that psychologists are interested in. The central task of perceptual theories, for Burge, is that of discovering the conditions under which we correctly, or accurately, represent the world and the conditions under which we fall into mistakes, i.e. illusions. The idea of “getting it right” does not enter into the explanation of why the mercury is at a certain level in the thermometer.

To illustrate this idea Burge kept coming back to the point that the explanation for the level of mercury being thus-and-so as opposed to such-and-such would be the same whether or not we started only with the proximal stimulation. The basic idea seemed to be this: when we calibrate a mercury thermometer we take the contraption and put it in ice as it is melting (i.e. the just-melted ice water) and wait for the mercury to stabilize. We then do the same for boiling water, assign 0 to the first and 100 to the second, and divide the interval between them into 100 equal parts. Does it really make sense to say that the thermometer got the temperature right in the first step? Can we make sense of the notion that it got it wrong? Wherever it settles we call 0, so how could it be mistaken? Or, to put it slightly differently, how could we make sense of the notion of its being under some illusion? These kinds of considerations don’t even seem to apply. Now, as already said, we can adopt this sort of talk if we want to, but if it is really the case that nothing is lost when we stop talking that way then it is just a stylistic thing. When we talk about representational states in perception, by contrast, we are immediately confronted with truth-value talk. And Burge wagered that psychology as a science would not give up this notion.
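To make the calibration arithmetic concrete, here is a minimal sketch (in Python, with hypothetical mercury-level values) of the calibration procedure Burge describes: whatever level the mercury settles at in melting ice is simply defined as 0, whatever level it settles at in boiling water is defined as 100, and readings in between are linear interpolations between those two points. On this picture there is no further question of whether the thermometer “got it right” at the calibration points.

```python
# A sketch of the calibration arithmetic, assuming a simple linear Celsius scale.
# The mercury-level values (in mm) are hypothetical, for illustration only.

def calibrate(level_in_melting_ice, level_in_boiling_water):
    """Return a function that maps a raw mercury level to a temperature reading."""
    span = level_in_boiling_water - level_in_melting_ice

    def read(level):
        # The melting-ice level is *defined* as 0 and the boiling-water level
        # as 100; everything in between is a proportional (linear) interpolation.
        return 100 * (level - level_in_melting_ice) / span

    return read

# Hypothetical calibration: the mercury settles at 12.0 mm in melting ice
# and at 87.0 mm in boiling water.
read_temperature = calibrate(12.0, 87.0)

print(read_temperature(12.0))  # 0.0   -- true by definition, not by "getting it right"
print(read_temperature(87.0))  # 100.0
print(read_temperature(49.5))  # 50.0
```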

For my own part I find this distinction quite plausible, but I don’t see why it then follows that no causal or teleological theory can work. The special category isn’t representation; it is mental representation.

But back to Burge. The second thing that he wanted to clarify was the notion of perception. He first distinguished between mere sensory registration, which is just statistical co-variation, and perception. For a state to be a perceptual state it must be in the business of objectifying the world. That is, it is in the business of offering a solution to the underdetermination problem. The classic example is the construction/recovery of a 3-d image from the 2-d image projected onto the retina. There are an infinite number of ways that the brain could generate a 3-d image from that information, but of course the perceptual systems are in the business of ignoring most of those. This fact is then used in the explanation of various visual illusions. The mark of perceptual systems is perceptual constancies. So, consider color. We human beings are pretty good at telling the actual color of a thing in a variety of lighting conditions. That is, our perceptual systems somehow take a range of inputs and treat them as the same. The same is true for length, and so on. He distinguished perception from any kind of mere sensory registration and seemed to think that olfaction and taste were non-perceptual senses. The reason he gave was that there are no smell constancies: we don’t seem equipped to be able to track the same smell under a bunch of different environmental conditions.

Finally, and this was perhaps the most interesting part of the discussion, he defended his claim that talk of perception as representational and of perceptual processes as computational does not commit us to a purely syntactic view of how it is implemented. By way of illustration, consider the way that we talk about the logical category of being a predicate. We can give a purely syntactic description, but the level of explanation that matters is the level at which semantic information plays a role in individuating the state. This is somewhat like a point that has always bothered me: in logic we pretend that we are dealing with purely syntactic rules, but they are useless unless they are individuated semantically. But anyway, Burge’s point was that he thought we would not even be able to get to the point where we could individuate perceptual states in a purely syntactic way, unlike the logic case where we can (so he thought). He speculated that the reason so many people think that you must do computational psychology purely syntactically is an antecedent position on the mind/body problem. It is because people start with the assumption that this stuff must ultimately be physical, and so we must appeal only to physical properties (viz. syntax). But Burge objected that we should do psychology autonomously and then see what the mind-body problem looks like afterwards. There is nothing in computational theory that would force us to opt for a purely syntactic theory of computation, and so, Burge claimed, there is no reason that people who accept the language of thought hypothesis, even for perception, are committed to these computations being performed on the basis of purely syntactic properties of the computata (if that is a word).

Now, all of this is only by way of clarifying what perceptual representation is (!), and after this is done he goes on to talk about the structure of perceptual representations. This is getting long, so I will keep it short for now and perhaps come back to it later for a fuller account. The basic idea seemed to be that perceptions must be composed of a general attributive part and a part that indicates some particular thing (a singular-reference part). So, for example, to perceive that one object is to the left of another object is to have a state that represents the general relation of ‘to the left of’ as being instantiated by these two particular objects. In both its general, attributive aspect and its singular aspect, perception is always trying to demonstrate some particular. It can fail to do that, but it is always trying to do so. He seemed to want to model this on demonstratives. As I said, there is much more to be talked about, including his view that the difference between conscious and unconscious perceptions might lie in some aspect of the state’s mode of presentation, but it is getting late and this is already too long!