Evidence for something String-ish

There has been a lot of debate lately about string theory and testability (see for instance Hanging on by a Thread). It seems to be one of the only theories in modern times that is taken seriously by physicists in spite of the fact that it cannot be empirically tested, and in fact it is hard to see how it could be so tested. This gets a lot of people upset, as it looks like string theory can’t be falsified and so shouldn’t count as a scientific theory.

Ed Witten is famous for, among other things, arguing that string theory does make one prediction. It predicts gravity. It is conceivable, he argues, that some alien society discovered string theory before they discovered gravity. In that world gravity is a prediction of string theory and not of quantum mechanics. So it is merely a historical accident that in our actual world we discovered gravity first and string theory second. There is thus a sense in which string theory has already made one very important ‘prediction’, albeit one that is already established.

Is this all the evidence for string theory that we can muster? I have been thinking lately that perhaps we can get some evidence for something that is at least more string-ish than particle physics if we think about special relativity…. Perhaps the most famous result of the special theory of relativity is the equivalence of mass and energy captured by E=mc^2. There have been two philosophical interpretations of this equivalence (see the Stanford Encyclopedia entry linked to above). There are those who take it to show that the properties picked out by ‘energy’ and ‘mass’ turn out to be the same property, and those who take it to show that there is no ontological distinction between fields and matter, or in other words that there is just one kind of stuff out there. It seems to me that either way one interprets the equivalence of mass and energy it provides some reason for thinking that a theory like string theory will turn out to be correct (as opposed to a theory like particle physics).
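For concreteness, here is a quick worked instance of the equivalence (the round numbers are mine, chosen purely for illustration):

E = mc^2
E = (1 kg) x (3 x 10^8 m/s)^2 = 9 x 10^16 J

So one kilogram of mass is interconvertible with an enormous amount of energy at a fixed exchange rate set by the speed of light; the philosophical question is what this interconvertibility tells us about the underlying ontology.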

So, say that you take it to show that there is only one kind of fundamental stuff out there that we can describe as either energy or mass. This is evidence for something string-ish in that string theory posits only one fundamental entity, viz. the string. Particle physics, on the other hand, posits a zoo of particles that are all made from different stuff. Electrons are made from one kind of stuff, quarks from another kind of stuff. Thus particle physics as standardly construed seems fundamentally at odds with the ‘one stuff’ interpretation of E=mc^2.

On the other hand, let’s say that you take the equivalence to show that energy and mass are the same property, though that property may be had by several different kinds of stuff. This is evidence for something string-ish in that string theory posits that the one property is actually string vibration, and so it can explain what the property is as well as how it will seem to us that one converts into the other (i.e. the pattern of vibration changes). What is the candidate property that particle physics offers? Its only answer is ‘a property, we know not what’. Thus the two fundamental properties of physics are rendered completely mysterious.

Now, I don’t think this second argument is as decisive as the first, and I also think that the property interpretation is more likely to be true, so I tentatively conclude that special relativity does give us some reason to prefer a string-ish theory over particle physics…but I will have to think about it some more…

Logic, Language, and Existence

I have been thinking a lot about the argument of an earlier post (I Necessarily Exist), thanks to some excellent comments on the post and to some discussion I have been having via email with Kent Bach about it, and I think I now understand what the argument is supposed to look like. So what I want to do is take some time to show how this argument for frigidity goes and how it ultimately supports what I say about What Kripke Really Thinks.

The argument, to remind you, is one that David Rosenthal presented in a Quine class I had with him, and it is a proof by reductio that the existence of any object that one desires is a theorem of first-order logic. All that one has to do to get the proof going is to agree that to say that an object exists is to say something with the logical form Ex (x=c), where ‘c’ is a singular term that refers to the object in question. Here is a version that proves that Saul Kripke’s existence is a theorem. Let ‘SK’ name the actual Saul Kripke.

1. -(Ex) (x=SK)                 assumption for reductio
2. (x) -(x=SK)                  equivalent to 1
3. (x) (x=x)                    axiom of identity
4. (SK=SK)                      UI of 3
5. -(SK=SK)                     UI of 2
6. (SK=SK) & -(SK=SK)           4, 5

This argument is valid and is supposed to illustrate the problems that Quine discussed in his famous article ‘On What There Is’ involving existence statements. Some people have objected that since the first premise assumes that SK does not exist, he is not in the domain of the quantifier, and so something fishy is going on in step 5 (and possibly step 4 as well). But this is not right, because the argument is supposed to illustrate that something funny happens when you try to say that something doesn’t exist in a logic with singular terms. So, ‘SK’ must refer (in first-order logic), and it does refer. We then show that since it refers it is a theorem of first-order logic that SK exists. So the ultimate aim of Rosenthal’s argument is to show that if we have singular terms in our logic, as opposed to just variables, then it turns out to be a theorem of first-order logic that Saul Kripke exists, or that you do, or that I do, or that unicorns do…something has gone wrong, and the natural candidate is the use of the singular term.
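For readers who like to see this sort of thing mechanically checked, here is a minimal sketch of the point in Lean 4 (my formalization, not Rosenthal’s):

-- Once the language contains a term SK, the existence of something
-- identical to SK is provable with no further assumptions: the mere
-- presence of the singular term does all the work.
example {α : Type} (SK : α) : ∃ x, x = SK := ⟨SK, rfl⟩

The proof term ⟨SK, rfl⟩ simply witnesses the existential with SK itself, which is exactly the triviality that the reductio above exploits.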

Quine’s solution to this problem is to suggest that we use Russell’s theory of descriptions, so that when we analyze sentences like ‘Saul Kripke exists’ we get a logical statement free of singular terms. He, of course, recommended that we invent a description like ‘the thing that Kripkisizes’, or ‘the Kripkisizer’, so that we would render ‘Kripke exists’ as Ex (Kx), where ‘K’ stands for the invented description. This is kind of weird and off-putting, but the argument is good, and so we should see if there is some more natural way to treat (linguistic) names as descriptions.

The Bachian strategy that I endorse is to use the description that mentions the name. So according to this view the linguistic name ‘Saul Kripke’ is semantically equivalent to ‘the bearer of “Saul Kripke”’. We again render ‘Kripke exists’ into first-order logic as Ex (Kx), but now ‘K’ stands for the description that mentions the name (Bach calls this a nominal description). So this part of the argument shows that we should rid first-order logic of singular terms, and if one takes first-order logic to be in the business of giving a formal semantics, then we should rid our semantic theory of singular terms as well, and this is just what frigidity does.
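The contrast with the constant case can also be made vivid in Lean (again my own sketch): once the name goes proxy for a predicate K, existence is no longer a theorem, since a model in which nothing bears the name is perfectly consistent.

-- There is a domain and a predicate K whose extension is empty,
-- so Ex (Kx) cannot be a theorem on the description treatment.
example : ∃ (α : Type) (K : α → Prop), ¬ ∃ x, K x :=
  ⟨Unit, fun _ => False, fun ⟨_, h⟩ => h⟩

On the Quine/Bach analysis, then, ‘Kripke exists’ comes out contingent, which is just what we wanted.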

Now, in the earlier post I suggested that we could adapt Rosenthal’s proof into a modal proof that Kripke (or you, or me, or unicorns) necessarily exists, which, to remind you, went as follows.

(2) Saul Kripke necessarily exists: □ Ex (x=SK)
1. ◊ -Ex (x=SK)               assumption for reductio
2. ◊ (x) -(x=SK)              equivalent to 1
3. (x) □ (x=x)                modal axiom of identity
4. □ (SK=SK)                  UI of 3
5. ◊ -(SK=SK)                 UI of 2
6. -□ (SK=SK)                 equivalent to 5
7. □ (SK=SK) & -□ (SK=SK)     4, 6

Now, in the course of doing some research about this I made an interesting discovery.

It turns out that the problem of necessary existence has some history in modal logic. In fact Kripke is famous for formulating a system of quantified modal logic that is supposed to block proofs of necessary existence (as well as some other pesky things like the Barcan formula). So how does Kripke do this? Well, in his 1963 paper “Semantical Considerations on Modal Logic” he modifies standard quantified modal logic in two ways. The first is by requiring that there be no free variables in any of the axioms or theorems that we use.

The Stanford Encyclopedia entry on actualism has a nice proof of necessary existence in S5 if one wants to look at it, and the same article has some discussion of how Kripke’s move blocks the inference. But, as is usually the case with papers in HTML, the quantifiers do not show up, and so the discussion is hard to follow (in the article, that is; the proof itself is an image, so there one can see the quantifiers)…so I will reproduce the proof in the ‘typewriter notation’ that I have been using here.

So the claim of necessary existence is taken to be the claim that everything that exists necessarily exists, or (x)□Ey (y=x). The proof of this proceeds as follows:

1. x=x                        axiom of identity
2. (y) -(y=x) –> -(x=x)       instance of a quantifier axiom
3. (x=x) –> -(y) -(y=x)       from 2 by contraposition
4. (x=x) –> Ey (y=x)          from 3 by quantifier exchange
5. Ey (y=x)                   from 1 and 4 by modus ponens
6. □Ey (y=x)                  from 5 by the rule of necessitation
7. (x)□Ey (y=x)               from 6 by the rule of universal generalization

OK, so now notice that axioms 1 and 2 above have free variables, which have to be bound in Kripke’s system. So we get 1′. (x) (x=x) and 2′. (x) ((y) -(y=x) –> -(x=x)), and so we cannot derive the problematic theorem. Instead we get the following.

1′. (x) (x=x)
2′. (x) ((y) -(y=x) –> -(x=x))
3′. (x) ((x=x) –> Ey (y=x))      from 2′ by contraposition and quantifier exchange
4′. (x) (x=x) –> (x)Ey (y=x)     from 3′ by the quantifier distribution rule
5′. (x)Ey (y=x)                  from 1′ and 4′ by modus ponens
6′. □(x)Ey (y=x)                 from 5′ by the rule of necessitation

But 6′ is harmless, as it just says that, necessarily, everything that exists is self-identical. In order to get the pesky result that everything that exists necessarily exists we would need a theorem that says □(x)Ey (y=x) –> (x)□Ey (y=x) (the so-called converse Barcan formula). If we had this we could derive the offending theorem from 6′ and the converse Barcan formula by modus ponens. “But,” the article continues,

as Kripke points out, the usual…proof of [the converse Barcan formula] also depends essentially on an application of Necessitation to an open formula derived by universal instantiation — the same “flaw” that infects the proof of [necessary existence]. (See the inference from line 1 to line 2 in the supplementary document Proof of the Converse Barcan Formula in S5.) Hence, it, too, fails under the generality interpretation of free variables.

But notice that the modal proof that I gave does not fail under the generality interpretation of free variables. The axiom of identity that I appeal to contains no free variables.

So what is going on here? Well, the article goes on to point out that we can still prove the offending theorems simply by replacing the free variables in the original proof with constants (this is in effect what the proof I offered did), and so,

The second element of Kripke’s solution, therefore, is to banish constants from the language of quantified modal logic; that is, to specify the language of quantified modal logic in such a way that variables are the only terms.

In other words, Kripke thinks that we should eliminate singular terms from our quantified modal logic, and so by extension from our semantical theory; it looks, then, like this is further support for my claim that Kripke really has something like frigidity in mind rather than rigidity.
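To see concretely why variable domains block the offending theorem, here is a toy model sketch in Python (the two worlds and their domains are entirely my own invention, just to illustrate the semantics):

# A two-world Kripke model with variable domains (my own toy example).
worlds = ["w1", "w2"]
domain = {"w1": {"kripke", "quine"},   # Kripke exists at w1...
          "w2": {"quine"}}             # ...but not at w2.

def exists_at(d, w):
    # Ey (y=d) evaluated at world w: true iff d is in w's domain.
    return d in domain[w]

# (x)□Ey (y=x), evaluated at w1: everything in w1's domain exists at every world.
necessary_existence = all(all(exists_at(d, w) for w in worlds) for d in domain["w1"])

# □(x)Ey (y=x): at each world, everything in THAT world's domain exists there.
harmless_6prime = all(all(exists_at(d, w) for d in domain[w]) for w in worlds)

print(necessary_existence)   # False: 'kripke' drops out of w2's domain.
print(harmless_6prime)       # True: trivially so, at every world.

The harmless 6′ comes out true in every such model, while the converse-Barcan strengthening fails as soon as some object is missing from one world’s domain, which is just what Kripke’s variable-domain semantics allows.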

Now there is more that needs to be said here, but this post is already way too long so I will save it for another time…

Brain Reading, Brain States, and Higher-order Thoughts

Recently there has been a lot of progress in brain reading; for instance, here is a nice piece done by CNN, here is a nice article on brain-reading video games, and here is a link to Frank Tong’s lab, whose work may be familiar to those who regularly attend the ASSC or the Tucson conferences. This stuff is important to me because it will ultimately help to solve the empirical question of whether animals, or for that matter we, have the higher-order states necessary to implement the higher-order strategy for Explaining What It’s Like, so I am very encouraged by this kind of progress. The technology involved is mostly fMRI, though in the video game case it is scalp EEG. But though this stuff is encouraging, fMRI and scalp EEG are the wrong tools for decoding neural representation, or so I argued in my paper “What is a Brain State?” (2006) Philosophical Psychology 19(6) (which I introduced over at Brains a while ago in my post Brain States vs. States of the Brain). Below is an excerpt from that paper where I introduce an argument from Tom Polger’s (2004) book Natural Minds and elaborate on it a bit.

Polger argues that thinking

that an fMRI shows how to individuate brain states would be like thinking that the identity conditions for cricket matches are to pick out only those features that, statistically, differentially occur during all the cricket games of the past year. (p 56)

The obvious difficulty with this is that it leaves out things that may be important for cricket matches but unique to particular ones (injuries, unusual plays (p. 57)), and includes things that are irrelevant to them (number of fans, snack-purchasing behavior (ibid.)). The same problems hold for fMRIs: they may include information that is irrelevant and exclude information that is important but unusual. Irrelevant information may be included because fMRIs show brain areas that are statistically active during a task, while relevant information may be excluded because researchers subtract out patterns of activation observed in control images.

I would add that at most what we should expect from fMRI images are pictures of where the brain states we are interested in can be found, not pictures of the brain states themselves. They tell us that there is something in THAT area of the brain that would figure in an explanation of the task, but they don’t offer us any insight into what that mechanism might be. Knowing that a particular area of the brain is (differentially) active does not allow us to explain how the brain performs the function we associate with that brain area. We need to know more about the activity. Consider an analogy: we have a simple water pump and want to know how it works. We know that pumping the handle up and down gets the water flowing, but ‘activity in the handle area’ does not explain how the pump works. Finding out that the handle is active every time water flows out of the pump should rather lead us to examine the handle with an eye towards seeing how and why moving it pumps the water.

And, as I go on to argue, after examining those areas to find what the actual mechanisms are, neuroscience suggests that it is synchronized neural activity at a specific frequency that codes for the content, both perceptual and intentional, of brain states. So multi-unit recording technology (recording from several different neurons in the brain at the same time) is the right kind of technology for looking at brain states. This is not to say, of course, that fMRI and EEG technology is not valuable and useful. It is, and we can learn a lot about the brain from it, but it must be acknowledged that it is ultimately, explanatorily, useless. To find higher-order thoughts or perceptions we will need to use advanced multi-unit recordings.

The Meaning and Use of ‘is True’

The first thing that we need to do is to make a distinction between the redundancy theory of truth, which is a claim about the use of the predicate ‘is true’ in a natural language, and deflationism, which is a metaphysical claim about the nature of the property picked out by ‘is true’. Usually what you find is that people just use ‘minimalism’ and ignore this difference, though they seem to think that the redundancy theory is true and that deflationism therefore follows (Blackburn is a classic example of this).

The main motivation for redundancy is a collapsing of the meaning/use distinction that is characteristic of Horwich and other neo-Wittgensteinians. If the meaning of a word just is the way that word is used, the function it conventionally plays in a public language game, then finding out how people use the truth predicate and abstracting the rule that defines its function (the T-schema) is finding out the essence of our concept of truth. But there are reasons not to conflate meaning and use (which I won’t go into here). While I do think that people often use the word ‘true’ as a way of communicating that they agree with what they themselves, or someone else, has said, this communicative use of the predicate ‘is true’ depends on its having the correspondence meaning. ‘True’, the English word, means something like ‘being in accordance with the actual state of affairs’, and so it is easy to see how I could use it to express agreement with what has already been said; to say that something is true is to say that it is really the way things are. So in conversation I am able to exploit that meaning in order to indicate that I agree with something that has been said; I am in effect saying ‘yes, that is in accordance with the facts’.

We exploit the meanings of words in this way quite often. Searle (Searle 1969/2001, p. 142) pointed out a similar phenomenon with ‘promise’. Suppose a parent says to their lazy child “clean your room or I promise I will take away your cell phone!” It is very odd to think the parent is actually promising to do anything here, since the thing promised is not something that the child wants the parent to do. In fact this kind of utterance is most likely a threat or a warning. Or consider a professor confronting a student suspected of plagiarism. The professor says “this passage is taken from Wikipedia” and the student says “I didn’t plagiarize! I promise I didn’t!” This doesn’t look like a promise either; how can you promise that you did not do something? It is rather an emphatic denial of the professor’s accusation. How is this possible? It is because the verb ‘to promise’ is one of the strongest indicators of commitment in the English language, and so we adapt it in these cases as a way of indicating that we are really committed. It would be very hard to explain, on Horwich’s view, how the predicate came to have the function of indicating agreement in the first place without appealing to the correspondence meaning that the word has. If this is right then one of the motivations for accepting deflationism about truth falls apart.

What’s so Unobservable about Causation?

What is it that is supposed to be so unobservable about causation? This question has nagged at me since my days as an undergraduate (and there has recently been a lot of discussion of it over at Brains, which inspired me to post on it). It has always seemed to me that the causal relation is entirely observable. We even have evidence that we have been observing it since before we could talk. For instance, from the cradle we witness the passing of the sun behind clouds and the subsequent darkness, we witness objects falling to the ground, we witness that motion is attended with sound, and so on. Nor are these examples special; cases like these can be multiplied indefinitely. We can see water extinguish flame. While walking about I kick a stone and see my foot cause the rock to begin its trajectory. I can see the acid turn the litmus paper red. Pointing a magnifying glass at a piece of paper on a sunny day will cause the paper to smoke and turn brown, eventually catching fire. A small dog biting at my ankles will cause a pinching sensation and perhaps annoyance! I take these to be examples of seeing A cause B, seeing A causing B, and feeling A cause B.

So why think that the causal relation is unobservable? I take the canonical text on this to be Section seven of Hume’s Enquiry. For instance in paragraph six of that section he says,

When we look about us towards external objects, and consider the operation of causes, we are never able, in any single instance, to discover any power or necessary connexion; any quality, which binds the effect to the cause, and renders the one an infallible consequence of the other. We only find, that the one ball does, actually, in fact, follow the other. The impulse of one billiard-ball is attended with motion in the second. This is the whole as it appears to the outward senses. The mind feels no sentiment or inward impression from this succession of objects: Consequently, there is not, in any single, particular instance of cause and effect, any thing which can suggest the idea of power or necessary connexion. (Hume 1999)

No doubt it was passages like this that led to Kant’s rather rude awakening from his dogmatic slumber. It seems to me to have plunged us into one. In the above passage, which is completely typical of that section, Hume seems to be saying that since we cannot see that the connection between these events is necessary, which is to say that we cannot see why what happens is the ‘infallible consequence’ of the first event as opposed to some other event, there is no way that the idea of causation could be taken from anything perceptible in the outside world. It follows from what he says that the causal relation is not observable; something that makes no impression on the human senses would have to be unobservable. The intuition that we have to see the necessity of the causal relation in order to see it at all has guided discussions of the observability of the causal relation. It seems to me that Hume here fails to make a distinction between what we observe and how we analyze what we see. What we observe is a relation; it is when we turn to how to characterize that relation that necessity is involved.

As C.J. Ducasse pointed out, Hume’s claim about the unobservability of causation would be true only under “the assumption that a ‘connection’ is an entity of the same sort as the terms themselves between which it holds…” (Ducasse 1993, p 131). Hume’s mistake was to look for the sense impression of a relation. This we will not find unless relations exist in the same way as the things they relate. But the relation between two objects is not itself a third object! Consider two objects, one to the left of the other. We do not see the relation ‘to the left of,’ but rather see one thing as being to the left of another. It does not even make sense to ask ‘what does the relation ‘to the left of’ look like?’ If asked this question all we could do is describe the position of the objects along with the definition of ‘left,’ and ‘right.’

Any further questioning about what ‘left of’ looks like would betray a category mistake: mistaking a relation for an entity of the sort that it holds between. By way of comparison, imagine Hume arguing that ‘left of’ is unobservable: all we see is object A and object B; nowhere is there an impression of ‘left of’, and so strictly speaking our idea of ‘left of’ is meaningless. But this is ridiculous! The argument seems to work against any relation and so proves too much.

When we see one ball hit the other we observe that the two balls are related, that one causes the other to move, but we do not see that the relation is necessary, even though it is in fact a necessary one. Just as we can see X moving without seeing that X is going 22 MPH, we can see X being 3 ft away from Y without seeing that X is 3 ft away from Y. Despite the fact that I do not see that X is three feet from me, I do see the relation ‘3 ft away from’; I just don’t know that it is this relation. We see relation X qua relation while not seeing that X is such and such a relation. So on the view I am advocating we see A, and we see B, and we see A cause B, but whether or not we see THAT the relation is necessary is a different question.