I just came across Daniel Stoljar’s forthcoming paper A Euthyphro Dilemma for Higher-Order Theories. In it he tries to present a kind of dilemma for the higher-order thought theory, but I find his reasoning highly suspect.
He assumes throughout that the higher-order theory is offering a definition of ‘consciousness,’ which is not exactly right. At least as I understand the theory, it is an empirical conjecture about the nature of phenomenal consciousness and so not in the business of offering a definition. However, if we mean by definition something like what Socrates is seeking, viz., the thing which all conscious states have in common in virtue of which they count as conscious states, then there is a sense in which the higher-order view is after a definition, so I will go along with him on this.
The basic thrust of the paper is that we can ask two questions: one is ‘are we aware of ourselves as being in the state because the state is conscious?’ and the other is ‘is the state conscious because we are aware of ourselves as being in it?’ Obviously the first ‘horn’ is not going to be taken, as it effectively assumes that the higher-order theory is in fact false. The second ‘horn’ is the one the higher-order theorist will take. So, what is the problem with it? Here is what Stoljar says:
Alternatively, if you say the second, that the state is conscious because you believe you are in it, you need to deal with the possibility of being in the state and yet failing to believe that you are. On the higher-order thought theory, the state is in that case no longer conscious. But as before that is questionable. Suppose you are so consumed by the fox that you completely forget (and so have no beliefs about) what you are doing, at least for a short interval. On the face of it, you remain conscious of the fox, and so your state of perceiving the fox remains conscious. If so, it can’t be the case that the state is conscious because you believe that you are in it. After all, you do not believe this, having temporarily forgotten completely what you are doing.
I am not sure how ‘on the face of it’ is supposed to work! It seems as though he is just assuming that the theory is false and then saying ‘ahah! The theory could be false!’ Even if we interpret him charitably, it seems like he is assuming that the higher-order states in question would be like conscious beliefs. Calling the higher-order thoughts beliefs is a bit of a misnomer, since I take beliefs to be dispositions to have occurrent assertoric thoughts. But as long as one means by ‘belief’ something like an occurrent thought, then we can go along with this as well. If one is ‘so absorbed in the fox’ that one forgets (consciously) what one is doing, it does not follow that one has no unconscious thoughts about oneself.
Stoljar recognizes this and goes on to say:
Friends of the theory may insist that you do hold the belief in question. Maybe the belief is not so demanding. Or maybe it is suppressed or inarticulate, not the sort of belief that you could formulate in words if asked. Maybe, but it doesn’t matter. For even if you do believe you are in the state of perceiving the fox, it doesn’t follow that this state is conscious because you believe this. Further, even if you do believe this, it remains as true as ever that, if you didn’t, the state of perceiving would nevertheless be conscious. After all, even if you didn’t believe that you are in the state of perceiving the fox, you would still focus on the fox, and so be conscious of it, as much as before.
I find this passage to be extremely puzzling and I am not sure how to interpret it. There are arguments given for the higher-order theory and this does not address any of them. Further, there is no justification given for the final claim, that even if one did not have the relevant higher-order thought one would still be (phenomenally) conscious of the fox in the same way. What reason is there to accept this? It is just assumed by fiat. So there is no dilemma for higher-order theories here. There is just someone with differing intuitions about what conscious states are.
Stoljar goes on to consider a version of the view that is closer to what is actually defended by Rosenthal. He says:
Rosenthal says you must believe that you are in the state in a way that is non-perceptual and non-inferential (Rosenthal 2005).
This is incorrect. What Rosenthal says is that the relevant higher-order state must be arrived at in a way that does not subjectively seem to be inferential. That is compatible with its actually being the product of inference. But OK, subtle points aside, what is the issue? He goes on to say:
But even this is not sufficient. Suppose again you are in S and an amazing and unlikely thing happens. Before you even open Linguistic Inquiry, you get banged on the head and freakishly come to believe that you are in S. In this case, three things are true: you are in S, you believe you are in S, and you came to believe this in a way that is neither perceptual nor inferential. Even so it does not follow that S is conscious; on the contrary, it remains as unconscious as it was before.
But again, what reason is there to think this? If one is in a higher-order state to the effect that one is in S, and this is arrived at in a way that subjectively seems to be non-inferential, then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.
Stoljar also briefly discusses the argument from empty higher-order thoughts, saying:
It is worth noting that many proponents of the higher-order theory insist on a different response to this objection. They say the belief can be empty but that the state that is conscious exists not as such but only according to the belief, rather as certain things may exist not as such but only according to the National Inquirer. I won’t attempt to discuss this idea here, since it is extensively discussed elsewhere; see, e.g., (Rosenthal 2011, Weisberg 2011, Berger 2014, Brown 2015, Gottlieb 2020). But it is worth noting that interpreting the view this way has the consequence that it is no longer a definition of a conscious state in the way that it is normally taken to be, and as I have taken it to be throughout this discussion. After all, a definition of a conscious state either is or entails something of the form ‘x is a conscious state if and only if x is…’. This entails in turn that the state that is conscious must turn up on the right-hand side of the definition. But if you say that something is a conscious state if and only if you believe such and such, and if the belief in question does not entail the existence of the relevant state, then the state does not turn up as it should on the right-hand side; hence you have not defined anything.
But again, this is incorrect. According to Rosenthal the state which turns up on the right-hand side is the state you represent yourself as being in; whether or not one is actually in that state is irrelevant!
There is a lot more to say about these issues, and about other issues in Stoljar’s paper, but I have to help get the kids their lunch!
Thanks again for those comments Richard. I think the main disagreement between us concerns the second horn of the Euthyphro dilemma for HOT, the one that asks whether a state is conscious because you believe you are in it. I’ll start with that and then respond more briefly to the other things you say.
Regarding the second horn, you say “he is just assuming that the theory is false then saying ‘ahah!’” I don’t think you can brush the issues aside that easily. Here’s a way to think about it. Take a case in which you not only perceive the fox at the end of your street but you are completely absorbed in the fox and what it is doing. You attend to the fox, you are interested in the fox, you focus on the fox, you are engrossed in the fox, etc. I think we all agree there can be cases like this. Now, does it follow in such a case that you must believe *that you are perceiving the fox*? I don’t think so. You might believe this, of course, but you might not. After all, you are interested in the fox, and you may very well form lots of beliefs about it. But whether you also believe something about you, namely that you are perceiving the fox, is an open question. Moreover, even if you did believe that you are perceiving the fox, it is not true that you are engrossed in the fox because you believe that you are perceiving the fox. You would remain engrossed in the fox whether or not you believed this.
What has all this got to do with HOT? Well, if you perceive the fox in the way I described you are conscious of the fox. I am not saying that you are conscious of perceiving the fox. That is a different matter. I am only saying you are conscious of the fox. If so, in the imagined case you are in a certain kind of conscious state: you perceive the fox in a certain way, and in doing so, you are conscious of the fox. The state of perceiving is itself a conscious state. On the other hand, while your state of perceiving the fox is conscious, it is not conscious because you believe that you are perceiving the fox. That is the second horn of the dilemma.
To respond to this horn, you must say either (a) that you can’t perceive the fox in this absorbed way; or (b) that to perceive the fox in this way is not to be conscious of the fox; or (c) that to be conscious of the fox is not to be in a conscious state; or (d) that you are conscious of the fox because you believe you are perceiving a fox. None of these is plausible in my view.
Notice that none of what you say responds to this problem. Nothing here “assumes that the higher-order states in question would be like conscious beliefs”. Nothing here improperly ignores the arguments for the higher-order theory—the question is whether the theory is true, not what the arguments in favor of it are. Nor am I assuming from the start that the higher-order theory is false and saying ‘ahah!’ I am giving an argument, and at the end of that argument I may say ‘ahah!’ at least to myself, but that’s different. :)
Here are some briefer reactions to other parts of what you say.
On the first horn, you say it too assumes that HOT theory is false. That depends on what the theory says. If the theory is the ‘if and only if’ claim, then that might be true even if the direction of explanation goes the way the first horn says. Actually, separating ‘if and only if’ issues from the direction of explanation issues is part of the point about the Euthyphro problem.
On inferences, thanks for pointing out that David Rosenthal’s version of HOT does not strictly require that the higher-order belief be arrived at in a non-inferential way, only that it be arrived at in a way that is not consciously inferential. I had missed that aspect of his view in the paper, and I’ll fix it. Still, I don’t think that alters the substance of what I said.
On following linguistic rules, you say “If one is in a higher-order state to the effect that one is in S [i.e. the linguistic rule] and this is arrived at in a way that subjectively seems to be non-inferential then according to the theory one will be in a conscious state! That is just what the theory claims. So there is no need to use introspection in the way that Stoljar claims.” Yes, that is what the theory (in one version) says, but repeating that doesn’t respond to the problem about linguistic rules. Here’s the problem again. Take a case in which you know some complicated linguistic rule but unconsciously: you can’t attend to the rule, you can’t formulate it in words, you can’t teach it to anybody in the normal sense of ‘teach’. Now suppose that you come to believe that you know the rule by some freak accident, a blow to the head say. Of course that is incredibly unlikely, but if the HOT theory is a definition it is not out of bounds to ask what would be true in such a case. So let’s ask: must the state be conscious in such a case? I think that is extremely implausible. The higher-order belief is formed in the wrong way. The case is analogous to one in which you come to believe that you know the rule by inference from what a linguist says. HOT people want to insist quite correctly in such a case that knowing the rule is not conscious even though you believe that you know the rule. That’s because the higher-order belief is formed in the wrong way. Freakish blows to the head are likewise the wrong way to form the relevant belief. I think what explains both cases is that you didn’t form the higher-order belief by introspection. And that runs straight into the problem I outlined in the paper.
On the empty higher-order thoughts, I think you need here to distinguish between two versions of the HOT theory. One version says or entails this: X is a conscious state of yours if and only if X is such that you believe you are in it. The other version says or entails this: X is a conscious state of yours if and only if you believe that you are in X. The first version of the view doesn’t face the empty higher-order thought problem, because, on that version, there are no empty higher-order thoughts of the relevant sort. That is effectively the point I made (unoriginally) in the paper just prior to the passage you quoted. The second version of the view allows that there are empty higher-order thoughts, but the problem for this view is that here the RHS and the LHS of the biconditional have different entailments: the LHS entails that X exists, the RHS does not. If so, the HOT theory on that version can’t be a definition, at any rate not a successful one!
Hi Daniel, thanks very much for this response (and sorry if that means you are away from that beautiful view)! I am definitely glad that people are thinking and talking about these kinds of issues, but I am still somewhat confused by what you think the argument against the higher-order theory is supposed to be.
I thought that what you were trying to produce was an example of a case where we have a conscious state but do not have the relevant higher-order awareness. Your example, being deeply engrossed in the perception of the fox, is supposed to be a plausible example of this kind of thing because in that case one doesn’t seem to be in any way aware of oneself, but seems to be wholly engrossed in the perception of the fox. According to the higher-order thought theory one is in a conscious state when one is deeply engrossed in one’s perception of the fox (the way you described it clearly indicates that this is a conscious experience) and so one will instantiate the appropriate inner awareness in the form of an occurrent thought to the effect that one is deeply engrossed in the fox (or one that represents the first-order states in such a way as to have this be the effect). You say that this is implausible, but what is the reason to think so? That it doesn’t consciously seem this way to one when one has the experience? (Incidentally, this is why I thought maybe there was something about this being conscious going on in the background.)
I think one could resist here and suggest that there is a sense in which one is still aware that these experiences are one’s own, so to speak. Maybe it is peripheral (as Uriah might suggest) or in the background, but phenomenologically this has some merit. Even if one denied that there was this kind of phenomenology to rely on, it could still be the case that one has the relevant kind of higher-order thought and that the content of that higher-order state accounts for what it is like for one at that moment; and because of the content of the higher-order state it would not seem to oneself that one was actually aware of oneself (when in fact one was). It would seem to you as though you were deeply engrossed in the perception of the fox. Why is this implausible?
The theory says that the way in which these kinds of higher-order states describe your mental life is exactly what it is for there to be something that it is like for you to be in the relevant states. So in this case there will be states which describe oneself as being deeply engrossed in the perception of the fox (or describe one’s mental life in such a way as for this to be the result), and so that is what it will be like for one. You may find this implausible, but that may reflect your differing intuitions about consciousness to begin with. That is why I view this as ultimately an empirical matter. If we could somehow test the claim that you are making using something akin to fMRI, say by decoding the content of the deeply engrossed conscious state from an area already thought to be implicated in the appropriate higher-order states, then we would have some evidence for it (or against it, depending on the test). But none of this results from anything in the ‘dilemma’.
To sum up: the second horn of the dilemma just requires that the higher-order theorist insist that there is the relevant higher-order state and this is something that could be tested empirically.
Thanks, also, for the comments on those other aspects of the discussion. On your comments about the linguistic rule: I would want to say that the rule is conscious in such a case. Having the relevant higher-order state ‘show up’, subjectively, from one’s point of view, is what it is for there to be something that it is like for one to be in the relevant first-order states. That is, there being something that it is like for one to consciously see red is just for one to be aware of oneself as seeing red in a way that seems subjectively unmediated by inference or judgment. In the linguist case there will be evidence that one arrives at the relevant higher-order state in a way that seems subjectively to be mediated. In the hitting-on-the-head case this isn’t the same. In that case the higher-order state pops into existence, so to speak, and so from one’s point of view it will seem to be unmediated. The state will have some content, and what it is like for one will be as described by that content. Maybe some higher-order theorists would resist this, but I don’t think that is consistent with the explanatory goals of the theory.
Finally, in response to what you say about the empty HOT case:
I think this way of framing the issue is a bit tendentious. I have a paper with Jake Berger on this, which hopefully will be coming out some day, if you are interested in more detail. The basic point is that it is an unargued assumption that the LHS entails that x exists. As I understand the traditional higher-order thought theory, a state’s being conscious is a property that a state has in virtue of there being the right kind of higher-order thought, to the effect that one is in that state. So, the state which is conscious is the one which is described in the content of the higher-order state, and that description may or may not be actually veridical. But it will be satisfied by the same first-order state in either case. That is, when the higher-order state is veridical it is because a certain FO state is actually tokened, and when the higher-order state is ‘empty’ it will be because that same FO state was not actually tokened. So, it is the same state that figures in the truth-conditions of the relevant higher-order state, and that is what it means for that first-order state to be a conscious state. It is just for that state to figure in the subject’s stream of consciousness, and this happens because one represents oneself as being in that FO state. Thus for the state to be conscious does not require the state to be actually tokened. All that need be tokened is the relevant higher-order state, and yet the non-tokened FO state is the conscious state (where, again, all this means is that it figures in the content of the higher-order state, and by that we mean that a concept of the state is deployed in the higher-order state).
Different versions of this kind of higher-order thought theory will describe the same situation in different terms. On my way of thinking (what I have called the HOROR theory), it is the higher-order state itself which is phenomenally conscious, and so the FO state is conscious in a way we could call ‘state conscious’ but not phenomenally conscious; but that is not the view you were targeting.
Hi Daniel and Richard,
Thanks so much for the paper and for the interesting exchange.
Although Richard just responded, here’s a different way of putting why I think he initially thought the second horn at bottom begs the question against HOT theory.
I agree that there may be cases when you’re engrossed in something—say, absorbed in perceptually experiencing a fox. But what’s the *independent* reason to think that your conscious state of perceiving is not conscious because you believe (or are having the suitable higher-order assertoric thought) that you are in the relevant perceptual state? As Richard says, HOT theorists give reasons for thinking there is such HO awareness, even in those kinds of cases. So what’s the reason to think there needn’t be such awareness?
Here’s a reason: introspection. It doesn’t feel like you believe anything about the perceptual state. After all, you’re really absorbed in the fox. But HOT theory has an explanation of that, which Richard alludes to: suitable HOTs are themselves typically unconscious. On the view, we are not typically aware of the HOTs in virtue of which our perceptual states are conscious. So having an absorbing perceptual experience of a fox is a function of having an unconscious thought—that is, a thought you’re not subjectively aware of having—about your perceptual state, and likely no conscious thoughts about anything else. To deny that this is a possibility, without independent reason, does seem to me simply to beg the question against HOT theory.
Now, you might think that you can be so absorbed in the fox that you can’t have any thoughts at all about your mentality, conscious or otherwise. I’m not sure how one could know that. But even if we grant that possibility, to insist then that the perceptual state remains conscious again seems to me to beg the question against HOT theory.
It may seem that such a state must be conscious. Daniel, you write in your reply that in the fox case “you are in a certain kind of conscious state: you perceive the fox in a certain way, and in doing so, you are conscious of the fox. The state of perceiving is itself a conscious state.” But these remarks run together Rosenthal’s distinction between what he calls ‘transitive consciousness’ and ‘state consciousness’. It’s true that, if you’re really absorbed in the fox, you’re conscious or aware of the fox—and in that way your perceptual state makes you *transitively* conscious of the fox. If we like, we can say that the state is a “conscious state,” but then it’s a transitive-conscious state. But it’s an open question whether or not that perceptual state must also be *state* (or phenomenally) conscious.
There are good reasons to think that not all transitively conscious states (states that make us aware of things) are themselves state-conscious states. If there were no distinction, then there’d be no unconscious or subliminal perception. Subliminal perception makes people aware of stimuli (that are, say, masked), but without people being (transitively) aware that they are in those states or there being anything that it’s like to be in them—that is, without those states’ being state or phenomenally conscious. Moreover, HOT theory seeks to explain state consciousness in a non-circular way in terms of transitive consciousness—it holds that what it is to be in a state-conscious state of seeing a fox is to be suitably transitively conscious of oneself as seeing a fox via a (typically not state-conscious) HOT. So to assume there is no distinction again begs the question against HOT. In other words, one possibility is that, if you’re really absorbed in the fox, you’re conscious of it, but subliminally so. That’s in a way to accept option (c) that you offer: that to be [transitively] conscious of the fox is not to be in a [state-]conscious state. In any case, I think the case you have in mind is more like the first situation I described—where you perceptually experience the fox by having the suitable (unconscious) HOT.
I hope this helps clarify the discussion some.
Cheers,
Jake Berger
Just saw Jake’s post after posting mine–hi Jake!
Again, I agree with most (all) of what you say here, but you raise one point that I want to follow up on and use to modulate my post: I said that to be “aware of” =/= to be “conscious of”, since awareness needn’t be conscious. Here I was conceding, for the sake of argument, that being “conscious of” x implies being consciously aware of x.
What was in the background of my thought, however, is the distinction between state and transitive consciousness that you bring up. We may indeed use “conscious of” in such a way that it doesn’t entail being in a conscious state. In that case we can reject option (c) in Daniel’s quadrilemma (just had to look that up to be sure, yes, it’s a word) as well.
I find it most perspicuous to reserve the term “conscious of” for phenomena that involve subjective awareness, and to speak only of “awareness” in the case of “transitive consciousness” without state consciousness, but this is terminological.
Hey Richard and Daniel–thanks for an interesting discussion so far! I’ve just begun reading Daniel’s paper but I wanted to jump in to add my 2c on the exchange here.
While I’m pretty much 100% on board with Richard’s answer in the comment, I’d like to reply a little more directly to the following:
“To respond to this horn, you must say either (a) that you can’t perceive the fox in this absorbed way; or (b) that to perceive the fox in this way is not to be conscious of the fox; or (c) that to be conscious of the fox is not to be in a conscious state; or (d) that you are conscious of the fox because you believe you are perceiving a fox. None of these is plausible in my view.”
I’d go for (d), though with an important qualification: you’re not conscious of the fox *solely* because you believe you’re perceiving a fox. The locution “conscious of” is an interesting hybrid: at least as we’re using it here, it implies (phenomenal) consciousness, which is a question of how things appear subjectively, but also genuine relation to an external reality (the thing one is conscious of). So, under HOT-theoretic assumptions, to be conscious of x involves:
(1) transitive awareness of x (for example via perception),
(2) thinking of oneself (in the right way) as standing in such a relation of awareness to x.
Now, it’s important to note that the awareness in (1) needn’t be conscious (to be “aware of” =/= to be “conscious of”, since awareness may be unconscious as in blindsight etc)–and the relation of awareness that figures in (2) needn’t exist ceteris paribus since the HOT may be empty.
The intuitive implausibility of option (d) above stems from supposing that (2) alone is sufficient for “consciousness of”, where the latter is being used in a clearly relational sense, but the HOT theorist needn’t (and I think wouldn’t) commit to that.
There is a residual worry which ties directly in to the issues about empty HOTs and existence entailments that you’ve been discussing (and which are my favorite part of this whole debate ;), namely: on what grounds, if any, can we identify the x in (1), which must really exist, with the “x” in (2), which need not? (see Pete Mandik’s “Beware of the Unicorn”).
Answering this question requires a satisfactory reply to Mandik’s argument, and touches on deep issues in psychosemantics that probably can’t be hashed out here. But basically: the fact that the “x” described in one’s HOT (as the x that one is perceiving) *needn’t* exist doesn’t entail that, when it does exist, the HOT isn’t about that particular x. That’s easy to say, but to justify it, one needs a theory of mental representation according to which: to represent x is fundamentally to represent a possibility, and in cases in which that possibility is actualized, one thereby represents an actuality.
I think it’s plausible that descriptivist, functional-role-style theories of content, which I’d say we need for independent reasons anyway, can secure this result. I won’t argue that here, but I think it’s what’s ultimately required in order for a version of HOT theory that allows empty HOTs to work.
I do not presume that Richard would agree with me on this latter point, by the way, as I know we have different views on psychosemantics!
Thanks all again for those very helpful comments. I’m very glad Alex brought our attention back to the quadrilemma I mentioned (though not under that name!), because when I first read what Richard and Jake said I kept asking myself “okay, but which option do they take in the fox case?”
Jake: You rightly bring up two different notions of consciousness, transitive and state. If transitive consciousness is in play, then (I take it) you will agree with me on my description of the case—it is just that you think this isn’t a problem since transitive consciousness isn’t the relevant notion. On the other hand, if state consciousness is in play, you will say (b) or (c), or perhaps both.
My response is to insist that transitive consciousness is relevant. You say it isn’t because every mental state is conscious in that sense. That (I think) is what Rosenthal says and proponents of the first-order view sometimes say it too (e.g. Dretske). But I don’t myself think that is right. In the paper I pointed out that the higher-order theory and the first-order theory can be formulated in ways that make them look incredibly similar. On the higher-order theory, your perceiving the fox is conscious if and only if you are aware of your perceiving the fox in a certain way. On the first order theory, your perceiving the fox is conscious if and only if you are aware of what you perceive (i.e. the fox) in a certain way. So even for the first-order theory, what is important for consciousness is not merely being aware of the fox but being aware of it in a certain sort of way. Once you have a theory with that structure, it is open to you to distinguish between being aware of a fox, and being conscious of a fox. If so, not all mental states will be conscious in the transitive sense.
Alex: In your comment on Jake, you said you’d plump for (c), and on similar grounds. If so, then I hope what I just said would apply. In your initial comment, however, you said you’d plump for (d): you are conscious of the fox because you believe you are perceiving the fox. You go on to say that other things too may be required to be conscious of the fox; fair enough. However, at least if we are talking about a definition of a conscious state, (d) seems incredible to me. Maybe we are hearing it differently. I hear it as entailing that, in the case in which you perceive the fox in the absorbed way I describe, it is metaphysically necessary that you believe that you are perceiving the fox. That is, there is no possible situation, no matter how remote, in which a subject can perceive a fox in this absorbed way, and fail to believe that they are perceiving the fox. That’s what seems to me to be incredible. And in fact, not only is it incredible, it doesn’t seem to fit with the general mood of the higher-order theory that emphasizes that we can be in lots of mental states without believing that we are.
Richard: I take it you, like Alex in his original post, want to say (d). What is holding me up a bit is that you describe the fox example as one in which you are deeply engrossed in a *perception of a fox*. That is not the case at all. The point is that you are deeply engrossed in the fox, not a perception of a fox. (Being deeply engrossed in a perception of a fox would be something else entirely, as I understand things.) So the key issue for (d) is whether the following is at least metaphysically possible: you are deeply engrossed in the fox and yet you do not believe that you are perceiving the fox. It seems to me that this is certainly possible. If so, and if the HOT theory is true, it must be that being deeply engrossed in the fox is not conscious. But as you yourself say, the way I describe it “clearly indicates that this is a conscious experience”!
There are loads of other interesting things you guys raise apart from this. I’ll have to think about them further.
Thanks everyone for these really interesting comments and discussion!
Daniel, I agree that wasn’t how I should have put it. I should have said that you are deeply engrossed in the fox, but what I said in response will still go through. One could still insist that even in these cases there is, phenomenologically, a kind of peripheral awareness, or one could posit content for the relevant higher-order states that would explain why it seems this way to us.
In response to the key issue:
I think some people in this debate might be sufficiently Quinean in their perspective that they may balk at talk of necessity in this way (as long as it is true of our world, then what else is there?), but I am OK with it. But how can we tell if this is metaphysically possible or not? If the conscious experience consists in the appropriate higher-order state, then that is what it will be necessarily. Thus if the theory is true, then any world where one has this exact experience will be one where there is a suitably instantiated occurrent thought to the effect that one is in a certain perceptual state. So to know whether what you think is ‘certainly’ metaphysically possible really is, we need to evaluate the claim made by the higher-order theory. Some may think that the transitivity principle is a priori (as I think Gennaro does) or a folk-psychological platitude (as Rosenthal does), but I take it as an empirical conjecture. For us to find out which is metaphysically necessary (either that this experience does or does not depend on some kind of higher-order awareness), we will have to wait until we can check these empirical predictions.
I think the same happens for your first-order view. Whatever the right way of being aware of the fox turns out to be, it will be necessary that when one has this experience one is in a state of that sort, and I can seemingly conceive of that state being instantiated in that way without being conscious, and in such a case one wouldn’t be inclined to call it a conscious state.