Where is Consciousness Located?

If I am not making any genuine (free) choices as an agent when I decide to believe something, then my belief is just an attribute of the universe and says nothing of the truth or falsehood of that proposition.

But how can I freely choose to believe something? Belief is not a choice; it’s compulsion. I am compelled to believe what I think is true. For example, I cannot freely choose to believe in the existence of unicorns. The only alternative to determinism is randomness. If nothing causes me to believe something, then I hold my beliefs for no reason. Why would that be any better?

I think Rauser is incorrect about the requirements of rationality here: what rationality requires is the ability to choose the conclusion because (i.e. for the reason that) it bears the correct logical relationship to the premises. This requires the human reasoning process to involve intrinsic teleology in a way that no mere physical system can: physics makes no reference to reasons, so any "reasoning" whose ultimate cause is determined by physics isn't actually discriminating on the basis of reasons; and the reductionistic accounts of intentionality that try to get around this have all failed. Even if you go the Aristotelian route and bring teleology back into material things, there are arguments within that metaphysical system that the intellect must be immaterial.

If belief is compelled, then we bear no responsibility for what we believe. Then (for example) it’s not a young-earth creationist’s fault for not accepting the evidence for an old earth, right? They simply weren’t compelled by it.

Introspectively, it seems to me that we do in fact have a choice over our beliefs - when we see sufficient reason for a belief we can choose to accept it or not.

I have a series of blog posts starting here: The Realm of the Mind, arguing for dualism regarding the mind-body problem. And since I’ve recently read his book which touches on this, I might as well point towards Ed Feser’s blog, where a search for “arguments for dualism” will turn up a lot of interesting posts.

Correct. I don’t think that anyone can be held accountable for what they believe.

Introspectively, it does not seem to me that we have a choice over our beliefs. I cannot simply "will" myself to believe something. If you can choose to accept a belief as long as you see sufficient reason for it, then theoretically, you could also choose to accept a belief for which there is no verifiable evidence. But I bet you can't. I am going to type in a proposition:

Unicorns exist.

Now choose to accept the truth of this proposition. You can't. Maybe if there were some evidence to back it up, you would accept it. But notice how we just established a causal link: if you believe a proposition because you recognize its internal consistency and its solid evidential basis, then that recognition is what causes you to believe it.

I don’t see any reason to accept that premise, so your argument against my position doesn’t work.

Necessary condition =/= sufficient condition.

You clearly sidestepped the issue. It is obvious that we don’t “choose” what convinces us. Either we are convinced or we are not and this is entirely beyond our control. You wrote:

when we see sufficient reason for a belief we can choose to accept it or not.

When I choose to believe in a proposition, does something cause me to believe it or do I choose to believe it for no reason? If nothing causes me to believe in it, then my “choice” constitutes nothing more than a chance event.

I didn’t sidestep anything. You put forward an argument; I pointed out that you had an unsupported premise, so your argument didn’t go anywhere.

Depending on how you’re defining “convincing”, I don’t think that is obvious. If by “convincing” you mean rationally compelling belief, then sure, we can’t choose what convinces us - but that doesn’t mean we lack control over any of our beliefs, because not all of our beliefs are rationally compelled in that way. If by “convincing” you mean whatever serves as the basis for accepting the belief, then I would say there actually are cases where we can choose what convinces us.

False dichotomy. In cases of voluntary beliefs, I cause me to believe it (I am not merely determined to by my environment or neurological state), and I do so for the reasons that I see for the belief (which are not the efficient cause but the teleological final cause of my belief).

Do you have some examples of cases where we can choose what convinces us?

And how do I cause myself to believe in a proposition? This confuses me; you seem to believe that when the will takes action, nothing causes it to take that action, but the taking of that action is not wholly random either.

Please note that I only said we can “choose what convinces us” if “convince” is defined in a certain way, and I only noted that because your question to me presupposed (falsely, in my view) that there is something that “convinces” us for every belief that we have, determining us to believe it. I don’t think that’s the best way to phrase the issue. So, instead, I’m going to answer as if you’ve asked me for an example where we can choose what to believe.

Here’s a self-referential example, for kicks. Suppose you have been reading the philosophical literature on doxastic voluntarism, because you want to decide whether involuntarism or some form of limited voluntarism is true. (We both agree that unlimited voluntarism is obviously false, so there’s no need to keep that on the table.) And the reason you want to determine which of these is true is because there is a moral decision you have to make which rests on the answer (deciding whether someone is culpable for some action which they performed because of a false belief they held, say). Now, in your reading, you’ve made yourself aware of the arguments for and against, and you think there are some good arguments on both sides, so that you are not compelled by the arguments in either direction.

My claim is that, in this situation, you are able to choose to believe the arguments for or against the position - not just to make a decision as if one side or the other is true, but to actually come to believe one side over the other, even if you believe it without complete certainty. I base this claim on introspection of my own experience of deliberating about beliefs. And this is pretty much analogous to situations of deliberation over a decision more generally - i.e. cases where I believe we exercise our free will.

Correct. I characterize libertarian free will as the power, in some circumstances, to act as an uncaused “first cause” of a series of events (uncaused in the sense that one was not caused to act). But even though I am not caused to act, the act of will is not random, because it has intrinsic teleology - it was done for a purpose; it has a “final cause” in Aristotelian terms.

I’m not sure I understand the point. Of course unicorns exist.

No, this is not sarcasm or a joke. There are many types of unicorns and many types of existence. For every type of existence, I can see at least one type of unicorn. For all types of unicorns, I can see at least one type of existence. Unicorns, therefore, most certainly exist.

What exactly is your point @Ignostic?

I have not really staked a position on determinism, but it sounds like you are suggesting you must be irrational to have free will. Conversely, rationality is deterministic.

@structureoftruth So this only applies to the fence-sitters? Only those who are not yet compelled in either direction can make truly free choices? Well, I don’t share your sentiment. I don’t think there’s any freedom in that - not even in this particular situation. The decision to believe one side over the other is ultimately determined by social background, previously held beliefs, one’s current mental state, and so on. A bit of chance may also be involved.

The central question is: Does something cause an agent to act, or does he act for no reason? If “it was done for a purpose”, then something seems to have caused the agent to act, e.g. a certain goal that he wanted to achieve. This goal in turn is predetermined by other factors beyond his control. I still see no room for free will here.

@swamidass Let me rephrase this. An atheist does not believe in the existence of a god or gods. An atheist cannot simply will himself to believe in a god. He cannot be forced, and he cannot force himself, to believe it. This is because belief is not a choice, the same way that love is not a choice. Note that this is also a common objection to Pascal’s Wager.

@RonSewell No. You don’t have to be irrational to have free will. I just explained the causal link between a belief and the individual requirements that must be met in order for this belief to be sustained. Likewise, if you love someone because you recognize their physical attractiveness, then that recognition of attractiveness, and your attraction to attractiveness, is what causes you to love them. In that case, your love is a deterministic response to a stimulus.

Why change the topic? I’m still trying to figure out what you have against unicorns.

The same is true of gods. For example, you certainly believe gods exist in Greek mythology, which in turn exists in the physical world we all inhabit. So you certainly believe that the Greek gods “exist” on some level or another.

I’m not just toying with word games here. But I’m losing hold of your point.

As for belief not being a choice? Perhaps. There is some legitimacy to the maxim, “seeing is believing and believing is seeing.” I think a more interesting verb to consider is “trust.” I’m not sure belief in God is half as salient as trust in him.

If a decision were compelled, it wouldn’t be free. So, obviously. I’m not entirely certain if your characterization here really captures what I’m getting at, though. Going back to what I said before: if someone is presented with sufficient reasons for a certain proposition, they are able to believe it. And in some cases if the reasons are overwhelming, they become unable to disbelieve it.

And I don’t think you have any evidence that these conditions determine belief, rather than merely influencing it. Moreover, I am persuaded by philosophical arguments that rationality requires free will, so in my view your deterministic conclusion cannot be justified even in principle. (I write about some of this in my blog, linked a few posts back.)

Rather than showing that there is no room for free will, what you are doing here is simply presupposing that a necessary condition for free will (intrinsic teleology or final causality) does not exist, by asserting that purposes must be part of the order of efficient causes.


@swamidass Yes, gods exist in an abstract sense in Greek mythology. But they are mere imaginations of the human mind, not actual entities.

@structureoftruth In your post on The Nature of Causation, you wrote:

In self-deterministic causation […] the cause is an agent, someone who acts for reasons, goals, or purposes of their own.

Here, you are just pushing randomness/determination off a step. I can still ask: “Does something cause the agent to have purposes of his own? Or does he pursue his goals for no reason?” If nothing determines the choices of agents, how is that different from their being random? This is what Randolph Clarke (2017) called “The Problem of Present Luck.” He explained it like this:

We’ve imagined that, in actuality, S decides at t to A. Let us imagine, further, that just prior to t, and following careful consideration, S had judged that it was best to A straightaway. Still, we imagine, the decision at t to A is not determined by anything prior to t. There are, then, possible worlds with the same laws as the actual laws and the same pre-t history as the actual world—including the same considered judgment—in which S doesn’t decide at t to A. Let us suppose that in some such world, W, S decides at t not to A. Consider the difference between the actual world, in which S decides at t to A, in accord with her considered judgment, and world W, in which S decides at t not to A, contrary to her considered judgment. Isn’t this difference between the actual world and world W just a matter of luck? And if this difference between these worlds is just a matter of luck, does it not seem that S’s decision can’t be one in the making of which S exercises free will or one for which S is morally responsible?

Yes, he can be caused to have certain purposes by his physiological constitution, earlier free choices, or other influences… but these purposes don’t cause him to act on them.

No, he pursues his goals for the sake of achieving those goals. It’s different from being random because of the teleology of the free choice.

Narrowing in on what Clarke says here:

Why is it a matter of luck? He just assumes that it is. But I don’t even have to accept this scenario - it may be the case that S’s judgment that it is best to do A straightaway causes S to do A, but that the judgment itself was a free choice (S could have weighed the alternatives and judged differently).

Yes, the agent pursues his goals because he wants to achieve them. This is obviously correct. But it does not explain why the agent has these goals in the first place. As Schopenhauer observed: “A man can do what he wants, but not want what he wants.” I am free to do whatever I desire. But I am not free to choose my own desires.

If, under the same circumstances, S could have judged differently, then I don’t see the distinction between a choice that is determined by an agent (in this case, S) and a choice that is the result of probability/chance. This is the whole point of Clarke’s objection: there seems to be no practical difference between a chance event and the choice of an agent.

But that’s not what I said, and depending on how you mean it, it’s an assertion that I would deny.

The agent does not pursue his goals because he wants to achieve them. (I am denying that desires causally determine free choices.)

Rather, the agent pursues his goals in order to achieve them. (His desires are “final causes”, not “efficient causes”, of his action.)

It’s the difference between something that the agent does, for reasons he judged worth acting on, and something that merely happens to him.

If I am choosing between two houses to buy, the fact that I choose to buy house A (perhaps because it is cheaper) doesn’t mean I was causally determined to do so - I could have chosen house B (because it was in a nicer location, say) instead. But the fact that both choices were possible doesn’t make them random - the stated reasons, and the fact that it was me who made the choice, make it decidedly non-random in a meaningful sense.

I have never been convinced that this is not a false dichotomy. Substitute “reason” for “desire”. It does not go without saying that reason is constrained by material or psychological determinism; rationality and creativity are a challenge to define in those terms. Your rationality frames your taboos, your aspirations, and your values. How can it not have a role in channeling the choices you ultimately make? I do not believe rationality can be fully reduced to instinct and desire.