The Dennett article I linked above is interesting in this regard, in that he speaks of an “intentional stance” that is commonly used to understand and predict how many entities will behave even though we do not usually think of them as conscious. This involves treating the entities as if they are rational beings that have beliefs, desires, goals, etc. A chess-playing computer would be one example. A unicellular organism moving towards food or away from a toxic substance would be another.
While I do not believe Dennett is, himself, an eliminativist WRT intentionality, there is an idea worth at least putting on the table: Is it possible that intentionality in all cases is nothing more than a theory or model we use to help understand how things around us will behave, and has no existence beyond this? And that this would apply even to our own minds and behaviors? That is one way, rightly or wrongly, that I understand the eliminativist position.
No, that isn’t what I said. An intent is an instance of intentionality (in the philosophical sense), but not all instances of intentionality (in the philosophical sense) are intents.
That depends. Some philosophers would construe intentionality more narrowly, and always take it to be in some way associated with semantic content, i.e., the kind of content we express through language. I would say we do not have evidence for this kind of intentionality in organisms other than humans (though if any other species displayed capacity for abstract thought or use of language comparable to humans in complexity and flexibility, we would have evidence of such).
On the other hand, some philosophers construe intentionality more broadly, and take many states that we do not typically consider to have semantic content to be intentional - e.g., they might say perceptions are intentional by being in some sense directed at the object of perception. So this kind of intentionality is associated with consciousness, and any kind of organism with consciousness would exhibit it.
I would say not necessarily. If there can be non-conscious mental states (e.g., if you still believe X even when you aren’t consciously contemplating X) then it seems there can be non-conscious instances of intentionality. (Though maybe something needs to be able to be conscious to manifest intentionality.)
And we’re back to the need to define intentionality in order to make sense of all this.
It seems to depend on the definition of intentionality, and there we are again. In all the preceding argument, what has been your implied definition? How about that?
In any sense, you seem not to have answered my question. Do any other species display intentionality, by whatever definition you are applying here? Given your argument, is this equivalent to asking whether other animals have souls?
Well, I can make sense of it: “a dictionary with intrinsic meaning” would be a dictionary that had meaning (i.e., it meant something) intrinsically (i.e., without reference to anything else). I just don’t think a dictionary is the kind of thing that can have intrinsic meaning.
Codons don’t have semantic content; “meaning” is applied to them analogically and refers to the amino acids they get linked up with. So you could say that codons have derivative “amino acid content”. But the amino acids don’t have derivative “amino acid content”; they have their “amino acid content” intrinsically, just by being the amino acids that they are. And the only way that codons can have derivative “amino acid content” is if they get linked up by a chain of interactions to something with intrinsic “amino acid content”, namely, some amino acid. (*)
In the same way, the only way for something to have derivative semantic content is if that content is derived from something that has it intrinsically; stated briefly, the analogy is:
words : codons = thoughts : amino acids
(*) If you want to get fancy, you could say something like “they have a disposition, due to their chemical structure, to be involved in a chain of interactions, in the context of the translation system, that would link them up to an amino acid.” That doesn’t affect the point.
Going back to your objection about a dictionary having intrinsic meaning:
There is no problem here for attributing intrinsic meaning to thoughts, precisely because they don’t exist in the absence of anyone who understands them - rather, they exist as mental states in a thinking subject.
In his article that I’ve linked several times now, Feser goes beyond arguing that thoughts with determinate meaning are immaterial; he also argues (along similar lines, and while making the distinction that I mentioned all the way back near the beginning of this thread between intellectual activity and merely conscious activity) that thoughts, concepts, etc., are not reducible to mental images (or anything else we might take to have derivative semantic content). So there’s a parallel argument here, sharing Premise 1, that something with intrinsic semantic content exists.
That sounds like eliminativism about intentionality, actually, as does some of what you say later in the post.
I am claiming that there is more to our thoughts than just such mental images - over and above those images, our thoughts have semantic content that is not reducible to them. And I believe that claim is sufficiently justified simply by knowing the meanings of our thoughts, but especially so in light of the arguments that concepts and/or semantic contents are distinct from mental images and that something with intrinsic semantic content exists. Again, see Feser’s article for more on that.
The question is this: if the meaning of your thought about the cat is determined by the causal connection between the cat and your brain, what feature of that causal connection distinguishes it from the other myriad causal connections going on so that it constitutes a meaning rather than them? (e.g., to make it so that you are having a thought about the cat, rather than about the source of illumination for the cat; or to make it so that you are having a thought about the cat, rather than your eyeballs having a thought about the cat).
I think “that’s how your brain interprets the light” would be trying to explain things in a circle, but in the end you do seem to concede the point and head in an eliminativist direction.
Given that I don’t think books have intrinsic meaning, this isn’t a problem for me. What you want to ask is what makes the intrinsic meaning of my thoughts mine instead of attaching to anything else. The short answer is that my thoughts are features of me and not of something else; the long answer is hylomorphic dualism (or some related view).
I deny that beliefs and the like are theoretical postulates (even “folk”-theoretical) put forward to explain anything, but rather are themselves features of our subjective experience that we can know through introspection. In fact, I have no idea of what else you could mean by intentionality “as a mental phenomena that we subjectively experience” other than that. So I suppose you could say that I deny there is the distinction you claim here between phenomenon and theory.
And it seems to me that Rosenberg at least denies this phenomenon and not just a “theory” - similar to Rumraket’s comments above, in the paper you linked, he adverts to his introspective experience and says, “look, it’s just mental images, no aboutness here”. (I think he has simply misdiagnosed the situation by looking at one aspect of his introspective experience and ignoring another aspect of it, and I’ve pointed to my reasons for thinking so in my response to Rumraket, but the point remains.)
And I’m not arguing that their use of language is the problem, but that they can’t get rid of the concepts no matter how hard they try. Spelling “true” and “false” backwards doesn’t do anything to show that we could possibly be mistaken that some things are true and other things are false. Never again talking about beliefs wouldn’t do anything to show that we could possibly be mistaken that we have beliefs.
I have to admit this almost drove me to profanity…
A theory or a model is an instance of intentionality. Treating entities as if they have intentionality in order to predict and understand their behaviour is a complex of intentional states. So no, it’s not possible that all of our intentionality is just a kind of theoretical model rather than a real feature, because we can only have theoretical models if we have real intentionality.
Perhaps the best spin I can put on this idea is to say that if eliminative materialism is true, intentionality is not real and none of our thoughts have semantic content, and the appearance of intentionality is an illusion or a theory or model which is also not real and has no semantic content. Our own conception (if you could call it that) of eliminative materialism, and every other thing we think we know, from science or otherwise, is also in fact contentless. We never really contemplate propositions, we never really reason from premises to conclusions, and we have no knowledge.
Even if this isn’t literally self-contradictory (and I’m not conceding that) it remains a kind of performative self-contradiction in that the eliminativist invariably takes himself or herself to have reasons to believe eliminative materialism and performs a host of other intentional acts, even if he or she manages to not speak or even think in such language; and it is self-undermining in that it denies that we know anything we might have used as evidence for the position in the first place (because such evidence would have to be among our beliefs for it to commend eliminative materialism to us, and we have no beliefs).
And so I contend that eliminative materialism is deeply absurd and cannot be rationally believed.
In the “narrow” definition, this “aboutness” is specifically associated with some kind of semantic content; in the “broad” definition, it can refer to a more general “directedness” towards a state of affairs. (Note that intentionality in this narrow sense is a subset of intentionality in the broader sense.)
To put it another way, philosophers see something in common between the way a thought is “about” something, the way an intent to act is “directed towards” the intended goal, or the way a perception is a perception “of” the perceived object, and they use the word intentionality to refer to that common feature.
I wouldn’t say I’ve been specifically implying either the broad or narrow definition; I don’t believe much in the discussion about intentionality per se has depended on the disambiguation.
In the argument for the immateriality of the intellect, I referred to semantic content, so rephrasing that argument in terms of intentionality would require specifying the narrow definition. (But that’s why I wrote in terms of semantic content instead of intentionality in the first place.)
Except that we do have evidence. As mentioned in the Dennett article: Computers that play chess. Micro-organisms moving towards a food source. Etc. On what basis could one deny that they have intentionality? That they do not demonstrate the use of language? There are human beings who do not have the capacity for language. Does that mean they also lack intentionality? Did our ancestors who had not yet developed spoken or written language have intentionality?
One might say we don’t believe computers and paramecia have intentionality because they don’t have brains. But that would then be attributing intentionality to physical structures, which is exactly what you are denying.
I’m not interested in getting into a digression on my reasons for the following, but very briefly:
to the extent that computers exhibit intentionality, it is extrinsic and derivative in basically the same way that the meanings of words are derivative.
I think we can attribute “intentionality” to paramecia in a very broad sense; though at that point it would be better to start using the older notion of “final causality”. But that would involve a dive into more general metaphysical questions.
I believe the eliminativist would reply something like this: That we can only speak of the idea that intentionality does not exist by using language that entails intentionality only demonstrates how deeply our very language is rooted in our theory that our thoughts have semantic content.
To return to the analogy of an optical illusion: The illusion of intentionality is not like the optical illusion below, which, with only minor effort, we can disregard in order to perceive what is actually going on:
Rather, it is like the earlier illusion we have discussed of the grey stripe seeming to have variation in shades. No matter how hard one tries, one cannot un-see it. Now, it may be possible that with lots of practice one could train oneself not to be deceived by the illusion. But even if that is not the case, we do not need to escape the deception in order to know that it is just an illusion.
And, again, you have offered no argument or evidence for the existence of intentionality other than the fact that you are firmly convinced that it must exist. But your inability to provide a definition that @John_Harshman finds convincing or even informative, I think, reveals something important. If the evidence for something’s existence is the very strong subjective feeling that it exists, can we really say with certainty that it exists at all?
Again, it isn’t the use of language that is the problem, but the necessary use of concepts that go into the reasoning for this position. And again, I deny that the semantic content of our thoughts is a theoretical postulate rather than something introspectively known.
You keep saying this, but that doesn’t make it true. The eliminativist ultimately has no valid response to the incoherence objection; all they can offer is rhetorical hand-waves and dodges. That’s an argument.
Ah, but why am I trying to argue rationally with the eliminativist position? I suppose I am the one playing the fool, as the eliminativist doesn’t believe rational arguments exist.
Does it? You’ve been content to carry on this conversation without objecting about definitions. Rosenberg, as far as I know, doesn’t argue for eliminative materialism on the basis that intentionality is too vague a notion for us to know what we’re talking about. Seems to me there is a sufficiently clear concept here.
If @John_Harshman still finds himself lost in this discussion, I recommend he find an introductory philosophy of mind textbook and look up “intentionality” in the index.