An argument for the immateriality of the intellect

I thought I’d create this to de-clutter the thread “The most current philosophical arguments for the existence of god” a little bit (hopefully aiding @Dan_Eastwood’s sanity). Here’s the context. @misterme987 and others were discussing the argument from reason, and he indicated:

Against these conclusions, I pointed to an argument that it is not possible for our thoughts to be entirely physical, presented by Ed Feser in an article here and further explained on his blog here and here. @Paul_King took a look at the argument and responded to me, and we were off to the races.

When I get some time I will try to explain the argument more fully and continue my discussion with Paul, but in the meantime - shall we move the relevant posts here?

2 Likes

That is clear enough, but it doesn’t really answer my objections. Does a “material triangle” have a meaning? If it does, how would unperceived deviations from perfection affect that meaning?

So, Feser’s point is that the idea of triangularity can’t be represented by the idea of a physical triangle because of the imperfections of the physical triangle? That seems even weaker, since unnoticed deviations aren’t going to be relevant, and digital technology manages to perfectly represent perfect triangles more easily than it can represent the imperfections of physical examples of triangles.

I don’t think that intrinsic meaning is a requirement. Surely meaning within the particular context of our minds is all that we would need of any representation. (I also think your example is incorrect, since any representation of “triangle” as a class would be distinguished from “trilateral”).

I think that this is just an argument from ignorance. Since we don’t really understand where meaning comes from, assumptions that it cannot come from material processes are just assumptions. The claims about precision are just wrong. I don’t see any justification for the requirement for objective meanings outside the context of the mind, and the multiple interpretations of language are mainly due to practical limitations.

2 Likes

None of what you said in your response actually addresses Feser’s arguments for the premises of the main argument, which, from what I can tell, you are still misunderstanding. I’ve tried to explain it as best I can; if you want to continue trying to understand it, my only advice (again) is to go back and read over his article and posts more closely.

That is not at all what I said.

And I don’t see how “meaning within the context of our minds” means anything other than that the representation only has its meaning derivatively; i.e., it has the meaning we assign to it. But there cannot be any meaning for us to assign to it if there is not, somewhere, some aspect of our intellectual process that has meaning non-derivatively, i.e., intrinsically.

Feser does not assume that meaning cannot come from physical processes; he argues for it. And it is not an argument from ignorance: it does not depend on our “not knowing where meaning comes from”, as you claim, but on certain things we can know about the meaning of our thoughts, and about the way that ambiguities arise when we try to represent them.

This is a mere assertion on your part, and the writings I’ve been pointing to contain arguments to the contrary.

There doesn’t need to be, because that isn’t what is being argued.

Which does nothing to rebut the arguments.

Well, then perhaps you can explain why the imperfections of the physical triangle are relevant. I keep asking but I don’t get an answer.

I mean that if it were true that a concept were encoded physically in the brain, it would not entail that the physical encoding would have a meaning in itself. Analogously to the way that a dictionary gives the meanings of words in terms of other words.

A computer can represent a perfect triangle. Where does Feser refute that? Feser’s examples of 5,000 trees and the difference between a 100-sided polygon and a 110-sided polygon are not limits on physical encodings but limits on our minds (and perhaps to an extent our senses, but primarily our minds). We cannot easily count 5,000 trees, nor can we easily visually distinguish between the two polygons - nor can we visualise them exactly in our minds. But adding the concept of “5,000” to the concept of “tree” is not difficult, nor is it significantly more difficult to represent the combination than it is to represent the concepts individually.

If the problems are a limitation of language that exists for practical reasons, we cannot conclude that they are fundamental to physical representations.

2 Likes

I think you may be asking for specifics which go beyond the geometrical concept. Is a physical triangle perfect? Probably not, unless we allow for non-Euclidean topology, and then pretty much anything with three corners might be a triangle, or close enough that it makes no matter. Like triangles, God can be a concept, and there may even be multiple versions of the concept that qualify.

It’s been a really good discussion otherwise. Do you think you might move on?

1 Like

This was my answer… (with emphasis added)

As far as I can tell, it is just meant as an illustration or analogy of how physical things aren’t like concepts; the sense of “exact” or “determinate” in the illustration is not actually the same as the sense of “determinate” in the main argument, and the illustration is not an argument for the premise that no material thing is semantically determinate. Confusing, perhaps, but there you have it.

Right! Now if you consider that dictionary definitions only work because we already understand some of the words in a dictionary without having to look them up - otherwise you’d be left endlessly cycling through entries with no gain in understanding - you might see what I’m getting at when I say that material representations, words, etc., only have meaning derivatively, and that there must be something that has meaning non-derivatively.

A computer can only represent a triangle if you take into account the intentions of the programmers and comprehension of the users. Say you have some string of 1s and 0s. How do you know it represents a triangle? Well, you feed it into this program and it prints a picture of a triangle. But how did you know that you were supposed to feed it into that program instead of any other? How do you know the program is working properly? How do you know the printed image is supposed to just be a triangle, rather than a symbol for Harry Potter’s Cloak of Invisibility?
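
A minimal sketch of the point being made here (in Python, with made-up byte values): exactly the same string of bits comes out as a triangle or as a list of numbers depending entirely on which program you feed it to.

```python
import struct

# A hypothetical 12-byte string; nothing in the raw bits says what they
# are "about".
raw = bytes([0, 0, 0, 0, 4, 0, 0, 0, 2, 0, 3, 0])

# One program reads it as three 16-bit (x, y) points -- a triangle.
coords = struct.unpack("<6h", raw)
print("as a triangle:", list(zip(coords[0::2], coords[1::2])))
# as a triangle: [(0, 0), (4, 0), (2, 3)]

# Another reads the very same bytes as three 32-bit integers.
print("as integers:", struct.unpack("<3i", raw))
# as integers: (0, 4, 196610)
```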

Computers can only represent meaning derivatively.

The other examples that you bring up are meant to show that even our mental images are semantically indeterminate in this way. Feser would have no problem saying that a mental image could be encoded in a pattern of neurons firing; his point is that these mental images (which are indeterminate - our mental image of 5000 trees is not going to be appreciably different from our mental image of 5001 trees) are not the same thing as concepts (which are determinate - we immediately and exactly know the difference between 5000 trees and 5001 trees).

Sorry, what I meant was, your assertion that the limitations of language are merely practical does nothing to rebut the philosophical arguments Feser references that these limitations are, in fact, fundamental.

Thanks Matthew, this is helpful!

Feser in fact claims that it is an illustration of how “material things are never determinate or exact in this way”, and goes on to say, “And in general, material symbols and representations are inherently always to some extent vague, ambiguous, or otherwise inexact, susceptible of various alternative interpretations.” So I think that exactness is part of his point, but obviously it is not a barrier to the physical representation of an exact triangle.

Agreed; however, that does not mean that anything taken out of the brain must be unambiguously interpretable without reference to the brain structure. Sense impressions arrive at the brain through the nerves. They are already physically represented, and the brain can only deal with that representation. (In the case of colour, while there is an objective basis, it has much more to do with the responses of the cells in the retina than with the objective differences. Green photons are not qualitatively different from red photons - they simply have different energies. Colour is itself contextual at best.)

Sense impressions account for some of the issue, without the need for intrinsic meaning. There must indeed be other, more basic things, but I do not see why they would have to have a clear and unambiguous meaning when taken out of context. A bit string in a computer might have an absolute meaning in context (if it were in a dedicated register) but taken out of context that meaning would be lost.

I obviously disagree. If a computer program defines a Triangle class which, given three points, takes those as the vertices, and, when given the command to draw itself on the screen, draws a triangle with those vertices, then it is certainly representing a triangle. I don’t have to know the programmer’s intent independently of the program, nor does the user’s intent change anything at all. The presence of other programs is even more irrelevant, and I can’t think why you would assume that the triangle meant any particular thing in the context of the computer.
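
For concreteness, a minimal sketch of the kind of class being described (the class and method names are hypothetical, and “drawing” is reduced to returning the three edges rather than driving a real graphics library):

```python
# Minimal sketch of a Triangle class of the sort described above.
class Triangle:
    def __init__(self, p1, p2, p3):
        # The three vertices, each an (x, y) pair.
        self.vertices = (p1, p2, p3)

    def draw(self):
        # Return the triangle's edges as pairs of endpoints.
        a, b, c = self.vertices
        return [(a, b), (b, c), (c, a)]

t = Triangle((0, 0), (4, 0), (2, 3))
print(t.draw())
# [((0, 0), (4, 0)), ((4, 0), (2, 3)), ((2, 3), (0, 0))]
```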

Our distinction between 5000 and 5001 trees is only the distinction between 5000 and 5001. I really don’t see why we need to invoke a non-material mind to explain that. Even that distinction is likely no more than the idea that 5001 is one more than 5000. And I don’t think that we intrinsically understand either - that is why we can’t easily visualise them.

If exact meaning could be communicated through physical representations would that not undermine that aspect of Feser’s arguments? If not, what is the point of arguing that physical representations do not communicate exact meaning?

1 Like

Good day @Paul_King, thanks for continuing the conversation.

I do think that most of your most recent responses are missing the main point of the argument, which, to be fair, I haven’t laid out as explicitly as I could. I already wrote up my responses below, so I’ll post them, but it might be more productive to this conversation if I go back to square one and lay out the whole argument here on this forum. I will try to do that the next time I post here.

I think this is consistent with what I said; where Feser says that, he is reasoning by analogy, and he gives more in-depth arguments for the premise in question elsewhere.

I’m not sure what you mean by that. My point is that, even given the context, any physical representation can only derivatively have determinate meaning; if all you have to go on are the physical facts (i.e., if you try to find some non-derivative meaning), the meaning will be indeterminate.

Sense impressions are irrelevant to Feser’s point; he distinguishes between the mental images we form from sense impressions, and concepts.

Without reference to the programmer’s intention, the physical facts alone are not sufficient to determine what program the computer is running, or whether it is running it correctly or incorrectly (this is an implication of the argument from Kripke referenced by Feser). Moreover, the output of the program (whatever it is) still needs to be interpreted - if it draws something that looks like a triangle, is it actually supposed to be a triangle, or is it a figure in some kind of discrete space with a topology and geometry reflective of the merely finite precision that the computer is capable of? When you say “it’s clearly a triangle”, you are actually making an inference about the intended interpretation of the figure (the meaning it has derivatively).

Visualization is not the same as conceptualization or understanding. I think I can say quite confidently just from what you’ve written here that you do have a perfectly clear understanding of the numbers 5000 and 5001, inability to easily visualize them notwithstanding.

I’ll try again…

Feser’s article references some arguments (which are fairly well-known in philosophy) that linguistic representations are necessarily, always, unavoidably semantically indeterminate.

Your assertion that this problem is mostly practical does nothing to show that those arguments are wrong. It just denies their conclusion.

Feser definitely seems to think that exactness is a problem and he makes the same point with regard to language.

If “determinate meaning” requires having a definite meaning even when taken out of context, that’s trivial.
But I would contend that a value in a dedicated register in a computer has a definite meaning that comes from the fact that it is in that register.

I absolutely disagree. I think that colours are concepts just as valid as anything else we conceive. And we can think about colours with definite meanings in our minds. So it seems clear that some meanings, at least, do not have to be determinate in the way Feser claims - while meeting the conditions he says require that they are determinate.

Again, I disagree. If the class has a creation method that takes three points and a draw method that plots a triangle on the screen with those three points as the vertices, then it is unquestionably drawing a triangle.

I don’t think you understand my point. What I am saying is that the difference between 5000 and 5001 is the only real issue in 5000 trees versus 5001. That is the relevance of the difficulty in visualisation. I do have a reasonably clear understanding of 5000 and 5001 but that understanding is not intrinsic at all, and I don’t think that Feser would claim otherwise. Equally, both quantities can easily be physically represented.

Again, I think you are missing what I am saying. My only point was that exact communication is at least possible in principle through language.

I think the real difference is in our assumptions about the most basic level of understanding. You and Feser assume that the most basic level must consist of concepts. I am not convinced of that. I think that it is at least possible that the basic elements are lower level, and only make sense in terms of the brain’s architecture.

2 Likes

Okay. As of yet I have not actually presented the argument we are arguing about (having relied so far on referencing external content), and I’ve been saying that I would get around to it, so here it is.

Some Background
It helps to understand that Feser makes the distinction between intellectual activity (apprehending concepts, combining concepts into propositions, forming judgements about whether those propositions are true or false) and merely conscious activity (the forming of mental images either through sensation or imagination).

Example: concept “triangle” is not identical to any mental image of a triangle - any mental image is going to have particular features (angles, side lengths, orientation), while the concept is universal (it applies to all triangles whatsoever).

Example: concept “green” is not identical to any sensation or visualisation of that colour, because many different shades fall under that concept.

Example: the concepts of a “1,000-gon” and of a “10,000-gon” are perfectly clear and distinct (we can work out that each of those, unlike a 1,001-gon, is symmetric under a 180-degree rotation, etc.) even though we cannot form a clear mental image of them.
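
A small illustration of the sort of thing “we can work out” here without visualizing anything (a Python sketch; the function name is mine): a rotation maps a regular n-gon onto itself exactly when the angle is a whole multiple of the vertex spacing 360/n, so a 180-degree rotation works precisely when n is even.

```python
def maps_ngon_to_itself(n, degrees):
    # The vertices of a regular n-gon are spaced 360/n degrees apart, so a
    # rotation carries the polygon onto itself exactly when the rotation
    # angle is a whole multiple of that spacing -- i.e. when n * degrees
    # is a multiple of 360.
    return (n * degrees) % 360 == 0

for n in (1000, 1001, 10000):
    print(n, maps_ngon_to_itself(n, 180))
# 1000 True, 1001 False, 10000 True
```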

Example: concept “law” is not even susceptible to visualisation in the way that “triangle” or “green” are - you can form a mental image of scales or documents or the word LAW, but all of those are only related to the concept by arbitrary convention.

Feser argues that because concepts have features (e.g., they universally apply to all of their instances) that mental images cannot have (e.g., they are necessarily particular), concepts cannot be reduced to mental images. And something similar is going on with physical representations of the semantic content of our thoughts.

The Argument
Premise 1: Some of our thoughts have determinate semantic content (i.e., they have a specific meaning; they are not ambiguous among various incompatible meanings).
Premise 2: Nothing merely physical (no physical system or process) has determinate semantic content.
Conclusion: Some of our thoughts are not merely physical.

Feser supports Premise 1 with the following considerations:

  • Sometimes we very clearly do have unambiguous thoughts. E.g., when we are adding two natural numbers or applying the basic laws of logic, we can understand and know exactly what we are thinking about.
  • In general, if we know anything about mathematics and logic - and natural science presupposes that we do - some of our thoughts about math and logic must have determinate semantic content. And we can know things about math and logic (and science).
  • If we do not have a determinate understanding of some of the basic laws of valid reasoning, we cannot reason validly. E.g., if we don’t clearly understand that if A is true and A implies B, then B is true, then our reasoning is invalid (even if it happens on the correct answer, it isn’t guaranteed to be true in the way that it must be for deductive validity).
  • To deny Premise 1 we would have to have a determinate grasp of what Premise 1 is saying (otherwise we would fail to actually deny it). But this presupposes that Premise 1 is actually true; it is impossible to consistently believe that Premise 1 is false.

Premise 2 is probably the more contentious one (at least if I’m reading the room right). In support of it, Feser considers (slightly adapting in some cases) three well-known philosophical problems:

  • Kripke’s “quus” argument

Imagine you have never added two numbers larger than 57 (you probably have, but you can replace 57 by some larger number to make the argument work). Now someone asks you what 68 plus 7 is, and you say it’s 75. But then someone else says that none of the physical facts determine that when you say “68 plus 7” you mean 68 + 7 instead of 68 quus 7, where x quus y = x + y if the inputs are less than 57, and otherwise x quus y = 5. After all, having never added numbers larger than 57 before, all your past usage of “plus” is consistent with both + and quus. You might say, “by ‘plus’ I mean following this particular algorithm”, but then the bizarre skeptic makes similar arguments for all the terms you use to describe the algorithm you say you mean, and you can’t get anywhere.

  • Quine’s “gavagai” argument

A linguist is studying an unknown language by interacting with speakers of that language when one of them points at a rabbit and utters “gavagai”. Now the linguist could translate this as “look, a rabbit”, but none of the physical facts can possibly determine that meaning over something like “look, an undetached part of a rabbit” or “look, a temporal stage of a rabbit”. (And Quine argues that this is the case even if the physical facts of the whole context and environment are taken into consideration.)

  • Goodman’s “grue” argument

Say something is “grue” if it is green before the year 2029, but blue after that. All the physical evidence that something is green is also evidence that it is grue. (If it doesn’t turn blue in 2029 it will stop being grue then, but that doesn’t mean it wasn’t grue prior to that time.) And so all the physical facts are insufficient to determine whether we mean green or grue when we use the word “green”.
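
To make the quus example above concrete, here is a minimal sketch in Python (the threshold of 57 follows the adapted statement above): the deviant function agrees with ordinary addition on every sum the speaker has actually performed, so past usage alone does not settle which function “plus” picked out.

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's deviant "addition": agrees with ordinary addition whenever
    # both inputs are below 57, and returns 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# On every sum involving only inputs below 57 -- i.e. every sum this
# hypothetical speaker has ever actually performed -- the two agree:
assert all(plus(x, y) == quus(x, y)
           for x in range(57) for y in range(57))

# They come apart only on the new case:
print(plus(68, 7), quus(68, 7))   # prints: 75 5
```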

All of these arguments can be generalized to conclude that language is fundamentally semantically indeterminate, always ambiguous between various possible meanings. Feser then reasons that there is nothing special about language that leads to this conclusion. It holds for any representation of meaning whatsoever, so that the only kind of thing that can have determinate semantic content is something that has it intrinsically instead of representationally, something that just is a meaning itself.

And nothing physical is like that - certainly not at the level of fundamental physics, and it is nothing more than a baseless assumption to say that such a property (having an intrinsic meaning) can “emerge” from an appropriately configured physical system, especially considering that the structures in our brains are bound to have the same kind of arbitrary, contingent, and merely conventional connection to the meanings of our thoughts that words themselves do.

But (by Premise 1) some of our thoughts do have determinate semantic content. So there must be some aspect of our thoughts which is non-physical.


Responses to earlier discussion:

Okay. As far as I can see, the point Feser is making is that the inexactness of a physical triangle, or a mental image of a triangle, is a barrier to its representing the concept of triangularity where the representation is supposed to be based on resemblance (the inexactness or imperfection of the physical triangle making it not resemble anything that actually satisfies the definition of a triangle) - but this is different from the more general argument that Feser makes for semantic indeterminacy.

You can’t just pick three points on a screen and call that a physical triangle without actual lines of pixels drawn between those points, because otherwise you are just talking about an abstraction rather than an actual physical pattern. And if you are using actual lines of pixels, then because pixels have width (which geometric lines do not) and diagonal lines of pixels are going to be somewhat jagged rather than perfectly straight, the thing drawn on the screen is not a perfect triangle, and so not unquestionably a triangle (without making some inference about the intent behind the output).

Concepts of colours are not the same as sensations or mental images of colours; that is the point I was making. Any sensation or mental image of something green is going to have a particular shade of green, while the concept equally applies to all shades which we call green.

Not sure what you mean by “not intrinsic” here. If you mean you weren’t born with the understanding of these numbers, obviously I agree. If you mean that there is no intrinsic connection to the concept of 5000 in whatever it is that makes it the case that your thoughts about the number 5000 are in fact about the number 5000 - then no.

Again, without deriving from something that has intrinsic meaning, nothing can have meaning at all - just like a dictionary defining its entries entirely using words that literally no one understands would be meaningless. And nothing merely physical has intrinsic meaning like that. (That is a slightly different argument from Feser’s, but I think it is also sound, or at least, it can be articulated in a way that is sound.)

As for whether the number 5000 can be physically represented - how do you propose to do that? A collection of 5000 objects might represent “5000”, but it might also just represent “multiplicity” or “this specific collection of objects”, and the physical facts do not determine one meaning over the other.

This doesn’t counter my point - the meanings of words are not fixed by the physical facts alone. (Exact communication is possible because we can infer the intended meaning of words from facts which do not determine them, but that is only possible because our thoughts do have determinate meanings.)

You are making that assumption. Meanwhile the argument I am putting forward is not making the assumption that you impute to it - rather, it argues from what we can know based on our experience of our own thoughts (i.e. they have determinate content) and something we can know or at least reasonably believe based on other arguments cited by Feser (i.e. language has indeterminate content taking only the physical facts into consideration, and there is no feature specific to language that makes these arguments inapplicable to any physical representation), and it concludes that concepts must be present at the most basic level. Which refutes your assumption.

It seems to me that if we take the arguments for premise 2 seriously (which I confess I do not), they cast doubt on premise 1, as they show that our concepts are indistinguishable from various alternative concepts, and we can’t actually know what concept we are thinking about. To accept premise 1 and premise 2 simultaneously, as required for the argument, would seem to be to entertain contradiction.

1 Like

Yes. And the whole argument can go nowhere in this philosophical space. The entire question is whether the physical brain can, in fact, account for the forms of thought we witness, and that is purely a question about how the brain’s structures and processes work. Until one knows that, one cannot say that some types of abstraction are possible for meat-brains and others are not. What we know is that we do have meat brains, and they seem to be doing this just fine, with no evidence of ghostly intervention. Neurology may one day answer that, if neuroscientists even consider it a question worth asking. Philosophy might comment upon it, but probably not helpfully.

2 Likes

All of the listed arguments to support premise 2 are question begging, assuming that thoughts are not physically encoded to argue that thoughts are not physically encoded.

1 Like

I agree with Premise 1 only to the extent that we need to have a clear idea of what we mean. But I don’t think that is adequate for Feser’s argument and I don’t think he can extend it far enough without running into areas of considerable uncertainty.
Looking at the support for Premise 2, it seems to me to be irrelevant - it isn’t something we have to worry about.

I certainly don’t agree that we need some representation of the thought that can be taken out of context and have only one possible interpretation. Inventing alternate interpretations seems trivial. Limits on our ability to find alternate meanings would serve just as well.

The concept of “green” is a case in point. It cannot be understood without reference to the particulars of the human visual system and the parts of the nervous system involved in processing the sensory inputs. The context is absolutely required.

It seems to me that this point raises a clear problem for the argument. I can only understand Premise 1 because it has been communicated to me through a physical representation with all the ambiguities of semantics as well. Communicating understanding - involving two minds - seems to me to be a harder problem than simply possessing it. Yet the problems Feser raises are not sufficient to prevent it.

1 Like

Except that’s not exactly what they show. To expand on what I said earlier:

The reason nobody hears Kripke’s “quus” argument and actually proceeds to wonder if they’ve really been quadding instead of adding all their life is that we can distinguish between those two concepts, and we know that we’re thinking about adding and not quadding when we say “plus”. The fact that we can know that means that our thoughts are not actually susceptible to the kind of reinterpretation needed for the semantic indeterminacy arguments to work. (There’s Premise 1.)

But if our thoughts were constituted entirely physically, by patterns of neurons firing or whatever else you want to suppose, those patterns would bear the same kind of contingent relationship to the meanings of our thoughts that words bear, and so would be susceptible to the same kind of reinterpretation. (And that’s Premise 2.)

Feser discusses this kind of objection (and others) in his original paper in more detail.

Considering that we think in words (Don’t you?), that seems obvious. But the relationship doesn’t seem as contingent as the argument requires, since nobody hearing or thinking about the word “adding” would understand it to mean, or even include, “quadding”, and would never reinterpret it. If there are indeed pure concepts that we only attach to words after the fact, that goes on at a level far below the conscious. So why couldn’t that level be physical?

It really is impossible for me to take any of this argument seriously. We have evidence that brains exist, that they contain neurons, that neurons fire when we’re thinking, that their firing (or stimulation to fire) can produce images, emotions, and even thoughts. Yet we have no evidence for the existence of any nonphysical phenomenon attached to the brain. Is evidence irrelevant to philosophy?

3 Likes

Is Feser a substance or property dualist?

I can see how the way I phrased things in my attempt to summarize the arguments would lead you to that conclusion. By “physical facts” I was intending something like “facts accessible to physics” which include physically measurable properties but not our own first-person introspective access to our thoughts (the only physically measurable way of assessing those would involve communicating reports of introspection, and those can be attacked by the skeptic in the manner of the semantic indeterminacy arguments).

With that in mind, the argument for premise 2 should be read more along the following lines:

Which is not question-begging.


Actually, what the concept “green” really cannot be understood without reference to is the subjective qualia of our experience of the things we call green, but that is an entirely different argument against the idea that the mental is reducible to physics.

I say again:

In other words, exact communication is possible because Premise 1 is true; its possibility does not imply that Premise 2 is false.

Or it means that whatever interpretation goes on in the mind will not indulge in such reinterpretation.
I have no reason to think that my mind will become a “bizarre skeptic” reinterpreting my own thoughts in strange ways, even if it is physical in nature. Indeed, the idea seems obviously absurd.

1 Like