The metaphysics of head and brain transplants

I should have given more detail there.

If I am using an abacus, and I am thinking only about the moving of the beads, then I am doing bead motion. To be doing computation, I have to be thinking about the numbers represented by the beads.

I’m assuming that the neural activity is about the bead motion. The thoughts of computation are at a higher level than what the neurons are doing. Yes, there may well be neurons involved in that higher-level activity, too, but it would be hard to pin down.

Looking at it this way, you could perhaps say that our computers don’t really compute. We just interpret them as computing. And I think that’s the best way of looking at them.
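
To make that concrete, here is a minimal sketch (the names and the digit encoding are mine, purely hypothetical): the device’s state and its update rule are just “bead motion”; the arithmetic only appears once an observer supplies an interpretation.

```python
# A minimal sketch (all names hypothetical): the same physical process,
# described two ways. The "device" only moves beads; the observer's
# interpretation function is what turns bead positions into arithmetic.

def move_beads(columns, column, count):
    """Pure 'bead motion': shift some beads in one column."""
    columns = list(columns)
    columns[column] += count
    return columns

def interpret_as_number(columns):
    """The observer's reading: treat each column as a decimal digit."""
    return sum(beads * 10 ** i for i, beads in enumerate(reversed(columns)))

state = [2, 3]                      # bead positions, nothing more
state = move_beads(state, 1, 4)     # physically: four beads slide over
print(interpret_as_number(state))   # under our interpretation: 23 + 4 = 27
```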

So going with that analogy: Consider a simple computer like a digital thermostat. It senses the temperature in the room, then a number of electrical processes occur that result in it turning the heating system on or off depending on where the desired temperature has been set and whether the measured temperature is above or below this. The thermostat does not “know” it is doing this. It’s just carrying out a set of processes that it has been programmed to perform.
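
As a sketch of the kind of process meant here (names and values hypothetical), the thermostat’s whole “program” could be something like:

```python
# A minimal sketch of a thermostat's logic (names and thresholds
# hypothetical). It compares a measured value to a setpoint and
# switches the heating accordingly; nothing in it "knows" about warmth.

def thermostat_step(measured_temp, setpoint, heating_on):
    if measured_temp < setpoint:
        return True    # close the relay: heating on
    if measured_temp > setpoint:
        return False   # open the relay: heating off
    return heating_on  # at the setpoint: leave the state unchanged

print(thermostat_step(measured_temp=18.5, setpoint=20.0, heating_on=False))  # True
```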

Our brains could be seen as doing the same thing. That is what I am calling “computational processes.” The part that keeps philosophers in business is the fact that we are aware of at least some of these processes and have the sensation that we can control or direct them to at least some extent. This is what is called phenomenal consciousness: the idea that there is something it is like to be you or me sensing the temperature of a room and deciding to turn up the heat, which does not exist for the thermostat. Whether that is actually the case and, if so, how this phenomenon can be accounted for biologically and philosophically is the nub of the question.

Can you explain what difference you see between our mind and a thermostat? If a thermostat is not able to compute, what is it doing instead?

Interesting thoughts again. I’ll ponder on them, thanks.

Let’s look at an ordinary alcohol thermometer. There’s a bulb filled with alcohol, and that alcohol can expand up a small capillary tube.

From the height of the alcohol in that capillary tube, we can compute how much it expanded. And then, using our laws of thermal expansion, we can compute the temperature.
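
Spelled out (with assumed symbols: $V_0$ the bulb volume, $\beta$ the alcohol’s volumetric expansion coefficient, $A$ the capillary’s cross-sectional area; expansion of the glass ignored), that computation would be:

$$\Delta V = V_0\,\beta\,\Delta T, \qquad h = \frac{\Delta V}{A} \quad\Longrightarrow\quad T = T_0 + \frac{A}{V_0\,\beta}\,h.$$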

We can describe that thermometer as if it were doing computation. But it isn’t actually doing any computation at all. Rather, the capillary tube is directly calibrated in terms of temperature. Calibration is needed (usually at the factory), but computation is not required.

I see the brain as using calibration, but not computation. I see Hebbian learning as an internal calibration program.
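
For readers unfamiliar with it, the textbook Hebbian rule is just $\Delta w = \eta\,x\,y$ (“neurons that fire together wire together”). A minimal sketch, with hypothetical numbers, of why it can be read as calibration: the weights simply drift until the unit is tuned to the statistics of the input it actually receives, with no represented quantities anywhere.

```python
# A minimal Hebbian sketch (textbook rule, hypothetical numbers):
# weights strengthen when input and output are active together, so the
# unit gradually "calibrates" itself to the statistics of its input.

inputs = [
    [1.0, 0.0, 1.0],
    [1.0, 0.1, 0.9],
    [0.9, 0.0, 1.0],
]
weights = [0.1, 0.1, 0.1]
eta = 0.05  # learning rate

for x in inputs:
    y = sum(w * xi for w, xi in zip(weights, x))               # unit's response
    weights = [w + eta * xi * y for w, xi in zip(weights, x)]  # dw = eta * x * y

print(weights)  # weights have grown along the recurring input pattern
```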

Back to your digital thermostat. Yes, the thermostat is doing computation, or information processing. It starts with information. The information it uses is a human artifact, created by human-made sensors (measuring devices) that are calibrated in accordance with human standards.

The brain could not be just doing information processing, because the natural world is devoid of information. What we call “information” is mostly a human artifact, generated in accordance with human-created standards.

computation: manipulating/reorganizing information that already exists.

measurement: the construction of information from the natural world.

I see the brain as mainly engaged with measurement. It is responsible for producing information, because the needed information does not already exist in the natural world.
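
Putting the two definitions side by side in code (a sketch; the calibration constants are hypothetical): the first function only rearranges numbers that already exist, while the second constructs a number from a raw signal by applying a human-chosen standard.

```python
# Computation vs. measurement, per the definitions above (a sketch;
# the sensor model and calibration constants are hypothetical).

def compute_mean(values):
    """Computation: manipulate/reorganize information that already exists."""
    return sum(values) / len(values)

def measure_temperature(sensor_voltage, offset=-40.0, scale=25.0):
    """Measurement: construct information from a raw signal, using
    calibration constants fixed by a human-chosen standard."""
    return offset + scale * sensor_voltage

readings = [measure_temperature(v) for v in (2.40, 2.50, 2.45)]
print(compute_mean(readings))  # 21.25
```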

So that’s one of the areas where we seem to disagree. I think information is something that exists in the natural world, and which can be measured in the same way we can measure mass, temperature, etc.

If I may expand on the original analogy, as much as anything just to focus and clarify the discussion for other readers.

Suppose someone decided to design a most unnecessarily complicated thermostat in the form of an anthropomorphic robot. It would have temperature sensors embedded in its skin, and just sit near the switch that operates the heating system with its hands folded in its lap until these sensors detect that the temperature of the room is below or above a certain range, at which point the robot will reach up and turn the heating system on or off accordingly.
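
Reduced to a sketch (thresholds hypothetical), the robot’s entire repertoire is a loop with a deadband:

```python
# The robot's control loop as a sketch (thresholds hypothetical):
# a deadband comparator driving two "reach for the switch" actions.

LOW, HIGH = 19.0, 23.0  # hypothetical comfort range

def robot_step(temp, heating_on):
    if temp < LOW and not heating_on:
        return True        # reach up and switch the heating on
    if temp > HIGH and heating_on:
        return False       # reach up and switch it off
    return heating_on      # otherwise sit with hands folded

print(robot_step(17.0, heating_on=False))  # True: turn on the heat
```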

A computer programmer who knew nothing of how this robot was designed would presumably be able to analyze the processes going on in its circuits and determine how the robot operates.

The suggestion, then, is that a neuroscientist could, in the same way, observe the firing of particular systems in the central nervous system of a person who feels the room is too cold and therefore turns on the heat. The neuroscientist would thereby understand how our brain causes this to occur.

However, I believe most people will have the intuition that there is a difference between the two scenarios: the robot does not have awareness or consciousness of its actions, whereas the human being does. From that arises the question of whether a complete understanding of the neural basis of the human’s actions, within our current understanding of the physical world, could account for this conscious awareness as well as it accounts for the process by which the perception of the room’s temperature leads to the physical act of turning on the heat. Some say it will not, because this particular question is a “hard problem” that cannot be solved within our current understanding of the physical world, and its solution requires a different understanding of fundamental aspects of our physical universe.

Yes, it is a fundamental disagreement. Almost everyone agrees with you and disagrees with me. However, there is very little progress being made in the studies of human cognition and of consciousness. And, in my opinion, that’s because your view (i.e. the dominant view) misunderstands the problem.

There are lots of signals floating around. But signals are not the same as information.

Look at information that humans use. We can include information about mass, temperature, distance, time, etc. We have standards for all of those kinds of information. The scientific community uses the “mks” standards (meter, kilogram, second). When we use that information, we rely on those standards so that we know how to interpret the information.

When we look at natural signals that we find in the natural world, there are no such standards. The signals are useless until we invent ways to use them. And the ways that we invent will involve our creating suitable standards.

Right. But the programmer’s analysis would be entirely in terms of switching in the electronic circuits. It would not mention temperature or heating systems, unless the programmer were allowed to bring in extra knowledge that did not come from analyzing the circuitry.

Yes. But if the neuroscientist did it the same way, his account would be of neural switching, and it would not mention temperature.

But here’s a huge difference. The programmer has a complete mechanical account of the operations of that one robot. But thousands of robots were manufactured at the same time, and the programmer’s mechanical account applies to those, too.

With the neuroscientist, he has a complete mechanical account of one system (i.e., of one person). But he cannot apply it to any other system, because we are all different. And he can’t be so sure of applying it to the original system (original person), because that person’s neural network is changing over time.

Here’s an illustration that I like to use. I’m in Chicago, and I decide to drive to Los Angeles. Before I start the trip, an engineer fits lots of sensors in my car, which will record the force that I use on the steering wheel, on the gas pedal, on the brake pedal, etc. After I make that trip, the engineer has a complete mechanical description of how to get from where I started to Los Angeles. Or does he? If he tried to use exactly the same forces at a different time, it might not work. The traffic would be different. The weather (and winds) would be different. It probably wouldn’t work.

That’s a limitation of mechanical descriptions. Instead, it is better to give an intentional or teleological description: “Drive to the nearest highway. Then follow the signs that say ‘Los Angeles’ or ‘California’ or ‘West’.” That’s going to be an easier description to follow, and it won’t be seriously affected by traffic conditions or weather conditions.

We need the intentional/teleological description of what the brain is doing. The purely mechanical description won’t be very useful.
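
One way to make that contrast concrete (a sketch; the one-dimensional “road” and all the numbers are hypothetical): replaying recorded forces is open-loop and drifts off target under new conditions, while the teleological description is closed-loop, consulting the goal at every step.

```python
# Open-loop vs. goal-directed control, as a sketch (everything hypothetical).
import random

GOAL = 100.0  # destination, 100 units down a one-dimensional road

def replay_recorded_controls(recorded_steps):
    """Open-loop 'mechanical description': replay the exact recorded
    moves. Today's disturbances (traffic, wind) are never corrected."""
    pos = 0.0
    for step in recorded_steps:
        pos += step + random.uniform(-0.5, 0.5)  # conditions differ this time
    return pos

def drive_toward(goal):
    """Closed-loop 'teleological description': at every step, look at
    where you are relative to the goal and correct for it."""
    pos = 0.0
    while abs(goal - pos) > 0.5:
        pos += min(1.0, goal - pos) + random.uniform(-0.5, 0.5)
    return pos

recorded = [1.0] * 100  # forces recorded on the original, calm-day trip
print(replay_recorded_controls(recorded))  # near 100, but typically off target
print(drive_toward(GOAL))                  # always ends within 0.5 of 100
```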

That’s part of the challenge in doing neuroscience, but I think the difference between understanding the brain and, say, the kidney may only be one of degree rather than kind. No two kidneys function exactly the same, because people’s genetics are different, as are their diets and any other illnesses or environmental factors that may have affected any particular kidney. We can nonetheless derive general principles and understanding regarding the functioning of the kidney.

The same can be done for the brain. We can predict the results of specific lesions to specific areas of the brain with a degree of accuracy. If this were not the case, the game students of neurology play called “Name the lesion” would not be possible. The challenge, then, is to narrow down the specificity with which particular cognitive functions can be attributed to particular brain networks.

That is not an argument against the utility of a mechanical account, but against using a mechanical account that fails to take into consideration all the factors necessary for accomplishing the task. If the engineer knows all those other factors you mention, he could in theory create a program that would adjust the steering, braking, etc., of the car based on those other physical factors so that the desired route would be followed. This, to be sure, would be a very complicated task. The human brain is also very complicated.

But then it would not be a mechanical account. You use the expression “accomplishing the task”, but that already has a teleological aspect.

Teleology in biology is an illusion caused by the fact that attributes that increase the odds of reproductive success for an organism tend to be more stable over time. IMHO, of course.


For what do we need that?

Quite honestly, I am puzzled by the question.

Most of our ordinary descriptions depend on purposes and intentions.

Yeah, but I don’t understand what you’re trying to achieve with a teleological description of what the brain is doing. It seems trivial to give one. Sometimes the brain is trying to find a way to pass on the genome that encodes it. Sometimes it’s trying to find food because the organism of which it is a part has an instinctive desire to keep living.

You can add as much detail as you want. But why are we bothering with this?

It seems to me that is an example of what you are talking about: our minds making up a concept that helps make sense of the data being presented to them. Dennett calls this the “intentional stance.”

I’ll agree that’s not a very good way of incorporating teleology into a description.

That one is better.

I have a post on my blog, from years ago, where I suggest how to use purpose in explanations.

It comes from my disagreement, a few posts up, with @Faizal_Ali – he apparently wants to look at information as a mechanical entity, while I want to look at it as an intentional one.

I’m not a big fan of Dennett’s “Intentional Stance”.

Yes, it can be a partial explanation in some cases. But it mostly attempts to explain away what needs explaining.

That applies to much of what Dennett argues. Personally, I find it a useful way of clearing up the extent to which what appears to be a mystery or hard question actually is such. But I can see how it can also feel unsatisfying.


My interest in this started with an attempt to understand human learning. And I quickly ran into the information problem, as in “how do we decide what is a cat or a dog or a tree or a chair”. The computationalist approaches to this do not work (in my opinion).

The computationalists typically assume a lot of innate knowledge (innate rules that we follow). I grew up in Australia, and I never had any difficulty recognizing kangaroos. But my ancestors were all of European descent, with no kangaroos in their experience. There had not been nearly enough time for a genetic basis for the assumed innate rules to have formed.

My own conclusion was that children (and their brains) must have some way of making sense of the world that separates out cats, dogs, kangaroos, trees, etc. It could not be based on the kind of innate rules assumed by computationalists. And how we learn to make sense of the world is presumably behind how we acquire meaning, intentionality, etc.
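
Purely as a toy illustration, not a claim about what brains do (the 2-D “feature” data are made up): even a crude similarity-grouping procedure separates items into categories without any category-specific innate rules. Whether anything of this sort captures what children’s brains do is, of course, exactly what is in dispute.

```python
# A toy illustration only (hypothetical 2-D "feature" data): grouping by
# similarity carves the data into clusters with no category-specific
# innate rules and no labels supplied in advance.

def nearest(point, prototypes):
    """Index of the prototype closest to point (squared distance)."""
    return min(range(len(prototypes)),
               key=lambda k: sum((p - q) ** 2 for p, q in zip(point, prototypes[k])))

data = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]  # two loose clumps
prototypes = [list(data[0]), list(data[2])]              # crude starting guesses

for _ in range(10):  # alternate: assign to nearest prototype, then re-average
    groups = [[] for _ in prototypes]
    for point in data:
        groups[nearest(point, prototypes)].append(point)
    prototypes = [[sum(c) / len(g) for c in zip(*g)] for g in groups if g]

print(prototypes)  # one prototype per clump: categories, but no labels
```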

Computer attempts at perception don’t work well at all. Show me a cereal box at the store, and I will recognize what it is. The computer equipment searches for a UPC code on the box, looks that up in a database, and the database entry tells the computer what it is. That looks like a sophisticated form of cheating.
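
What that “recognition” amounts to is roughly a table lookup (a sketch; the codes and entries are made up):

```python
# What barcode "recognition" amounts to (hypothetical codes and entries):
# decode a printed number, then look it up in a human-curated table.
# All the "knowing" was done in advance, by whoever filled the table.

product_db = {
    "000000000001": "Corn Flakes, 18 oz",
    "000000000002": "Rolled Oats, 42 oz",
}

def identify(scanned_upc):
    return product_db.get(scanned_upc, "unknown item")

print(identify("000000000001"))  # Corn Flakes, 18 oz
```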

I have heard that an autonomous vehicle (self-driving car) slows down when there is a snowman on the side of the road. It slows down in case the snowman runs into the road. This is what you get when you are mindlessly following rules without any understanding.

Dennett spends some time talking about “competence without comprehension”, and this seems to be his attempt to explain away the problems.
