Splitting the Brain Creates Two Minds?

@AlanFox,

I was in the room when Noam Chomsky BLEW MY MIND with his description of this problem!

He said that the odds are very low that the human mind will ever be sophisticated enough to understand how the human mind works!

That's the weak version! In my version, the odds are zero! :slight_smile: Seriously, I was unaware of Chomsky beating me to the punch on this. :frowning:

That seems intuitively right. But maybe not. By analogy: the complexity of evolution is immense if you have to "understand" it in a reductionist way and know how every possible genetic change will affect biochemistry, how every sort of biomolecule will behave in every kind of organism, and how every sort of organism will deal with every kind of ecological and environmental constraint. But if we hope to understand things at a somewhat more abstract "systems" level, as one does with evolution, we might come to understand a great deal. All of it? Never, I should think. But maybe enough of it for all practical purposes.

2 Likes

Yes, but allowing the converse…

We could, with that understanding, construct an entity more complex than ourselves. That entity could then… you see where I'm going? The Terminator scenario!

1 Like

Or the Freudinator scenario! A psychologist-bot that understands us so well that it never even gets around to asking "why do you hate your father?"

Actually, now this reminds me of the computer that is asked for the answer to Life, the Universe and Everything, and which responds (after "42") with instructions to build an even greater computer that can formulate the question corresponding to the answer. It's like turtles all the way up!

The next question is whether we can invent an AI that can understand how the human mind works, and whether we can understand the AI's explanation.

I say this half-jokingly, half-seriously. Even simple computer algorithms can figure out complex interactions much faster than humans can. For example, computers can beat the best Go players on the planet, and decrypt baseball signs like it's nothing. Could there be a point where a computer can create a working model of the human brain?

Well, yeah. There was an interesting piece recently on a new computer chess program where the programmers decided to give it no hints, no starting algorithms, nothing: just the rules and objectives of chess. It played the worst imaginable chess against itself at first, flailing in the dark. But eventually, learning only by playing against itself, it became amazing – and, interestingly, played in ways that no human or existing program plays, making surprising sacrifices of material for seemingly minor tactical gains. But it wins, and not just against itself.
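To make the self-play idea concrete, here's a toy sketch of that loop at miniature scale: tabular value learning on the game of Nim, given nothing but the rules and the objective. To be clear, this is not how the chess program itself works (that involves neural networks and tree search); every name and parameter below is invented for illustration.

```python
# Toy self-play learner for Nim: players alternate removing 1-3 stones
# from a pile, and whoever takes the last stone wins. The learner is
# given only the rules and the objective, and improves purely by
# playing against itself (Monte Carlo-style tabular value updates).
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)        # legal moves: remove 1, 2, or 3 stones
Q = defaultdict(float)     # Q[(stones_left, action)] -> learned value
EPSILON, ALPHA = 0.1, 0.1  # exploration rate, learning rate

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                  # explore
    return max(legal, key=lambda a: Q[(stones, a)])  # exploit

def self_play_game(start=21):
    """Play one game against itself, then update values from the result."""
    stones, history = start, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0  # the side that made the last move took the last stone
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward  # moves alternate sides, so the sign flips

for _ in range(50_000):
    self_play_game()

# With enough games it tends to rediscover the textbook strategy --
# always leave your opponent a multiple of 4 stones:
print(choose(21, greedy=True))  # expect 1, leaving 20
```

Nobody tells it the "multiples of 4" rule; it falls out of the self-play, which is the flailing-in-the-dark-to-amazing arc in miniature.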

Now, the funny thing is that if you ask, "can it teach me chess," that's a whole different question. Probably not. You and I learn chess very differently from how this thing does, relying on themes, mental shortcuts (a rook is worth 5 points; a bishop, 3 points), and articulable strategic objectives, and without the raw processing oomph to handle evaluation of all possible outcomes.
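Those human shortcuts are legible enough to write down in a few lines. Here's a hypothetical sketch (not any real engine's evaluation) of the naive material count, using the point values just mentioned:

```python
# A human-style, articulable heuristic: count material using textbook
# point values (pawn 1, knight/bishop 3, rook 5, queen 9). The input is
# the piece-placement field of a FEN string; uppercase = White.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(fen_pieces: str) -> int:
    """Return the material balance; positive favors White."""
    score = 0
    for ch in fen_pieces:  # digits and '/' are simply skipped
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            score += value if ch.isupper() else -value
    return score

# The starting position is materially equal:
print(material_score("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"))  # 0
```

A self-taught engine has nothing this legible inside it, which is part of why "can it teach me" is a different question from "can it beat me."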

So we could get a model of the human mind, and find that it contributed nothing at all to our understanding. It could behave like a human mind, in which case it would be just as hard to explain.

And this, oddly enough, does have something to do with arguments with creationists: I encounter a lot of the "if you don't have a detailed, fully deterministic model for how evolution works, evolution isn't scientific" sorts of people. These usually are people who write computer software. I have to explain to them that anything which COULD generate a fully deterministic model of evolution could NOT explain the process, while no explanation of the process could be as richly detailed as a fully deterministic model. The way I put this, usually, is that while undoubtedly a full explanation of the War of the Spanish Succession purely in terms of physics could – given sufficient data (which, admittedly, we cannot have) and enough time – be provided, it would answer no question worth asking.

1 Like

No, we cannot.

They can "figure them out," but can they actually understand what they have figured out? I don't think they can. What we normally mean by "understand" is not within the capability of the AI systems that we build.

1 Like

I feel like there should be a "…thus far" at the end of that quote. :wink:

I have no idea if we will ever build an AI that has something akin to human understanding, but it doesn't seem like it is out of the realm of possibility. If nature can produce human understanding through trial and error, what is stopping us?

I did carefully word that to be a statement about the present, rather than about the unknowable future.

We won't. But I don't have a convincing argument for that.

The thing to consider, though, is that we build AI systems to be our servants. We do not build them to be our masters.

Well, that's the plan! :slightly_smiling_face:

1 Like