Artificial Intelligence, Consciousness and Epiphenomenalism

In another forum, @swamidass, I, and others discussed some questions regarding AI, consciousness, and semantic epiphenomenalism. We are looking for questions, answers, and discussion from all comers.

Joshua made the claim that research into AI had proven that semantic epiphenomenalism is false. James requested a link to support that claim. I also believe that epiphenomenalism is false, but I have not seen any research on AI that could claim to support that view. I would love to see the research.

One participant claimed that “organisms programmed for fitness outcompete organisms programmed to see reality as it is.” For support, he provided this link to a TED Talk: Donald Hoffman: Do we see reality as it is? | TED Talk
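As I understand it, Hoffman supports that claim with evolutionary-game simulations in which perceiving fitness payoffs outcompetes perceiving the true state of the world. Here is a toy sketch of that style of argument; the payoff curve, the two perceptual strategies, and all of the numbers are my own assumptions for illustration, not Hoffman’s actual models.

```python
# Toy "fitness beats truth" simulation (parameters are illustrative assumptions).
import random

def payoff(quantity):
    # Fitness is not monotonic in the true quantity: too little or too much
    # of a resource (water, salt, etc.) is bad; a middling amount is best.
    return max(0.0, 10.0 - abs(quantity - 50))

def choose_truth(options):
    # "Truth" strategy: perceives the true quantity and always takes the most.
    return max(options)

def choose_fitness(options):
    # "Fitness" strategy: perceives only the payoff and takes the best payoff.
    return max(options, key=payoff)

random.seed(0)
truth_total = fitness_total = 0.0
for _ in range(10_000):
    options = [random.uniform(0, 100) for _ in range(3)]
    truth_total += payoff(choose_truth(options))
    fitness_total += payoff(choose_fitness(options))

print("truth-tracker total payoff  :", round(truth_total))
print("fitness-tracker total payoff:", round(fitness_total))
# The fitness-tracker wins whenever payoff is not a monotone function of the
# true state of the world, which is the intuition the talk presses on.
```

Whether such toy models really show that evolution hides reality from us is, of course, exactly what is in dispute.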

This link discusses epiphenomenalism. Epiphenomenalism (Stanford Encyclopedia of Philosophy)

In a related tangent, others provided links critical of Plantinga’s Evolutionary Argument Against Naturalism (EAAN). While I agree with Plantinga that science and naturalism are at odds, I think he has misidentified the battlefield. Here are two links contra the EAAN.

Matthew posted this link on Philosophy of Artificial Intelligence.

To that post, Joshua commented that it contained mostly dated information and neglected newer research on AI.

Another participant asked why humans should ascribe to AI the quality of “intelligence” but deny that AI has “consciousness.” To this question I responded:

“This is a good question. When I think of human intelligence and human consciousness it is very different than, say, the reasoning of Watson. Although Watson is amazing at playing Jeopardy, I don’t think Watson is capable of desiring to do science, or of thinking up new experiments, or of creating art, or of wrestling with guilty desires, or of worshiping God of his own free will. I’m not certain he can appreciate the beauty of a sunset or a painting by Leonardo. I’m not sure that Watson can experience altruism or love.”

To this Joshua replied that Watson is a poor example of AI and that more recent research might change my mind. He suggested I search the journals Nature and Science on the topic of “deep learning” to find these articles. I’m very interested, but have not yet had time to perform that search. If someone beats me to it, I would be grateful. I won’t be able to actually read the articles until I’m back at the university library.


Great to see you here @Ronald_Cram.

More reservedly, it is a rapidly moving field that is challenging that notion very strongly and, more importantly, is challenging the intuitions behind the EAAN.

Once again, I’m not sure what the answer is, but Watson is a narrow example of AI, and certainly not the most interesting. The new research is making rapid progress in AI, and isn’t done yet. It might change our minds, but it is thus far under-contemplated in philosophy from what I can tell.

I would also add that there are several very complex things being discussed at once here. Each of the things I brought up had precise relevance to the others, and something is lost without the context. We will get into it, but it might take some time, and multiple threads.

I’m not quite sure what the topic is here.

I looked at the linked SEP entry for epiphenomenalism, and then searched there for the word “semantic.” My browser did not find that word. So while I’m reasonably familiar with epiphenomenalism, I’m not quite sure what “semantic epiphenomenalism” is. But I guess I will get by.

I did watch the TED video. Daniel Dennett argues that consciousness is an illusion – a user illusion. And the video works well to illustrate that idea. But here’s where I disagree with Hoffman. He is discussing whether we see reality as it is. And I doubt very much that there is any such thing as “reality as it is.” I can agree that there’s reality as we see it – that’s roughly what philosophers call “the manifest image.” And there’s reality as science sees it – philosophers call that “the scientific image.” But I doubt that there is any such thing as “reality as it is.” If there is such a thing, then I expect that it is unknown and unknowable to us.


It starts with the EAAN, and my statement that machine-learning-based AI is more and more falsifying Plantinga’s intuition of epiphenomenalism that underlies the EAAN.

From my own investigation into human cognition, I have concluded that there are no truth requirements for perception. So Plantinga’s EAAN cannot get started.

My current view is that truth emerges from human culture and, most particularly, from human language. So the idea of an ultimate truth seems mistaken.


No doubt a person can claim that epiphenomenalism is false… but I doubt there is anything that can prove it is false!


I agree that this is a complex and wide-ranging discussion and context is important. I apologize for not providing great context for everything. I did make an attempt. I also apologize if you feel I didn’t represent your position fairly.


No need for an apology. Just putting enough out there so people can follow along.

This link explains the difference between epiphenomenalism and semantic epiphenomenalism.

While I respect Plantinga, I don’t think his EAAN is a good argument. In part because he believes naturalism entails epiphenomenalism or at least the semantic variation. I do not. The link above also gives the natural selection argument against epiphenomenalism.

The other problem with the EAAN is its assumption that, generally speaking, false beliefs are just as life-affirming as true beliefs.

The great philosopher Dallas Willard developed this axiom: “Reality is what you run into when you are wrong.”

Because reality (including Bengal tigers) can kill you, I have developed a corollary: “The better you understand reality, the better your probability of survival.” If my corollary is true, then the EAAN is false.

I think the EAAN fails because of both these points. Naturalism does not entail epiphenomenalism, and true beliefs are (generally speaking) more life-affirming than false beliefs.


Are you denying the existence of an outside world? A denial of reality?

How do you respond to Dallas Willard’s famous axiom that “Reality is what you run into when you are wrong”?

Once when I was much younger, I thought I had money in the bank and the bank disagreed. A check I wrote was returned marked “Non-sufficient funds.” This is a good illustration of me running into reality. Does that make sense?

Oh, no. Not at all. I don’t doubt that there’s a reality independent of us.

I take “the way reality is” to be an implicit reference to something like a complete description or specification of reality. I am denying the possibility of such a complete description. Or, if you like, I am commenting on the limitations of language and truth.

I can maybe explain that a little differently.

Due to our limitations, we can only have partial descriptions of parts of reality. So there’s the theoretical possibility that there could be two different cultures such that there is no overlap between the parts of reality described by culture A and the parts of reality described by culture B.


I can join you in affirming that humans are not now, nor ever will be, in possession of a complete description of reality. However, I believe God is in possession of that complete description.

I can also agree that two different cultures often see the world very differently, but denying any overlap between them seems bizarre to me. If someone comes to the US from another country and rents a car, are they not going to see the same red light that I see?

@swamidass suggested that I search the journals Nature and Science for articles on “deep learning.” I have started that process.

I found a review article in Nature titled “Deep Learning” and it is freely available.

In the conclusion of that article, the authors state that supervised learning has, so far, been much more successful than unsupervised learning, but they expect unsupervised learning to become far more important in the future. The article was published in May 2015, and I would like to know whether that prediction has been borne out. Based on my preliminary research, it appears the answer is no. The only articles I’ve found so far relate to unsupervised learning in the analysis of medical images. This is highly directed unsupervised learning; there is no desire on the part of the AI to do it.
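For anyone following along, here is a minimal sketch of the supervised/unsupervised distinction the review draws. The choice of scikit-learn and its digits dataset is mine, purely for illustration; it is not taken from the Nature article.

```python
# Minimal contrast between supervised and unsupervised learning
# (scikit-learn and the digits dataset are illustrative choices of mine).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

digits = load_digits()          # 8x8 images of handwritten digits
X, y = digits.data, digits.target

# Supervised learning: the model is told the correct label for every example.
clf = LogisticRegression(max_iter=5000)
clf.fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised learning: the model sees only the images and must find
# structure (here, 10 clusters) without ever being shown a label.
km = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = km.fit_predict(X)
print("first ten cluster assignments:", clusters[:10])
```

The review’s point, as I read it, is that the labeled, supervised setting has driven most of the headline results so far, while the unlabeled setting is closer to how humans and animals seem to learn.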

In Scientific American there is an article titled “Machines Who Learn.” https://www.nature.com/scientificamerican/journal/v314/n6/full/scientificamerican0616-46.html

It was published in May 2016 and has only four citations, so it may not be a great article. I thought it was very telling that the word “Who” was used in the title. I would have titled it “Machines That Learn,” but already the authors are attributing personhood to a machine. Unfortunately, this article is behind a paywall and I will not be able to read it until I get back to the university library.

In a bit of news reporting in Nature, I found an article called “AI mimics brain codes for navigation.” This is freely available.
https://www.nature.com/articles/d41586-018-04992-7

The paper underlying the news report is behind a paywall.
https://www.nature.com/articles/s41586-018-0102-6

The findings here don’t appear to be too surprising.

In explaining why I don’t think that AI will ever be conscious, I wrote:

“When I think of human intelligence and human consciousness it is very different than, say, the reasoning of Watson. Although Watson is amazing at playing Jeopardy, I don’t think Watson is capable of desiring to do science, or of thinking up new experiments, or of creating art, or of wrestling with guilty desires, or of worshiping God of his own free will. I’m not certain he can appreciate the beauty of a sunset or a painting by Leonardo. I’m not sure that Watson can experience altruism or love.”

Based on your reply, Joshua, I thought I was going to find articles on AI showing that it has developed desires, ambitions, fears, rebellion, an appreciation for beauty, emotional outbursts, or a refutation of the EAAN. So far, I’m not finding anything like that.

Is there any particular article you think I should read?

I was suggesting a theoretical possibility.

Think of culture B as consisting of aliens from Andromeda who are perhaps visiting earth.

Yes, for another human culture, an extreme lack of overlap is hugely unlikely. Our biological similarities force us to see much the same things.


Okay, are you saying that aliens from another planet wouldn’t see the same red light?

I just read an interesting news report that the head of the British Science Association thinks we are not ready for AI and that this unreadiness is the greatest threat to mankind, greater than climate change.

Personally, I don’t see where this concern is coming from. I would like to understand it.

It’s hard to know without knowing more about the aliens.

The main point for me, though, is that the world we see depends on our culture and our biology. We can somewhat see the cultural dependencies by looking at other cultures. However, human cultures are becoming more homogenized, so some of that is becoming less obvious.

What is harder to see is the dependence on our biology. We all share similar biologies, and we pretty much take that for granted.

Another way of looking at this: I’m saying that how we see the world is mostly a matter of pragmatics. We see the world in ways that best support our way of life.

People tend to think of how we see the world in terms of truth. But that is mostly illusory. We see the world pragmatically, and then we invent truth conditions so that we can describe the way we see the world as truth.

I’m fairly sure a dog won’t see the red light as red, nor would a bull; many animals would not see the red light as we do.

We also do not see color as it is. Physically, color is a spectrum with an ordering, and spectral colors do not mix to form new spectral colors. Our perceptual rainbow, however, is unordered, and different colors mix to form new colors on that same rainbow in counterintuitive ways. The mixing rules also differ depending on whether we are mixing light or pigment. We cannot perceptually distinguish between a pure yellow light and a yellow formed by mixing red and green light, even though their spectra are different. We certainly do not perceive color in a way that maps to reality.

For this reason, for example, we should not expect that artificially produced images (screens and TVs) would be correctly perceived by other forms of life. Our technology might be perceived in the flattened hues that a color-blind person sees, perhaps more like a tinted black-and-white video.
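To make the metamerism point concrete, here is a toy numerical sketch. The Gaussian cone-sensitivity curves and light bands are crude simplifications of my own, not real colorimetric data, so the numbers are only illustrative.

```python
# Toy illustration of metamerism: two very different spectra, one cone summary.
# (The Gaussian cone curves below are illustrative stand-ins, not real data.)
import numpy as np

wavelengths = np.arange(400, 701)  # visible range in nm

def band(peak, width):
    """A Gaussian band, used both for lights and for cone sensitivities."""
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

# Crude stand-ins for the long-, medium-, and short-wavelength cones.
cones = np.stack([band(560, 40), band(530, 40), band(420, 40)])

def cone_response(spectrum):
    """Collapse a full spectrum into the three numbers the eye reports."""
    return cones @ spectrum

pure_yellow = band(580, 5)               # a narrow "pure" yellow band
red, green = band(640, 5), band(540, 5)  # narrow red and green bands

# Solve for how much red and green light reproduces the yellow's L and M responses.
A = np.array([[cone_response(red)[0], cone_response(green)[0]],
              [cone_response(red)[1], cone_response(green)[1]]])
weights = np.linalg.solve(A, cone_response(pure_yellow)[:2])
mixture = weights[0] * red + weights[1] * green

print("pure yellow :", cone_response(pure_yellow))
print("red + green :", cone_response(mixture))
# The two spectra are physically very different, yet their three-number cone
# summaries nearly coincide, so the lights would look the same color to us.
```

The eye throws away the full spectrum and keeps only three numbers, which is why a screen can fool us with just three primaries, and why there is no guarantee it would fool a creature with different receptors.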


You are absolutely right about that. I chose a bad example. On the other hand, the aliens might be able to see infrared, which we cannot see.

Let me rephrase in the next comment.