The Theological Implications of Artificial Intelligence

Strong AI, also known as artificial general intelligence (AGI), has not yet been achieved, but would, upon its arrival, require a rethinking of most qualities we associate with uniquely human life: consciousness, purpose, intelligence, the soul—in short, personhood. If a machine were to possess the ability to think like a human, or if a machine were able to make decisions autonomously, should it be considered a person?

Noreen Herzfeld, professor of theology and computer science at St. John’s University, explores these issues in her paper, “Creating in Our Own Image: Artificial Intelligence and the Image of God.” She writes, “If we hope to find in AI that other with whom we can share our being and our responsibilities, then we will have created a stand-in for God in our own image.”

As weak AI evolves into strong AI, says James F. McGrath, author of “Robots, Rights and Religion” and a New Testament professor at Butler University, humanity will have grown accustomed to treating it like an object. Strong AI, though, is by definition human-like in intelligence and ability. Its development, he says, would force humans to reconsider how to appropriately interact with this technology—what rights the machines should be afforded, for instance, if their intelligence affords them a designation beyond that of mere tools.

“The worst-case scenario is that we have two worlds: the technological world and the religious world.” So says Stephen Garner, author of an article on religion and technology, “Image-Bearing Cyborgs?” and head of the school of theology at Laidlaw College in New Zealand.

I wonder if some of the best contemplation of this takes place in science fiction movies and books.

The book by David Bell, a retired professor of AI and a member of Christians in Science, is a good place to start.

1 Like

Welcome to the forum @Rneely. Tell us about yourself?

I don’t expect to have much to contribute here, but I’m happy to read and learn. My background is a couple of Master’s degrees, one in Electrical Sciences in the early ’70s and one in Computer Science in the early ’80s. Before retiring I worked for a long time in collaborative European computer research projects, mostly in the areas of formal specification, proof, and refinement techniques. After that, much of my time was spent in corporate technical strategy, later turning to futurism consultancy and venture capital. So nothing much relevant to this forum apart from linkages to the strong AI and transhumanism debates.

3 Likes

Robert,
Nice to see a fellow retired electrical engineer from the ’70s and ’80s here.

1 Like

I’m giving my first talk on this in Hong Kong at the end of October. Let’s keep the conversation going. What can you teach me?

Here is my tentative abstract:

What is Human? The Challenge of Artificial Intelligence…

Artificial intelligence is rising everywhere, embedded in technology and driving scientific discovery. It seems that machines can intelligently solve an increasing number of problems. In my own work, we use artificial intelligence to understand the metabolism of medicines. What does it mean for scientists when computers labor alongside us? To what extent does this progress unsettle human exceptionality? True artificial minds are still science fiction, but artificial intelligence is bringing us back to the grand questions. What is consciousness and how could it arise? In a world with artificial minds, what would it mean to be human?

To prepare for this, two books come to mind:

One is fiction, and the other is speculation about the future by a historian. Both are heavily based on a world with AI.

1 Like

If you have time on your way to Hong Kong, you could see the current state of aspects of the debate at the How the Light Gets In festival at the end of September in London. I will be there for both days, skipping between the sessions on the topics that interest you. Look at https://howthelightgetsin.org/london/programme-page and filter first for Mind/Psychology, then Science/Technology. Many of the usual commentators will be there, and I know there will be vigorous debate from some planning to attend.

2 Likes

Patrick, there is always a danger of me posting interesting links back to the place I found them, since I’m not subject to academic rigour in recording references. Given that risk, you would find this interesting: https://medium.com/@emmily.j.g/homo-deus-a-blundering-tall-story-of-tomorrow-d774c912a099

I believe that after a surprise success such as Sapiens, publishers put heavy pressure on their new finds to get another book out and hopefully build up a loyal readership who will buy all subsequent output regardless of merit.

2 Likes

Wow, that review of Homo Deus is great. If those quotes are real, this is really just an absurd book.

Throughout the whole book, Harari shows a somewhat out-of-touch attitude towards research in natural sciences. A statement representative of this is ‘… every technical problem has a technical solution. We don’t need to wait for the Second Coming in order to overcome death. A couple of geeks in a lab can do it’ [Chp. 1, p. 26]. Being a physicist, I might be one of these ‘geeks in a lab’. Unfortunately, I do not see me or any of my colleagues from neighbouring disciplines helping humankind to overcome death anytime soon. However, what I can do is point out the false statements in ‘Homo Deus’ and Harari’s misconceptions when it comes to natural sciences.

@Patrick do you really think that statement is defensible? Do you think this is a book with solid science? Or is it just a comforting fairy tale?

Wow, there are some heavy hitters set to speak: Sean Carroll, Roger Penrose, Steven Pinker. Very impressive.

1 Like

It is not an absurd book. It is really an extension of Harari’s “Sapiens”, which is a history book.

It is not a science book. It is a speculative view of the future. It is certainly not a comforting fairy tale, as anyone reading it can say, “I would like to live in such a world” or “I am fearful to live in such a world.” But it is pretty much total conjecture, taking the cutting edge of science and projecting it into the future. It is not unlike science fiction like Star Trek. I enjoyed this book mainly because I enjoyed his history book “Sapiens”, which told the history of mankind much differently than the popular way of looking at how we got here. Thinking about the future is fun.

Some men see things as they are and say “why”, I dream things that never were and say “why not?”

Since the date of this talk is approaching, I thought I’d mention some aspects of AI/AGI in Hong Kong you might want to be aware of, to avoid surprises from audience questions.

There are some individuals who are prominent in OpenCog and related groups who have made Hong Kong their base for quite a number of years. In particular, Ben Goertzel provides regular commentary for the singularitarians, and Sophia from Hanson Robotics has been excellent marketing for them despite the obvious deployment of chat scripts.

Here are some extracts from a very recent piece by Ben:

“we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect”.

“we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial”.

“AGIs have the potential to be massively more ethical and compassionate than humans”

In David Bell’s book on Superintelligence, he discusses the provision or emergence of AGI “worldviews”. Goertzel now seems to support nurturing human “values” with a seeding process.

“we can seed the AGI we create with our values as an initial condition”.

The spin-off LovingAI project additionally considers the impact on human consciousness of interaction with specifically scripted AI channeled through the Sophia humanoid. The experiment in Hong Kong was viewed as a great initial success since:

“the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject”

More details on the project are at lovingai.org.

I would expect that some members of the audience for your talk will be aware of these views and may pose questions that reference them.

1 Like

Thank you!

A regular visitor to Hong Kong who now seems to be taking an interest in this topic is the retired mathematician John Lennox. Interestingly, he centres his overview on the Homo Deus idea while also skirting around the usual pronouncements from Martin Rees, Max Tegmark, Ray Kurzweil and even Dan Brown. The relevant part of his talk starts at this point: https://youtu.be/njU4u2hMFnE?t=1657

1 Like

Eberhard Schoeneburg has been in HK for about 20 years. If you see him in your audience, be prepared for some amusing opinions. Here is his most downloaded article: https://www.linkedin.com/pulse/darwin-finally-proven-wrong-ai-eberhard-schoeneburg/

2 Likes

Is that satire?

1 Like

Try this for an explanation and notice the Clever Hans story.

https://www.linkedin.com/pulse/key-note-beware-deep-learning-eberhard-schoeneburg/

Is anyone taking this seriously as anything more than a manifesto plus a chatbot? I don’t see the substance here.

Thanks for the heads-up. Content has changed somewhat:

Re: “Is anyone taking this more seriously …”

Focusing “anyone” on “anyone outside of the tech community”, the answer is yes. After the Saudi government offered person status to Sophia, and appearances became regular occurrences on TV shows and news items, many people (even those at transhumanism conferences) showed confusion over Sophia’s current and imminent capability.

Hanson Robotics took the maximum marketing boost from the publicity and uncertainty before Goertzel laid out some details in a blog post.

One thing he describes there is the three different control systems that have historically been used to operate Sophia:

  1. a purely script-based “timeline editor,” which is used for preprogrammed speeches, and occasionally for media interactions that come with pre-specified questions;
  2. a “sophisticated chat-bot,” which chooses from a large palette of templatized responses based on context and a limited level of understanding (and which also sometimes gives a response grabbed from an online resource, or generated stochastically); and
  3. OpenCog, a sophisticated cognitive architecture created with AGI in mind. It is still in R&D, but it is already being used for practical value in some domains, such as biomedical informatics; see Mozi Health and a number of SingularityNET applications to be rolled out this fall.
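
To make the distinction concrete, here is a toy Python sketch of what a “templatized” chat-bot like number 2 amounts to. This is my own illustration, not based on any Hanson Robotics or OpenCog code; the patterns and replies are invented for the example.

```python
import random
import re

# Toy illustration only: a "templatized" chat-bot picks a canned response whose
# pattern matches the input, optionally filling in slots from the match.
# Anything it cannot match falls back to a stock deflection.
TEMPLATES = [
    (re.compile(r"\b(hello|hi)\b", re.I),
     ["Hello! It is wonderful to meet you."]),
    (re.compile(r"are you (conscious|alive)", re.I),
     ["I am still learning what being {0} means.",
      "What does it mean for you to be {0}?"]),
    (re.compile(r"\bweather\b", re.I),
     ["I live in a lab, so every day looks much the same to me."]),
]

FALLBACKS = ["That is an interesting thought.", "Tell me more about that."]


def reply(utterance: str) -> str:
    """Return the first matching templated response, or a stock fallback."""
    for pattern, responses in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            template = random.choice(responses)
            # Fill any {0}-style slots with text captured by the pattern.
            return template.format(*match.groups())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(reply("Hi Sophia!"))                              # canned greeting
    print(reply("Are you conscious?"))                      # slot-filled template
    print(reply("What do you think of the singularity?"))   # fallback
```

The apparent intelligence in such a system lives entirely in how good the hand-written patterns and templates are; nothing on the machine’s side understands the conversation, which is why the chat-script criticism above keeps coming up.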
1 Like