Southern Baptist Statement of Principles on Artificial Intelligence

I realize that @swamidass wrote an opinion commentary on this topic for the Wall Street Journal a few months ago but I don’t have a WSJ subscription to get through the paywall. So I haven’t read his essay. However, I read the Southern Baptist Convention project’s published statement:

I also read Jonathan Bartlett’s critique of @swamidass’ WSJ editorial:

I’d like to know what others think of these documents. I’ll get the ball rolling with the following observations.

Article 2 of the Southern Baptist Convention project statement says:

We deny that the use of AI is morally neutral.

Did the authors perhaps mean something more like “We deny that the use of AI is necessarily morally neutral”? After all, is AI technology really so fundamentally different from so many other major advancements, such as the first wave of computer technology or even Alfred Nobel’s invention of dynamite (nitroglycerin stabilized in an absorbent clay)?

After all, Nobel intended his creation to save enormous expense and human toil in excavating rock and boring railroad tunnels. Surely dynamite can be used in the pursuit of goals which are morally good. Nevertheless, dynamite can also be used by terrorists to maim and to promote the overthrow of a lawful government through reckless anarchy, a moral evil. So isn’t dynamite morally neutral until it is applied towards a specific goal that is recognized as morally good or bad?

It is not worthy of man’s hope, worship, or love.

I would have preferred “It is not worthy of man’s ultimate hope, worship, or love.” Is that what they meant?

As to A.I. not being worthy of man’s worship, has this been a problem with A.I. so far? Will it ever be? Are they worried that, much like an episode in the original Star Trek TV series, an artificially intelligent computer will be worshiped as the central deity of a new religion? (Remember the natives in that classic episode bringing fruit as an offering to the “god of the volcano”, which had another civilization’s computer inside it?) I suppose a general Statement of Principles has to cover every possible contingency, so no harm done. Right? So the Southern Baptist Convention simply wants people to know that worshiping an Artificially Intelligent system is not cool. Got it.

As to A.I. not being worthy of man’s love, I suppose the fear is sex robots. In that case, they are already available at retail. (“The cow is already out of the barn,” as we country folk used to say.)

Returning to Jonathan Bartlett’s essay, my initial read-through gave me the impression that Bartlett is confusing moral responsibility with the legal culpability which goes with robotic cars. (Yes, the two types of responsibility are related but not necessarily identical.)


I’m going to publish my piece on the PS blog soon. Also, I had a great conversation yesterday with Jason Thacker of the ERLC. Looking forward to a possible collaboration between the ERLC and PS.


I’ve encountered Bartlett on FaceMook and read a few of his essays. I wouldn’t put much weight on his opinions.

I’m quite certain I do not want to google the topic! :fearful:

I think they avoided the really deep issues. Could an AI have a soul? Is it wrong to turn off an AI, thus killing it? So far these questions are science fiction, but that could change.

The broad principles of the Southern Baptist statement seem reasonable – that we should not use AI to dehumanize. But some of the specifics may be a tad too specific. Yes, I assume that they are troubled by the possibility of sex bots. But their statement could be read as objecting to the use of AI in some kinds of prosthesis, and I question that.

I didn’t have any serious objections to Bartlett’s comments.

Are you referring to this part?

Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

I think they are referring to transhumanism here.

I was bugged a little bit by the end of Bartlett’s essay:

The idea that theologians shouldn’t comment on anything without first running to experts to tell them what to think has neutered theology for the last century. I for one am glad to encounter theologians who understand both where their expertise lies (in this case, the relationship between humanity and our tools) and how it can be applied in the real world.

I am sympathetic to his cause, and no doubt theologians were (unfairly, and potentially unwisely) pushed out of many of the academic discussions in the 20th century. However, I think theologians bear a big share of the blame for that themselves. It is very rare, in my experience, to encounter a theologian who will truly step out and engage the sciences (natural or social) or other areas of modern academics. Perhaps theology was neutered for the last century, but I have a hard time seeing where the theologians put up much of a fight. There are exceptions (T.F. Torrance comes to mind), but much of the work at the intersection between science and faith has come from scientists or scientists-turned-theologians like John Polkinghorne, Alister McGrath, etc.

He misread me badly too. I was not saying they shouldn’t comment. I was suggesting they should get more information before they commented.


Too often it seems set up like a debate:

  • Scientist (Christian or not) makes their case from their field
  • Theologian makes their case from their field
  • The audience is left to decide who’s right and who’s wrong

The ERLC statement seemed kind of like a shot across the bow.

I think it’s more powerful when the disciplines spend some time seeking mutual understanding and then enter into dialogue as conversation partners looking for common ground.

I know theologians have things to say about AI and transhumanism, but they need a better understanding of what AI is. Similarly, scientists could spend some time with ethicists, if not theologians, to learn more about the deeper significance of what it means to be human and to think about moral transfer.