I realize that @swamidass wrote an opinion commentary on this topic for the Wall Street Journal a few months ago but I don’t have a WSJ subscription to get through the paywall. So I haven’t read his essay. However, I read the Southern Baptist Convention project’s published statement:
I also read Jonathan Bartlett’s critique of @swamidass’ WSJ editorial:
I’d like to know what others think of these documents. I’ll get the ball rolling with the following observations.
Article 2 of the Southern Baptist Convention project statement says:
We deny that the use of AI is morally neutral.
Did the authors perhaps mean something more like “We deny that the use of AI is necessarily morally neutral”? After all, is AI technology really so fundamentally different from so many other major advancements, such as the first wave of computer technology or even Alfred Nobel’s invention of dynamite (nitroglycerin stabilized in an absorbent clay)?
After all, Nobel intended his creation to save enormous expense and human toil in excavating rock and boring railroad tunnels. Surely dynamite can be used in pursuit of goals which are morally good. Nevertheless, dynamite can also be used by terrorists to maim and kill, or to promote the overthrow of a lawful government through reckless anarchy, a moral evil. So isn’t dynamite morally neutral until it is applied toward a specific goal that is recognized as morally good or bad?
It is not worthy of man’s hope, worship, or love.
I would have preferred “It is not worthy of man’s ultimate hope, worship, or love.” Is that what they meant?
As to AI not being worthy of man’s worship, has this been a problem with AI so far? Will it ever be? Are they worried that, much like an episode in the original Star Trek TV series, an artificially intelligent computer will be worshiped as the central deity of a new religion? (Remember the natives in that classic episode bringing fruit as an offering to the “god of the volcano”, which had another civilization’s computer inside it?) I suppose a general Statement of Principles has to cover every possible contingency, so no harm done. Right? So the Southern Baptist Convention simply wants people to know that worshiping an artificially intelligent system is not cool. Got it.
As to AI not being worthy of man’s love, I suppose the fear is sex robots. In that case, they are already available at retail. (“The cow is already out of the barn,” as we country folk used to say.)
Returning to Jonathan Bartlett’s essay, my initial read-through gave me the impression that Bartlett is confusing moral responsibility with the legal culpability that attaches to robotic cars. (Yes, the two types of responsibility are related, but they are not necessarily identical.)