This strong AI, also known as artificial general intelligence (AGI), has not yet been achieved, but its arrival would require a rethinking of most qualities we associate with uniquely human life: consciousness, purpose, intelligence, the soul—in short, personhood. If a machine could think like a human, or make decisions autonomously, should it be considered a person?
Noreen Herzfeld, professor of theology and computer science at St. John’s University, further explained these issues in her paper, “Creating in Our Own Image: Artificial Intelligence and the Image of God.” She writes, “If we hope to find in AI that other with whom we can share our being and our responsibilities, then we will have created a stand-in for God in our own image.”
As weak AI evolves into strong AI, says James F. McGrath, author of “Robots, Rights and Religion” and a New Testament professor at Butler University, humanity will have grown accustomed to treating it like an object. Strong AI, though, is by definition human-like in intelligence and ability. Its development, he says, would force humans to reconsider how to appropriately interact with this technology—what rights the machines should be afforded, for instance, if their intelligence earns them a designation beyond that of mere tools.
“The worst-case scenario is that we have two worlds: the technological world and the religious world.” So says Stephen Garner, author of an article on religion and technology, “Image-Bearing Cyborgs?” and head of the school of theology at Laidlaw College in New Zealand.
I wonder whether some of the best contemplation of these questions takes place in science fiction books and films.