The "Evangelical" Statement on AI

“We recognize that AI will allow us to achieve unprecedented possibilities, while acknowledging the potential risks posed by AI if used without wisdom and care,” state the authors of the new Evangelical Statement of Principles on Artificial Intelligence, unveiled today in Washington, DC. “We desire to equip the church to proactively engage the field of AI, rather than responding to these issues after they have already affected our communities.”

Not sure what to make of this…pretty strange to me I’d say.

I think they have heard the hype, and are worried about AI systems becoming moral agents.

My advice – don’t believe the hype.

6 Likes

That is what I think too. Seems a bit overboard honestly, and perhaps even moralizing. But maybe I’m misreading it.

I wonder if they need to watch Battlestar Galactica, which invokes essentially a Christian God. Turns out that Cylons are in the Image of God.

What do you think @terrellclemmons and @TWReynolds?

2 Likes

As a computer engineer who’s worked with AI, I agree about not believing the hype. It’s an amazing technology, but the public perception is far from reality.

Much like I’m sure most of the scientists here want to pull their hair out after reading most articles on biology written by, or targeted at, the general public, I have the same reaction when reading almost every article about AI (including comments by Stephen Hawking).

7 Likes

Offhand, I think three things:

  • I’m not worried about AI systems becoming moral agents.
  • To be honest, I don’t perceive from reading that CT piece that the people behind the AI statement fear that either.
  • I think the CT piece is misnamed.

3 Likes

Did you read the statement? That seems a bit harder for me to justify, at least in some parts.

Clickbait?

I had not when you first asked; I had read the CT piece only. Now, because you asked, I read the statement.

Now, offhand, I think three more things:

  • There’s a lot there.
  • I think a lot of things about it.
  • For you to ask me what I think about it is a super-broad question.

So, if you have a specific point or question that you want to bring up for discussion, I’d say, write up what you think about it, what your question is, and maybe also why you specifically asked me to respond to it. That way I can have an idea of where you’re coming from and not respond in some vague or scattershot manner.

If you don’t want to do that, that’s fine. I’ll just leave it at what I said earlier.

1 Like

I only briefly read some of Asimov’s work, but it seems cogent here. Asimov’s three basic rules for robots are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I think we are already at the point where rules like these matter. We have AI driving cars, and this comes with a whole host of moral questions. If the AI has the choice of avoiding a pedestrian or swerving into head-on traffic and possibly killing multiple people, what does it do? If the pedestrian is a mother pushing a stroller, does that change the decision?

I wouldn’t say that the AI driving your car is a moral agent, but there are still moral questions when it comes to AI.

4 Likes

That reminds me of an NPR interview I heard with Hannah Fry, author of Hello World: Being Human in the Age of Algorithms. She talked a lot about the intersection between ethics and AI. I haven’t read the book yet, but it’s on my reading list.

2 Likes

Somewhat related: In self-driving cars, how does one optimize the outcome of an inescapable accident? This is related to the Trolley problem and various derivatives.
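
As a rough illustration, here is a minimal sketch (in Python, with entirely made-up maneuvers, probabilities, and harm scores) of what “optimizing the outcome” could mean if framed as minimizing expected harm:

```python
# Toy expected-harm minimizer for an inescapable-accident scenario.
# Every maneuver and number here is invented purely for illustration.

candidate_maneuvers = {
    "brake_straight": {"p_collision": 0.9, "harm_if_collision": 2.0},
    "swerve_left":    {"p_collision": 0.4, "harm_if_collision": 5.0},  # into oncoming traffic
    "swerve_right":   {"p_collision": 0.7, "harm_if_collision": 1.0},  # toward the shoulder
}

def expected_harm(outcome):
    # Expected harm = probability of a collision times the harm if it happens.
    return outcome["p_collision"] * outcome["harm_if_collision"]

best = min(candidate_maneuvers, key=lambda m: expected_harm(candidate_maneuvers[m]))
print(best)  # the maneuver with the lowest expected harm
```

Of course, the entire trolley problem is hiding inside that harm table: deciding how to score a pedestrian against oncoming traffic is the moral judgment, not the arithmetic.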

3 Likes

Indeed. This raises some interesting questions about responsibility and culpability. If the AI driving the car is not a moral agent, are the programmers of that AI the relevant moral agents? Since they are not available to make moral assessments in the moment, does that obligate them to explicitly decide all moral judgments in advance? I presume enumerating all the scenarios is impractical and perhaps not even possible in principle. And in the case of AI drivers developed with deep learning techniques, humans may not be specifying any particular behaviors and also may not be able to assess in advance what the behavior will be in all possible scenarios.

An obvious alternative to explicit enumeration of all moral scenarios is a set of heuristics such as Asimov’s laws. But heuristics may not fully specify outcomes for all inputs. Note that Asimov’s laws do not resolve @T_aquaticus’ questions; if those dichotomies truly represent the set of possible actions, then some violation of rule 1 will occur. Do such heuristics then represent a de facto delegation of moral agency to the AI, regardless of whether it is competent to be a moral agent?
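
To make that concrete, here is a rough sketch (again in Python, with an invented scenario) of rule 1 treated as a mechanical filter over candidate actions. When every available action violates the rule, the heuristic is silent, and whatever fallback policy breaks the tie is effectively making the moral call:

```python
# Toy version of Asimov's rule 1 as a filter over actions.
# The actions and their predicted consequences are invented for illustration.

actions = {
    "continue_toward_pedestrian":   {"predicted_human_harm": True},
    "swerve_into_oncoming_traffic": {"predicted_human_harm": True},
}

def rule_one_permits(consequences):
    # Rule 1: a robot may not injure a human being.
    return not consequences["predicted_human_harm"]

permitted = [name for name, c in actions.items() if rule_one_permits(c)]

if permitted:
    print("Permitted actions:", permitted)
else:
    # Every option violates rule 1, so the rule specifies nothing;
    # the tie-breaking policy that runs next is the de facto moral agent.
    print("No action satisfies rule 1; a fallback policy must choose anyway.")
```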

4 Likes

Yes, they are.

Indeed, it does. And this will be a serious limitation of AI.

We saw an example of this with the Uber accident, where a self-driving car struck and killed a pedestrian. That seems to have resulted in some rethinking of the idea of autonomous cars. So apparently the people involved do recognize that it is their moral responsibility to avoid such accidents.

The problem here is that Asimov’s laws are intentional and teleological rules, but an AI system needs mechanical rules. So there’s a huge mismatch there.

5 Likes

So in that case, should we stop talking about AI and self-driving cars and instead refer to the technology as cyborg enhancements? We are enabling a small number of drivers/programmers to drive a large number of vehicles remotely and simultaneously. The tradeoff for this enhancement is that the feedback they get about driving conditions and how their reactions affect outcomes now has a much higher latency.

As someone who does not especially enjoy driving, I have been very enthusiastic about the idea of my car driving itself. But when I started thinking about it as a cyborg in Silicon Valley driving my car via a high-latency, batched process, my enthusiasm waned.

2 Likes

Well, after humans discovered a way to increase their driving reaction latency with ‘smart’ phones and social media, computer guided cars seem like a more competitive option…

3 Likes

Right, there is also the macro level variation of a sort of trolley problem, where the question is less about whether to throw the switch and more about who gets to make the decision. If “self-driving”/“remotely cyborg-driven” cars have a lower overall morbidity & mortality, is it acceptable for the cars/cyborgs to be making those decisions?

At a pure population health level, the answer seems obvious. But there are a few tradeoffs involved. The first is accepting that individual decisions may be inscrutable or unsettling. The second is accepting unfamiliar risks, such as the risk of injury due to a software bug; for better or worse, humans have a tendency to accept a higher level of familiar risk than a lower level of unfamiliar risk. And the third, assuming we hold the programmers morally responsible, is the tradeoff of allowing a relatively small number of people to accept all of the assigned culpability with a less clear understanding of the relevant risks.

That would probably be more accurate.

1 Like

That’s a decision that we, as a society, will have to make.

If the to-do about Brexit is any indication, that could be a very divisive debate.

So I am planning on writing a bit about this, but could use some help. What are the fields of the signatories? How many of them have any professional expertise in AI? My guess is very few. Can someone help me count?

So I count 72 signatories, and only one that has an obvious connection to AI, and even then he is not an academic. It is fairly surprising to me that no Christians in this field, such as myself or Rosalind Picard, were consulted.

1 Like