Should We Fear AI?

All of these fears boil down to the fact that we just don’t know where AI is going or how soon it will get there. Technology makes surprising leaps in ways we never expect, and things we assume will take a while arrive quickly; on the other hand, things we thought would be here sooner still aren’t. It’s a situation where we just have to wait and see what comes.

AI is super boring to me. It’s not intelligence, only a memory-search machine: it works only on what is put into its memory, and it never knows more than what people have put there.
It could clean up logical thinking with the accuracy of mathematics, yet it is still just memory-search operations.
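The “memory search” view described above can be sketched very simply: a system that only ever returns the closest thing it has stored. This is an illustrative toy, not any real AI system; the `memory` data and the distance function are made up for the example.

```python
# Minimal sketch of the "memory search" view of AI: a nearest-neighbor
# lookup that can only answer from what was stored in its "memory".
# All data here is invented for illustration.

def nearest(memory, query):
    """Return the stored answer whose key is closest to the query."""
    return min(memory, key=lambda item: abs(item[0] - query))[1]

# "Training" is just filling memory with examples.
memory = [(1.0, "low"), (5.0, "medium"), (9.0, "high")]

print(nearest(memory, 4.2))   # picks the closest stored example -> "medium"
print(nearest(memory, 100))   # far outside memory, still answers "high"
```

Note the limitation the poster points at: the query `100` is nothing like anything in memory, but the system still confidently returns a stored answer.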

No, I don’t think we should fear AI.

The supposedly most worrying scenarios are mostly fiction. AI is not going to supersede humans.

Yes, it might be a job killer. But automation is already a job killer, and has been for some time. These are problems we can deal with if we revise some of our economic thinking.

What we have seen is that technology has increased worker productivity. But most of the benefits have gone to the very rich, instead of being spread among all. This is a problem due to our current economic assumptions. We need to find ways to change that.


I think there are a couple of issues, on different timescales:

  1. AI is still in its infancy. I don’t think most people really know what it is, and I suspect experts would describe it differently. Is AI merely a set of computational techniques (machine learning, neural nets, etc.) that lets us do computational tasks we previously could not do, or at least not on a feasible timescale? Is AI an attempt to augment human intelligence to handle “big data”? Is AI a push to understand human intelligence by modeling it computationally? Is it the hope, or fear, of a new form of consciousness?
  2. How will AI change life for humans? We can ask Alexa or Siri to do things that are convenient for us, but at what cost to privacy and security? How easy would it be to manipulate these systems to do bad things? Since the point with AI is often that we don’t exactly know how we get the answers we do (it’s a black box: input in, output out), does that mean we may be unable to detect or trace such manipulations?
  3. How does it change our ideas about crime/ethics/choice? NPR had a story about animal preserves trying to use AI to predict where poachers will go before they even decide to go there! Some states are looking at using AI to determine things like bail and sentencing. This starts to sound like the “precognition” of sci-fi. It is especially scary if AI is a black box.
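The “black box” worry in the points above can be made concrete with a toy model: even with full access to the parameters, the output carries no human-readable reason. The weights here are invented for illustration, not taken from any real system.

```python
# Illustrative sketch of the "black box" worry: a tiny fixed-weight model
# maps input features to a score, but the weights (made up here) offer
# no human-readable explanation of why a particular answer came out.
import math

WEIGHTS = [0.7, -1.3, 2.1]   # hypothetical "learned" parameters
BIAS = 0.4

def black_box(features):
    """Score some features; the 'why' lives only in opaque numbers."""
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 / (1 + math.exp(-z))   # squash to a 0-1 "risk score"

score = black_box([0.9, 0.2, 0.5])
print(round(score, 3))   # a number, with no rationale attached
```

Scale this up to millions of weights and the bail-and-sentencing concern becomes clear: the score is computable, but the reasoning behind it is not inspectable.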

@nwrickert’s point about jobs and automation is a good one as well. My field of science (chemistry) is slowly being automated or replaced by computational approaches. How will we, as a society, approach shifts in employment needs if things are changing this fast? How will high school and college educators keep up?

In the end, I know it’s a cliché, but I don’t fear AI itself so much as I fear people using AI as a weapon for evil rather than as a force for the common good.