This is not AI. It is an agent-based simulation.
This is bad reporting, bad logic, and bad science. We need to be most cautious about studies that confirm our own biases. This is an example of how anti-religious folk were taken for a ride.
A valid critique of this article surfaced…
Anxiety levels that could be precursors to social violence were guaranteed to appear: the programmers would tweak their program until they did. Nor could the study have discovered some other cause of the impending violence, because group identities were the only possibilities the model allowed.
Interestingly, the model under study didn't even address violence directly. It featured a built-in marker for anxiety. The authors checked whether any part of the system passed a threshold value, on the theory that if the number got high enough, violence might follow.
The study did not assert that humans are normally peaceful, because neither peace nor violence was actually modeled in the simulation. The programmers initialized a number they labeled "anxiety" at a low level, then showed that their code successfully pushed that number past a threshold.
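To make the critique concrete, here is a minimal sketch of the kind of setup being described. Everything here is hypothetical (the names, the threshold, the update rule are all my illustration, not the study's actual code): an "anxiety" number starts low, the update rule can only push it upward, so crossing the threshold is guaranteed by construction rather than discovered.

```python
# Hypothetical sketch of a threshold-crossing agent model.
# NOT the study's code; names and constants are invented for illustration.

ANXIETY_THRESHOLD = 0.8  # assumed value, chosen arbitrarily here

class Agent:
    def __init__(self, group):
        self.group = group
        self.anxiety = 0.1  # initialized low, as the critique notes

def step(agents):
    """One tick: out-group contact bumps anxiety upward, never downward."""
    for a in agents:
        out_group = sum(1 for b in agents if b.group != a.group)
        a.anxiety = min(1.0, a.anxiety + 0.01 * out_group / len(agents))

def run(agents, ticks=200):
    for _ in range(ticks):
        step(agents)
    # The "finding" is just this check: did anyone cross the threshold?
    return any(a.anxiety > ANXIETY_THRESHOLD for a in agents)

agents = [Agent("A") for _ in range(10)] + [Agent("B") for _ in range(10)]
print(run(agents))  # True: the crossing is baked into the update rule
```

Because the update rule is monotonically increasing whenever out-group members exist, the threshold crossing follows from the rule itself, which is exactly the point of the critique: the output was a property of the code, not a discovery about people.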
This was written by @johnnyb, a guy I often disagree with about many things. This critique, however, seems to be exactly correct.