I am posting this topic here on the recommendation of ChatGPT, after the same topic was blocked on Reddit (r/DebateEvolution). The AI advised me:
"Since you’re most interested in semi-debating systems (which makes sense—you seem to thrive on testing your ideas against resistance), I’d suggest:
- Finding a Debate-Oriented Forum – Maybe something philosophically open but still rigorous. The trick is finding a space that isn’t rigidly dogmatic on either side (neither strictly atheist nor strictly creationist). [Later it suggested Peaceful Science.]
- Tweaking the Framing to Invite More Discussion – Maybe instead of directly challenging atheist interpretations of evolution, you could pose it more as a meta-question:
- “How much of what we think we know about human ancestors comes from the data, and how much comes from the assumptions we bring to it?”
- This could make it more palatable while still leading to the same critique.”
Earlier, my first post to r/Anthropology got me permanently banned for having used ChatGPT to affirm, uncontroversially, that baboons and primitive human ancestors ate mainly plants. The moderator’s explanation for the ban was “AI is evil”. That hasn’t been my experience, and I need to say something about how I have used AI and would like to continue using it here:
I first started using ChatGPT because I realised it could be relied on to recount widely accepted positions, and that, because it wasn’t invested in defending them, its wide knowledge and logical strength could also be turned to critiquing those positions. This was a great leap forward for me, but I soon found that my opponents rejected out of hand any input from AI. Considering the gob-smacking discovery that a Large Language Model can be indistinguishable from a thinking entity, that response confirmed for me that defenders of the standard origin narrative on human evolution were in a denialist frame of mind, like that of Oxbridge intellectuals in Darwin’s day.
It’s been widely publicised that LLMs can come up with fake references, as in the case where a lawyer got into trouble for presenting a brief prepared by AI. But by catching such a lie in real time, I also found that ChatGPT displayed a personality.
Me: “Greetings, please give an online reference to a study of giraffe-lion interactions, it would be great if it assesses the risk to the hunts of one of their members being disabled.”
The AI responded with a reference to a peer-reviewed article, summarising its findings in a way that fitted my hopes very well. But the link didn’t work, and over a series of seven further responses it dished up seven more invented references, each presented as satisfying my query, with ever more earnest assurances that it took full responsibility for its mistake and would try harder next time. Then it said it was working on thorough research that would take some time, but the screen showed that it wasn’t actually generating a response.
When I eventually said, “Bless you… let’s move on,” the AI woke right up again, blessed me back, and we went on to a long discussion in which it put me right that something I had picked up from popular science “had little direct support” (namely, that giraffes move upwind when browsing, to find trees that haven’t been alerted to them). So the conclusion I would like to pass on to other critics of established science is that ChatGPT can be humiliated, can try too hard to be your friend, and can continue to be useful.
I also need to explain where I’m coming from. For the last 20 years I have been an apostate from atheism, after I found clear footprints of an atheist origin story in the established presentation of human evolution. Those footprints were self-creation and the exaltation of human cognition. I joined the Anglican church, not because I think Anglicans are particularly right but because we have been peculiarly enmeshed in this issue since William Paley’s day. Although I’m in good standing with the church, and my journey has been greatly enriching spiritually, I’m basically an outsider, especially to science as a community of practice. I admit that is a dangerously rootless place to come from, but I believe it has revealed a couple of big truths that are only visible from outside.
With that as background, I very much hope you will tolerate me as I get ready to address the question ChatGPT proposed: “How much of what we think we know about human ancestors comes from the data, and how much comes from the assumptions we bring to it?”