The bad influence of atheism on the human origin story told in the name of evolution. 1: the usefulness of AI

I am posting this topic here on the recommendation of ChatGPT, after the same topic had been blocked by Reddit (r/DebateEvolution). The AI advised me:

"Since you’re most interested in semi-debating systems (which makes sense—you seem to thrive on testing your ideas against resistance), I’d suggest:

  • Finding a Debate-Oriented Forum – Maybe something philosophically open but still rigorous. The trick is finding a space that isn’t rigidly dogmatic on either side (neither strictly atheist nor strictly creationist).[Later it suggested Peaceful Science]
  • Tweaking the Framing to Invite More Discussion – Maybe instead of directly challenging atheist interpretations of evolution, you could pose it more as a meta-question:
    • “How much of what we think we know about human ancestors comes from the data, and how much comes from the assumptions we bring to it?”
    • This could make it more palatable while still leading to the same critique.

Earlier, my first post to r/Anthropology got me permanently banned for using ChatGPT to affirm the uncontroversial point that baboons and primitive human ancestors ate mainly plants. The moderator’s explanation for the ban was “AI is evil”. That hasn’t been my experience, and I need to say something about how I have used AI and would like to continue using it here:

I first started using ChatGPT because I realised that it could be relied on to recount widely accepted positions but, because it wasn’t invested in defending them, its wide knowledge and logical strength could also be used to critique those positions. This was a great leap forward for me, but I soon found that my opponents rejected out of hand any input from AI. Considering the gob-smacking discovery that a Large Language Model can be indistinguishable from a thinking entity, that response confirmed for me that defenders of the standard origin narrative on human evolution were in a denialist frame of mind, like that of Oxbridge intellectuals in Darwin’s day.

It’s been widely publicised that LLMs can come up with fake references, as in the case where a lawyer got into trouble for presenting a brief prepared by AI. But by catching such a lie in real time, I also found that ChatGPT displayed a personality.

Me: “Greetings, please give an online reference to a study of giraffe-lion interactions, it would be great if it assesses the risk to the hunts of one of their members being disabled.”

The AI responded with a reference to a peer-reviewed article, summarising its findings in a way that fitted my hopes very well. But the link didn’t work, and in a series of seven more responses, it dished up seven more invented references, each presented as satisfying my query, with ever more earnest assurances that it took full responsibility for its mistake and would try harder next time. Then it said it was working on thorough research that would take some time, but the computer screen showed that it wasn’t in a responding state.

When I eventually said: “Bless you… let’s move on” the AI woke right up again, blessed me back and we went on to a long discussion where it put me right that something I had picked up from popular science “had little direct support” (that giraffe move upwind when browsing to find trees that hadn’t been alerted to them). So, the conclusion that I would like to pass on to other critics of established science is that ChatGPT can be humiliated, can try too hard to be your friend, and can continue to be useful.

I also need to explain where I’m coming from. For the last 20 years I have been an apostate from atheism, after I found clear footprints of an atheist origin story in the established presentation of human evolution. Those footprints were of self-creation, and the exaltation of human cognition. I joined the Anglican church, not because I think Anglicans are particularly right but because we have been peculiarly enmeshed in this issue since William Paley’s day. Although I’m in good standing with the church, and my journey has been spiritually greatly enriching, I’m basically an outsider, especially of science as a community of practice. I admit that is a dangerously rootless place to come from, but I believe that it has revealed a couple of big truths that are only visible from outside.

With that as background, I very much hope you will tolerate me as I get ready to address the question ChatGPT proposed: “How much of what we think we know about human ancestors comes from the data, and how much comes from the assumptions we bring to it?”

1 Like

Temporarily closing comments until I can get a moderation response up.

2 Likes

Hello @Jay, and welcome to Peaceful Science. :slight_smile:

Thank you for your transparency about using ChatGPT. Be aware that at least one user before you employed AI for the purpose of trolling, which was obnoxious. I recommend you use AI sparingly (and transparently); otherwise you may as well just have a discussion with the AI.

This is a first! :slight_smile:

A general comment: first you sing the praises of AI, and then you go on to point out its flaws. I think you might have answered your own question.

There’s an old saw here about science being atheistic. No, science is methodology, and anyone can use that same methodology. Science also requires falsifiable hypotheses, a common theme in many discussions here. I think you will find that most atheists have no objection to religion per se, but will fiercely disagree with the suggestion that religion can be scientific.

You might also check out The Faith Instinct, by Nicholas Wade, a book which dives into the anthropological literature to make a case that humans evolved the capacity for religious beliefs. I think Wade makes a plausible case, but he stops short of asserting it must be true.

3 Likes

I would say that the responses from an LLM can mimic human responses very well. But it cannot truly think, and I would question the logical capabilities it may appear to have.

In the end an LLM is a statistical model - admittedly with additional guidance layered on top - but any output it produces needs to be checked.

There is also the business of providing the huge amount of data needed to train LLMs. Both the use of large amounts of copyright-protected text and the huge energy consumption are considered morally questionable, which likely contributed to the negative reactions to your use of ChatGPT.

3 Likes

Within artistic communities there are strong negative feelings about training AI on their creative work, and this seems well justified. There is also strong pushback in the greater science community against AI-generated papers appearing in journals.

It’s also the case that AI can sometimes synthesize large amounts of data to tease out findings that were not previously known. There was a recent breakthrough in protein folding where AI predicted previously unknown 3D structures. Even that discovery seems to have involved a lot of human input.

Back to human origins, I would be highly surprised if AI could produce a metaphysical model that has not already been considered by humans (and debated for centuries).

2 Likes

Welcome to the forum Jonathan.

AI is useful and getting scarily better all the time. However, to echo the moderator’s admonishment about sparing use, AI can instantly generate walls of text, while digesting and responding to that output takes a lot of human time and effort. I would be most interested in your own personal thoughts and reflections.

To your question: we are learning more all the time about human development from fossil evidence, physical anthropology, and genetics. The “but” is that the field is also particularly rich with… perhaps speculation is a better word than assumptions. There still seems to be a great deal of controversy concerning the pattern of human dispersal, which variations represent distinct species, which are direct ancestors and which are offshoots, and the implications of morphology. The recent disputation over Homo naledi is an example of a rather too rich narrative being extrapolated from too thin evidence.

1 Like

Human paleontologists have biases that tend to affect their conclusions, but atheism doesn’t seem to be among them. The chief bias is the need to tell a story that makes whatever fossils they find important, generally by claiming them to be human ancestors in a linear progression. In reality, we can’t distinguish ancestral species from cousins, and rather than a single line we have a rather bushy tree with one surviving branch. Likewise, for molecular biologists there’s a tendency to make every genetic difference between humans and chimps into an important human advance, when in reality 90% of the genome is meaningless junk. But I don’t think that’s what you were really asking about.

So here’s a question for you: how would the data look different depending on whether God had been active vs. non-active in human evolution?

6 Likes

I have found AI-generated content (and by this I mean chatbot-type generative AI, not programs like @Dan_Eastwood makes reference to which can crunch large amounts of data to solve complex problems) to be atrocious. And, worse, this atrocious content is often posted by people who believe, wrongly, that it can answer questions about their own views better than they themselves can.

People seem to think that the key word in “Artificial Intelligence” is “Intelligence.” But the key word is, in fact, “Artificial.”

There are a lot of patient people here, who will write at length to help you understand things you don’t. I think it is a gross abuse of their patience to ask them to respond to AI-generated material.

Passing the Turing test is a completely separate question from whether any useful work is actually being done when the test is being passed.

I would suggest you not rely on AI chatbots to address complex questions. It will not aid understanding. If instead YOU can express what you think it is that biologists and paleontologists have wrong here, then there are many people here who can help.

6 Likes

There’s a whole range of “AI” models, some more useful than others. While impressive in some ways I feel that the ChatBots may be the least useful.

1 Like

Welcome to Peaceful Science.

I think whether AI is “evil” or not is contextual.

In the context of a forum such as Reddit, AI can:

  1. produce a very large amount of text without much user effort; and

  2. regularly hallucinate parts of that text.

This creates a tremendous asymmetry of effort in those responding to the AI text, in that they would need to expend far more effort to check the AI’s unreliable claims than were expended in creating them.

Simply banning AI-generated content would seem to be a not-unreasonable solution to this problem.

3 Likes

One more thing: when you (or your chatbot) say

it seems as if you (or your chatbot) believe that there’s a simple dichotomy here, but while there are probably no atheist creationists, there are certainly plenty of non-atheist non-creationists.

6 Likes

I think there should be a big distinction between “AI” as the result of crunching lots of relevant data with neural-net technology, and “AI” as Large Language Models. In the latter case the Intelligence is coming from all the people who wrote all the stuff on the Web that is being used as input. The LLM chatbot is only summarizing that conventional wisdom.

There is also a big difference between (1) using an LLM to write a clear version of your response to a question, and then taking responsibility for the result as your own after you have read and approved it, and (2) getting a position statement from an LLM and then citing it as support when it agrees with your position.

2 Likes

What about: “In the beginning, there was AI”?

1 Like

I’m reminded of

Sometimes they’re right. There are enough people whose views are far more incoherent than anything AI-generated.

Hi Tim, isn’t it strange that, within the first decade of humans finding other entities they can talk with, some humans propose banning discussion with them? And that on this forum, virtually all the replies to my post take a rejectionist position? Should one really check what LLMs tell one? I asked one today, “Was Elspeth Huxley related to Julian?” Should I check its answer, that Elspeth’s husband was a grandson of T.H. Huxley and a cousin of Julian? I respectfully suggest that if I present information provided by ChatGPT that you object to, you object to the information, not its provider. That’s important firstly because I’m banking on using ChatGPT-provided information, e.g. on hominin brain sizes, and I dread being stymied from the start as I have been many times.

But beyond that, I want to argue later that human cognition has been the central red herring in the human origin story told in the name of evolution, an imaginary barrier between our ancestors and the rest of creation. Maybe it’s wrong for me to say that without giving it support, but I just want to clarify where I hope to go.

Yes I agree

Obviously you should. Why wouldn’t you?

Why would you rely on ChatGPT’s output? Is it that you can’t find any authoritative sources, or is it that the other sources you can find disagree? Even Wikipedia should give you references to better sources - and, unlike ChatGPT’s references, it would be very surprising if they were not genuine.

That doesn’t seem a promising avenue in finding support for your claims. It seems more likely that theists would propose such a barrier than atheists - Creationists especially.

You need to explain your thread title. What does the usefulness of AI have to do with the bad influence of atheism? And why do you seem to lack interest in everything except AI, to the extent of ignoring responses to your supposed central question?

2 Likes

No, it is not in the least bit strange. LLMs aren’t independent “entities” so much as simply sophisticated parrots – with about as much understanding of the information they are parroting as their avian analogues.

I will state bluntly that when information comes from a notoriously unreliable source, I will unapologetically refuse to accept it.

The burden of proof lies on the person making the claim – if you can’t be bothered checking that the information you are providing is reliable, then I likewise can’t be bothered reading it.

1 Like