This seems to me to be an important aspect of Generative AI that often gets obscured – Gen AI only predicts (e.g. the next words, computer code, etc.); it does not analyse, or ‘think’ about, the underlying concepts.
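To make that ‘prediction, not analysis’ point concrete, here is a deliberately toy sketch in Python. It is a hypothetical bigram counter, nothing like a real LLM in scale or architecture, but it shares the same kind of objective: continue the sequence based on what usually follows, with no model of the concepts behind the words.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small corpus
# and always predict the most frequent continuation. It captures surface
# statistics only; there is no representation of meaning.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word`; no reasoning involved."""
    if word not in follows:
        return "<no idea>"
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))    # 'on', because that is what usually followed 'sat'
print(predict_next("hippo"))  # '<no idea>', nothing seen to predict from
```

A real Gen AI model is vastly more sophisticated at the prediction step, but the task it is trained on is the same kind of thing: continue the sequence, not analyse it.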
(Interestingly, one of the authors, Joshua Gans, was an associate of an economics consultancy I used to work for, twenty-odd years ago.)
I would think it includes things like inductive and deductive reasoning.
Because experiments have been done to test Gen AI’s ability at this sort of reasoning, and the models failed them.
For one thing, it would be able to distinguish between information that was relevant versus irrelevant to the problem at hand. That was one of the tests it failed.
Fair point, but what about intuitive thinking, making a leap between two seemingly unrelated things? Of course, a bad intuition might be hard to distinguish from hallucination.
Or consider a child, who might reach a wrong conclusion because they do not understand. This too is thinking, just not what an adult would call correct thinking.
Validation may be a missing step. Is the “thought process” valid for more than a single prediction?
The first problem with that line of thought would be ‘how do we rigorously define “intuitive thinking”?’ I rather suspect we can’t, and therefore we can’t distinguish between intuition and blind guessing.
In that circumstance, it should be possible to identify the incorrect axiom or inference that led to the “wrong conclusion” (even flawed reasoning is still reasoning). With a Gen AI, how it got there is simply a ‘black box’.
The problem is that, as we are only validating the prediction, not any reasoning used to get to it, any validation of Gen AI predictions will be localised: it cannot tell us anything about validity beyond what we have directly tested.
For example, even if we validate a Gen AI as giving an accurate weather forecast for tomorrow under normal conditions, we won’t know whether it does so for the day after a hurricane until we directly validate that case too.
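To illustrate that ‘localised validation’ worry in code, here is a minimal sketch (the forecaster and the data are entirely hypothetical): a perfect score on the cases we thought to test says nothing about the cases we did not.

```python
# Minimal sketch of localised validation: we can only trust the score on the
# conditions we actually tested. Forecaster and data are made up for illustration.

def accuracy(forecaster, cases):
    """Fraction of (conditions, outcome) pairs the forecaster gets right."""
    hits = sum(1 for conditions, outcome in cases if forecaster(conditions) == outcome)
    return hits / len(cases)

def persistence_forecast(conditions):
    """Naive rule: tomorrow will look like today."""
    return conditions["today"]

normal_days = [
    ({"today": "sunny"}, "sunny"),
    ({"today": "cloudy"}, "cloudy"),
]
post_hurricane_days = [
    ({"today": "clear"}, "flooding"),  # the usual pattern no longer holds
]

print("normal conditions:  ", accuracy(persistence_forecast, normal_days))          # 1.0
print("day after hurricane:", accuracy(persistence_forecast, post_hurricane_days))  # 0.0
```

The perfect score on the normal days is genuine; it just does not transfer. The only way to know how the forecaster behaves after a hurricane is to validate that case directly.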