Uncertainty in the model speaks for itself.
It looks like Mann is trying to deal with this by simplifying the analysis and looking at short deviations from the norm.
I am hoping someone can explain his method.
Statistical models are all about quantifying uncertainty. As Tim notes, if there wasn’t any uncertainty (or if the uncertainty was not quantified) that would be a cause for suspicion of the results. These models give us predictions AND confidence intervals for those predictions. Those confidence intervals are interesting specifically because they describe the uncertainty in the predictions.
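As a toy sketch of what I mean (my own made-up data and a simple linear trend, nothing from the report): an ordinary regression hands you both the predictions and the confidence intervals around them.

```python
# Minimal sketch: fit a linear trend, then get predictions WITH
# confidence intervals. Data here is invented for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(1950, 2021)
temps = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)  # fake anomalies

X = sm.add_constant(years)
fit = sm.OLS(temps, X).fit()

# Predict for future years; conf_int() gives the 95% interval.
future = sm.add_constant(np.arange(2021, 2101))
pred = fit.get_prediction(future)
print(pred.predicted_mean[:3])   # point predictions
print(pred.conf_int()[:3])       # intervals widen as we extrapolate further out
```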
There is more here that may not be obvious. You can have a statistically significant model that still gives terrible predictions. This can happen because the wrong sort of model is being used. There are Goodness-of-Fit methods, which test whether the variability around the model’s predictions exceeds (or falls short of) expected random variation.
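A concrete (made-up) example of a Goodness-of-Fit check, using a plain chi-square test on counts:

```python
# Toy goodness-of-fit check: do observed counts deviate from what the
# model expects by more than chance? Numbers are made up for illustration.
from scipy.stats import chisquare

observed = [18, 25, 22, 35]   # event counts in four bins
expected = [25, 25, 25, 25]   # what the fitted model predicts

stat, p = chisquare(observed, f_exp=expected)
print(stat, p)  # a small p-value flags more deviation than chance allows
```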
I haven’t read the report, but the graph Tim posted appears to be a sort of Variance Components analysis (VC):
The difference here is VC analysis specifically focuses on sources of uncertainty, and tells us how much uncertainty is due to each. Here the authors have combined predictions and uncertainty (expressed as confidence intervals) to show sources of uncertainty over time. Uncertainty increases as the predictions go farther into the future, as they should (predicting the future is hard!).
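I can’t reproduce the report’s actual VC analysis, but here’s a rough sketch of the idea with invented numbers: split the total spread of an ensemble of predictions into a between-source piece and a within-source piece.

```python
# Rough sketch of the variance-components idea, NOT the report's method.
# Five "sources" (say, five models), each giving 20 predictions for one year.
import numpy as np

rng = np.random.default_rng(1)
means = np.array([[2.0], [2.5], [3.0], [1.8], [2.7]])
preds = rng.normal(loc=means, scale=0.4, size=(5, 20))

between = preds.mean(axis=1).var()   # spread of the source means
within = preds.var(axis=1).mean()    # average spread within a source
total = preds.var()

print(between, within, between + within, total)  # law of total variance
```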
Thanks Dan
I have been slogging through Mann’s work (fellow Cal bear) and it appears to be based on Monte Carlo analysis and on spotting statistical anomalies in more recent historical data (where man-made carbon is present) and in narrower geographic locations.
Have you used this method before?
Am I reading the first chart right? Is the range of warming predicted for the globe 1 to 8 degrees F over the next 80 or so years?
Are you aware of the oil companies investments in carbon capture processes?
“His method.”
You’re not quoting a paper of his. He’s not on the list of authors. He’s not even a co-author on any of the papers in the list of references.
Yes, I have used bootstrapping. It’s a method of estimating the distribution (of some statistic) from the data itself, resampling (with replacement) many (thousands of) times to build up that distribution. It’s often used when the model assumptions are not met or cannot be verified. It’s also used to estimate distributions in complicated situations that can’t be worked out theoretically.
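A minimal sketch of the idea (toy data, my own example): resample with replacement, recompute the statistic each time, and read the distribution off the resampled values.

```python
# Minimal bootstrap sketch: estimate the sampling distribution of the
# mean by resampling the data with replacement. Toy data, for illustration.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.5, 1.0, size=200)  # pretend these are real observations

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])

print(np.percentile(boot_means, [2.5, 97.5]))  # 95% percentile interval
```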
Without looking, I’d guess that bootstrapping is being used because of dependencies in the data - statistical independence of data is a common model assumption.
Example: With Independent Identically Distributed Data (IID) we would use a T-test to compare means. If the data isn’t IID, then the T statistic doesn’t have exactly a T distribution, and the T-test doesn’t work as it should (the Type I and II error rates are off). It’s HARD to work out the distribution of the T statistic when the data isn’t IID, but bootstrap can be used to estimate it.
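Here’s a toy version of that (made-up samples, my own construction; note that for truly dependent data you’d want a fancier block-style resampling, the plain version below just shows the idea): estimate the null distribution of the T statistic by resampling, instead of trusting the textbook T distribution.

```python
# Toy sketch: bootstrap the null distribution of the two-sample T statistic
# rather than assuming it follows a T distribution. Samples are made up.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, 50)
b = rng.normal(0.3, 1.5, 60)

t_obs = ttest_ind(a, b, equal_var=False).statistic

# Impose the null hypothesis: shift both samples to a common mean.
pooled = np.concatenate([a, b]).mean()
a0 = a - a.mean() + pooled
b0 = b - b.mean() + pooled

t_boot = np.array([
    ttest_ind(rng.choice(a0, a.size, replace=True),
              rng.choice(b0, b.size, replace=True),
              equal_var=False).statistic
    for _ in range(5000)
])

# Bootstrap p-value: how often is the resampled T as extreme as observed?
print((np.abs(t_boot) >= abs(t_obs)).mean())
```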
The key assumption for bootstrap is that the data available are representative of the underlying true distribution. That can be a problem with small data sets, but climate data sets tend to be huge, so it shouldn’t be a problem here.
I was finding Bill’s repeated references to Mann confusing as well – particularly as no paper by him seemed to have been cited in this thread.
I agree.
This is one of the challenges and why Mann may be on the right track in his attempt to generate simpler models.
@colewd: what papers by Mann are you referring to?
As no papers by Mann appear to have been cited on this thread.
How do you expect @Dan_Eastwood to comment on papers that have not even been cited, let alone read by him?
@Dan_Eastwood: why are you indulging Bill’s substance-free sealioning on this?
I think it’s relevant to explain why we use statistics to deal with uncertainty, and when particular methods are appropriate.
Of course, Bill still hasn’t answered MY question yet, but we’re getting there …
Yes, but when it is completely unclear what papers are being discussed, I would suggest that the conversation adds more confusion than clarity. Perfect if the objective is sealioning, if the objective is “explaining” then not so much.
Addendum: it is unclear whether Mann himself has used Monte Carlo since his PhD thesis:
Google scholar search: “Michael Evan Mann” “monte carlo”
Hi Tim, Dan
The objective is to learn more about the subject. I do not take the position that man-made climate change is not a problem.
As, AFAIK, no papers by him have been cited on this thread.
What does it say?
That it’s not certain?
Hello, My name is Kent Hovind…
I am the very model of a modern creationist,
I’ve information vegetable, animal, and atheist,
I know the kings of Bible, and I quote the fights historical,
From Genesis to Revelation, in order categorical;
I’m very well acquainted too with matters theological,
I understand the intricacies of arguments biological,
About the age of Earth and how species originate,
In short, in matters biblical and scientific, I am up-to-date.
(Hat-tip ChatGPT)
ChatGPT needs to learn how to scan.
And rhyme.
Everybody’s a critic.
Bill Cole:
If the goal is to learn more about the topic, then the time is ripe for you to understand that the climate problem is less about predicting specific future temperatures, and much more about how to slow down unstoppable glacier loss!
We are long past the CO2 tipping point, which has been shown to lie in the mid-range between 180 ppm and 280 ppm. At 180 ppm, glaciers form up to a mile high above the site of future NY City. At 280 ppm, virtually all glaciation disappears. Every 100,000 years, for 800,000 years, this pattern has repeated itself, even without a good equation for estimating specific temperatures in specific years.
Now that CO2 is at 400+ ppm, the tipping point is well behind us… and it will continue to be, until humans develop global-scale ways of fixing and removing CO2 from the air!
Watch this comprehensive NOVA video… or at least the last 30 minutes (search the title in YouTube):