Postdiction vs. Prediction

Very much so.

2 Likes

If the EHT collaboration had taken a picture of some other object but applied the same analysis techniques to it (i.e., treating it as a black hole), would they have been able to find a usable fit and a value for “M”?

1 Like

Perhaps not, but the measurement of M is just data fitting: a type of measurement, not a proper prediction. Of course, this is a measurement deeply enmeshed in a model, so it is probably better phrased as “the observations along with the model predict a mass,” noting that we have not actually measured the mass in a direct way.

Testing between alternative models, or against a null hypothesis (no black hole), is closer to how science works, without as much regard for chronological ordering.

Molecular biology is different from astrophysics in that we can actually do prospective experiments to test specific hypotheses, and many molecular biologists take this as the “gold standard” of good science. Trust me. I know. As a computational biologist I’ve had to explain how my work is legitimate and experimental too, many, many, many times. @mercer is just echoing the field on this, though computational biology and genomics are creating a large culture shift as my peers get tenure.

2 Likes

If the answer to the question is no (i.e., you can’t coherently apply the EHT analysis to non-black-hole objects), then at some level the people fitting the BH image data to find M are verifying a prediction of GR, even if they are not the ones actually doing the predicting. Maybe it’s more accurate to say the prediction was Einstein’s, and the EHT people @PdotdQ mentioned are mostly working out the implications of the assumptions in GR and seeing how they fit the data.

That being said, I agree with @PdotdQ’s original comment, which is that there are a lot of scientists (physicists and astronomers) who don’t really make predictions but instead make theory-laden measurements. In Kuhnian terms, they are scientists working within the widely accepted scientific paradigm, and this describes the vast majority of the daily work of scientists. I would describe my own experiment, measuring the EDM of the electron, as doing just that: we are using established techniques, refined and customized for this particular species of molecule and experimental setup, in order to measure a property of the electron based on theoretical assumptions that everyone agrees upon.

(Of course, the reason this measurement is interesting is that there are people who have made predictions about the value of the electron EDM, but those are mostly particle theorists, none of whom actually do the experimental work, which is based on “regular” atomic physics. And, just as I said for the EHT case, one could say that the fact that we are able to successfully use basic atomic physics to take data and fit our daily experimental results in a coherent manner is a testament to the explanatory power of atomic physics and quantum mechanics.)

Chronological Ordering and Evidential Status

Overall, I’m not sure that the chronological ordering of the evidence is always relevant to the epistemic status of a theory. This has been an area of scholarship in contemporary philosophy of science. For example, Stephen Brush, in Prediction and Theory Evaluation: The Case of Light Bending, argues that the successful prediction of a new phenomenon is not as convincing as an explanation of past facts, “at least until competing theories have had a chance (and failed) to explain it”. In particular he uses the example of the historical verification of GR. After Einstein devised GR, he found that it could explain the advance of Mercury’s perihelion, a previously known fact, precisely accounting for what Newtonian mechanics had failed to explain. On the other hand, GR also predicted the phenomenon of light bending, which Eddington famously confirmed via his observations in 1919. Now if one believes that chronological ordering is important, one could imagine that Eddington’s discovery should have been viewed as a stronger piece of evidence for GR than the postdiction of Mercury’s perihelion. But according to Brush, the historical documents and scientific correspondence of the time show us that this was not necessarily the case:

It later became clear to the experts that the Mercury orbit was stronger evidence for general relativity than light bending. In part this was because the observational data were more accurate - it was very difficult to make good eclipse measurements, even with modern technology (36, 37, 71) - and in part because the Mercury orbit calculation depended on a “deeper” part of the theory itself (36, 72). The fact that light bending was a forecast whereas the Mercury orbit was not seems to count for little or nothing in these judgments…

But the most significant argument (though it was not often explicitly stated) is that, rather than light bending providing better evidence because it was predicted before the observation, it actually provides less secure evidence for that very reason. This is the case at least in the years immediately following the announcement of the eclipse result, because scientists recognized that any given empirical result might be explained by more than one theory. Because the Mercury orbit discrepancy had been known for several decades, theorists had already had ample opportunity to explain it from Newtonian celestial mechanics and had failed to do so except by making implausible ad hoc assumptions (33). This made Einstein’s success all the more impressive and made it seem quite unlikely that anyone else would subsequently come up with a better alternative explanation. Light bending, on the other hand, had not previously been discussed theoretically (with rare exceptions), but now that the phenomenon was known to exist one might expect that another equally or more satisfactory explanation would be found (74).

Brush goes on to explain that only once other competing theories had failed to explain light bending (about 10 years after the experiment) did light bending become just as important a piece of evidence for GR as Mercury’s orbit. Thus, in Brush’s view, what’s important isn’t the chronological order of the evidence, but how well the theory can explain all the evidence we have, regardless of its chronology.

1 Like

@swamidass is correct: getting “M” is just a matter of data-fitting. Note, however, that non-GR black holes or even non-black-hole objects can give a good fit (though we won’t know whether that fit actually returns the mass of the object).

Exactly; Einstein made the prediction, not the many legitimate scientists who work on the EHT to fit the observation with a set of models that includes Einstein’s prediction.

It’s like this: I have a bag with between 1 and 3 marbles in it, and I ask: “How many marbles are in the bag?”

Newton says: 1
Einstein says: 2
Crackpot says: 3

These are predictions.

What the EHT is doing is the following: to answer “How many marbles are in the bag?”,

They build a model in which each marble weighs 1 gram and weigh the bag. They can then fit for the number of marbles in the bag based on this model.
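To make the analogy concrete, here is a minimal sketch of that kind of fit, assuming the toy 1-gram-per-marble model (the code and numbers are purely illustrative; this is not the EHT’s actual pipeline):

```python
# Toy sketch: infer a marble count from a weighing under an assumed model.
import numpy as np

def fit_marble_count(measured_weight_g, gram_per_marble=1.0):
    """Pick the count (1, 2, or 3) whose modeled weight best matches the measurement."""
    candidates = np.array([1, 2, 3])              # the bag is known to hold 1-3 marbles
    residuals = np.abs(candidates * gram_per_marble - measured_weight_g)
    return int(candidates[np.argmin(residuals)])  # best-fit count under the assumed model

print(fit_marble_count(2.1))  # a ~2.1 g weighing fits to N = 2
```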

Suppose they get N = 2. This number of marbles that the EHT came up with from this procedure is not a prediction. Nowhere in this procedure did the EHT team make any prediction. They can then say: Einstein’s prediction was right. But they themselves made no predictions.

2 Likes

7 posts were split to a new topic: What Constitutes a Scientific Explanation?

And as a geneticist, neuroscientist, cell biologist, and biochemist.

Neither of you has shown anyone using the term “postdiction” in any professional context. Heck, my autocorrect doesn’t even like it and splits it.

Sorry, but that’s just silly.

You’re now moving the goalposts from unsupportable pedantry about post- vs. prediction to trying to pretend that I am denying any utility of purely descriptive science.

I was a mouse geneticist in the 1980s-90s, a Golden Age of description:
https://www.nature.com/articles/349709a0

There’s not a speck of hypothesis testing in there.

Why would we expect to see it used in scientific papers? We are explaining that postdiction is a type of prediction.

Agreed. The same person never has to do the predicting and testing.

That’s not even close to @PdotdQ’s usage:

How can colleagues only do postdictions and never make predictions if the latter is merely a subset of the former?

As @swamidass said, why are you expecting to see it used in scientific papers? It’s a philosophical term, and one that is not in common usage for astronomers.

Please read my last post: Postdiction vs. Prediction - #49 by PdotdQ

If you agree with that and the fact that

is not a prediction, then we are in agreement.

1 Like

How can your colleagues only do postdictions and never make predictions if the former is merely a subset of the latter, as now claimed by @swamidass?

Clearly you and @swamidass have a major disagreement about these terms, the proper usage of which appears to be extremely important to both of you.

That’s for him to answer; I never claimed that postdictions are subsets of predictions.

I look forward to the public resolution of your severe epistemological disagreement.

As far as I can tell @PdotdQ and I are agreeing. I don’t think that postdiction is a subset of prediction. I think, instead, it is common to call postdiction a prediction in scientific work.

1 Like

Yeah, we agree; this is exactly what I think.

1 Like

If I could add my two cents . . .

Scientists working in the field of molecular biology tend to be a bit skeptical of postdictions. This is due to the “big data” papers in genetics that have become popular over the last 20 years. Genetic association studies, as one example, have found both true and false correlations between alleles and phenotypes. I could write multiple paragraphs about the pitfalls of the early gene chip data sets and studies. The ENCODE study is another example. These types of experiments are often met with skepticism, and for good reason. Of course, all science should be looked at skeptically, so there’s that.

The danger here is forming your hypothesis after data analysis. With a big enough data set you are almost guaranteed to find false associations that are statistically significant. This is probably why biostatisticians are in higher demand these days.
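As a toy illustration of that danger (purely simulated data, not from any real study), testing thousands of random “markers” against a random “phenotype” will hand you a pile of nominally significant hits by chance alone:

```python
# Simulated example: spurious "significant" associations from pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_markers = 100, 5000

phenotype = rng.normal(size=n_samples)                       # random trait values
genotypes = rng.integers(0, 3, size=(n_markers, n_samples))  # random 0/1/2 allele counts

p_values = np.array([stats.pearsonr(g, phenotype)[1] for g in genotypes])
print(f"nominally significant at p < 0.05: {(p_values < 0.05).sum()} of {n_markers}")
# Roughly 5% of the markers come out "significant" even though nothing is real,
# which is why multiple-testing correction (and prospective hypotheses) matter.
```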

2 Likes

That’s not even close to what you wrote:

How do you reconcile the incompatibility of those two statements?

Linguistic usage and ontologically precise categorization are different things. In context, the meaning of those quotes seems pretty clear.

Granted I’m reading my own mind here, but is anyone else confused?