LOL, and comparing Behe to Einstein… well that’s like comparing Trump to Jesus.
Ha ha ha ha ha ha oh wait
Very funny…
Einstein built a model which modern cosmology uses. Although others have tested the model Einstein did not.
We’re in the 20th year since Behe’s model was published, and so far no one has falsified it. Lynch simply generated an alternative model. The closest anyone has come to testing Behe’s model is the Lenski experiment, which shows no evidence to date that Behe’s model is so inaccurate as to be essentially useless.
You must be joking.
I can easily tell you exactly what observation within the first 20 years of Einstein’s model would have unequivocally falsified it, had it been made. Einstein would have been confronted with it, too, since he was an active researcher for the rest of his life which did not end until 1955, some four decades after the first publication of the theory in question. In your opinion, what kind of observation could there have been that would have falsified Behe’s model, had it been made?
Your complete ignorance is showing again, Bill.
Nobody bothers to ‘falsify’ models because …
All models are false …
… in that all models are merely an approximation to reality. Some models are a good approximation, some models are a bad approximation.
The general consensus is that Behe and Snoke’s model is a bad approximation – for reasons that have been explained to you in many threads on this forum over the years.
This is hardly surprising as neither Michael Behe nor David Snoke have any expertise whatsoever in the topic they were modelling.
That Behe would attempt this is hardly surprising, as he has a long history of failing to acknowledge his own limitations – including proposing, on the witness stand at Dover, a definition of science that would admit astrology. This does not add in any way to his credibility.
Mhh, I wouldn’t say models are an “approximation of reality”, necessarily, either. I’d say models are a construction, whatever shape it takes, that contextualizes observations in such a way as to logically entail testable predictions. They are not technically true or false, or at the very least their truth or falsity is of secondary interest at best to the scientific enterprise, specifically.
A bad model isn’t bad because it is something like too crude of a map, if for no better reason than that in most cases models are scarcely anything at all like maps, and it is almost entirely impossible to compare them to reality (whatever that is supposed to be) so directly and to assess how well or poorly they approximate it. Rather, a bad model is so deemed because the predictions derived from it fail to match, within the specified margins of error, the observations actually made.
That may well be saying the same thing in different words, though I find this a far less ambiguous metric by which to assess a model’s quality. It can even serve to compare models, well beyond subdividing them into good and bad ones: A model of one phenomenon is better than another if it either makes predictions that more accurately match the data at comparable conceptual/computational cost, or makes predictions of comparable quality at a lower cost, or makes predictions both of better quality and at a lower cost.
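To make that comparison metric concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the data points and both candidate models are hypothetical, not anyone’s actual models); the point is only that “better” is defined by prediction error against observations, not by resemblance to the phenomenon:

```python
# Toy illustration: judging two models of the same phenomenon purely by
# how closely their predictions match the observed data.

observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

def model_a(x):
    # a linear model: predicts y = 2x
    return 2.0 * x

def model_b(x):
    # a constant model: predicts y = 5 regardless of input
    return 5.0

def mean_squared_error(model, data):
    # average squared gap between prediction and observation
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

mse_a = mean_squared_error(model_a, observations)  # small error
mse_b = mean_squared_error(model_b, observations)  # large error

# At comparable conceptual/computational cost, the better model is simply
# the one whose predictions sit closer to the data.
better = "A" if mse_a < mse_b else "B"
```

Note that neither function “resembles” whatever process generated the data; the ranking depends only on the gap between prediction and observation.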
I would.
All models involve assumptions either as simplifying assumptions or assumptions-about-unknowns. (I would suggest that if the calculation did not involve such assumptions, it would be called a “simulation”, or similar, rather than a “model”.) This means that they cannot help but be approximations of whatever they are modelling (which I am collectively calling “reality” as a generic term).
If a model by-and-large produces reasonably accurate predictions of the phenomena it is modelling, is it unreasonable to call this model “a good approximation of” those phenomena (and, extending this to all possible phenomena-to-be-modelled, of the collective term “reality”)? Likewise, is a model that by-and-large produces inaccurate predictions “a bad approximation”?
Yeah, I’m not sure about this. Like, if by approximation we only mean predictive modeling, then we are on the same page. It’s just that intuitively I don’t feel this covers quite everything people would think of when hearing “approximation”. To me, an approximation needs to have some actual semblance to the approximated thing, be some imitation of it. Heck, an approximation doesn’t even need to be predictive, it could be a mere simplified accounting of a given set of facts.
Scientific models, in my opinion, do need to be predictive, but do not need to have a semblance to anything. Or, at any rate, a model is not better or worse for how well or poorly it resembles the phenomenon. The only thing desired for it in that regard is that its predictions approximately match experimental data.
And if accuracy of predictions is a primary metric by which you judge whether something is a good or bad approximation as well, while actual reflection of the underlying phenomenon is not a criterion even if it were somehow measurable, then it would seem that your usage of “approximation” here is also rather… not what I think most anybody would expect.
I think we may be arguing about semantics here, possibly driven by how abstractly or concretely we are using words like “approximation”, “semblance” and “imitation”. I would consider a “model” to be arguably a “mathematical imitation” of what it is modelling (or some aspect of it). I would likewise expect a good model to give predictions that resemble the phenomena it is modelling.
I just wouldn’t expect a model of tiger predation (for example) to require actual bloody dismembered corpses for it to provide a good “approximation”, “semblance” or “imitation”.
Also, my own background is in financial and economic modelling (where testing predictions often isn’t feasible), rather than scientific modelling, so I have in mind a more expansive conception of “approximation” – but still one that would be inclusive of testing predictions against reality, where feasible (and where accurate prediction is still the gold standard for being certain whether your model is good or not).
That’s blatant misinformation.
Science isn’t about falsifying models. It’s about falsifying hypotheses. Do you really not understand this basic point?
I sort of don’t (see discussion above). What is a hypothesis, if not an untested or insufficiently tested model of the full phenomenon or some part of it? And is it really about falsification? Would all of science fall apart if every guess happened to be consistent with outcomes of tests that could have disconfirmed them? Falsifiability is necessary for a model to actually be some sort of useful, I’d say, but falsification per se is neither a goal nor a requirement of science. What say you?
Nothing in the definition of “hypothesis” specifies whether it has been tested or not, for starters. Many hypotheses have already been tested and not falsified.
It really is. That’s why Behe never states his hypotheses overtly. That’s why pseudoscientists stay away from hypotheses and specify “models” or “arguments.” No ID proponent has ever tested an ID hypothesis, despite stumbling into them regularly.
Of course not, but that wouldn’t happen and you’re completely missing the point.
Scientific hypotheses are not, and never have been, mere guesses.
I don’t apply falsifiability to models. That’s why Bill is avoiding addressing whether or not Behe’s hypothesis is falsifiable or whether it has any basis in reality by using the term “model” instead. Keep in mind that Behe stated under oath that he thinks others have the responsibility to test his hypotheses, which is literally abandoning the scientific method.
Attempting to falsify hypotheses (not falsification per se) is the very essence of science.
In my opinion, they definitely do not and that is not how they are used.
Then what’s the point in having them at all? What does “understanding [ some natural phenomenon ]” mean, if it does not mean being able to predict outcomes from outsets?
Surely the point of science is not merely to enable our looking and sounding like wise men who know things. If anything, that’s what pseudoscience is: vacuous babble dressed up in technical jargon, best presented by a charlatan dressed up in a lab coat, but without any practical utility, because its purveyors are either unwilling to correct any of their babble when their prophecies fail, or refuse entirely to present models from which testable predictions that might conflict with the data can be logically derived to begin with.
Perhaps this is a bias I adopted with my field. That a scientific theory “works” is, to me, synonymous with it accurately matching data without assuming them as given first. I genuinely understand no philosophy of science wherein the rendering of predictions is not the central point and purpose of all scientific theory crafting.
@colewd, I think there’s a comma missing there. You wrote “Although others have tested the model Einstein did not”, but it should have been “Although others have tested the model, I’m totally clueless”.
Happy to help.
Sorry to barge in. Just my opinion, but here are sketches of my position on falsifiability:
The difference between in-principle falsifiability and practical falsifiability is important to me. Many claims about the past are unfalsifiable in practice but not in principle. This does not affect my judgment about whether such questions or claims are “science.” I do think that claims that are not falsifiable in principle (omphalos being a classic example) are outside of science.
Falsification may not be a “goal” of science broadly speaking, but experimental hypothesis-driven science does seem to rely on hypothesis testing which should be viewed as the deliberate attempt to falsify.
“Them” being what, exactly? There are major vocabulary issues here.
What’s an “outset” in this context?
The empirical predictions come from mechanistic hypotheses. In biology, models help in refining and testing hypotheses, but they are not at the center.
The statement “All models are wrong, but some are useful” isn’t a joke. For example, I can model population genetics by blindly drawing marbles if I don’t understand basic probability. That’s precisely why Behe crows about having a model, and never mentions any hypothesis.
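That marble picture can be made literal in a few lines. This is a hypothetical sketch in Python (the population size, allele “colours,” and seed are all invented for illustration): each generation is formed by drawing marbles with replacement from the current bag, which is a crude but usable model of genetic drift even if the modeller knows no probability theory at all.

```python
import random

# A "marble" model of genetic drift: the next generation is produced by
# drawing N marbles with replacement from the current bag. The drawing
# procedure itself is the model; no probability calculations are needed.

def next_generation(bag, rng):
    # draw len(bag) marbles with replacement to form the next generation
    return [rng.choice(bag) for _ in range(len(bag))]

rng = random.Random(42)  # fixed seed so the run is repeatable
population = ["red"] * 10 + ["blue"] * 10  # two alleles at equal frequency

for _ in range(200):
    population = next_generation(population, rng)
    if len(set(population)) == 1:
        break  # one allele has drifted to fixation

# In a small population with no selection, one allele tends to drift to
# fixation; the marble model reproduces that behaviour despite its crudity.
```

The model is wrong in all sorts of ways (no mutation, no selection, no structure), yet still useful: it makes a testable prediction about fixation that can be checked against better-grounded treatments.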
You started by defining “hypothesis” as untested, which is utterly wrong with no gray whatsoever, so now you’re jumping to “theory”? Oh boy…
The hypotheses are crafted to be mechanistic and to render testable, empirical predictions.
Very few hypotheses become theories, which is what we call them after they have a long track record of successful predictions. The iterative modifications along the way are less “crafted” and more responses to the data, with far more craft going into designing clever experiments that rigorously test those predictions. I’ve only had one epiphany in my career relating to “crafting” a hypothesis, but I assure you it was entirely driven by thinking about and discussing the bewildering data that I had produced by testing my initial (wrong) hypothesis.
I don’t know what your field is, but I’m afraid you’re not really speaking the language of biology here. If the abstract or specific-aims page of your US NIH or NSF grant application started with, “I (we) have crafted a theory,” few reviewers would bother reading further before chucking it in the triage (unreviewed, bottom-half) pile.