Does Embryo Development Require God's Guidance?

Dr. Jonathan Wells, a fellow at the Discovery Institute, believes that cell differentiation in embryo development requires the constant addition of new information at each step, as per group theory.

Membrane Patterns Carry Ontogenetic Information That Is Specified Independently of DNA

Since it seems obvious to me that there is a nice group of accomplished mathematicians here, including the host, I thought it would be an interesting topic to delve into, and for someone like me, to learn something…
The most interesting part of the video begins at the 38–40 minute mark, for those who don’t want to watch the basics…

Design Beyond DNA

Is Bio-Complexity a legitimate peer-reviewed journal? And what qualifications are necessary to be a Fellow at the Discovery Institute?


Would peer review by another journal change the problem presented by J. Wells in the article published in Bio-Complexity?

Someone once said: “Wisdom is vindicated by its results.”

If you prove the article makes false statements, the wisdom of peer review by other journals will be vindicated… If you, or anyone else, can’t refute Wells’ theory, Bio-Complexity will stand vindicated…

Do you think it is fair?

Science is about scrutinizing results and claims.

Great! Let’s scrutinize!
Your turn…

Bio-Complexity is not a legitimate scientific journal. Therefore J. Wells’ paper in it is not considered credible science. Good-bye.


The short answer to the OP’s question is that it does not appear that embryology requires God’s guidance. Though it is not surprising that ID leaders might think it does.


A lot of peer-reviewed papers turn out to be junk… and the results cannot be replicated.

So what’s the harm in scrutinising Dr. Wells’ claims?

Bio-Complexity is not a legitimate peer-reviewed scientific journal. Therefore J. Wells’ publication in it automatically lacks credibility. It is not science. And I won’t even read it.

Let Dr. Wells get through the peer-review process of a legitimate journal first, to see if there are claims with enough validity to be scrutinized.

So peer review is a guild now? As opposed to a process?
I believe ideas need to be scrutinised based on the claims made.
And I find Dr. Wells’ claims in this regard very interesting.

Yes, claims need to be scrutinized by experts in the field of science the claims are being made in. Without that first step, the claims are not taken seriously.

Given the track record of Bio-Complexity, I find that Dr. Wells’ claims are either bogus, lazy, or both. If Dr. Wells had something of value to the scientific community, he would have gone to the trouble of getting his claims through the peer-review process at a legitimate journal.


They do have a peer-review process… and last time I checked, a lot of wacky stuff does get published in regular peer-reviewed journals… like a recent paper about imported :octopus:.

So it seems more a guild issue… “You don’t belong to the club…”

They are not a legitimate journal.

Another way of saying they are not part of the guild.

When did science stop being about methods and processes?

Science has always been about credibility, testability, falsifiability. Every paper ever published by Bio-Complexity lacks all of these. Plus, a journal that publishes 3 to 4 papers a year is not worth anything. Scientists work on the impact factor of their papers. Bio-Complexity is not even included in the impact-rating indexes. Bio-Complexity is like the AIG Answers Journal: creationism/religion disguised as science.

Science overall has issues with the reproducibility of tests. This article from Nature might be interesting to you.
https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

An interesting quote from the paper is below:

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology and cancer biology, found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.

A lot of theories in physics, such as string theory, have a falsifiability problem.
Yet leading journals continue to publish papers on string theory.

One thing I have learned as an engineer is that you can’t trust institutions. It’s well-defined processes and standard methodologies that make the difference.
Perhaps science needs a continually improving standard for how claims are analysed.

If reproducibility is at best 50% and at worst 10%, then it can’t be claimed to be a hallmark of peer-reviewed papers. And if results cannot be verified, how are they credible?

The idea of guild makes more sense to me.

@Ashwin_s, there is just so much misinformation here, it is hard to know where to start. Science is credible, and this is not merely about “the guild.”

Psychology is on the fringes of “science”, and not at all comparable to the rigor we find elsewhere in science. The cancer-biology figure is widely misreported: it is about translatability, not reproducibility. These are very different things.

Many things have a falsifiability problem because, as we have covered before, there are many things outside the streetlight (Ockham’s Razor and Reality). Popper’s falsifiability is not a coherent guide for science, in part because it is a relative term. Moreover, “string theory” is not a single theory but a collection of hypotheses, and physicists are regularly ruling out classes of those theories. As experimentalists like @dga471 make progress, they will rule out more. Of course we need scientists carefully formulating what specific hypotheses would or would not entail. That is a valid effort.

Science does have continually improving standards for how claims are analyzed. Why in the world would you think anything different?

They are credible because good scientists are good at identifying good, reproducible work. Important findings are also verified by reproducing them.

Or maybe, just maybe, science is a complex, beautiful, and sprawling enterprise that will take a lifetime to engage and understand.


I am not the one doing this “misreporting”… the source for the article is reliable, and ultimately it is practising scientists themselves…
By translatability, I guess you mean that benefits found in mice did not translate into benefits for human beings? I.e., the premise of the test itself was wrong?

Please note my comments were a reply to @Patrick’s claim that science has always been about credibility, testability, falsifiability, etc… and his assumption that being peer reviewed by a trusted source gives special value to a paper. I don’t see much empirical evidence for the same.

Who are the “good” scientists? Or are all scientists “good”? And how do you identify “good reproducible work” without actually checking whether it is reproducible?
You are basically asking for trust in spite of evidence of a poor track record.
This is classic guild behaviour… where the “in” people and institutions are trusted because they are “good”.

Or maybe science is just like any complex human endeavour, susceptible to error, fraud, presumptions, etc., unless there are strong and clear guidelines to prevent the same. And how does one check whether the guidelines are good enough? By checking measurable parameters such as repeatability, how often someone gets away with fraud, for how long… etc.

Please don’t think I am attacking your profession. I know that engineers are susceptible to the very same issues. Hence my trust in standards… which are themselves evaluated for their efficacy and frequently updated.

I am attaching the original article here. The author’s concern is about issues of methodology and insufficient standards.
For example, he mentions how, in some cases, only the data that supports a hypothesis is presented even though the complete data set also contains negative results.
Some results could not be reproduced even after taking the help of the original authors and going to the extent of trying it in their lab.
He also raises the alarm that many papers whose results were not reproducible (including those which could not be translated into a successful clinical programme) went on to be cited numerous times in other papers.

I like his conclusion. He not only reports the problem but suggests ways in which research standards can be improved. I don’t know if any of them were implemented; if they were, then kudos to whichever institute did it. In such a case, a follow-up study could be done a few years down the line to see whether institutes/journals that implemented stricter standards produced better-quality work. If so, that would become justification for making it a “best practice”, which would then over time become a mandatory requirement to get published.

Is this what you mean when you say -

https://www.nature.com/articles/483531a