Faith in mechanisms that would be outside our reach and understanding (as a matter of principle)

Yeah, which is then evidence you have to deal with somehow, and not merely by vaguely waving your hands in the direction of “physics and chemistry”.

Michael Behe has done no work that shows phylogenetic inference “was a non-explanation from chemistry and physics” (whatever that even means).

No, if it were just a bald assertion then there wouldn’t be phylogenetic evidence for it. The inferences are based on that phylogenetic evidence. Hence it’s not a bald assertion, it’s an inference from real data.

And the moon isn’t made of green cheese.

1 Like

So what? Science uses data to create hypotheses.

This is the smallest gene set shown to sustain a growing population, based on experimental science. Going forward we may show a smaller gene set working, but that is speculation at this point.

This is nothing but speculation based on assuming evolution is true. Have proteins evolved? Yes, but we have very little evidence of this being a frequent occurrence.

Or similar complex molecules that get broken down through the citric acid cycle.

This is the evidence we have. When I see different evidence I will change the number.

The illustration of lotteries keeps coming up, and so has the criticism that NO probabilities have been worked out for the origin of life.

I’ll first talk about lottery statistics in a simple lottery. I’ll address a few OOL combinatorial probabilities in another comment.

Say we have a raffle drawing, which is in effect a lottery. There is a 100% chance a ticket will be drawn. 100%.

If there are 2 tickets in the raffle barrel, the chances of any given ticket being drawn is 1 out of 2. But the chance some ticket will be drawn is 100%.

If there are 1000 tickets in the raffle barrel, the chances of a given ticket being drawn out are 1 out of 1000, but the chances some ticket will be drawn is 100%.

etc. etc.

In all cases the chance that some ticket will be drawn is 100%. In fact it only takes 1 trial since there is only 1 drawing in a simple lottery!

However, in the case of combinatorial probabilities, with 1 trial there is no guarantee a given outcome will happen at all. Say we have 500 fair coins in a jar and we throw the coins out of the jar onto the floor. In one attempt, the odds are strong that the result won’t be 100% heads; in fact that outcome is so astronomically remote it would be considered practically impossible. Certainly an outcome I would not bet my life and soul on. So unlike the lottery, there is no guarantee of a winner; there is only a guarantee of some outcome.
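
The lottery-versus-coins contrast is easy to check with exact arithmetic. A minimal Python sketch (the helper name `all_heads_probability` is mine, for illustration):

```python
from fractions import Fraction

def all_heads_probability(n):
    # Probability that all n fair coins land heads in a single throw: (1/2)^n.
    return Fraction(1, 2) ** n

# A 1000-ticket raffle: *some* ticket wins with certainty,
# while any *given* ticket wins with probability 1/1000.
raffle_win = Fraction(1, 1000)

# 500 coins all heads: (1/2)^500, roughly 3 x 10^-151 --
# incomparably smaller than any lottery's odds of a given ticket winning.
print(float(raffle_win))
print(float(all_heads_probability(500)))
```

The point of the sketch is only the contrast: a drawing guarantees a winner in one trial, while a single combinatorial trial almost surely misses a pre-specified outcome.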

The probabilities pertaining to some very basal issues in the origin of life are of the combinatorial variety, not the lottery variety.

In case you haven’t noticed Michael Behe is pretty much a laughingstock in the evolutionary biology community. Virtually no one takes his popular-press-book claims seriously.

2 Likes

I’ve said that I would not classify ID as science; however, there are arguments used to support the ID hypothesis for the origin of life that are well within science, namely the combinatorial improbabilities of certain features of life.

Dr. Change Laura Tan was a physical organic chemist from China, then a Harvard postdoc, and is presently an associate professor of biology at Missouri. She’s analyzed the probability of a readable DNA strand forming in a pre-biotic pool of nucleotides.

The linkages between the nucleotides must be to the right molecules to enable a linear DNA strand that is readable. At a basal level, a readable DNA strand is simply a structure that is not expected.

In a random pool of nucleotides, the problem of creating the same kind of linkages over thousands of nucleotides is statistically like taking thousands of fair coins, tossing them, and expecting them all to land heads – except for nucleotides the problem is worse. And NO, please don’t invoke selection as an explanation to get around the problem, since Darwinian selection doesn’t work on something non-living like a pool of nucleotides. Dr. Tan calls this the problem of “homo linkage” (analogous to homochirality).

To overcome the problem of homo linkage in the synthesis of man-designed DNA, Blue Heron developed (as in intelligently designed) a process that happens in certain steps. Some of the proprietary details of Blue Heron’s process are left out below, but this diagram illustrates some of the generic steps in a comparable process.

[Blue Heron was probably the vendor that Craig Venter used, btw.]

Dr. Tan gave me permission to share parts of a private communication to me:


Synthesizing the oligoes by phosphite triester methodology
The building blocks of the synthesis are 3’-O-(N,N-diisopropyl phosphoramidite) derivatives of nucleosides (nucleoside phosphoramidites). To avoid undesired side reactions, all the functional groups present in nucleosides were rendered unreactive (i.e., protected) by attaching protecting groups: the 5’-hydroxyl group is protected by an acid-labile DMT (4,4’-dimethoxytrityl) group (red), the amino group of adenine by a benzoyl group, the amino group of cytosine by a benzoyl or an acetyl group, the amino group of guanine by an isobutyryl group, and the phosphite group by a base-labile 2-cyanoethyl group. The synthesis of oligoes proceeds from the 3’ end to the 5’ end of the oligoes, opposite to what happens in cells. To initiate the synthesis of an oligo, the 3’-most nucleoside is attached to a solid phase support, and its 5’-hydroxy group is activated or deblocked by acid-catalyzed removal of the protection group DMT (step 1: deblocking). The resulting free 5’-hydroxy group (green) will attack the phosphite moiety of the next nucleoside phosphoramidite and substitute the diisopropylamino group, which is reactive under acidic conditions (step 2: coupling). Since not all the deblocked hydroxyl groups will be coupled, the uncoupled ones are blocked from further chain elongation (step 3: capping). The newly formed tricoordinated phosphite triester linkage is next oxidized because it is not natural and is of limited stability under the conditions of oligo synthesis (step 4: oxidation). The resultant product then serves as the starting material for the next cycle of base addition. Additional bases are added one by one as determined by the sequence of the desired oligoes. Upon the completion of the oligo chain assembly, all the protecting groups are removed to yield the desired oligoes. The oligo synthesis cycle was adapted with modification from that of Integrated DNA Technologies (IDT).
The insert in the middle (boxed) shows the structure of one of the building blocks, a 2’-deoxy-6-aminobenzoyl-adenosine phosphoramidite.

The odds of this are astronomical. Accepted physics and chemistry make such a structure possible, but simultaneously improbable. Whether this means ID or God or whatever is formally a separate question.

But the problems Dr. Tan identified are there, and Virchow’s principle (based on observation) agrees with Dr. Tan’s improbability calculations.

One could of course appeal to some unknown law or principle of physics. But that is faith, not necessarily fact.

Blah blah blah life is too improbable blah blah blah.

You didn’t provide any odds or any “improbability calculations”. :roll_eyes:

Maybe you ID-Creationists could gain a smidgen of respect if you actually provided some realistic probability calculations instead of just boasting of having them.

Tsk tsk. Sal screws up the science again. :slightly_smiling_face:

Prebiotic Evolution (Molecular Biology)

The origin of life on Earth comprised a long series of steps: from the synthesis of small molecules within the primordial atmosphere or near hydrothermal vents, through the formation of biomonomers and biopolymers, culminating in the emergence of a self-replicating, autonomous organism. This philosophical outlook, if not the intimate details, began with the Russian biochemist Alexander Oparin and the British biologist J. B. S. Haldane, who, in the 1920s independently proposed a sequential model for the origin of life (1). Although the process of Darwinian selection may have modulated the populations of genetic macromolecules once the stage of an RNA (or “pre-RNA”) world developed, the term “prebiotic evolution” is used here to describe the presumed earlier era of synthesis and degradation that preceded self-replication. A common theme is that the ingredients for life were generated by the flow of energy (sunlight, lightning, or thermal radiation) through the primordial hydrosphere so that the putative mechanisms for the origin of life should be compatible with the conditions that would have prevailed in the early atmosphere and oceans.

(read more at link)

In summary, a significant body of literature (see Suggestions for Further Reading) has emerged that demonstrates the feasibility of prebiotic syntheses of specific compounds under particular conditions. The efficacy of cyanide as a precursor to purines, amino acids, and, to a lesser extent, pyrimidines suggests that HCN likely had a role in the formation of the first biomolecules. However, much work remains to reconcile such pathways with geochemical conditions that might have prevailed on the early Earth and to elucidate how a genetic macromolecule might have formed from a dilute primordial soup. Despite the many problems in nucleotide assembly, the postulate of a so-called RNA world has provided remarkable insights into the interrelated roles of replication and catalysis in the origins of life

That comes up as a response to misuse of probabilities.

But why is that a criticism? Nobody has claimed to have solved the problem of the origin of life.

Sal, you are starting at the wrong end of the problem.

We can compute probabilities of particular outcomes if we understand the processes leading to those outcomes well enough to capture them as probability distributions. If you don’t know the processes, you can’t define (or approximate) these probability distributions, and therefore you can’t compute the probability of any particular outcome.

So, the first thing you need to do is propose one or more processes that were going on at the time first life originated. Then you need to formulate probability distributions for the possible outcomes of those processes. Only then can you assign a probability to a particular outcome out of this set of all possible outcomes.

You are starting at the other end: observing (or proposing) an outcome, and then trying to compute its probability, while skipping the step where you first generate the probability distribution required for the calculation.

This is like asking for the probability of rolling a ‘12’ in a game of dice without first specifying how many dice you are rolling (i.e., defining the process, and therefore the probability distribution of the possible outcomes). This is simply not possible because the probability distributions, and therefore the probability of rolling a ‘12’, are very different if you are rolling 2, 3, or 4 dice (or any other number).
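
The dice example can be made concrete by brute-force enumeration. A small Python sketch (the helper name `p_sum` is mine, for illustration):

```python
from itertools import product
from fractions import Fraction

def p_sum(target, n_dice):
    # Enumerate every equally likely outcome of n fair six-sided dice
    # and count the outcomes whose faces sum to the target.
    outcomes = list(product(range(1, 7), repeat=n_dice))
    hits = sum(1 for o in outcomes if sum(o) == target)
    return Fraction(hits, len(outcomes))

# The probability of a '12' depends entirely on the process (how many dice):
for n in (2, 3, 4):
    print(n, p_sum(12, n))  # 1/36, 25/216, 125/1296
```

Same target outcome, three different processes, three different probabilities – which is exactly why the process must be specified before the calculation.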

Bottom line, this is a very flawed approach that won’t get you any reasonable results.

1 Like

We don’t need to know every detail of a process to make statements about whether, in general, a process tends to maximize uncertainty. For example, there is an inherent tendency of fair coins, when flipped or juggled or otherwise shaken, to go to an uncertain state. We don’t need every minutia of detail of the flipping, juggling, or randomizing process to state the outcomes in terms of the Law of Large Numbers: namely, if we take a large number of fair coins and subject them to flipping, shaking, or juggling, the system of fair coins will tend toward 50% heads.
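
The law-of-large-numbers claim about coins can be illustrated with a quick simulation, with no physics of the flips modeled at all. A minimal sketch (the helper name `fraction_heads` is mine, for illustration):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def fraction_heads(n_coins):
    # One "throw" of n fair coins; return the fraction landing heads.
    return sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins

# The more coins in the throw, the more tightly the fraction of heads
# clusters around 0.5 -- regardless of how the "flipping" is realized.
for n in (100, 10_000, 1_000_000):
    print(n, fraction_heads(n))
```

The spread around 0.5 shrinks roughly like 1/√n, which is the quantitative content of the tendency described above.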

Since you mentioned casinos: casino managers don’t care that much how some poor craps player rolls the dice. The casino manager doesn’t need to know the exact momentum, position, trajectory, etc. to know the distribution of the dice over time when rolled, namely that it will be approximately 1/6 for every face.

Thus even in the face of numerous uncertainties, casinos can make powerful assertions about what the EXPECTED outcome will be under almost ALL possible processes, except processes with some sort of feedback correction control. What I mean by that is the casino will force the craps player to throw/roll the dice; it won’t let him roll the dice and then go over and reset the faces to what he likes. Same for trying to get a set of fair coins 100% heads: to do this, someone would have to use feedback to say, “oh, this coin is tails, let me change it to heads.”
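
The casino's position is easy to simulate: with no model of momentum or trajectory, each face of a fair die still turns up about 1/6 of the time over many rolls. A minimal sketch:

```python
import random
from collections import Counter

random.seed(7)  # fixed seed for a reproducible run

# Simulate many independent rolls of one fair six-sided die.
rolls = [random.randint(1, 6) for _ in range(60_000)]
freq = Counter(rolls)

# Each empirical frequency lands close to 1/6 ~ 0.167.
for face in range(1, 7):
    print(face, round(freq[face] / len(rolls), 3))
```

Replacing `random.randint` with a feedback process (e.g., resetting unwanted faces after each roll) would break this distribution, which is the distinction being drawn above.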

In a pre-biotic environment subject to the randomizing action of the Brownian motion of chemical soups, this is much like flipping coins.

There are numerous ways to make DNA strands, and in fact I described at least 3 methods (the way a cell makes it, the Blue Heron process, and the IDT process). In principle there could be an infinite number of such processes, but these are all feedback-type processes in some way that undo the randomizing tendencies of Brownian motion in a soup of chemicals.

Some of the pre-biotic probabilities I described obey the same statistical considerations as flipping fair coins, and obey the binomial distribution and the law of large numbers.
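
For the fair-coin case, the binomial distribution referred to here can be computed exactly. A minimal sketch (the helper name `binom_pmf` is mine, for illustration):

```python
from math import comb
from fractions import Fraction

def binom_pmf(n, k):
    # P(exactly k heads in n fair-coin flips) = C(n, k) / 2^n
    return Fraction(comb(n, k), 2 ** n)

# For 500 coins the distribution is sharply peaked near 250 heads,
# while the all-heads tail is vanishingly small.
print(float(binom_pmf(500, 250)))  # the single most likely count, ~0.036
print(binom_pmf(500, 500))         # exactly 1 / 2^500
```

The contrast between the peak (around n/2) and the extreme tail (all heads) is the quantitative form of the "practically impossible" claim about 500 heads.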

We’ve known for a long time that “cells come from pre-existing cells”; we now know some of the reasons why, from physics and math.

And there are even more probabilities that I haven’t touched on, such as the formation of protein complexes…

I’m pretty sure you are totally clueless here, @stcordova.

Stick to apologetics - chemistry certainly isn’t ever going to be part of your skill set.

2 Likes

I’m not saying anything that outrageous; otherwise Koonin and others wouldn’t need to appeal to multiple universes. Not to mention chemists like James Tour and his Nobel Prize-winning colleague Richard Smalley.

Stick to apologetics - chemistry certainly isn’t ever going to be part of your skill set.

Well, thank you for your ad hominem. My colleagues who are actually biochemists and professors of biochemistry don’t have such a low opinion of my potential to further my education as you do.

What I was referring to regarding randomizing Brownian motion are things such as the tendencies of solutions to dilute and mix, and of the 3D orientations of certain chemicals to randomize and hence preclude homo linkages unless prevented by systems such as those described by Dr. Tan.

Casinos can make powerful assertions because they know the probability distributions of their games very well. They even rig them in their favour.

Do your colleagues agree with your caricature of chemistry as a matter of pollen grains (or any other macroscopic particle) being bombarded by solvent molecules much, much smaller than the pollen? This is the picture you paint when you use the term “Brownian motion”.

And it is good to know you do not consider me to be a professor. No problem – we are all equals on this board, I guess.

Apples and oranges. If you don’t understand the differences, you probably need to do a whole lot more reading.

That wasn’t my claim, but we can apply Brownian motion to molecular-level systems, such as here from the National Academy of Sciences:

Gating of acetylcholine receptor channels: brownian motion across a broad transition state - PubMed

Gating of acetylcholine receptor channels: brownian motion across a broad transition state.

Acetylcholine receptor channels (AChRs) are proteins that switch between stable “closed” and “open” conformations. In patch clamp recordings, diliganded AChR gating appears to be a simple, two-state reaction. However, mutagenesis studies indicate that during gating dozens of residues across the protein move asynchronously and are organized into rigid body gating domains (“blocks”). Moreover, there is an upper limit to the apparent channel opening rate constant. These observations suggest that the gating reaction has a broad, corrugated transition state region, with the maximum opening rate reflecting, in part, the mean first-passage time across this ensemble. Simulations reveal that a flat, isotropic energy profile for the transition state can account for many of the essential features of AChR gating. With this mechanism, concerted, local structural transitions that occur on the broad transition state ensemble give rise to fractional measures of reaction progress (Phi values) determined by rate-equilibrium free energy relationship analysis. The results suggest that the coarse-grained AChR gating conformational change propagates through the protein with dynamics that are governed by the Brownian motion of individual gating blocks.

I don’t think acetylcholine receptor channels are anywhere near as big as pollen grains.

And the snippet you quoted without thinking doesn’t really relate to “tendencies for solutions to dilute to mix and for 3D orientations of certain chemicals to randomize”.

@stcordova, there is a standard textbook handling of things like Brownian motion and diffusion, and you are hopelessly muddled about these things. You claim to like math - you should avail yourself of a good pchem text and wade through this material. The great thing about this subject is that it provides some interesting historical context to boot.

2 Likes

And since I was talking about DNA and Brownian motion, here are some papers on Brownian motion and DNA:

https://science.sciencemag.org/content/297/5583/987

Brownian Motion of DNA Confined Within a Two-Dimensional Array

Linear DNA molecules are visualized while undergoing Brownian motion inside media patterned with molecular-sized spatial constraints.

Here is a summary of a recent Nature Communications paper:

How DNA molecules move in fluids

Brownian motion — the seemingly erratic movement of particles suspended in a fluid — plays a key role in the transportation of small molecules and proteins in a living cell. It also controls the motion of small molecules inside catalytically active materials. These processes, the researchers say, could be better characterized by using the new method.

“The new method could also be applicable to a broad range of studies related to Brownian motion … including molecules observed in a fluid,” says Satoshi Habuchi, co-corresponding author of the study from the King Abdullah University of Science and Technology (KAUST), Saudi Arabia.

And even Nucleotides:

Classification of DNA nucleotides with transverse tunneling currents - PubMed

…In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. …

Molecules the size of nucleotides are much smaller than pollen grains.

I think I’ve proven that it’s now accepted to use Brownian motion to describe the motion of certain molecules, including nucleotides.

Here’s something at BioRxiv:

In this work, we carry out a series of Brownian diffusion simulations to elucidate the nucleotide importing process through the hexamer pores of HIV-1 capsid.

Casinos rig the payouts; they don’t have to rig the law of large numbers, nor natural probability distributions such as those associated with rolling dice or sufficiently shuffling cards.

Of course, card-counters like me can take advantage of conditional probabilities in the game of blackjack, but that is another story.

All this bloviation and you still haven’t provided a single probability calculation concerning biological life. :roll_eyes:

Sorry Sal but your “gee if you flip 500 coins” mantra isn’t going to fool anyone with an IQ over room temperature.