Uncommon or Common Descent?

Any change or variation in reproductive cells that can be used to improve fitness.

The challenge in creating new functional information is that functional information is organized in a sequence: for DNA, four nucleotides; for proteins, 20 amino acids. The sequences that perform functions live inside this space, and, as you mentioned, for just binding ATP a random chain of 70 characters has about a 1/10^11 chance of having nominal function. For proteins that have multiple functions, the odds of achieving selectable function are even longer, according to the empirical data I have looked at.
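As a side note, the 1-in-10^11 figure can be converted into bits using the standard functional-information measure, FI = -log2(functional fraction). This is a sketch of that arithmetic only; the 1e-11 input is the figure quoted in the post, not an independently derived value.

```python
import math

# Functional information: FI = -log2(fraction of sequences that meet the
# functional threshold). Using the 1-in-10^11 figure quoted above for
# ATP binding in a random ~70-residue chain.
functional_fraction = 1e-11
fi_bits = -math.log2(functional_fraction)
print(f"FI for ATP binding: {fi_bits:.1f} bits")  # about 36.5 bits
```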

The spliceosome has about 170 proteins; the largest is PRPF8, at 2,335 amino acids. Based on profiling its historical conservation, gpuccio of UD has estimated its functional information at 3,000 bits, i.e., a functional fraction of about 1 in 2^3000 arrangements. With evolutionary resources estimated at 150 bits, you can see the explanation fails at the largest protein of a mission-critical eukaryotic function.
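To make the comparison explicit, here is the arithmetic behind the claimed gap. Both inputs (gpuccio's 3,000-bit conservation estimate and the 150-bit resource figure) are taken from the post as stated, and whether conservation actually measures functional information is itself disputed later in the thread.

```python
import math

# gpuccio's conservation-based estimate for PRPF8 (as quoted in the post)
fi_estimate_bits = 3000
# the "evolutionary resources" figure used in the thread
resource_bits = 150

# The argument as stated: a 3000-bit target needs ~2^3000 trials to find
# by blind search, while only ~2^150 trials are available.
shortfall = fi_estimate_bits - resource_bits
print(f"shortfall: {shortfall} bits (~10^{shortfall * math.log10(2):.0f}-fold)")
```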

But the fact that you’re tying in fixed mutations here suggests you think that this change has to be fixed in a population to count as a “trial”.

That doesn’t answer my question. Again, why do you think evolution can produce only tiny amounts of functional information in a sequence?

“less than 1/10^11” is a bit vague, don’t you think?

Do you have a link to that?
Just determining the conservation of a sequence among extant life won't necessarily tell us the functional information content of that function. In this case, for example, I could imagine a scenario where lots of different sequences had some level of PRPF8's activity and evolved stepwise up to some kind of optimum. If that optimal sequence was present in the last common ancestor of eukaryotes, then it would remain pretty highly conserved at close to its optimum level, especially if it's now taking on a crucial role in the cell. For example, if splicing was useful but not essential in the ancient eukaryotic ancestor, then it might not have been too constrained, but as the eukaryote evolves to have more and more of its genes require splicing, the splicing function becomes more and more vital, and suddenly the complex is "locked" into a certain configuration, such that it can improve slightly, but any reduction in activity would be extremely detrimental.
Later evolutionary events can also come in after the fact and "lock in" sequences. Imagine a scenario where PRPF8 was fully functional on its own in an early eukaryotic ancestor and was under pretty relaxed selection, such that lots of different sequences would be possible. As the system evolves to incorporate more proteins into the spliceosome, the whole system becomes more and more constrained, until you end up with something so fine-tuned that almost any mutation will severely decrease fitness.

Both of these scenarios are compatible with what you’re describing - a highly conserved functional sequence which would have a lot of functional information according to a conservation analysis - but are capable of being reached by evolution.

How was that number calculated?

@colewd, @evograd is 100% correct here. You missed something big. Do you understand why?


You can define it differently, but either way we are covered with 10^50 possible trials. This is indeed a very large number.

Because of the massive size of the sequence space: 20^2335 possible arrangements is a number so immense that we cannot relate it to anything tangible. The functional space would have to be almost the same size for a search to even find initial selectable function. The empirical data does not support this.
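For a sense of scale, the numbers in this exchange can be put on the same (logarithmic) footing, since 20^2335 overflows any float. This sketch only restates the sizes being compared; it does not settle how large the functional fraction actually is.

```python
import math

# Size of sequence space for a 2335-residue protein over 20 amino acids,
# expressed in bits (20^2335 is far too large to compute directly).
space_bits = 2335 * math.log2(20)   # ~10,092 bits
# The 10^50 trials figure from earlier in the thread, in the same units.
trial_bits = 50 * math.log2(10)     # ~166 bits
print(f"sequence space: {space_bits:.0f} bits; trials: {trial_bits:.0f} bits")
```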

[quote]Do you have a link to that?[/quote]Link: The spliceosome: a molecular machine that defies any non-design explanation. – Uncommon Descent

I will think some more about the scenario you wrote regarding the origin of PRPF8 and the spliceosome.

Sure. Successful trials are only a fraction of the overall trials; the 10^50 number covers overall trials. I do think fixation needs to be considered, so I brought it into the conversation.

For everyone’s benefit, here is an interesting 2014 paper discussing the evolution of the spliceosome. It’s primarily focused on intron structures but has some nice parts about the protein machinery involved too.

http://cshperspectives.cshlp.org/content/6/6/a016071.full

To quote their summary of the outline of the evolutionary appearance of the eukaryotic spliceosome:

According to this general model, spliceosomal introns evolved from invading group II introns, perhaps derived from the early mitochondrion (thought to be descended from an engulfed member of the α-proteobacteria, whose modern members contain group II introns). For some reason(s), these introns then proliferated to an unprecedented level in the host genome. Over time, the self-splicing activities of these many intron copies degenerated, which was associated with the increase of trans-encoded RNAs and proteins that promoted efficient intron splicing, setting the basis of the protospliceosomal machinery, and further releasing selective pressure on cis-intronic splicing elements. As this protospliceosomal machinery recruited more proteins and became more efficient, introns became increasingly reliant on the emerging spliceosome for proper splicing.


This is exactly where they are right. Numerous mathematical laws show complex structure cannot result from chance and necessity. These are the ones I know:

  • Data processing inequality
  • Law of information non-growth
  • Non-computability of Kolmogorov complexity
  • Sparsity of the Kolmogorov minimal sufficient statistic

Something beyond chance and necessity, such as a halting oracle, is necessary for complex structure. Naturalists are on the wrong side of math.
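Of the four results listed above, the data processing inequality is the easiest to check numerically: for a Markov chain X → Y → Z, processing can never increase information about X, so I(X;Z) ≤ I(X;Y). Here is a toy illustration with a binary chain and made-up noise levels (0.1 and 0.2 are arbitrary choices); this demonstrates the theorem itself, not either side's conclusion from it.

```python
import math

def mutual_information(joint):
    """Exact I(A;B) in bits from a joint distribution dict {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Markov chain X -> Y -> Z: X is a uniform bit, Y flips X with prob 0.1,
# Z flips Y with prob 0.2 (illustrative noise levels).
px = {0: 0.5, 1: 0.5}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
p_z_given_y = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}

joint_xy = {(x, y): px[x] * p_y_given_x[x][y]
            for x in px for y in (0, 1)}
joint_xz = {(x, z): sum(px[x] * p_y_given_x[x][y] * p_z_given_y[y][z]
                        for y in (0, 1))
            for x in px for z in (0, 1)}

i_xy = mutual_information(joint_xy)
i_xz = mutual_information(joint_xz)
print(f"I(X;Y) = {i_xy:.3f} bits, I(X;Z) = {i_xz:.3f} bits")
assert i_xz <= i_xy + 1e-12  # the data processing inequality holds
```

Each extra processing step only degrades the signal; the dispute in this thread is over what that fact does or does not imply about evolutionary search.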


I am familiar with all four of the points above, and none of them shows that complex structure cannot result from chance and necessity; there is no need to invoke a halting oracle or intelligent designer to understand complexity theory.

Please explain. If chance and necessity cannot create complex structure, what else can?

The Mullerian Two Step (Irreducible Complexity: the Mullerian Insights - 100 yrs ago!).

None of these theorems tells us that information or complexity requires a designer. Some of them tell us precisely the opposite.

The same explanation of complex structure that explains a Category 5 hurricane and the Great Red Spot on Jupiter.

@EricMH,

The point of this Blog/List is to show that if we agree on the basics of Christianity … then we already believe enough in the guiding hand of God to conclude that God used TWO methods to create Humanity as we know it today!

Again, please explain. These theorems say exactly that mutual information and complex structure require a halting oracle. The reason algorithms and chance are impotent reduces to the halting problem.


For another day. I’ve written about this extensively on the BioLogos forum, and I even have a paper on functional information deposited in bioRxiv (not peer reviewed). One interesting thing I learned from @AJRoberts is that scientists at Reasons to Believe generally consider these information arguments against evolution to be in error. They do not affirm evolution, so perhaps you might have an easier time trusting them to explain what is wrong with the arguments.

If chance and necessity cannot create complex structure, then these would be evidence for a halting oracle. I do not understand your point.

Disappointing :frowning:

There is way too much hand-wavy dismissal of arguments on this forum.

You accused me of a non sequitur before; this is the same situation. There are mathematical proofs that chance and necessity cannot create complex structure, and the proofs show the only way to get such structure is with a halting oracle.

I point this out, and you say there is some other way.

How is that possible? There literally is no other alternative. Chance, necessity, and halting oracle exhaust the possibilities.

I answered in the past and I’ll probably do it again in the future. I have a job to do during work hours and a family too. You could just go learn the basics of information theory and see it for yourself. It isn’t that complex.

Certainly, I am not asking you to take away time from your family. You go above and beyond with all the time you devote to this site. However, if it is as basic as you claim, you can at least state the theorem that proves me wrong.

I’ve spent a couple years now studying information theory, and the more I learn the more it confirms chance and necessity cannot create complex structure. Hence the proofs I referred to.

I started skeptical of the information theory ID arguments, too, but it turns out they are spot on.


A post was split to a new topic: Studying Information Theory