What does complexity in itself explain?
It explains why we see so much apparently “irreducible” complexity in life. It explains why the so-called “minimal cell” is likely not the simplest cell possible. It shows why IC is not a solid argument for design. And so on and so on.
Maybe as an exception, but is it the rule?
Given that there are so many neutral changes, it does not have to be true most of the time. The fact that it happens so often means you have many, many shots on goal. It doesn’t matter if you miss, for example, 99% of the time. That 1% of successes is enough to produce innovative shifts.
You often ask about the spliceosome, which involves some 300-odd proteins. This is the best explanation for it: a large amount of unnecessary complexity that has become necessary.
Complexity in itself is not irreducible complexity, as irreducible complexity in many cases points to innovation. New mobility in a bacterium is clearly innovation.
We are not discussing complexity in general. We are discussing irreducible complexity, to be precise IC1 (Which Irreducible Complexity?).
8 posts were split to a new topic: Evolving a Feather By Shuffling Parts
Is “neutral selection” a typo?
@John_Harshman did you see this?
Yes. Thanks. It is neutral evolution.
A post was merged into an existing topic: Evolving a Feather By Shuffling Parts
This sounds fair enough…
One way to check would be to do a random sample of biomolecules in an organism and classify how many are:
a) Two proteins that are bound neutrally, one not contributing in a significant way (positively or negatively) to the function of the main protein, which does most of the work.
b) Two proteins in which one protein does most of the work but needs the second protein to function.
The frequency of such molecules in the sample would provide good evidence of how “regularly” all this happens…
I am not even sure people can definitively say whether a group of proteins is type (a) or type (b)…
For my own understanding, is the hypothesis of CNE that the parts (proteins and RNA) encoded by accumulated neutral mutations — the parts that will eventually allow construction of the spliceosome — do nothing until all the pieces are in place and the machine is ready for useful action within the cell?
No, they would be functioning the entire time. The final system would end up more complex than necessary to perform the function.
Functioning how? Being used in other ways?
No, generally the idea is that there is a mechanistic tendency for complexity to increase neutrally (the system doesn’t necessarily end up better, just more complex). This is thought to be facilitated by certain inherent propensities toward specific types of mutations, such as deleterious point mutations and compensatory gene-duplications.
Generally speaking, since the rates of deleterious mutations and gene duplications are high, genes that slowly accumulate deleterious mutations will be compensated for by increasing numbers of gene copies, effectively masking the effect of lower-performance expressed genes through increased gene-dosage effects. In this way genomic and functional complexity can increase while the overall system retains a similar level of fitness, or a related measure of system performance.
This can happen both to enzymatic pathways and to molecular machines. With respect to enzymes, as deleterious mutations of small effect accumulate in the genes encoding them, duplications of these genes can compensate for the lower performance of each individual enzyme by literally providing more copies of lower-performance enzymes able to do a similar job (an analogy is two one-armed men doing the work of a two-armed man).
In molecular machines, as the individual protein components of the system degrade due to deleterious mutations, more and more additional proteins are needed to compensate for their lower degree of function, be that structural stability, effective docking spots for other proteins, etc.
That’s basically the gist of it. Inherent mutational tendencies of high probability (deleterious mutations of small effect, together with gene duplications) work together as balancing forces that result in a sort of increasing functional bricolage. Genomic complexity increases (the constructive part), even as the overall measure of fitness remains more or less the same (the neutral part). Constructive neutral evolution.
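The gene-dosage ratchet described above can be sketched as a toy simulation. To be clear, this is just an illustration, not a biological model: the threshold, mutation probabilities, and effect sizes are all made-up numbers chosen so the dynamic is visible.

```python
import random

random.seed(1)

THRESHOLD = 1.0       # total activity the pathway must maintain (arbitrary units)
P_DELETERIOUS = 0.9   # deleterious point mutations far outnumber duplications
GENERATIONS = 2000

def step(copies):
    """Propose one random mutation; fix it only if total activity stays at or
    above THRESHOLD, i.e. the change is invisible to purifying selection."""
    trial = list(copies)
    i = random.randrange(len(trial))
    if random.random() < P_DELETERIOUS:
        trial[i] *= random.uniform(0.8, 1.0)   # small deleterious hit to one copy
    else:
        trial.append(trial[i])                 # gene duplication (adds dosage)
    return trial if sum(trial) >= THRESHOLD else copies

copies = [1.0]   # start with a single fully functional gene
for _ in range(GENERATIONS):
    copies = step(copies)

print(f"copies: {len(copies)}, total activity: {sum(copies):.2f}, "
      f"weakest copy: {min(copies):.2f}")
```

Run it and the gene count climbs while total activity never drops below the threshold: individual copies degrade, extra copies mask the degradation, and the system ends up more complex than it needs to be without ever getting better at its job.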
This can even potentially result in the evolution of novel functions. It is possible to get new functions and more complexity through a process that adaptively “breaks” or “degrades” many more genes than it “creates” or “enhances”!
There’s nothing logically problematic about that. Like this (I hope this is comprehensible; tossed together fast in mspaint):
Squares represent genes, colors and intensity represent functions and their degrees. Red rectangles highlight what is being duplicated and passed on.
This is “adaptive devolution”: increased complexity and new functions arising mostly by “degrading” and mostly “breaking” genes. Because these extra genes are costly to express, their death is adaptive, and so is their eventual deletion. But because the still-functional copies continue to accumulate deleterious mutations, as these are more frequent than beneficial ones, their duplication is also sometimes adaptive (more expressed genes compensate for each individual gene being weaker).
Eventually a previously dead gene locus, a black square (effectively non-coding DNA), evolves into a de novo protein-coding gene. So one new function is evolved and enhanced while all the rest degrades and breaks. The net result is more complexity and more functions than there were to begin with. And it happened almost exclusively through neutral and adaptive degeneration.
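The adaptive side of this (costly copies, compensatory duplication, adaptive deletion) can also be caricatured in code. Again, every number here is hypothetical: “fitness” is just function capped at what the cell needs, minus a per-copy expression cost, with nearly-neutral changes allowed to drift to fixation.

```python
import random

random.seed(7)

COST = 0.02     # hypothetical expression cost per gene copy
NEEDED = 1.0    # activity the cell actually needs (arbitrary units)
DRIFT = 0.05    # slightly deleterious changes within this margin can still fix
STEPS = 5000

def fitness(genome):
    # Function is capped at what the cell needs; every extra copy costs something,
    # so deleting dead weight is adaptive and duplication pays only when it
    # compensates for degraded copies.
    return min(sum(genome), NEEDED) - COST * len(genome)

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    r = random.random()
    if r < 0.80:
        g[i] *= random.uniform(0.7, 1.0)   # common: degrade one copy a little
    elif r < 0.90:
        g.append(g[i])                     # duplication (costly but compensatory)
    elif len(g) > 1:
        del g[i]                           # deletion (adaptive for dead weight)
    return g

genome = [1.2]   # one gene with some excess activity
for _ in range(STEPS):
    trial = mutate(genome)
    if fitness(trial) >= fitness(genome) - DRIFT:   # nearly-neutral acceptance
        genome = trial

print(f"copies: {len(genome)}, mean activity: {sum(genome) / len(genome):.2f}")
```

Most of the accepted mutations degrade or delete genes; duplications fix when they offset the accumulated damage. The system churns through breakage while overall performance hovers near what the cell needs, which is the point being made above.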
Splicing and other sorts of functions too.
Thank you Rumraket, this is a very helpful explanation! This makes sense to me, and I think it is a strong counterargument against the ID conception of IC. I’m curious what (if any) responses Behe has made on this.
Is it generally true that the more complex an organism, the more parts make up its molecular machinery? For instance, do the ribosomes inside a starfish or a sponge have fewer components than the ribosomes in crayfish, insects, etc.?
The only part I’m not sure I follow, is how dead genes (now junk) can form entirely new functions. I’d love to see the details on how this is envisioned.
I have seen this simulation before, and have had a few nice discussions with Dr. Walsh. I’ll have to look at it again now that I have a much better understanding of how CNE works. Thanks!
I don’t know specifically whether sponge or starfish ribosomes are more or less complex than those in crayfish or insects, but I remember a talk by Loren Williams stating that ribosomes in eukaryotes are generally considerably more complex than in prokaryotes, and IIRC they appear to be at least among the most complex in mammals. I think it was in this talk: “RNA and Protein: A match made in the Hadean,” presented by Loren Williams.
Thanks, I’ll check it out.