The Argument Clinic

Actually no, that IS often the case. As I mentioned previously, in order for RNA polymerase to be able to transcribe any sequence of DNA, it MUST HAVE a binding affinity for DNA that is non-specific with respect to the nucleotide sequence. If you eliminated this non-specificity, you would need a unique RNA polymerase for every gene with a unique sequence. That’s obviously not the case. Initiation factors bias the affinity of the holoenzyme towards promoters or promoter-like sequences, but they do not eliminate the non-specificity of RNA polymerase. It would be lethal if they did.

People are often saddled with the idea that the inner workings of the cell run in lockstep like the gears of a clock, and that any noise is just something happening in the background that is detrimental to function and has to be corrected for. On the contrary, the stochasticity in biochemistry very often UNDERPINS function. Noise plays a role that is VITAL to life.

Noise in Biology - PMC

The new generation of “biological physicists”, many of them trained in nonlinear dynamics and statistical physics, started to view fluctuations not as a nuisance that makes experiments difficult to interpret, but as a worthwhile subject of study by itself. Researchers are finding more and more evidence that noise is not always detrimental for a biological function: evolution can tune the systems so they can take advantage of natural stochastic fluctuations.

All processes in Nature are fundamentally stochastic; however, this stochasticity is often negligible in the macroscopic world because of the law of large numbers. This is true for systems at equilibrium, where one can generally expect, for a system with N degrees of freedom, the relative magnitude of fluctuations to scale as 1/√N. However, when the system is driven out of equilibrium, the central limit theorem does not always apply, and even macroscopic systems can exhibit anomalously large (“giant”) fluctuations (Keizer 1987). There are many examples of this phenomenon in the physics of glassy systems, granular packings, active colloids, etc. Biology deals with living systems that are manifestly non-equilibrium, and so it is not surprising that noise plays a pivotal role in many biological processes.
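A quick toy illustration of that 1/√N scaling (my own sketch, using a hypothetical binomial model of N independent molecular events, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the "system" is the sum of N independent binary events,
# e.g. N molecules that are each bound or unbound with probability 0.5.
for N in (10, 1_000, 100_000):
    samples = rng.binomial(N, 0.5, size=10_000)      # many independent realizations
    rel_fluct = samples.std() / samples.mean()       # relative size of the fluctuations
    print(f"N = {N:>6}: relative fluctuation ≈ {rel_fluct:.4f}, 1/sqrt(N) ≈ {1/np.sqrt(N):.4f}")
```

As N grows, the relative fluctuations shrink like 1/√N, which is why the noise is invisible at macroscopic scales but not at the scale of a handful of molecules.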

No, I fully understand the point you are trying to make. However, your argument wholly omits the reality of non-specific binding. While you acknowledge that this happens, you dismiss its importance and refuse to see how it makes the reasoning that binding is an indication of function highly specious… EVEN when your very own citation spells this out in the very next paragraph after the one that you quoted. Good grief.

Their reasoning was flawed not simply because of that. According to their reasoning, the presence of any “biochemical activity” (such as protein binding and RNA transcription EVEN if it occurred ONLY once in one cell type) is definitionally the same as “functional”.

An integrated encyclopedia of DNA elements in the human genome | Nature

The Encyclopedia of DNA Elements (ENCODE) project aims to delineate all functional elements encoded in the human genome [1,2,3]. Operationally, we define a functional element as a discrete genome segment that encodes a defined product (for example, protein or non-coding RNA) or displays a reproducible biochemical signature (for example, protein binding, or a specific chromatin structure).

The vast majority (80.4%) of the human genome participates in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type.

In other words, any sound = signal. No room for noise under this thinking.

Wrong, that’s NOT what you have been trying to say. You have said, right in the previous comment even, that there was nothing wrong with the reasoning that the people of the ENCODE project used to assign function at the mere presence of biochemical activity (of any degree).

What WE and that particular paragraph are pointing out is an exact counter to such specious reasoning. While assays that detect biochemical activity are important (NOBODY here is saying otherwise, that’s not what we disagree on), the mere presence of binding cannot be interpreted as an indicator of function, because such biochemical activity can occur due to stochastic processes.


It’s clearly too much to ask you to stop repeating your idiocies, but can you at least limit yourself to repeating each idiocy only once per post, rather than two or three times each, as you did here with your boilerplate text on Penrose, Aierts and genetic code?

My scroll-wheel is suffering deja vu.

But you have to remember that the only thing @Meerkat_SK5 has going for them is argumentum ad nauseam – it’s not as though their arguments are based on solid logic, a solid command of the evidence, or even a passing understanding of the science. To stop repeating themselves would be tantamount to admitting that they have no coherent argument. And, as long experience has shown, the only way we are going to stop them repeating themselves is to stop taking the bait and responding to their interminable, incoherent claims. @Meerkat_SK5 is Churchill’s very definition of a fanatic: they “can’t change [their] mind and won’t change the subject.”

Surely Brandolini’s law should imply that, given the strong asymmetry, we should target our efforts against the internet bullshit that has the most potential for doing real-world harm. The internet has an overflowing supply of bullshit, and if we tried to target it all, we’d all be exhausted and driven insane before we made even the slightest dent.

Does anybody believe that, even left unrebutted, @Meerkat_SK5’s BS has any serious potential to be taken seriously by even the most batshit-insane politician or educator – particularly when they have the less-blatantly-BS claims of the DI, RTB, AiG, etc. competing for their attention (and being more professionally presented)?

That isn’t a theory either.

No, your explanations are based entirely on words, not on any observations.

There is everything wrong with their placing description above experimentation. ENCODE is/was descriptive. That’s why it’s so weak.

It’s descriptive and weak.

Do you have any idea how obvious it is that you don’t have the slightest idea what you are writing about?

ENCODE was in vivo. You can’t keep anything straight.

The ENCODE approach is descriptive, not experimental.

Yes, but ENCODE didn’t bother with that.

The question was about non-specific transcription. You didn’t answer it.

It isn’t.

I think we could track Meer’s probability of being taken seriously with Alcoa’s stock price.


I would think Theranos’ stock price would be a more apt comparison. :stuck_out_tongue:


What you have just described are not actually random, unguided processes, though. As I told @T_aquaticus, quantum tunneling needs to be extremely precise for hemoglobin to transport the right amount of oxygen to the cells of all vertebrate and most invertebrate species.

So if the conscious observer chooses to measure the momentum of a particle with precision, the observer discovers that the position of the particle is now known only approximately ± half a mile. However, according to the Heisenberg principle, the more precisely the position of some particle is determined, the less precisely its momentum can be predicted from initial conditions, and vice versa. If the uncertainty in the position becomes much greater or smaller than half a mile, hemoglobin will not function as it does, rendering advanced life impossible.

This means that, despite the Heisenberg principle being a random process, the uncertainty in the Heisenberg uncertainty principle must be fine-tuned. This shows how the right fine-tuning values were carefully chosen to allow advanced life to exist from the beginning leading up to the present.

As I just explained above, I definitely do not and did not dismiss its importance (which shows you did not actually read my discussion with @T_aquaticus fully, I might add). Instead, I dismiss your hypothesis or assumption that these are random, unguided processes going on in the cell.

You and @Mercer are getting confused between prediction and hypothesis. “Guided mutations”, which imply a level of intention or directionality by a personal agent, is the hypothesis that is already supported by the fine-tuning constants.

“Non-random mutations” is the confirmed prediction that flows out of the guided-mutations hypothesis, which suggests a personal agent is involved in the process. This involves finding non-random mutations in the non-coding regions of the genome, which have been labeled “junk DNA”, and in regions that do encode proteins but are primarily deleterious.

No, it is not that big of a leap, because the competitive endogenous RNA hypothesis provides a comprehensive model for pseudogene function. It not only identifies function for individual members of this junk DNA class, but also presents an elegant framework to explain the function of all members of the category, either directly or indirectly, in protein synthesis or in the regulation of gene expression.

As Mattick and Dinger suggested, these noncoding RNAs, when tested…

"usually show evidence of biological function in different developmental and disease contexts, with, by our estimate, hundreds of validated cases already published and many more en route, which is a big enough subset to draw broader conclusions about the likely functionality of the rest.

It is also consistent with the specific and dynamic epigenetic modifications across most of the genome, and concurs with the ENCODE conclusion that 80% of the genome shows biochemical indices of function (Dunham et al. 2012). Of course, if this is true, the long-standing protein-centric zeitgeist of gene structure and regulation in human development will have to be reassessed (Mattick 2004, 2007, 2011), which may be tacitly motivating the resistance in some quarters."

In fact, Professor Alistair Forrest of the Harry Perkins Institute of Medical Research and his research team have found many more examples of function in the non-coding regions:

“There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery… we find compelling evidence that the majority of these long non-coding RNAs appear to be functional, and for nearly 2,000 of them we reveal their potential involvement in diseases and other genetic traits.”

Secondary source: Improved gene expression atlas shows that many human long non-coding RNAs may actually be functional – ScienceDaily

Primary source: An atlas of human long non-coding RNAs with accurate 5′ ends | Nature

Overall, this makes their conclusion the most probable explanation even though there are other possible explanations as you guys alluded to.

I don’t remember saying that. When I said there was nothing wrong with their reasoning, I meant this generally speaking because inductive reasoning is a fundamental part of the scientific method. This means that this reasoning would only be flawed if there was an equal or more probable explanation for the biochemical activity. So far, you guys have only provided possible alternative explanations, but this is an invalid objection as I will explain…

Simply because evidence is consistent with an alternative hypothesis does not mean that it falsifies the original theory. In science, it is not enough to merely show that a theory is consistent with one set of observations; the theory must also make testable predictions that can be verified through experiments or observations.

For a theory to be considered falsified, it must make a prediction that is not consistent with the evidence. If the evidence is consistent with both the original theory and an alternative hypothesis, then the original theory may still be valid, but further testing may be needed to distinguish between the two hypotheses.

In summary, falsifying a theory requires evidence that is inconsistent with the predictions of the theory, rather than just evidence that is consistent with an alternative hypothesis.

Depending on what you mean, there is some truth to that.

Throughout history, Richard Owen’s common archetype and Charles Darwin’s common ancestor were two different explanations for the same data, such as the morphological and molecular similarities and differences, nested hierarchies, and the complexity of life.

However, the specific mechanism of natural selection that Darwin’s theory provided and Owen’s theory lacked was an important difference between the two, and it made Darwin’s theory superior because it could be subjected to rigorous testing.

But, this was only true in the past. Over the past 20 years, empirical support and rigorous testing of the quantum mind theory, which encompasses well tested theories, has turned Owen’s theory into a real scientific endeavor.

In summary, my theory is a synthesis of the quantum mind theory and Owen’s theory, which I refer to as the universal common designer theory.

This is not a problem anymore because many other researchers have independently confirmed their results and supported their hypothesis ever since.

I am not sure what your point is here.

It actually is. For example, molecular motors simply bias stochasticity to produce net movement in one direction. There are plenty of reverse steps.


This kind of statement is why you do not get many replies from my direction. I cannot take your grab bag of ideas seriously.

Random processes can be influenced by all sorts of factors, and they do not cease to be random. This is true of both the quantum and classical domains. The motion of electrons is random even when biased into a current by the presence of a voltage difference. The vector motion of gas molecules is random even given flow from a pressure gradient. Bias and constraint do not cancel randomness. Noise, biological or otherwise, is not signal, and is indeed a random, unguided process.
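To put a number on the electron example (my own back-of-the-envelope sketch, assuming a hypothetical 1 mm² copper wire carrying 1 A at room temperature): the random thermal motion utterly dwarfs the directed drift that the voltage adds on top of it.

```python
import math

k_B = 1.380_649e-23       # Boltzmann constant, J/K
m_e = 9.109_383_7e-31     # electron mass, kg
T   = 300.0               # room temperature, K
n   = 8.5e28              # conduction electrons per m^3 in copper (textbook value)
e   = 1.602_176_634e-19   # elementary charge, C
I   = 1.0                 # assumed current, A
A   = 1.0e-6              # assumed wire cross-section, m^2 (1 mm^2)

v_thermal = math.sqrt(3 * k_B * T / m_e)   # classical thermal speed scale (the Fermi velocity in a metal is larger still)
v_drift   = I / (n * e * A)                # average drift speed implied by the current

print(f"thermal speed ~ {v_thermal:.1e} m/s")   # ~1e5 m/s, randomly directed
print(f"drift speed   ~ {v_drift:.1e} m/s")     # ~1e-4 m/s, the directed bias
```

The bias is real, but it is a tiny tilt on top of motion that remains overwhelmingly random.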


Quantum tunneling isn’t an instrument. There is no sense in which it is or can be “extremely precise”. However, this misses Nesslig’s point entirely, anyway. The stochastic modeling of intracellular mechanisms is inherited from classical statistical mechanics, the foundations of thermodynamics, not (necessarily) from quantum theory. The claim is that the locations and times of individual chemical interactions are distributed randomly, with rates and densities known only under sufficiently coarse graining, or within confidence intervals.
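For concreteness, here is a minimal sketch of what that kind of stochastic modeling looks like (a Gillespie-style birth–death toy model with made-up rates, not taken from any particular paper): individual reaction events fire at random times, yet the coarse-grained mean and variance are perfectly well defined.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gillespie-style simulation of a birth-death process: a molecule is
# produced at rate k_on and each existing copy is degraded at rate k_off.
k_on, k_off = 10.0, 1.0        # illustrative rates, events per unit time
t, n, t_end = 0.0, 0, 200.0
trajectory = []

while t < t_end:
    a_prod, a_deg = k_on, k_off * n                      # reaction propensities
    a_total = a_prod + a_deg
    t += rng.exponential(1.0 / a_total)                  # random waiting time to the next event
    n += 1 if rng.random() < a_prod / a_total else -1    # which event fired
    trajectory.append(n)

counts = np.array(trajectory)
print("mean copy number:", round(counts.mean(), 1))   # ≈ k_on / k_off = 10 under coarse graining
print("std of copy number:", round(counts.std(), 1))  # ≈ sqrt(10), Poisson-like noise
```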

This is gibberish. The sort of gibberish I’d expect from an AI trained to use jargon in constructing grammatically coherent phrases, but with no training in the field to construct any with coherence in content. Knowing a particle’s position does not preclude “predicting precisely its momentum from initial conditions”. It precludes there being a precise momentum any more. The “initial conditions” are not phase space coordinates in quantum theory; to describe a state at any point in time, the entire wave function is required. Doing a measurement interferes with the system, enough that the simplistic description that took it into account whilst ignoring the lab just does not apply anymore. It’s not that you can’t evolve the wave function anymore, it’s that the one you might have had before the measurement no longer suffices to describe the system afterwards.

Meanwhile what on earth this has to do with haemoglobin and where the “half a mile” number comes from is entirely unclear to me. What “the position” is it, exactly, that haemoglobin’s function requires to have a half mile uncertainty?

Heisenberg’s principle is not a random process. It is no process in any physical sense at all. It’s an inequality, a mathematical consequence of Fourier’s formalism. Needless to say, since it is no process, there is nothing to tune about it, finely or otherwise. But it gets worse, of course, because the lower bound it states is not even universal. Rather it depends entirely on the choice and definition of the observables on the other side of the inequality. The only completely excluded values are non-reals (except maybe positive infinity, I’m not actually quite sure off the top of my head) and negative reals.

What, like the fine tuning that some unspecified “the position” had to have a finely tuned carefully chosen “half mile” uncertainty, or else haemoglobin wouldn’t work? And what exactly shows this, pray tell? To me this sounds like at best a vaguely formulated claim, built at best upon reasoning that reflects no notable understanding of subject matters it pretends to be rooted in, and at worst on outright technobabble.

Wait, that sounds familiar. Almost like I saw that same text in the very same message not too long ago…


Apologies for double posting, but I feel like I do need to come back to this once more:

This is completely absurd. Now, I admit, I am no biochemist. For all I know, haemoglobin could be picky about the incoming momenta of the molecules it reacts with. In fact, I have no doubt that there is at least an upper bound on the energy of a collision event beyond which the incoming molecule, or haemoglobin, or both would be damaged and the “usual” bonds would not form. However, Heisenberg’s principle does nothing to prevent this. If, hypothetically, some “the position” has a half mile uncertainty, that places no bounds on the value of the corresponding momentum. Even the bounds it places on the uncertainty in momentum are lower bounds. So your claim is that haemoglobin would not function if the momentum corresponding to the “the position” is any sharper than Planck’s reduced constant divided by a mile, but that as long as the momentum is less specific than that, haemoglobin works fine.

One can of course still name the average kinetic energy from, again, thermodynamic results. Aside from some unit conversions, this is what is meant by temperature. Knowing more about thermodynamic systems, like the equipartition theorem, we can say what average momenta would have to be for a given particle type in the system. With something like the Maxwell–Boltzmann distribution we can specify the uncertainty in kinetic energy and in the momenta, too. Adjustments must be made for denser systems, where particle size, shape, and interaction play a role, but it’s a start. One that still wouldn’t give us a sharp cutoff on either end, but at least it would give us some estimate of the ratio of reaction-favourable and reaction-unfavourable event frequencies.

It would surprise me if the half a mile thing could be verified for temperatures typical of living cells (which have a generous range, mind you, just taking into account organisms whose temperature matches the environment’s): just for some back-of-an-envelope calculations, it turns out that the discrepancy between the uncertainty of momenta in an ideal gas of point particles with oxygen’s mass at room temperature and the minimal uncertainty in momenta corresponding to a half mile uncertainty in positions via Heisenberg’s principle is around some fifteen orders of magnitude. The former number is arrived at very crudely, of course, and if you have corrections that stand to mitigate this enormous gap, by all means, I’ll be happy to review your calculations.

For now, my point is that you can’t just bark out a random length, reference something you know has to do with quantum stuff, and call it a day (and continue getting taken seriously by people who actually studied any of the fields you so liberally pontificate upon). Physics is not a toy, and its jargon is not a debating technique.
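For anyone who wants to reproduce that back-of-an-envelope estimate, here is roughly the calculation I mean (assuming an ideal-gas point particle of O2’s mass at 300 K, a single momentum component, and Δx = half a mile in Heisenberg’s Δx·Δp ≥ ħ/2):

```python
import math

hbar = 1.054_571_817e-34     # reduced Planck constant, J*s
k_B  = 1.380_649e-23         # Boltzmann constant, J/K
m_O2 = 32 * 1.660_539e-27    # mass of an O2 molecule, kg
T    = 300.0                 # roughly room temperature, K
dx   = 0.5 * 1609.344        # "half a mile" position uncertainty, m

dp_heisenberg = hbar / (2 * dx)             # minimal momentum spread Heisenberg demands
dp_thermal    = math.sqrt(m_O2 * k_B * T)   # thermal spread of one momentum component

print(f"Heisenberg lower bound: {dp_heisenberg:.1e} kg m/s")
print(f"thermal spread:         {dp_thermal:.1e} kg m/s")
print(f"discrepancy: about 10^{math.log10(dp_thermal / dp_heisenberg):.0f}")
```

In other words, the momentum spread that a half-mile position uncertainty would “demand” is swamped by the thermal spread by roughly fourteen to fifteen orders of magnitude.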


I think the reason why this is the case for you and others is that we have two different perspectives on the same data. I think @AJRoberts said it best when she came on this forum for the same reasons:

"As part of my work for RTB, I occasionally venture onto science-faith and apologetics online discussion sites. One site, called Peaceful Science, seeks to bring scientists from all faith persuasions into discussions about various origins models, including RTB’s progressive (old-earth) creationism model and evolutionary mainstream models. Needless to say, we don’t interpret some scientific data the same way, especially when it concerns origins. Discussions can be challenging!

One complicating factor is that it is often difficult to understand someone’s model from their vantage point when it seems incongruent with one’s own worldview model. Consider evolution, which says all life, extinct and extant, has developed through neutral and adaptive mutations and eons of common descent (with or without God’s preprogramming or tweaking the system along the way).

And then consider the progressive creation model, which says God created distinct “kinds”—introducing them, in due course, over long epochs of creation. Now, add 280 years of observations by scientists and naturalists who have classified organisms into various taxa according to the Linnaeus system of naming and classification (i.e., taxonomy). How does one begin to talk coherently across these two origin models? Where do we find grounds for clear communication?"

I am about to show you an example of what she is saying…

These definitions are not what I mean by random unguided processes. Instead, it is …

In evolution, there is no entity or person who is selecting adaptive combinations. These combinations select themselves because the organisms possessing them reproduce more effectively than those with less adaptive variations. Therefore, natural selection does not strive to produce predetermined kinds of organisms but only organisms that are adapted to their present environments. As pointed out, which characteristics will be selected depends on which variations happen to be present at a given time in a given place. This, in turn, depends on the random process of mutation as well as on the previous history of the organisms (that is, on the genetic makeup they have as a consequence of their previous evolution). Natural selection is an opportunistic process. The variables determining the direction in which natural selection will proceed are the environment, the preexisting constitution of the organisms, and the randomly arising mutations.
pnas.org/doi/10.1073/pnas.0701072104#sec-8

AND

"> (i) they are rare exceptions to the fidelity of the process of DNA replication and because (ii) there is no way of knowing which gene will mutate in a particular cell or in a particular individual.

However, the meaning of “random” that is most significant for understanding the evolutionary process is (iii) that mutations are unoriented with respect to adaptation; they occur independently of whether or not they are beneficial or harmful to the organisms. Some are beneficial, most are not, and only the beneficial ones become incorporated in the organisms" through natural selection."

So far, none of your responses has attempted to refute the hypothesis that a personal agent must be involved in the process, or the prediction that the process is oriented to benefit the organism.

Let me summarize the argument again so you guys can properly address it this time…

Based on the well-tested and supported quantum mind theory, the definition of consciousness is a self-collapsing wave function, or “causally disconnected choice”. Now, here is the formation of the hypothesis:

Observations

The natural engineering principles of electron tunnelling in biological oxidation-reduction involve optimizing the tunneling distance and energy barriers, organizing the electron transport chain, maintaining quantum coherence, and creating a robust system that can operate in a variety of conditions. These principles are essential for the efficient transfer of electrons in biological systems, which is crucial for many biological processes, including cellular respiration and photosynthesis.

Natural engineering principles of electron tunnelling in biological oxidation–reduction | Nature

Experiment

Patel showed how quantum search algorithms explain why the same code was chosen for every living creature on Earth. [22] To summarize, if the search processes involved in assembling DNA and proteins are to be as efficient as possible, the number of bases should be four, and the number of amino acids should be 20. An experiment has revealed that this quantum search algorithm is itself a fundamental property of nature. [23]

[22] [quant-ph/0002037] Quantum Algorithms and the Genetic Code

[23] [1908.11213] The Grover search as a naturally occurring phenomenon
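For reference, the “4 bases, 20 amino acids” numbers in Patel’s argument come from the condition for Grover’s search to succeed with certainty after q oracle queries; a quick check of that arithmetic (my own sketch, not code from the cited papers):

```python
import math

# Grover's search identifies 1 of N items with certainty after q oracle queries
# when (2q + 1) * arcsin(1/sqrt(N)) = pi/2, i.e. N = 1 / sin^2(pi / (2*(2q + 1))).
for q in (1, 2, 3):
    N = 1 / math.sin(math.pi / (2 * (2 * q + 1))) ** 2
    print(f"{q} query/queries -> N ≈ {N:.1f}")

# 1 query   -> N = 4.0   (four bases)
# 3 queries -> N ≈ 20.2  (roughly twenty amino acids)
```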

Universal common designer Hypothesis

We can infer that a personal agent not only chose the right fine-tuning values for life to exist but also chose the right genetic code for life.

Definition of personal agent: universal self-collapsing genetic code shown by the shared DNA among all living organisms (i.e., objective reduction).

Predictions

We should find non-random mutations in the non-coding regions of the genome, which have been labeled “junk DNA”, and in regions that do encode proteins but are primarily deleterious.

Both predictions have been and continue to be confirmed:

Evidence of non-random mutation rates suggests an evolutionary risk management strategy | Nature

De novo mutation rates at the single-mutation resolution in a human HBB gene-region associated with adaptation and genetic disease (cshlp.org)

Mutation bias reflects natural selection in Arabidopsis thaliana | Nature

Sure, I will elaborate…

In biological oxidation-reduction reactions, electron transfer occurs between molecules, often facilitated by enzymes. The principles of electron tunnelling, a quantum mechanical phenomenon, play an important role in these processes. Here are some natural engineering principles of electron tunnelling in biological oxidation-reduction:

  1. The tunneling distance and energy barriers must be optimized: In order for electron tunnelling to occur efficiently, the distance between the electron donor and acceptor molecules and the energy barriers that electrons must overcome must be carefully tuned. Biological systems achieve this by positioning donor and acceptor molecules close together and by creating an environment that reduces the energy barriers to electron transfer.

  2. The electron transport chain must be well-organized: In order for electron transfer to proceed efficiently, the electron transport chain must be well-organized. This means that the sequence of electron donors and acceptors must be arranged in such a way that electrons can flow smoothly from one molecule to the next. This is achieved through the use of specialized enzymes and membrane proteins that are arranged in specific configurations.

  3. Quantum coherence must be maintained: Quantum coherence is a property of electrons that allows them to behave like waves, which can interfere constructively or destructively depending on their phase. In order for electron tunnelling to occur efficiently, coherence must be maintained between the donor and acceptor molecules. Biological systems achieve this through the use of specialized proteins that act as “quantum wires” to maintain coherence over long distances.

  4. The system must be robust: Biological systems must be able to maintain electron transfer in a variety of conditions, including changes in temperature, pH, and oxygen availability. This is achieved through the use of redundant pathways for electron transfer and through the presence of protective mechanisms, such as antioxidants, that prevent damage to the electron transport chain.

In summary, the natural engineering principles of electron tunnelling in biological oxidation-reduction involve optimizing the tunneling distance and energy barriers, organizing the electron transport chain, maintaining quantum coherence, and creating a robust system that can operate in a variety of conditions. These principles are essential for the efficient transfer of electrons in biological systems, which is crucial for many biological processes, including cellular respiration and photosynthesis.

Yes, I am. And I was not talking about quantum mechanical processes whatsoever. I was specifically referring to the processes that take place in the cell at the molecular and macro-molecular scales. Most of the motion that happens at that scale is Brownian (driven by thermal noise), a quintessential example of a stochastic process. It’s the reason for the passive transport of nutrients. Without this thermal noise, which would only be absent at or near 0 kelvin, even assuming the water in your body was able to maintain a liquid phase, you wouldn’t be able to breathe in and transport oxygen, no matter how much hemoglobin you have. In fact, pretty much nothing could happen. This is unlike familiar macro-scale machines that operate like clockwork, which could in theory operate at 0 kelvin.

Furthermore, even in the case of molecular motors, which drive motion in a particular direction, they don’t “fight” against Brownian motion. Instead, they make use of it, releasing biochemical energy (e.g. from ATP) to bias the stochastic Brownian motion in one direction on average (with some unavoidable herky-jerky pauses and backtracks). This is one of many examples of how disorder (disordered motion) coupled with thermodynamic disequilibrium produces order (ordered motion). This is how the molecular world operates by its very fundamental physical and statistical mechanics. There is no escape from this.
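A toy sketch of that last point (purely illustrative step probabilities, not measured values): a “motor” whose every step is random, with the energy from ATP hydrolysis merely tilting the odds, still backtracks constantly and yet walks steadily in one direction on average.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy molecular motor: each catalytic cycle it hops one step along its track.
# ATP hydrolysis biases the hop forward but does not forbid backward hops;
# both directions remain thermally (randomly) driven.
p_forward = 0.8                                      # illustrative bias, not a measured value
hops = rng.choice([+1, -1], size=10_000, p=[p_forward, 1 - p_forward])

print("backward hops:", int((hops == -1).sum()))     # ~2,000 reversals out of 10,000 hops
print("net travel:", int(hops.sum()), "steps")       # yet ~+6,000 steps of net directed motion
```

Kill the bias (p_forward = 0.5) and the net motion vanishes; kill the noise and nothing moves at all.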

Yes, you did. Let me refresh your memory. Here is the original comment:

So… once again… you claimed that there was nothing wrong with the reasoning they used to assign function. What was their reasoning exactly? By their very definition of ‘function’, the mere presence of such biochemical activity constituted ‘function’. That’s the crux they leaned on for their 80% functional claim. It’s not that they used “inductive reasoning”. That’s not the issue! The very criterion established by their own definition of ‘function’ is unable to differentiate between biochemical activity due to function and biochemical activity due to the spurious nature of these macro-molecules. We have been trying to explain to you again and again why this reasoning is specious because of the reality of non-specific binding, yet you keep claiming otherwise. You don’t see (or simply refuse to see) that this makes ENCODE’s reasoning highly flawed and ignorant with respect to the biochemical processes. That’s why I say you are dismissive of the importance of non-specific binding. You are doing this (regarding RNAs)… again… here next.

Yes, it’s a big leap because their very definition for what constitutes ‘function’ doesn’t allow them to distinguish between functional RNA (the signal) and spurious RNA (the noise) among RNA products (the sound). Their very definition doesn’t allow for noise to even exist. That’s a serious error. Mentioning ceRNA is not a response to this. The fact that RNAs CAN have such functions doesn’t tell you what proportion of RNAs that are produced actually are functional. The fact that sound CAN have a signal doesn’t mean every sound you hear constitutes a signal and never noise. This is a classic case of putting the cart before the horse.

Furthermore, the 2013 paper in which Mattick and Dinger defend ENCODE’s 2012 conclusion of 80% functional DNA contains some bizarrely false statements to make their case.

Moreover, there is a broadly consistent rise in the amount of non-protein-coding intergenic and intronic DNA with developmental complexity, a relationship that proves nothing but which suggests an association that can only be falsified by downward exceptions, of which there are none known (Taft et al. 2007; Liu et al., 2013).

This is ludicrous. Another “Dog’s ass plot” moment from Mattick. There ARE “downward exceptions” (by which they mean a genome of a developmentally complex organism that has only a small amount of non-protein-coding intergenic and intronic DNA). One that is often mentioned, even in the paper by Doolittle that Mattick and Dinger are responding to, is the puffer fish, which has a 400 million bp genome. That is roughly an eighth the size of the 3.1 billion bp human genome, and both have a similar number of 20–25k genes. You may argue that the puffer fish is far less complex than a human, but it has among the smallest vertebrate genomes known. Is the puffer fish really lacking THAT much developmental complexity compared to, for example, a lungfish with its 43 billion bp genome (~14 times the size of the human genome, over 100 times the size of the puffer fish genome)?? The main difference is actually explained by the number of transposable elements: puffer fish have thousands of TEs while humans have millions of them (mostly fragments in both cases). It’s far from the only example. Within amphibians, genome sizes vary by 100-fold, and most amphibians have larger genomes than humans. So are amphibians on average more complex than humans? And are some amphibians 100 times more complex than other amphibians? But they dismiss these “upward exceptions” as such in the previous lines:

The other substantive argument that bears on the issue, alluded to in the quotes that preface the Graur et al. article, and more explicitly discussed by Doolittle (Doolittle 2013), is the so-called ‘C-value enigma’, which refers to the fact that some organisms (like some amoebae, onions, some arthropods, and amphibians) have much more DNA per cell than humans, but cannot possibly be more developmentally or cognitively complex, implying that eukaryotic genomes can and do carry varying amounts of unnecessary baggage. That may be so, but the extent of such baggage in humans is unknown. However, where data is available, these upward exceptions appear to be due to polyploidy and/or varying transposon loads (of uncertain biological relevance), rather than an absolute increase in genetic complexity (Taft et al. 2007).

Did anyone catch that? Such exceptions of organisms with much larger genomes than they would expect are dismissed because the expansion of transposable elements (TEs) do not add functio… oh wait, sorry, they use the term “genetic complexity”.

Let’s get this straight. They argue that (non-protein-coding) genome size correlates with developmental complexity, and that most of the human genome (about half of which is made of TEs) is functional, as it underpins our high developmental complexity. However, exceptions to the correlation are waved away because TEs don’t actually contribute to “genetic complexity”. What does this entail? TEs do contribute to “genetic complexity”, and thereby “developmental complexity”, but only in organisms they deem developmentally complex (e.g. humans), while TEs don’t contribute to “genetic complexity” in organisms they see as deviating from the correlation? Or alternatively, TEs NEVER contribute to “genetic complexity”, not even in humans, but they sometimes (though not always) do contribute to “developmental complexity”?

They don’t provide any explanation for when TEs do or don’t contribute to “genetic complexity” and “developmental complexity”. The only time they say TEs do or could contribute is when they are dismissing the arguments regarding the non-functional status of the vast majority of TEs; and the only time they say TEs do not contribute is when they are dismissing the exceptions to the correlation between genome size and developmental complexity. Aside from the fact that they don’t define what qualifies or quantifies as “genetic/developmental complexity”, their arguments appear to be blatantly inconsistent and self-contradictory.

Lastly, there is also all that stuff that was mentioned on lncRNAs. However, I have already spent way too much of my free time repeating myself in response to the same points ad nauseam. Here is a review article that summarizes the evidence for why most lncRNAs are spurious products rather than functional.


NO, you are misrepresenting what they found as being functional. The ENCODE Project did much more than assign function to sequences based on their mere existence in the human genome:

ENCODE (5) defines a functional element (FE) as “a discrete genome segment that encodes a defined product (for example, protein or non-coding RNA) or displays a reproducible biochemical signature (for example, protein binding, or a specific chromatin structure).”

This certainly does not sound like the description you gave.

Furthermore, the ENCODE Project experimentally determined which sequences in the human genome displayed biochemical activity using assays that measured:

  • transcription,
  • binding of transcription factors to DNA,
  • histone binding to DNA,
  • DNA binding by modified histones,
  • DNA methylation, and
  • three-dimensional interactions between enhancer sequences and genes.

The implied assumption is that if a sequence is involved in any of these processes—all of which play well-established roles in gene regulation—then the sequences must have functional utility.

The only response I got from you guys on this point was that it “is not enough to infer function exactly because biochemical activity is a result expected from nonfunctional junk DNA”. But this is merely relying on a theory-laden definition of function that fits Darwin’s theory, NOT Owen’s theory. More importantly, experiments do not support your hypothesis that random noise is sufficient, as this study suggests: “We identify two major consequences of nonfunctional protein-DNA binding. First, there are interference effects, where such binding can disrupt various molecular processes, including transcription, gene regulation, replication and mutational repair”.

In their experiment, they determined function using in vitro measurements of protein-DNA binding specificities. As a result of their experiment, they showed how “genomes have evolved to reduce the occurrence of weak binding motif. The distinct set of DNA binding proteins coded in each species’ genome imposes a large set of global, evolutionary constraints that have ubiquitously shaped genome-wide motif statistics” (Long Qian and Edo Kussell, Phys. Rev. X 6, 041009 (2016)).

Moreover, protein surfaces are carefully structured to allow strong interactions between protein pairs while minimizing the strength of the unwanted interactions between protein “strangers.”

Robust protein–protein interactions in crowded cellular environments | PNAS

Although this was just a simulation, a follow-up study by Harvard scientists indicates that the concentration of PPI-participating proteins in the cell is also carefully designed. Protein structures and concentrations have to be precisely regulated to promote the PPIs critical for life…

As Fuz Rana suggested, “high-precision structures and interactions, exemplified by PPIs, are hallmark features of biochemical systems and, by analogy to fine-tuned human designs, point to the work of a Creator.”

Topology of protein interaction network shapes protein abundances and strengths of their functional and nonspecific interactions | PNAS

Like I said before, my explanation continues to be more probable than yours.

Out of a long post, that one sentence fragment is the only thing you choose to comment on? And then you go on to display your complete ignorance of the phylogeny you are attempting to overthrow. Do you see why people despair of ever having a real discussion with you?

Your total ignorance of phylogeny is not just a mistake. It should prevent you from opining on the subject at all.

While true, that’s also evidence against separate ancestry of “kinds”.

That’s only because there is no common design model to conflict with. You still haven’t managed a coherent description of “kinds” or an explanation for the nested hierarchy among them. I ask again: why do vertebrates have four Hox clusters while other metazoans all get by with only one? I could also ask why teleosts, alone among vertebrates, have seven.


That does not contradict what I said. Furthermore, that quote by itself is a bit vague. What exactly do they deem a “reproducible biochemical signature”? Well, we don’t need to guess, we just need to read further into the paper. Here below is the critical line of the paper, the crux they lean on for their claim that 80% of the genome is functional:

An integrated encyclopedia of DNA elements in the human genome | Nature
The vast majority (80.4%) of the human genome participates in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type. Much of the genome lies close to a regulatory event: 95% of the genome lies within 8 kilobases (kb) of a DNA–protein interaction (as assayed by bound ChIP-seq motifs or DNase I footprints), and 99% is within 1.7 kb of at least one of the biochemical events measured by ENCODE.

Thus, according to them, even just ONE biochemical event (RNA and/or chromatin associated in this case), detected even in just ONE cell type, is sufficient to be deemed as ‘functional’. This leaves no room for noise (e.g. spurious RNA products) in the data. Not even considered as a possibility. Every sound is deemed a signal by default. So, I am not misrepresenting the paper.

Noise also plays a role in EVERY ONE of these processes. Yet, you assume that a sequence involved in any of these processes MUST therefore be functional. Again, there is no room for any noise (spurious products/interactions) in the data. You claim I am misrepresenting ENCODE when I criticize them for this very assumption, while directly following that up by bolstering the exact same faulty assumption. What is anyone supposed to do here with such a lack of self-awareness?


It doesn’t need a designer guiding the oxygen and hemoglobin.

You don’t need a conscious observer. All you need is particles capable of interacting.


Not true. You are steadfastly ignoring nearly all of the data. You only extrapolate to bizarre misrepresentations of the data by reading only words.

See, there’s a fine example of you simply ignoring the data.
