The Argument Clinic

If function is anything you want it to be, then it’s nonsense.

Yes, it does. If their definition of function includes non-functional DNA, then it’s a nonsense definition.

Reference?

We are talking about cellular function, not an experiment.

Reducing non-specific interactions to the levels you are asking for would require a complete rework of nearly every protein and RNA involved in transcription. In the real world, natural selection pushes the cell towards good enough, and that’s what we see.

Pervasive does not mean many. If every tree has 1 or 2 incongruences, then that is pervasive, but it isn’t many.

You would have the burden of proof. You are the one making the claim.

How does HRT modify the DNA sequence of genomes?

From what you have described, HRT would nearly completely disappear in just one generation. If HRT is the transfer of regulatory proteins that are not coded for in the new genome, then no new regulatory proteins can be produced. They would be diluted out in every cellular replication, and they would soon be too dilute to do anything. The effect would disappear.

If you mean HGT, then we would expect numerous and obvious violations of a nested hierarchy, and we don’t see them between families of species.

There is no reason why these solutions would need to fit into a nested hierarchy.

The definition doesn’t mean anything.

I am still saying that function does not require a nested hierarchy.

None of which requires a designer to guide the electron.

No, they don’t. We don’t need to know anything about electron tunneling in order to understand inheritance of genes.

No, you haven’t explained it.

That has nothing to do with what is happening in biology.

Nothing you quoted has anything to do with how we got the laws we see in nature.

2 Likes

Well, I thought my comment addressed the rest of what you said as well. So I did not feel it needed to be responded to directly.

Remember, I am not trying to overthrow phylogeny. I am trying to overthrow your unsupported assumption that common ancestry is the explanation for it.

You keep making this mental error every time we discuss phylogenetics.

No, the real issue was described by @AJRoberts a while ago when she came on this forum:

"As part of my work for RTB, I occasionally venture onto science-faith and apologetics online discussion sites. One site, called Peaceful Science, seeks to bring scientists from all faith persuasions into discussions about various origins models, including RTB’s progressive (old-earth) creationism model and evolutionary mainstream models. Needless to say, we don’t interpret some scientific data the same way, especially when it concerns origins. Discussions can be challenging!

One complicating factor is that it is often difficult to understand someone’s model from their vantage point when it seems incongruent with one’s own worldview model. …How does one begin to talk coherently across these two origin models? Where do we find grounds for clear communication?"

The short answer is HRT. For instance, if bats and rats evolved separately through horizontal regulatory transfer, it is possible that they could both have four Hox clusters with the same gene complement. Horizontal regulatory transfer refers to the transfer of regulatory elements, such as enhancers, between different species or lineages, leading to the evolution of similar gene expression patterns.

In this scenario, it is possible that the ancestral genome of the common ancestor of bats and rats had a different number of Hox clusters or a different gene complement, but they acquired the regulatory elements that led to the formation of four Hox clusters with a similar gene complement. This would be an example of convergent evolution, where similar traits evolve independently in different lineages due to similar selective pressures.

Alright, let me try to be more clear here.

Origin of life and species model

Approximately 3.8 billion years ago, pi electron resonance clouds in single-chain amphiphile molecules coalesced in geometric pi-stacks, forming viroids with quantum-friendly regions for OR events within Earth’s deep-sea hydrothermal vents. [45]

Subsequently, through natural selection and OR events, groups of viroids formed into highly ordered local domains of key biomolecules of a DNA/RNA virus or molecule, which later evolved into different species of unicellular organisms. [46]

Through HRT, these unicellular organisms underwent extensive regulatory switching and rewiring in their noncoding regulatory regions to eventually form colonies of primitive multicellular organisms, with some cells specializing in specific functions. As these colonies evolved over time, this led to the divergence of transcription start sites and gene expression levels in the formation of complex multicellular clades, such as animals, fungi, brown algae, red algae, green algae, and land plants.

Then, the same primitive multicellular organisms developed into created kinds at different times and global locations through HRT.

You need to explain how, because I fail to see any discrepancies.

According to Graur’s team, the causal role definition of function is… “for a trait, Q, to have a ‘causal role’ function, G, it is necessary and sufficient that Q performs G.” [5] In other words, the causal definition ascribes function to sequences that play some observationally or experimentally determined role in genome structure and/or function.

Well, it does not. So I guess your explanation does not falsify my explanation. Instead, you still need to show how your explanation is equally or more probable in order to falsify it, which neither you nor @Nesslig20 has done yet.

This study contradicts this claim…

The article “Genome-Wide Motif Statistics are Shaped by DNA Binding Proteins over Evolutionary Time Scales” by B. Franklin Pugh and colleagues, published in Cell Reports in 2015, investigated the evolution of DNA binding motifs in transcription factor proteins and their impact on genome-wide motif statistics.

The study suggested that the evolution of DNA binding motifs in transcription factors is shaped by their ability to recognize specific DNA sequences rather than through non-specific interactions. The authors propose that non-specific binding is generally rare and non-specific interactions are not favorable for transcription factor function, which supports the idea that non-specific transcription can be harmful to the cell.

Uh wrong…

pervasive (adjective, pronounced pər-ˈvā-siv): existing in or spreading through every part of something, as in “a pervasive odor”

No, I was stating a fact that existed well before Darwin’s time. The phenomenon of stasis, or the apparent lack of change in certain lineages over long periods of time, had been noted by many naturalists throughout the 18th and 19th centuries. For example, the French naturalist Georges Cuvier, who lived from 1769 to 1832, recognized that certain groups of animals appeared to be static in the fossil record, with little change over millions of years.

Similarly, the phenomenon of sudden appearances of new species in the fossil record had also been noted by many naturalists prior to Darwin. One of the most famous examples of this is the Cambrian explosion, a period of rapid diversification of animal life that occurred around 540 million years ago.

Therefore, while Darwin’s theory of evolution by natural selection provided an explanation for stasis and sudden appearances in the fossil record, these phenomena had been observed and discussed by scientists and naturalists long before Darwin’s time.

This means that you have the burden of proof because you are making the same additional claim as Darwin: that this phenomenon is illusory.

HRT can modify the DNA sequence of genomes by introducing new regulatory elements into the recipient genome, which can then alter gene expression patterns.

When regulatory elements are transferred via HRT, they can be integrated into the recipient genome at various locations, such as within the coding regions of genes, within intergenic regions, or within other regulatory elements. The location and context of the integrated regulatory element can determine the effect on gene expression. For example, if a regulatory element is integrated within an intergenic region upstream of a gene, it may act as a new promoter and increase the expression of that gene.

HRT can also affect the evolution of regulatory networks by allowing the transfer of entire regulatory modules or networks between different organisms. For example, if a gene regulatory network that controls a specific biological process is transferred from one organism to another via HGT, it can provide the recipient organism with new functional capabilities.

Overall, HRT can modify the DNA sequence of genomes by introducing new regulatory elements and networks, which can alter gene expression patterns and affect the evolution of biological processes.

It is not functional requirements themselves that predict nested hierarchies, but the purpose or teleology behind them. The purpose is to make sure groups of created kinds fit and fill different environments around the globe.

If this is true, we would expect to see nested hierarchies emerge when similar parts and functions are adapted to fit and fill different environmental niches. This is because the process of adaptation to different niches often involves the modification and specialization of existing traits, rather than the evolution of entirely new ones.

As a result, we can often trace the evolutionary history of a group of organisms by examining the nested hierarchy of shared traits that reflect their common design. For example, the nested hierarchy of shared anatomical features and genetic sequences provides evidence for convergent evolution. The resulting similarities in structure and function can create nested hierarchies that reflect convergent evolution across multiple lineages.

Overall, the use of similar parts and functions to fit and fill different environmental niches can create nested hierarchies that reflect convergent evolution across multiple lineages, providing a powerful framework for understanding the history and diversity of life on Earth.

No, it is based on observations that you still did not address. Here it is again…

Patel argues that there are striking similarities between the genetic code and the mathematical framework used in quantum algorithms. Patel suggests that the genetic code may have evolved to take advantage of quantum coherence effects to optimize protein synthesis.

Patel points out that the genetic code consists of 64 codons (triplets of nucleotides) that encode for 20 amino acids, with some redundancy built in. This coding scheme allows for some tolerance to errors in DNA replication, while still maintaining the ability to accurately specify the correct amino acid sequence for protein synthesis. Similarly, quantum algorithms use quantum states to encode information in a way that allows for efficient computation with some tolerance to errors.
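As a quick sanity check on the numbers Patel cites, here is a minimal sketch of the standard genetic-code arithmetic (my own illustration; the figures 61 and 3 are the standard code’s sense and stop codon counts, not values from Patel’s paper):

```python
from itertools import product

BASES = "ACGU"  # the four RNA bases

# All possible triplet codons: 4**3 = 64
codons = ["".join(p) for p in product(BASES, repeat=3)]
assert len(codons) == 64

# In the standard code, 61 codons encode amino acids and 3 are stops,
# so 20 amino acids are covered with roughly 3-fold average redundancy.
sense, stops = 61, 3
assert sense + stops == len(codons)
print(f"{len(codons)} codons, mean degeneracy {sense / 20:.2f}")  # 3.05
```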

Patel proposes that the genetic code may have evolved to exploit quantum coherence effects in DNA replication and transcription, which could enhance the accuracy and efficiency of protein synthesis. Patel suggests that the genetic code may have emerged as a result of natural selection acting on quantum-mechanical properties of DNA and RNA molecules.

Now, you need to explain how it does not support the argument that the genetic code emerges from quantum mechanical properties. The only argument against mine you made was that he did not use the word “chosen”. This is nonsense, though, because he does refer to natural selection, while all you do is assume that random, unguided mutations were the other mechanism at play.

Instead, we can infer that a conscious agent chose those numbers among many possible numbers to develop organisms (i.e. self-collapsing genetic code).

What is your response?

In his book “The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics”, Roger Penrose discusses his theory of consciousness, which is rooted in quantum physics and the structure of the brain.

Penrose defines consciousness as a state of awareness that arises from the interactions between the brain’s neural processes and the collapse of quantum wave functions in the brain’s microtubules. He argues that consciousness is a non-algorithmic process that cannot be replicated by a classical computer or a Turing machine because the action of consciousness proceeds in a way that cannot be described by algorithmic processes. [8]

For instance, conscious contemplation can ascertain the truth of a statement and freely make intellectual and moral judgments. This involves distinguishing between true and false statements or what is morally “right” versus “wrong.”

According to Penrose, consciousness arises from a process called orchestrated objective reduction (Orch OR), which involves the collapse of quantum wave functions in microtubules. In this process, the brain’s neural processes “orchestrate” the quantum states in microtubules to create a coherent conscious experience.

This has implications for the observer effect experiments I referenced before. For instance, only the conscious observer has the ability to choose which aspect of nature his knowledge will probe, which is what the results of the “quantum interaction-free” experiment demonstrated. In other words, the non-algorithmic mind is the only true measurement apparatus.

For example, the observer must first specify or think of which particular wave-function he intends to measure and then, put in place a measuring device that will probe that aspect. Then, only the observer can recognize the answer and understand the results after he chooses between the many possible outcomes.

This is why consciousness is defined as the self-collapse of the wave-function. Now, let me explain how this theory of consciousness explains the fine-tuning constants…

We know through math that most of the values in the parameters will not allow life to exist if the fine-tuning values were smaller or larger. This means that we don’t need to know what values don’t allow life, just the relevant values of the constants that do produce a life-permitting universe.

For example, the cosmological constant is placed at a precise measurement of 10¹²⁰, and when scientists trace the expansion back to one second after the Planck scale of our universe, the degree of precision becomes an astounding value of 10^(10¹²³).

Yes, this is misrepresenting them because they didn’t simply identify a few examples of function in particular members of a junk DNA category and then conclude the whole class must be functional. Instead, ENCODE researchers identified, one by one, members of a sequence elements group that displayed function and they went to great efforts to ensure that they measured activity with biological meaning.

There are other researchers that acknowledge this point and seem to concur that this was a reasonable conclusion even at first glance.

As pointed out by Bernardi:

Ohno, mostly focusing on pseudo-genes, proposed that non-coding DNA was “junk DNA.” Doolittle and Sapienza and Orgel and Crick suggested the idea of “selfish DNA,” mainly involving transposons visualized as molecular parasites rather than having an adaptive function for their hosts. In contrast, the ENCODE project claimed that the majority (~80%) of the genome participated “in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type.”…At first sight, the pervasive involvement of isochores in the formation of chromatin domains and spatial compartments seems to leave little or no room for “junk” or “selfish” DNA. [emphasis added]
https://onlinelibrary.wiley.com/doi/abs/10.1002/bies.201900106

Yes, I acknowledged already that noise is a possible explanation for it. However, until you roll up your sleeves and do the experimental work to support your explanation, it can’t be considered an equal or more probable explanation than the one ENCODE provides.

It did not. The paper directly contradicted almost everything you said about it.

This is gibberish. There is no such thing as “measuring a particular wave function”. Observables are measured, wave functions cannot be. This is among the very first things anyone learns about wave functions, too, if they choose to take an introductory class in quantum mechanics. Among the very next things is the fundamental unpredictability and uncontrollability of individual measuring event outcomes: there is no “choosing between the many possible outcomes”. Why would someone qualified to speak on this topic get something so basic so embarrassingly wrong?

No, we do not. Mathematics is not in the business of making synthetic claims like that.

What do we need to know the life-permitting values for, exactly? We already know our universe permits life. Nothing in learning the values of nature’s constants stands either to fortify or to shake that conclusion. Until we come by means to tweak any of them, fine-tuning arguments remain without experimental support, no matter how well or poorly we know the fundamental constants today, or a millennium ago, or another one in the future.

Please, explain. What does “a precise measurement of 10¹²⁰” mean? How is the cosmological constant “placed at” it? What do you mean by degree of precision, and how can it “become an astounding value of 10^(10¹²³)”? What do these statements mean in practice, i.e. how would we go about testing them?

1 Like

I didn’t claim they identified a few functional members of a particular class of Junk DNA and concluded that the entire class must therefore be functional. I never said anything like that. That’s NOT my criticism… not even remotely. See next for my ACTUAL critique:

Wrong! Completely wrong. Once again… the claim that 80.4% of the human genome is ‘functional’ is based on their own definition of function, which is so broad that just ONE instance of biochemical activity (e.g. RNA transcription) in just ONE cell type is sufficient to meet their personal criteria for ‘functional’. And no, this is not misrepresenting ENCODE. The reason behind the claim that 80.4% of the genome is functional is plainly spelled out in their paper. Here is the line again:

An integrated encyclopedia of DNA elements in the human genome | Nature
The vast majority (80.4%) of the human genome participates in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type. Much of the genome lies close to a regulatory event: 95% of the genome lies within 8 kilobases (kb) of a DNA–protein interaction (as assayed by bound ChIP-seq motifs or DNase I footprints), and 99% is within 1.7 kb of at least one of the biochemical events measured by ENCODE.

So, NO! They did NOT (as you mistakenly claimed) make any effort to ensure that the activity they measured was due to function. They just assumed that any activity they detected = ‘function’. No effort was made to distinguish signal from noise in the data they collected. That’s the very issue that ENCODE was criticized for, which I am echoing here (to your deaf ears).

It’s not the ‘explanation’ that’s the issue, although ENCODE doesn’t really provide an explanation, not even a hypothesis, for the data they collected. Still, that’s not the issue. The problem is their reasoning, which relies on a tautology. They simply have their own definition of ‘functional’ wherein any biochemical activity of at least one event in at least one cell type = ‘functional’… SOUND = SIGNAL… A definition of function that doesn’t consider noise, not even as a remote possibility. According to this line of thinking, non-functional biochemical activity due to noise is just as logically impossible as a married bachelor. The reality of noise in biology (e.g. spurious transcription) alone is enough to demonstrate how inappropriately loose their definition of ‘function’ is.

Here is the FULL section for context, including the key sentences that you conveniently left out (highlighted with bold letters):

5.3. Isochores and Non-Coding DNA

The discovery of non-coding DNA (meaning here DNA that does not code for proteins) goes back to the time when the number of genes, which, up to the late 1960s, was thought to be 1 million and to correspond to the totality of the human genome, started to decrease with the discovery (thanks to hydroxyapatite chromatography) of repeated sequences. [94] It has now reached a new low of 21 306,[95] a mere ≈2% of the human genome.

Three main explanations were put forward for the existence of non-coding DNA, a problem deserving of the term “mystery,” given that it had withstood 50 years of probing. Ohno, [89] mostly focusing on pseudo-genes, proposed that non-coding DNA was “junk DNA.”[96,97] Doolittle and Sapienza[98] and Orgel and Crick [99] suggested the idea of “selfish DNA,” mainly involving transposons visualized as molecular parasites rather than having an adaptive function for their hosts. In contrast, the ENCODE project[100] claimed that the majority (≈80%) of the genome participated “in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type.” This claim, however, was rejected, mainly because of the loose definition of “functional” elements, in favor of the view that “junk DNA” or “selfish DNA” correspond to 80–95% of the human genome. [101,102]

At first sight, the pervasive involvement of isochores in the formation of chromatin domains and spatial compartments seems to leave little or no room for “junk” or “selfish” DNA. HOWEVER [capitalization emphasis added], one should now consider that coding sequences are compositionally correlated with the isochores in which they are located (the compositional constraints of the latter depending upon the need to encode/mold chromatin structures as already mentioned) and yet they are expressed. This indicates that there is no problem for transposons to be, on the one hand, compositionally correlated with the “host” isochores, and on the other, to be active. [103] Needless to say, this view also leads to an understanding of the “overlapping” transcription of long non-coding RNAs that originate from the majority of DNA sequences and that plays important roles. [104]

In the first line that you cut out, Bernardi mentions the actual criticism against ENCODE. The exact one regarding ENCODE’s definition of ‘function’ that I have repeatedly explained to you here. The fact that you must have read this line and consciously decided to snip it out of the quote is extremely telling. It indicates that you aren’t missing my point simply due to ignorance. You are deliberately trying to avoid the critique of ENCODE like the plague, even to such an extent that you will purposefully delete any lines that mention it within sections of your citations that you wish to quote.

This is similarly the case for the second part that you cut out. The last paragraph includes two integrally related parts:

  • The first part of the paragraph, which notably starts with the phrase “At first sight…”, mentions how the pervasiveness of isochores seems to leave little room for ‘junk’ or ‘selfish’ DNA.
  • The very next part, notably starting with the sentence adverb “However…”, explains why the implication mentioned in the previous sentence may actually NOT be the case.

You CUT that second part out, thereby removing the relevant context for the first sentence, in order to make the quote appear to align more with your preferred narrative. There is more to be said about isochores, however…

At this point, I am no longer interested in further conversing with you. I can put up with the frustration of having to repeat myself again and again in the face of sheer obtuseness, but I have no patience for such blatant dishonesty.

6 Likes

As should be obvious, you were wrong about that. It didn’t even address what you were supposedly responding to.

That’s not my error; it’s yours. You have no idea, apparently, what “phylogeny” means. It doesn’t refer to the nested hierarchy but to the inferred pattern of descent that creates the hierarchy. That’s exactly what you deny. Other explanations for nested hierarchy are not phylogeny.

That’s just nonsense, and it’s an excuse for the inability to defend a position that is untenable based on the evidence.

So much wrong there. First, Hox clusters are not regulatory elements; they’re sets of complete protein-coding genes. Second, your “explanation”, if we applied it to genes, could explain any distribution at all and so explains nothing. Your idea has no expectations and makes no predictions. If bats had no Hox clusters at all while rats had seventeen, it would fit your ideas equally well. Whole-genome duplications, on the other hand, are expected to leave behind evidence in addition to Hox cluster number, and we do observe that evidence. Nor do similar selective pressures produce anything like genome duplications.

You try and fail. Just regurgitating the same nonsense, word for word, is not even trying. And none of that explains what “kinds” are; it just uses the word “kind”. Nor does it explain the nested hierarchy, since any pattern whatsoever, or no pattern at all, would be consistent with your “explanation”.

Two rounds of whole-genome duplication early in the vertebrate lineage, with an additional round early in the evolution of teleosts, explain the pattern of Hox clusters. Independent evolution in hundreds of separate lineages at different times is so highly unparsimonious as to be ludicrous. A shared toolkit in metazoans and a separate one in plants is evidence for the independent origin of multicellularity in two clades but also of common ancestry within each clade. Again, this “HRT” explains anything and therefore nothing.

No, the same can’t be said with common descent. Nested hierarchy is the inevitable outcome of common descent with branching. Do you really need the reasons for that explained to you?

1 Like

@Meerkat_SK5 didn’t cut those lines out. Fuz Rana did, at RTB. [1]

@Meerkat_SK5 didn’t consciously decide to omit anything, because as usual he hasn’t bothered to read the source he cited. He is foolishly relying on Rana not to misrepresent that source, just as he has previously foolishly relied on Luskin, Dembski, Ross and others.

@Meerkat_SK5’s dishonesty isn’t in misrepresenting Bernardi, it’s in once again lying about his actual source - which he has done so often, and been caught out by misquotes and quote-mines so often, that you really should expect it by now.

Any ‘quote’ @Meerkat_SK5 posts should be assumed to be copied from some secondary source, even when he links to the primary source, and should be considered unchecked, unread and uncomprehended, and also potentially inaccurate and out-of-context. The only response needed to any of his ‘quotes’ is to note that he is probably lying about his sources again, and is unlikely to have read the original.

This is entirely @Meerkat_SK5’s own fault.


  1. Unless Fuz Rana also copied this ‘quote’ from some-one else. (No link because I’m posting from my phone again - search for the phrases either side of the ellipsis and you’ll find it) ↩︎

7 Likes

He really is a total waste of time to talk to. He never really thinks about what he says or what the concepts mean and he only responds with quotes that he doesn’t appear to have read for comprehension. It’s why I can’t be bothered arguing with him.

It’s really odd that creationists so often are only doing this strange discussion-by-assertion thing where no attempt at comprehension and argumentation is made. It’s like they think a discussion is a sort of staged debate where two sides just take turns confidently making assertions and declarations at each other, with zero consideration of each other’s arguments.

Which reminds me, they also do seem to have this deep infatuation with formal debates. It’s as if in place of an education in critical thinking, the scientific method, and philosophy, they have instead been taught all they know by grand authoritative declarations stated with confidence and fake smiles (think William Lane Craig’s plastic face), and rote memorization.

Like when I have an interest in a subject, I could go read articles on it or watch lectures and then think about it. Oftentimes by people who address the subject from different perspectives, or who disagree.

But creationists don’t do that. Instead they go to their favorite apologetics outlet and get literal propaganda material. Apologetics courses full of assertions and declarations, and opening statements presented in formal debates, as if such a thing is meant to actually teach you anything about ideas, concepts, and fundamental principles.

4 Likes

Oh, thank you. I hadn’t considered the possibility that @Meerkat_SK5 could’ve been misled by secondary sources that he blindly trusted. That’s a mistake on my part.

But indeed, there is still the issue of him actually relying on unreliable secondary sources, even though he always provides links to the primary sources to make it appear as if he read them himself, which he probably didn’t and doesn’t understand. He just parrots talking points without having any original thoughts or comprehension of the topics and concepts being discussed.

It’s less damning than the dishonesty of cutting out key lines to remove integral context and nuance from quotes, but it’s still a problem that @Meerkat_SK5 needs to avoid.

2 Likes

Not really, given that @Meerkat_SK5 is trying (often unsuccessfully) to conceal his actual sources.

It’s still deliberate deception.

3 Likes

In that case, it is clear that we are now talking past each other, because I thought they were criticized for using the well-established causal role definition of function that Dan Graur’s team defined as…

“for a trait, Q, to have a ‘causal role’ function, G, it is necessary and sufficient that Q performs G.” [5] In other words, the causal definition ascribes function to sequences that play some observationally or experimentally determined role in genome structure and/or function.

After going back to check our previous conversations, your objection seems to mainly be against the broadened definition of it, which involves indirect effects. Keep in mind, before you dropped into my conversations with @Mercer and @T_aquaticus, they were arguing that both new and old definitions were flawed and not very informative.

I was arguing with you under the same pretense. If your main or only issue is how they used the broadened definition, then I don’t think you are misrepresenting them or necessarily wrong in your assessment.

I thought they also used evolutionary conservation, epigenetic marks, and other features to help distinguish functional elements from non-functional sequences.

And therein lies the reason why we have been talking past each other. For some time now, I have been arguing that the common designer theory is a better explanation than common descent. I provided a specific hypothesis for the data ENCODE and other researchers collected over the years. My arguments and discussion points were crafted according to that singular focus. However, you have insisted on hammering away on this irrelevant point about a specific thing that ENCODE did in their 2012 paper.

This might be a moot point to bring up now that I hopefully cleared things up in our discussion. But this does nothing to refute the point I was trying to make before. From my understanding, you were apparently arguing that their definition of function and the reasoning behind it were wrong and flawed based solely on their 2012 paper. My point from my response before was that it was not unreasonable to make this conclusion based upon the known literature back then.

The first part of the Bernardi quote clearly showed how he acknowledged that it was not unreasonable at that time. This is why, as @Roy pointed out already, I decided to copy and paste the quote from a secondary source I trust, since I could not get access to the primary source without paying a lump sum of money.

For this reason, I don’t regret doing what I did because I knew that whatever Bernardi said after the first part of the quote was not going to derail or refute my point.

And I was right…

The second part of what you highlighted reinforced my point that it was reasonable or understandable at first glance, but it is now considered to be wrong as he suggested:

However, one should NOW consider that coding sequences are compositionally correlated with the isochores in which they are located (the compositional constraints of the latter depending upon the need to encode/mold chromatin structures as already mentioned) and yet they are expressed. [emphasis added]

This is exactly how inductive reasoning works. A probable conclusion today may become less probable in the future. Either way, it does not matter. They were not being unreasonable within their paper at the time.

So there was no deliberate deception or quote-mining to prove a narrative that you think I was advocating for in the first place.

Well, I tried to clear things up because I am not trying to waste anybody’s time or effort on what I am trying to do here. But, if you insist, then I guess that is your decision and I apologize to you and @Gisteron that we got off on a bad start in our first discussion together.

However, if you ever reconsider, please try to stay on topic this time so we can avoid confusion and understand each other’s points sooner. Your expert opinion is valued and appreciated here.

Again, this discussion is not supposed to be about ENCODE. It is about whether the common design theory is viable enough to be a scientific theory and a better explanation for biological phenomena.

My specific hypothesis involves a personal agent that not only chose the right fine-tuning values for life to exist but chose the right genetic code for life.

Definition of personal agent: universal self-collapsing genetic code shown by the shared DNA among all living organisms (i.e., objective reduction).

You actually did not show how this was the case yet:

Let’s put a pin in that and return to it later, once we are done with the first paper.

This may seem minor to you, but it was a very important part of my entire point. With that said, I went back and referenced this experiment instead in order to avoid confusion.

I have already explained how this is no longer the case. The universal wave-function represents the totality of existence and is regarded as the “basic physical entity” or “the fundamental entity, obeying at all times a deterministic wave equation.” (Wikipedia) We have experimental confirmation that the wave-function is real.

I advise you to read all the articles below so you can accept the premise that digital information transcends classical space-time and the wave-function is part of reality. Some articles are studies and others are reviews that are not peer-reviewed but are still informative and helpful.

On the reality of the quantum state | Nature Physics
Phys. Rev. Lett. 113, 020409 (2014) - How ψ-Epistemic Models Fail at Explaining the Indistinguishability of Quantum States (aps.org)
Are the Quantum World and The Real World the Same Thing? | NOVA | PBS
Measuring the reality of the wavefunction - Mapping Ignorance
Measurements on the reality of the wavefunction | Nature Physics

On the Reality of the Wavefunction | SpringerLink

That is not true. We can and have tested it. For instance, using the Planck scale Wilkinson Microwave Anisotropy Probe (WMAP), researchers have demonstrated that the fine-structure constant in physics has remained fixed over the universe’s history. [14] For the first time, a team of five physicists led by Nathan Leefer has confirmed the constancy of the fine-structure constant to the entire geographical extent of the universe.

This limits constraints on models of dark energy that invoke rolling scalar fields and curtails the parameter space of supersymmetric or string theory physics models. [15]

Although the Planck satellite results do not yet rule out all these models of dark energy that show change over time, there is no evidence that the cosmological constant varied, and we now have evidence suggesting it was probably constant throughout time and space.

Hypothetically, this indicates that if our universe’s expansion rate had different values with larger amounts of dark energy, the expansion would most likely have blown the cosmic material apart before the planets and stars where life of any kind might evolve could form. If our universe’s expansion rate had different values with smaller amounts of dark energy, the universe created during the expansion would most likely have collapsed back into a singularity before it ever reached its present size.

[14] Constraints on Spatial Variations in the Fine-Structure Constant from Planck - IOPscience

[15] Thompson, R.I., Bechtold, J., Black, J.H., Eisenstein, D., Fan, X., Kennicutt, R.C., Martins, C., Prochaska, J.X. and Shirley, Y.L., 2009. An observational determination of the proton to electron mass ratio in the early universe. The Astrophysical Journal, 703(2), p. 1648.

https://iopscience.iop.org/article/10.1088/0004-637X/703/2/1648

According to my theory, God must exist in all possible worlds or universes to create and sustain them via self-collapse (i.e. necessary). In other words, if God did not exist, we would not even be able to ask the question “why does God exist?” in the first place.

Predictions

If this is false, we should see signs that the ratio of masses for protons and electrons and the cosmological constant were stronger or weaker in the past.

If this is true, we should find out that they are dependent on each other or directly related.

You can infer patterns of design as well, which would create nested hierarchies just like common descent.

Whether you like it or not, there is another viable hypothesis for the origin of Hox clusters that is based on the idea of genetic toolkit expansion. This hypothesis proposes that the evolution of Hox clusters was driven by the acquisition of new genes and regulatory elements that enabled more complex developmental programs. As such, the expansion of the genetic toolkit was likely an important factor in the evolution of both cell differentiation and Hox clusters.

Stuart Hameroff offered a predictive model for it as well:

A critical degree of microtubule activity enabled consciousness during evolution

Fossils will show organisms from the early Cambrian (540 million years ago) had sufficient microtubule capacity for OR by τ ≈ ℏ/E_G of less than a minute, perhaps resulting in rudimentary Orch OR, consciousness and the ‘Cambrian evolutionary explosion’. It is clearly hard to know an answer to this one, particularly because the level of consciousness in extinct creatures would be almost impossible to determine. However, present day organisms looking remarkably like early Cambrian creatures (actinosphaerum, nematodes) are known to have over 10⁹ tubulins [56].
Consciousness in the universe: A review of the ‘Orch OR’ theory - ScienceDirect
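For what the quoted relation amounts to numerically, here is a minimal sketch (the 60-second figure simply mirrors the ‘less than a minute’ in the quote; it is my illustration, not a value from the paper):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def orch_or_time(e_gravity: float) -> float:
    """Orch OR collapse time tau = hbar / E_G, where E_G is the
    gravitational self-energy of the superposition (in joules)."""
    return HBAR / e_gravity

# Inverting the relation: a collapse time under one minute requires a
# gravitational self-energy of at least hbar / 60 s ~ 1.8e-36 J.
e_g = HBAR / 60
print(f"E_G = {e_g:.2e} J gives tau = {orch_or_time(e_g):.0f} s")
```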

Not true. Common design suggests that if we replay the evolutionary history of life, it would lead to identical or nearly identical outcomes that would naturally produce nested hierarchies. The same cannot be said for common descent because…

A study by biologists demonstrated, at the molecular level, that evolution is both unpredictable and irreversible. The study focused exclusively on the type of evolution known as purifying selection, which favors mutations with no or only a small effect in a fixed environment. This is in contrast to adaptation, in which mutations are selected if they increase an organism’s fitness in a new environment.

No, you are wrong. I am interested in a discussion where we follow the evidence and reasoning where it leads to find out the truth. I am NOT interested in debates where we try to prove to others who is right.

“It” being what exactly?

Okay, so, for one, neither of your predictions was rendered before experimental verification, so I’ll say calling them “predictions” is a bit of a stretch. That’s charitably assuming (don’t get used to that, by the way) that by “stronger” and “weaker” with respect to a mass ratio you mean greater or smaller. But it’s worse, of course, again, because your predictions do not seem to follow from your… “theory”. Perhaps you can help me fill in the blank:

Premise 1: God must exist in all possible worlds or universes to create and sustain them via self-collapse (i.e. necessary).

[ enter further premises or inferences here, clarify utilized inference rules ]

Conclusion: Therefore, we find that the ratio of proton and electron mass and the cosmological constant either depend on one another or are directly related.

Premise 1: It is not the case that God must exist in all possible worlds or universes to create and sustain them via self-collapse (i.e. necessary).

[ enter further premises or inferences here, clarify utilized inference rules ]

Conclusion: Therefore we find that the ratio of proton and electron mass and the cosmological constant were greater or smaller in the past.

Additionally, in my opinion, this reply fails to address the actual passage you marked it as a reply to. In case you accidentally elected to not read the actual questions and waste your energy on composing a reply unrelated to them, I here present them to you anew:

Please, explain. What does “a precise measurement of 10¹²⁰” mean? How is the cosmological constant “placed at” it? What do you mean by degree of precision, and how can it “become an astounding value of 10^(10¹²³)”? What do these statements mean in practice, i.e. how would we go about testing them?

1 Like

And then as Meerkat did above, they claim to know that the same evidence is being interpreted differently.

It’s a colossal and deliberate lie, because one would have to be intimately familiar with the evidence BEFORE having any opinion on how anyone is interpreting it.

Keep in mind that we weren’t. We were pointing out that the ENCODE definition was so broad as to be worthless.

Evidence? Why do you follow words instead of evidence, then? If you were following evidence, you would cite evidence itself, not paste blocks of text that you obviously haven’t read.

2 Likes

That’s both irrelevant and wrong. What you quoted was regarding your misunderstanding of the word “phylogeny”, which you ignored. And of course you can’t infer patterns of design, because design could result in any pattern or no pattern, depending on the whims of the designer. Your design hypothesis has no explanation for nested hierarchy.

The reason for the expansion of Hox clusters is irrelevant to the pattern, which you have not attempted to explain and seem not to understand at all. Note also that your claim here would implicitly be that teleosts are more complex than other vertebrates. I doubt you actually intend that. As usual, the paper you cite (but have certainly not read) has nothing to do with the subject of Hox genes.

Common design supports nothing of the sort, and you have given no reason why it should.

Another irrelevant claim, which shows that you indeed have no idea why nested hierarchy results from branching descent. For nested hierarchy to result, it doesn’t matter why a mutation happens or what its effect is, only that it happens within a particular population and is inherited by descendant populations.
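A minimal toy simulation (my own sketch, purely illustrative) makes the point: let each branching lineage acquire one new heritable marker, and the marker-defined groups of descendants come out strictly nested no matter what the mutations do.

```python
def descend(inherited, depth, label=""):
    """Branch a lineage recursively; each daughter lineage gains one
    new heritable marker (its label) on top of what it inherited."""
    genome = inherited | {label} if label else set(inherited)
    if depth == 0:
        return {label: genome}
    tips = {}
    for i in range(2):  # two daughter lineages per split
        tips.update(descend(genome, depth - 1, label + str(i)))
    return tips

tips = descend(set(), depth=3)

# Group the tips by shared marker, then check that any two groups are
# either disjoint or nested -- a nested hierarchy, with no regard to
# why each mutation happened or what it does.
groups = {}
for tip, genome in tips.items():
    for marker in genome:
        groups.setdefault(marker, set()).add(tip)

for a in groups.values():
    for b in groups.values():
        assert not (a & b) or a <= b or b <= a
print(f"{len(groups)} marker-defined groups, all nested")
```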

Unfortunately, you are incapable of following a discussion or of understanding evidence and reasoning. Thus, for you, all this can lead nowhere.

1 Like

Sigh… I just can’t help myself, can I?

Doesn’t matter what you think the discussion is “supposed” to be. You referenced ENCODE’s claim that the majority of the human genome is functional. Others and I have pointed out the problems regarding ENCODE’s reasoning for that claim. You don’t get to dictate which one of your claims is open to criticism in a discussion. If you don’t want a particular claim of yours to be scrutinized, then don’t assert it!

Not just that. The definition is SO broad that every instance of ‘biochemical activity’ they detected, such as RNA transcription (even if it occurred only once in just one cell type), was deemed ‘functional’ (every instance of sound that was detected = a signal). Again, this leaves no room for non-functional biochemical activity (i.e. noise, e.g. spurious transcription) under this thinking. Since there actually is such a thing as noise, especially in biological processes, it renders this definition useless.

They looked at those things as well in the study. HOWEVER, for the particular claim that 80% of the genome is functional, they relied on the very loose, broad definition of function, which doesn’t rely on, for example, evolutionary conservation (only ~8% of the human genome is evolutionarily conserved).

I almost considered continuing the conversation, but then, in response to me pointing out that certain lines from the quote were deliberately cut out, you said this:

I mean… what the heck, man? I actually felt guilty after @Roy pointed out that you didn’t edit the quote yourself. Instead, someone else did the dirty deed and you trusted them. I felt sorry that I blamed you for this, since it wasn’t your fault that the quote was dishonestly edited this way. But then… what do you do? You think something as dishonest as editing a quote to make it better fit a narrative is trivial. “It might be a moot point”. Don’t you think there is any problem when someone that you trust is cutting key lines to remove proper context from quotes?

It doesn’t matter whether you think the FULL quote with its context refutes the point you were trying to make. Whatever point you are trying to make is immaterial REGARDING the dishonesty of the particular person (that you trusted) who deliberately cut out key lines from the quote that didn’t suit their narrative. The quote mentions that the ENCODE project claimed that the majority (~80%) of the genome participated “in at least one biochemical RNA- and/or chromatin-associated event in at least one cell type”… but then they cut out the following sentence that mentions the fact that this claim has been disputed and for what reason, almost like they don’t want anyone to know that this claim by ENCODE was criticized. Then later, the edited quote finishes with the line that says “this seems to leave no room for ‘junk’ or ‘selfish’ DNA”… but the following sentences were cut out, which mention how this first impression is, after consideration, not necessarily the case, almost like they want everyone to believe that the first impression still holds true. Again, removing integral context to make it better fit a narrative.

Regardless of how many times you would like to assert that you were not misled by this and that the point you were making with this quote isn’t refuted… nobody in their right mind can argue against the fact that cutting out these particular lines is misleading and therefore extremely dishonest. If you can’t see the problem with this, then you have a serious issue.

4 Likes

This does nothing to address my objection. We no more “measure the totality of existence” than we do the wave function. That string of words doesn’t mean anything. We don’t even have to take this by quantum jargon either. Nobody can “measure a pendulum” either. We can measure its oscillation frequency and amplitude, positions, momenta, mass, length, damping strength, any number of observables. But “the pendulum” is not an observable. We can’t “measure a gas” either. We can measure its spectrum, identify components in so doing, we can measure its temperature, density, mass, pressure, volume. Again, we measure observables, not states. The same is the case in quantum mechanics. One can of course characterize the entire state knowing the values of all observables said to describe it, but “measuring the state” is, and remains, gibberish.

Meanwhile, there is still no choice one can make regarding the value an observable takes in any individual measuring event. For that matter, even the entire distribution is chosen, if at all, upon the system’s preparation, not in measurement events.

But, as always, it gets worse with the things you brought up despite them having nothing to do with the criticism raised: If there is, as you insist, in any meaningful sense, a coherently definable universal wave function that encompasses all of reality, then terms like “choice” cease to have meaning altogether. Unlike Newton’s mechanics, quantum mechanics actually is entirely deterministic. And with a universal wave function it becomes outright superdeterministic, even. Let’s assume that there exists a Hamilton operator to describe all of reality. If a solution to Schrödinger’s equation with that operator exists - and I’ll say, judging by the fact that there would seem to be a reality to begin with, all indication is that indeed it would - then that solution is unique. There is no “free will” in this model, no decisions any souls or spirits “have any say over” (whatever that means even absent determinism). In that clockwork world, no being is a choice-maker. God is in no meaningful sense a “creator” nor “sustainer”, as you went on to call your “theory”, but either an object as subject to the laws of quantum mechanics as the rest of us, or just another name for the entire thing, rendering it a baggage-laden label with no accurate descriptive properties absent from terms like “the cosmos” or “the universal wave function” you love so much.

Now, the intellectual undesirability of such implications is of course no reflection on the truth of the underlying claims. It should give someone with commitments to certain conclusions pause before making them, of course, but that would require a minimum amount of comprehension of the subject matter, which I’m slowly learning can be a rather tall order with you, at times. Be that as it may, I am not particularly concerned with the “truth” of such matters. Truth is something for philosophers to debate over. To me, a model is justified by its experimental predictive prowess and efficiency, and superdeterminism by its very nature cannot yield experimentally testable predictions. It is therefore, true or not, a model of no scientific interest.

Oh, and mathematics is still not in the business of making synthetic claims, whether we accept a universal wave function or not. There exists no theorem of mathematics stating life-permitting “fine-tuning values” or their permissible variations.

With all due respect, I’m not wasting my time parsing sources you in all likelihood never read yourself, all in the hopes of finding support for arguments that have nothing to do with the points you dumped them as a response to nor even the overall discussion you dumped them into.

1 Like

Not true. For instance, phylogenetics relies on shared features to draw or reconstruct phylogenetic trees. These features can be morphological, molecular, or behavioral traits that are shared among different organisms, but the most commonly used are morphological and molecular traits.

From this, there is no difference between inferring and predicting common ancestry and common design from these phylogenetic relationships.

For instance, observations show that viruses were not only the probable precursors of the first cells but also helped shape and build the genomes of all species through HRT, which also produces nested hierarchies I might add:

Four components shape the functional architecture of bacterial regulatory networks: 1. global transcription factors, which are responsible for responding to general signals and for module coordination; 2. strict, globally regulated genes, which are responsible for encoding products important for the basal machinery of the cell and are only governed by global transcription factors; 3. modular genes, which are modules devoted to particular cell functions; and 4. intermodular genes, which are responsible for integrating, at the promoter level, disparate physiological responses coming from different modules to achieve an integrated response. All these functional components form a nonpyramidal, matryoshka-like hierarchy exhibiting feedback.
Regulatory Networks, Bacteria | Learn Science at Scitable (nature.com)

It is actually not irrelevant to the pattern. For instance…

Regulatory networks are complex and hierarchical in nature, with individual transcription factors regulating the expression of multiple downstream genes, which in turn can regulate the expression of other genes.

When a new regulatory element is acquired through HGT, it is integrated into the existing regulatory network of the recipient organism, and its downstream targets become part of the regulatory network as well. Over time, the regulatory network can become more complex and hierarchical as additional regulatory elements are acquired.

As a result, even in cases of HRT, the regulatory networks of different organisms can still exhibit a nested hierarchy, with more closely related organisms sharing more similarities in their regulatory networks than distantly related organisms. This hierarchical structure arises because regulatory networks are constrained by the underlying biology of the organism, and changes to the regulatory network must be integrated into the existing network.

In summary, while HGT can complicate the formation of a clear nested hierarchy based solely on gene sequences, the hierarchical nature of regulatory networks can still result in a nested hierarchy of evolutionary relationships, even in cases of HGT.

Bacteria that acquire the new regulatory elements would be able to use the new nutrient source, while those that do not acquire the elements would not. Over time, this could lead to the emergence of subpopulations within the larger population that are specialized for different nutrient sources, creating a nested hierarchy of metabolic capabilities.

Oh yes it does…

As I mentioned before, the genetic code is nearly the same for all living organisms, and variations in the code are responsible for differences in traits between organisms. As organisms evolve and diversify, the genetic code is inherited and modified through a process of HRT, resulting in the formation of new species and groups of organisms.

This process of HRT creates a nested hierarchy of organisms based on their shared design and the extent of their genetic similarities. Organisms that share more recent common design will have more similarities in their genetic code and will be grouped together in smaller, more closely related categories, while organisms that diverged from a common design further back in time will have more differences in their genetic code and will be grouped together in larger, more distantly related categories.

Why were these numbers chosen rather than some other numbers?

Patel showed how quantum search algorithms explain why these numbers were chosen. [22] To summarize, if the search processes involved in assembling DNA and proteins are to be as efficient as possible, the number of bases should be four, and the number of amino acids should be 20.
An experiment has revealed that this quantum search algorithm is itself a fundamental property of nature.
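For reference, the arithmetic behind those two numbers is the standard Grover query relation (a minimal sketch with names of my own choosing, not code from Patel’s paper):

```python
import math

def grover_searchable(q: int) -> float:
    """Largest database size N that q Grover oracle queries can search
    with certainty, from (2q + 1) * arcsin(1 / sqrt(N)) = pi / 2."""
    return 1.0 / math.sin(math.pi / (2 * (2 * q + 1))) ** 2

for q in (1, 2, 3):
    print(f"q = {q}: N = {grover_searchable(q):.2f}")
# q = 1: N = 4.00   (four bases)
# q = 2: N = 10.47
# q = 3: N = 20.20  (~20 amino acids)
```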

In other words, this single optimal solution must be employed to address a common set of problems faced by organisms possessing different characteristics and living in different habitats.

This is why a nested pattern is a necessary consequence of the process of evolution and common design. It provides a clear and systematic way of classifying and understanding the diversity of life on Earth.

That was not the reason why I referenced it. I was just showing you where I got the prediction. The Orch-OR theory potentially explains the origin of cell differentiation, which is related to the origin of Hox genes:

Graham Bell (1982) says “Sex is the queen of problems in evolutionary biology.” Richard Dawkins’ (1989) trademark “selfish gene” view finds sex “counter-productive, throwing away half one’s genes with every reproduction.” Ironically, Dawkins lists the origin of sex as one of three remaining mysteries in evolution, along with consciousness, and differentiation, the mechanism by which “genes influence bodies.” All three mysteries can be explained through microtubules. https://galileocommission.org/wp-content/uploads/Library(all)/Hameroff-2017-The-Quantum-Origin-of-the-Life-How-the-Brain-Evolved-to-Feel-Good.pdf

Using data from the Wilkinson Microwave Anisotropy Probe (WMAP), researchers have demonstrated that the fine-structure constant of physics has remained fixed over the universe’s history. [14] For the first time, a team of five physicists led by Nathan Leefer has confirmed the constancy of the fine-structure constant across the entire spatial extent of the universe.
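For concreteness, the quantity being tested is the dimensionless fine-structure constant, and "remained fixed" means its fractional variation is consistent with zero:

$$\alpha=\frac{e^{2}}{4\pi\varepsilon_{0}\hbar c}\approx\frac{1}{137.036}, \qquad \frac{\Delta\alpha}{\alpha}\equiv\frac{\alpha(z)-\alpha_{0}}{\alpha_{0}}\approx 0,$$

where $\alpha(z)$ is the value inferred at redshift $z$ (for example, from quasar absorption spectra) and $\alpha_{0}$ is today's laboratory value.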

Although the Planck satellite results do not yet rule out every model of dark energy that changes over time, there is no evidence that the cosmological constant has varied, and we now have evidence suggesting it was probably constant throughout time and space.

I never suggested that these predictions were confirmed. But they were tested, and you had asked me only to show how the fine-tuning constants could be tested.

I am going to bring some context from this article first…

"Popper noticed that two types of statements are of particular value to scientists.

The first are statements of observations, such as “this is a white swan”. Logicians call these statements singular existential statements, since they assert the existence of some particular thing. They can be parsed in the form: There is an x that is a swan, and x is white.

The second are statements that categorize all instances of something, such as “all swans are white”. Logicians call these statements universal."
https://www.wikidoc.org/index.php/Falsifiability

Universal common designer theory is testable because universal statements are considered falsifiable, and thus scientific, as the article suggests:

"It is impractical to observe all the swans in the world to verify that they are all white.

Even so, the statement all swans are white is testable by being falsifiable. For, if in testing many swans, the researcher finds a single black swan, then the statement all swans are white would be falsified by the counterexample of the single black swan."
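In standard first-order notation, the two statement types, and why a single counterexample refutes the universal one, look like this:

$$\exists x\,(\mathrm{Swan}(x)\land \mathrm{White}(x)) \qquad \text{(singular existential)}$$

$$\forall x\,(\mathrm{Swan}(x)\rightarrow \mathrm{White}(x)) \qquad \text{(universal)}$$

$$\exists x\,(\mathrm{Swan}(x)\land\lnot \mathrm{White}(x)) \;\vdash\; \lnot\,\forall x\,(\mathrm{Swan}(x)\rightarrow \mathrm{White}(x))$$

One observed black swan makes the existential premise in the third line true, and that premise entails the negation of the universal claim.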

If the cosmological constant had varied at all in the past, it would point to a world in which a possible configuration of life could have evolved without the constant being finely tuned.

Watch 8:30-13:00 and 18:00-19:30 for more on how this is a validated method for testing this theory:
God is not a Good Theory (Sean Carroll) - YouTube

This brief video should answer these questions for you:

This article explains how:

If it is true string theory cannot accommodate stable dark energy, that may be a reason to doubt string theory. But to Vafa it is a reason to doubt dark energy—that is, dark energy in its most popular form, called a cosmological constant. The idea originated in 1917 with Einstein and was revived in 1998 when astronomers discovered that not only is spacetime expanding—the rate of that expansion is picking up. The cosmological constant would be a form of energy in the vacuum of space that never changes and counteracts the inward pull of gravity. But it is not the only possible explanation for the accelerating universe. An alternative is “quintessence,” a field pervading spacetime that can evolve. “Regardless of whether one can realize a stable dark energy in string theory or not, it turns out that the idea of having dark energy changing over time is actually more natural in string theory,” Vafa says. “If this is the case, then one can measure this sliding of dark energy by astrophysical observations currently taking place.”

So far all astrophysical evidence supports the cosmological constant idea, but there is some wiggle room in the measurements. Upcoming experiments such as Europe’s Euclid space telescope, NASA’s Wide-Field Infrared Survey Telescope (WFIRST) and the Simons Observatory being built in Chile’s desert will look for signs dark energy was stronger or weaker in the past than the present. “The interesting thing is that we’re already at a sensitivity level to begin to put pressure on [the cosmological constant theory],” Steinhardt says. “We don’t have to wait for new technology to be in the game. We’re in the game now.” And even skeptics of Vafa’s proposal support the idea of considering alternatives to the cosmological constant. “I actually agree that [a changing dark energy field] is a simplifying method for constructing accelerated expansion,” Silverstein says. “But I don’t think there’s any justification for making observational predictions about the dark energy at this point.”
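The observational distinction described in that quote is usually framed through the dark-energy equation-of-state parameter $w$: a cosmological constant has $w$ pinned at $-1$ forever, while an evolving field like quintessence lets it drift. A common parameterization (the CPL form, in units with $c=1$) is

$$w\equiv\frac{p}{\rho}, \qquad w(a)=w_{0}+w_{a}(1-a),$$

where $a$ is the cosmic scale factor; a cosmological constant corresponds to $(w_{0},w_{a})=(-1,0)$, and the surveys named above aim to measure any departure from that point.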

Yes, this is why I am using the fine-tuning constants as well to ultimately support my argument.

Superdeterminism and free will are actually testable. I encourage you to read these brief articles on the subject:

https://www.newscientist.com/article/2131874-a-classic-quantum-test-could-reveal-the-limits-of-the-human-mind/

Penrose interpretation - Wikipedia

No, you guys complained about more than just that. Do you need me to refresh your memory?

But that’s just it. You did not scrutinize MY claims in the first place. Instead, you were more focused on certain details of what ENCODE did or claimed in a 2012 paper that ended up being irrelevant to what I was arguing. I am actually trying to encourage you to refute or challenge MY claims and hypothesis.

And this point is no longer relevant, because ENCODE no longer defines function in that way; the consortium has indeed changed its stance on equating biochemical activity with function in response to this criticism.

The revised definition emphasizes that a functional element must have a reproducible and specific effect on a biological process or phenotype, and that biochemical activity alone is not sufficient to define functionality. The revised definition also acknowledges that the distinction between functional and non-functional elements is not always clear-cut and may depend on context and interpretation.

Overall, the ENCODE Consortium’s revised definition of functional elements represents a more nuanced and cautious approach to defining the functional elements of the genome.

Not quite. I trust the RTB organization, but that does not mean I think they are perfect.

I don’t know whether that was their intention or not. Like you pointed out, context is everything. I just know that it was not my intention at all.

Again, you are asking me to address something that someone else did and that I have nothing to do with. It is both puzzling and disappointing that you would make this the point of contention between us.

You have to follow that with an argument for why it isn’t true. Instead you just regurgitate, verbatim I believe, past irrelevancies. None of it had anything to do with an expectation of nested hierarchy resulting from design.

Again you regurgitate irrelevant verbiage.

You are describing, to the extent you are describing anything, similarity due to common descent, though you don’t seem to know that.

Nothing you have said supports this conclusion.

Are you using “genetic code” in its standard meaning here? If so, the first part of your sentence is correct, but the second is absurdly wrong. Later on you clearly misuse the term.

Why?

What numbers?

Nothing preceding that has anything to do with a nested pattern.

It doesn’t and it isn’t. Stop with the nonsense. And again your quote has nothing to do with the question. All in all, you have made a non-responsive non-response to the questions I raised. One may doubt you even know what Hox clusters are.

1 Like

F-. Return next month for a resit.

1 Like