Junk DNA, High R, Pinnipeds, and the Multiverse

Tell me you know nothing about code without saying you know nothing about code.
:rofl: :point_right:

Yeah, and look exactly as if all of it was copied with errors from one common initial “design”. A universal common ancestor, if you will. Well, I for one welcome this more logical discussion. Let’s hear what it is that’s left for your side of it after a concession like that. Go on.

It’s pretty much a settled issue with zero evidence-based objections raised against it in this thread so far.

What’s there to say about it, then? A distant ancestor may or may not have had a lower fraction of junk in their DNA than we do. There is no reason to suppose they would. All things considered, DNA isn’t particularly expensive to produce, and junk code that doesn’t much affect the chemical environment it finds itself in, piggy-backing on the synthesizer molecules in said environment, is expected to happen even well before there are any cells to provide such machinery, or, indeed, before genes would be encoded specifically in DNA as opposed to some even cheaper prototypal self-replicating molecule that would have preceded it. So by the time something vaguely resembling what we call “life” came onto the scene, I would suspect a sizeable fraction, if not a majority, of that being’s genes to be junk. On the other hand, between that point and, say, LUCA, maybe some of the junk mutated and became functional. That is less likely than a mutated duplicate acquiring a novel function, but not technically impossible, I suppose.

From my end all of this is speculation, as I’m sure it would be had you been saying it, though. What, at any rate, is your point? Let’s say, for the sake of argument, that at one point there may have existed a creature that had a completely functional genome (whatever that means), and that all the junk in the genome of pretty much everything that lives today is a consequence of mutations. It still exhibits a nested hierarchy, as expected from imperfect copying and re-copying. The junk parts do that, the non-junk parts do that, the in-between parts do that, and all of them map out the same tree to within fairly tight margins of error. What about your actual objections to evolution is in any way whatsoever different between a model where the universal common ancestor was “designed” by unobservable magic and a model where it was shaped by understood chemical processes?

2 Likes

What code are you talking about? C, C++, Objective-C, Rust, Python? :slight_smile:

How much “junk” is in this Rust program?

fn main() {
    println!("Hello, World!");
}

My point is that the inference of UCD is speculation, and as we speak we cannot reconcile the genetic differences between species based on a reproductive relationship.

There is more than one possible cause for the nested pattern if you are not limited to methodological naturalism, and the nested pattern is only one piece of evidence.

The argument for 90% junk becomes very difficult when you remove the inference of UCD.

That’s a lie.

That is also a lie.

Yes, if we allow for one-way interactions, something we would have to throw all of physics completely out the window for, then it becomes possible that God magic’d the nested pattern, deceiving us, intentionally or by mistake, into thinking that it arose naturally in a demonstrably randomly changing sequence, when we can mathematically prove, confirm with simulations, and confirm experimentally that this is exactly what happens naturally to randomly changing sequences. If we throw out everything we know about how nature works and pretend that all of the technology we developed with that knowledge works by sheer coincidence, then it becomes logically conceivable that godly mischief is afoot. If that is the way you want to defend your position, I must admit that I have no means of intellectually challenging it (one might say it is challenged enough as it is), and am left with no choice but to concede. Call it a win, if you must, but I have no case I either would or could carry beyond this point.

The argument for 90% junk has nothing to do with common descent.

The fact that many of the sequences in question are physically incapable of producing proteins even when translated is an unambiguous indication of them not being functional. No assumption about origins is necessary to experimentally observe this.

The fact that many of the sequences in question can be removed entirely out of a gamete without affecting the resulting organism’s vital or reproductive functions is an unambiguous indication of them not being functional. No assumption about origins is necessary to experimentally observe this.

You consistently ignore the actual reasons many people have given you for how and why we know not only what fraction of the genome, but specifically which regions of it, are junk. And you just lie about it. Repeatedly. It’s revolting. Shame on you.

4 Likes

How are you defining junk with respect to written programming code?

I can think of a lot of things in programming that could be considered ‘junk’, but it all comes down to how one defines it.

1 Like

Well, that isn’t true. 10% of your genome is conserved, but only 2% is protein-coding. That is, 4/5 of the functional sequence is non-coding.

2 Likes

Alright, fair enough. I was thinking of sequences with endless repetitions of very short base-pair patterns, which, even if translated, would produce a sequence of amino acids that couldn’t fold into anything functional or even remain in one piece. Admittedly, I have never studied in much depth exactly what ways there are for DNA to be interpreted by its environment. I figured something that is junk is either something that is somehow not read, or something the reading of which would not render a chemical change that persists over relevant timescales. Formation of proteins is evidently just one specific subclass of functions more generally. Point taken.

Let me translate my Bayesian stuff into plain English.
P(E|A) > P(E|B) means that the probability of E under the design hypothesis is greater than the probability of E under naturalism, with E being the idea that most of the genome has a purpose.

P(A) > P(B) means that the probability that the design hypothesis is true is greater than the probability that the naturalistic hypothesis is true.

P(E) > P(non-E) means that the probability that most of the genome has a purpose is greater than the probability that most of the genome has no purpose.

With the above in mind, I think you can now better grasp what I said below:

Well, the problem here is how “non-conserved” is defined. If it means non-constrained, then, sure, non-conserved or less conserved sequences cannot have as much FI as strongly conserved ones. But if, for example, you allow new taxonomically restricted sequences to be part of the non-conserved sequences, then it is the case that they may have as much FI as strongly conserved ones.

It happens that STRs, which are small DNA repeats that make up about 5% of the genome, seem to have important functions.
https://www.science.org/doi/10.1126/science.add1250

The problem with this argument, again, is that probabilities are, in general, a measure, not a predicate. We colloquially and liberally employ the language of “probability that proposition X is true”, but in actuality, P(X) denotes the probability that sampling a random variable would yield a value that is an element of X. When it comes to propositions about the world we live in, this is akin to the probability that, if the world we find ourselves in were randomly sampled from the set of all possible worlds, it would happen to be one of the set X of worlds in which the proposition at hand holds.

And we can do all sorts of maths with this. In fact, I agree completely with your inference that if it is the case that P(A) ≥ P(B) and P(E|A) ≥ P(E|B), then it follows necessarily that P(E) ≥ P(E^c) (I choose weak inequalities just to be safe; your point still stands, either bar some null-set exceptions, or even completely if such caution is unnecessary, and I just could not be bothered to do the math again in this instance). But all this tells us is whether or not a randomly picked world should more likely have majority junk DNA. It does not tell us anything about what is actually the case in specifically our world, and to reach this non-conclusion we, on top of that, allowed ourselves to presuppose that a randomly picked world is for some reason more likely to be governed by design than by evolution.

In actuality, it is at the outset not given that A, and not given that B. These are not direct observables, and we know nothing about their respective probabilities. What we can directly observe is whether or not our world is an element of the set E of worlds where most of the genome has purpose. If it is, then we can model our world as one randomly sampled therefrom and ask ourselves how likely it is that it was sampled out of the overlap of E with the set A of worlds governed by design, and whether that is more likely than that it happened to be sampled out of the overlap of E with the set B of worlds governed by evolution. Likewise, if it is not the case that our world is an element of E, then it is sampled from the set E^c of all possible worlds except those in E, and we can ask ourselves how likely it is that our world is within the overlap of that set with A and B, respectively.

As a matter of fact, our world is an element of E^c, so considering what would be the case if it were instead an element of E is at best a fun maths exercise, but ultimately idle speculation with no scientific implications. And given this fact, together with your assumption that P(E|A) ≥ P(E|B), it is either the case that our world is likely one of those governed by evolution rather than by design, or that a random sampling of all possible worlds would more likely yield one governed by evolution than one governed by design.

All of this is to say, putting bounds on P(E) does not serve to advance the argument. Whether our world is within or outside of E is not a matter of what our intuitions tell us about the probabilities that it is within A or B, or, for that matter, the overlap of E with those respective sets, but a brute fact about our world we can freely investigate independently of what our gut intuitions imply we should expect in advance of the investigation. We can know how it is based on evidence instead.
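
To spell the comparison out (treating A and B as mutually exclusive and exhaustive alternatives, an assumption made here purely for illustration), Bayes’ theorem gives

$$\frac{P(A \mid E^c)}{P(B \mid E^c)} = \frac{P(E^c \mid A)}{P(E^c \mid B)} \cdot \frac{P(A)}{P(B)},$$

and since the premise P(E|A) ≥ P(E|B) entails P(E^c|A) ≤ P(E^c|B), the observation that our world lies in E^c can only shift the odds away from design and towards evolution, relative to whatever prior odds one started with.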

1 Like

Essentially you are arguing that your commitment to ID overrides the evidence for lack of function.

That only shows that you have a very strong commitment to ID.

4 Likes

Some =/= therefore all.

Causes a disease state under certain circumstances =/= functional.

This mistake underlies literally all reasoning from studies such as this to the conclusion that all or most of the genome is functional. It’s the hasty generalization fallacy all the way down.

Edit: Also, the cumulative effect of many weak binding spots resulting in a larger total expression level is physically expected even from nonfunctional DNA. As long as you have enough of it.

Edit2: They don’t show any of them are actually functional. They merely show that they have some sort of measurable effect in that they recruit TFs and expression results. They then propose how this could in principle contribute to functional expression.

This is the same as the problem with the whole ENCODE fiasco.

2 Likes

Nor does Giltil, or he’d know that
P(A) > P(B) and P(E|A) > P(E|B) does not mean that P(E) > 0.5.
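
To see why not, take a quick counterexample (numbers picked purely for illustration, and assuming A and B exhaust the possibilities): let P(A) = 0.6, P(B) = 0.4, P(E|A) = 0.2 and P(E|B) = 0.1. Both premises hold, yet by the law of total probability

$$P(E) = P(E \mid A)\,P(A) + P(E \mid B)\,P(B) = 0.2 \times 0.6 + 0.1 \times 0.4 = 0.16 < 0.5.$$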

4 Likes

All that tells me is that @colewd has never done any software engineering.

5 Likes

Oftentimes large amounts of code are simply disabled by marking them as comments, instead of deleting them.
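
A minimal sketch of what that looks like in practice (the function name and comments are made up purely for illustration):

fn main() {
    // The "functional" part: the only code that ever runs.
    println!("Hello, World!");

    // "Junk" of one kind: disabled by commenting it out rather than deleting it.
    // legacy_report();
}

// "Junk" of another kind: still present and still compiling, but never called.
#[allow(dead_code)]
fn legacy_report() {
    println!("a report nobody has asked for in years");
}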

4 Likes

Whatever are you trying to say? You imply here, whether or not you intend to, that Gpuccio’s measure of FI is useless. You must give up one assertion. Either most of the genome is non-functional or Gpuccio’s measure of FI is useless. Of course, it could easily be both.

1 Like

Why would conservation through “deep time” (I predict that this will be your volume knob at 11 in this instance) be any more indicative than through shallow time? And where is the dividing line between shallow and deep for this parameter?

You’re obviously just making this up as you go along. Have you ever considered the power of using the scientific method instead?

The paper doesn’t come close to showing that all of them do, so your claim that 5% was functional was a fabrication.

And was this found for a minority of STRs by a group of hardworking ID proponents, or by “evolutionists”? How do you explain that?

1 Like

Thanks for the lesson in Bayesian terminology. Very edifying (though it seems some of the more science-minded folks think that what you said doesn’t mean what you think it means, but I’ll leave you all to fight that out).

However, it seems to me that, after the dust clears, my English translation didn’t differ substantially from yours.

Please feel free to correct me in terms of how our two translations differ in a way that matters, in particular in terms of how it allows you to avoid my critique.

Failing that (and preferably), just respond to my critique.

Gpuccio’s claim is that conservation through deep time indicates high constraints, which indicate high FI, which indicates design. This claim is perfectly compatible with the hypothesis that most of the genome is functional, for it could be the case that some parts of the functional genome have high FI whereas others have low FI. It is also perfectly compatible with the idea that some taxonomically restricted genes (TRGs) have high FI despite the fact that one could place them in the non-conserved category, in which case they would not be detected by Gpuccio’s methodology. This is why I said that Gpuccio’s methodology for detecting design is designed to produce true positives but cannot in principle avoid false negatives. But a method that is designed to produce true positives but cannot avoid false negatives can still be very useful.