The Argument Clinic

That he has any concern whatsoever for his “moral integrity” is an assertion devoid of evidence.

However, his behavior and writing are entirely consistent with what would be expected from someone completely unconcerned with his scientific reputation, so there is that.

3 Likes

It’s accurate enough to show a lack of feasibility. Even with a conservative 50% ratio of neutral to deleterious residue substitutions, the required search is far beyond what known reproductive mechanisms could accomplish to generate even a simple new gene/protein. Current population genetics models only allow for a few functional changes.

I did this time and agreed to your correction.

It does not commit logical fallacies as far as I can tell. You can evaluate the responses and determine what is true and false. When biased humans use logical fallacies in their arguments it can generate great confusion.

I have not yet found a good proof for L^N based on statistical axioms aside from trying to show equivalence to a probability proof. If you can provide one that would help a lot.
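
Not a proof from axioms, but as a brute-force sanity check, one can enumerate every length-N string over an L-symbol alphabet and confirm the count is L^N for small cases (the values of L and N below are arbitrary illustrations):

```python
# Brute-force check that the number of length-N strings over an
# L-symbol alphabet is L**N, for a few small (L, N) pairs.
from itertools import product

def count_sequences(alphabet_size: int, length: int) -> int:
    """Enumerate every sequence explicitly and count them."""
    return sum(1 for _ in product(range(alphabet_size), repeat=length))

for L, N in [(4, 3), (20, 2), (2, 10)]:
    assert count_sequences(L, N) == L ** N
    print(f"L={L}, N={N}: enumeration agrees with L**N = {L ** N}")
```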

…and Bill fails high-school maths yet again.

1 Like

Then show it, or you’re just blowing hot air.

Nevertheless, failing to mention selection was a rather obvious error.

I’m sure it does. And yes, you do generate a lot of confusion, so I suppose using ChatGPT has one benefit.

Why should there be one?

1 Like

Hi Troll

Because it isn’t.

Would you kindly share that one?

No, it would not. We know it would not, because when @Roy provided a proof you had nothing to say about it, and when @Paul_King provided a proof you ignored that completely, as you did every time anybody explained to you how it has nothing to do with probability, no matter how much you want it to. All this is to say you still lied when you said

because everything else you said about anything pertaining to such things would be embarrassing coming even from a high school graduate, much less someone who’s been anywhere near any of those classes.

1 Like

What you did not see is that I gave a like to Paul's effort. Roy, on the other hand, was based on definitions and not axioms. While this is fine as a surface-level attempt, I think we can do better.

Your other comment about my education is simply a distraction from the discussion and essentially an ad hominem fallacy. Instead of this, why don't you deliver some real value to the discussion, as Paul has?

Fair question. I am searching for a deeper understanding, which may not be there. I have seen a combinatorial proof based on adding up all the possible combinations, but I don't really find this to be a rigorous answer. Maybe I am chasing rainbows :slight_smile:

How can a person be based on definitions or on axioms?

Alright. Why do you think so?

An ad hominem fallacy would be if I said you were wrong and/or I was correct because you were a poopy-head. I did not. I said you are a liar, but that never formed the basis from which I concluded that you were incorrect, nor was it ever stated as a justification for such verdicts. Just because you are both incorrect and lied does not mean that you are incorrect because you lied. Also, you lied. No one made you lie, you chose to lie. No amount of righteous indignation is going to change that.

I did. You ignored it.

It is. However, since you don’t find it to be, what would you find to be a rigorous answer to your query? Why do you think that any principles of probability theory should be part of the solution to an elementary school counting problem?

1 Like

If you look at different versions of the MYH protein, which is just under 2000 amino acids long, there are over 1000 changes between versions that, according to the theory, share a common ancestor. How do you model this as a product of neutral random change?

If you look at skeletal alpha actin, the number of consensus changes between mice and rats is 0, despite their being separated by hundreds of millions of generations. Again, none of this makes sense as the product of random neutral mutations.

If we look at the differences between two mammals, mice and humans, there are thousands of unique genes. Again, none of this makes sense based on random neutral mutations.

Selection can only fix a change once a rare reproductive advantage occurs.

From UNC on population genetics.

About 90 percent of DNA is thought to be non-functional, and mutations there generally have no effect. The remaining 10 percent is functional, and has an influence on the properties of an organism, as it is used to direct the synthesis of proteins that guide the metabolism of the organism. Mutations to this 10 percent can be neutral, beneficial, or harmful. Probably less than half of the mutations to this 10 percent of DNA are neutral. Of the remainder, 999/1000 are harmful or fatal and the remainder may be beneficial. (Remine, The Biotic Message, page 221.) This model is actually not realistic, because it does not take into account the interactions between various mutations. Nor does it distinguish major mutations, which change the shape of proteins, from minor mutations, which do not. Furthermore, it does not consider that the beneficial mutations observed are generally only of a restricted kind that cannot explain evolution. However, we consider the model in some detail anyway, because it is so widely used. In addition, population genetics can help to explain the rapid adaptation of organisms to their environment by changes in frequency of existing genetic material (alleles) even without mutations.

The material you are quoting was written by a Young Earth Creationist. That is a reference to Walter Remine, another creationist. LOL

1 Like

Counting is part of probability theory if you remember the definition I posted previously.

the extent to which an event is likely to occur, measured by the ratio of the favorable cases to the whole number of cases possible.

Counting is required to determine this ratio.
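
For what it's worth, that ratio only makes sense when the sample space is finite and every outcome is equally likely. A minimal sketch of that happy case, using two fair dice as an assumed example:

```python
# Classical "favorable cases / all possible cases" probability.
# This only applies when the sample space is finite and every outcome
# is equally likely -- here, two fair six-sided dice.
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))            # all 36 equally likely rolls
favorable = [roll for roll in outcomes if sum(roll) == 7]  # rolls whose sum is 7

print(Fraction(len(favorable), len(outcomes)))  # 1/6
```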

That’s not from UNC; the web page is merely hosted by UNC.

This material does not necessarily represent any organization, including the University of North Carolina and the State of North Carolina.
Unless otherwise indicated, all articles are written by David Plaisted. All errors are solely the responsibility of the author, who will try to correct any mistakes that he becomes aware of.

This page was last modified March, 2006

2 Likes

Why not?

Why not?

Like, outside of sitting in an armchair and meditating about whether it feels plausible, what is the actual, data-based problem?

Says who?

1 Like

It’s not ‘from’ UNC, it’s hosted on their servers. Do you not know the difference?

1 Like

That is not a proof of anything. That’s just an expression of personal incredulity without the numbers needed to make it anything more than a poorly-founded opinion.

And why does it have to be all neutral changes? There is no singular MYH protein, there is a family of MYH proteins.

And? Some proteins are highly conserved. Looks like you’re leaving selection out. Again.

Again, just assertions. And so far as I can see mainly based on ignoring known mechanisms.

Rare is a relative term. Again you need a firmer basis than opinions.

From a creationist citing another creationist, with no record of scientific publications. I’m not surprised you tried to pass it off as “from UNC”, but it was hardly honest. Better not to cite worthless sources at all than to try to pretend that they’re credible.

1 Like

Demonstrate that the existing models are mathematically incapable of explaining this. If you can’t show the math, you have no point worth considering.

2 Likes

I do remember that definition. I remember you ignoring the fact that it is useless for almost every problem.

Alright, let’s prove that this doesn’t work.

Suppose you have some process that upon a prompt yields any random number between zero and one. A random number generator, essentially. I’ll be generous and let us assume that every outcome is equally likely. That’s almost never true in practice, which is another reason counting measures are almost never useful in practice, but I’ll grant it, just for the sake of simplicity and undeserved generosity.

Now, let’s say we want to determine, a priori, the probability of drawing a number smaller than one half from that generator.

So, how many numbers are there between zero and a half? Well, if our generator returns any real number, then it’s uncountably infinitely many. If it only returns rational numbers, it’s countably infinite. So, that’s our numerator: \infty_{\text{num}}.

How many numbers are there in total between zero and one? Well, if our generator returns any real number, then it’s uncountably infinitely many. If it only returns rational numbers, it’s countably infinite. So that’s our denominator: \infty_{\text{den}}.

So the probability that a number from our random number generator is smaller than one half is \frac{\infty_{\text{num}}}{\infty_{\text{den}}}. Which is ill-defined at the best of times. Now, one could try and blurt out the mantra that “some infinities are greater than others”. Never mind that this is not what that means. But even if it did, we can prove that the infinities in our “calculation” are not, in fact, different. There are exactly as many numbers between 0 and one half as there are between 0 and 1, and we can show this fairly easily:

Say x is between 0 and one half. Then there exists a number y=2x that is between zero and one. And if x_1\neq x_2, then y_1=2x_1\neq2x_2=y_2. So definitely the set of numbers between zero and one does not contain fewer elements than the set of numbers between zero and a half. But the reverse also works. For every number y between 0 and one there exists a number x=\frac y2 that is guaranteed to be between zero and one half, and again the mapping is unique. So the set of numbers between zero and one does not contain more elements than the set of numbers between zero and a half. And if one quantity is not greater or smaller than the other, then what else can it be but equal?

So the \infty_{\text{num}} in our numerator, if we are going by a counting measure, is exactly the same as the \infty_{\text{den}} in our denominator. We can just call them both \infty. So by your “definition” of probability, the probability that a random number between zero and one should be smaller than one half is p=\frac\infty\infty=100\%.

Not only that, but this could have worked with any restriction of the initial interval. The probability that a random number between zero and one should be smaller than \frac1{1000000} is also 100%, because it’s still \frac\infty\infty if we go with this counting-measures-only definition of probability, and the mapping y=1000000x allows us to conclude that the numerator and denominator are again the same in that fraction, too.

It works for offset intervals as well. The probability to find that the generator yields something within the middle thousandth of the zero-to-one interval is also 100%. The mapping to see the identity of the cardinalities is a bit more sophisticated, but nothing a sixth-grader couldn’t find: y=1000x-\frac{999}2.
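
For anyone who would rather see numbers than mappings, here is a quick simulation sketch (sample size and seed are arbitrary choices): the observed frequencies track the lengths of those intervals, not any ratio of counts.

```python
# Empirical check with a uniform generator on [0, 1): the frequency of landing
# in a sub-interval tracks the interval's LENGTH, not a "count of cases".
import random

random.seed(0)
N = 1_000_000
samples = [random.random() for _ in range(N)]

intervals = {
    "x < 1/2":           (0.0, 0.5),
    "x < 1/1000000":     (0.0, 1e-6),
    "middle thousandth": (0.4995, 0.5005),
}

for name, (lo, hi) in intervals.items():
    freq = sum(lo <= x < hi for x in samples) / N
    print(f"{name:18s} length = {hi - lo:.6f}   observed frequency = {freq:.6f}")
```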

Notably, since you love Kolmogorov’s axioms so much, it may interest you that this definition of probability contradicts them. The set of numbers smaller than one half shares no elements with the set of numbers greater than one half. So by the third axiom, the probability that our generator generates a number in either set should be the sum of the probabilities of it being in each of them. But since both halves have a probability of 1, this means the probability of the union is 1+1=2. That contradicts the second axiom, which states that the probability of the total sample space is 1\neq2, and the first axiom, since no probability can be greater than 1, but 2>1.
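
Spelled out in one line, with P standing for that counting “probability”:

P\big([0,\tfrac{1}{2})\cup(\tfrac{1}{2},1]\big) \overset{\text{axiom 3}}{=} P\big([0,\tfrac{1}{2})\big)+P\big((\tfrac{1}{2},1]\big) = 1+1 = 2 > 1,

which is exactly the collision with the first two axioms just described.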

So yeah. That definition is garbage. It is completely useless for almost every statistical phenomenon anybody would ever wish to model, and it is trivial to even construct examples where it is irreconcilable with your beloved axioms of probability. That you do not understand this is one more demonstration that you lied when you said

and while being a liar doesn’t make you wrong, not being wrong because you are a liar doesn’t make you non-wrong either.

2 Likes

On the assumption all the differences are neutral (we don’t have to assume that, but we can), it’s just

[the rate of neutral change/generation at the locus] × [generations since their common ancestor] = expected total number of neutral changes that have accumulated.

That’s it. That’s the model.

The most distantly related myosins appear to have split around the time of the split of protists from other eukaryotes, about 1.6 billion years ago.

Let’s be ridiculously generous in your favor and assume the numbers for Homo sapiens, extrapolated to 1.6 billion years: about 100 mutations/generation and a generation time of 20 years. However, we also assume that 90% of mutations to a protein-coding gene are deleterious, just because.

At 2000 amino acids long, that’s 6000 nucleotides of coding region for each copy (of which there are two diverging). The human genome is ~3 billion basepairs, of which 12000 is 12000/(3\times10^9)\times100\% = 0.0004\%. So that’s 0.0004% of all mutations occurring in the two myosin paralogues. Let’s further assume that roughly one third of substitutions to protein coding regions result in amino acid substitutions.

So:
0.0004 [mutations per generation expected at the locus, i.e. 0.0004% of the 100 total mutations per generation] × 0.10 [fraction of mutations to the locus that aren’t deleterious] × 1,600,000,000 [years since the duplication] / 20 [years/generation] ≈ 3,200 mutations could have accumulated in those two genes alone, at the assumed neutral rate.
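
Here is the same back-of-the-envelope calculation as a short script, for anyone who wants to rerun it; every parameter is one of the generous assumptions stated above, not a measurement:

```python
# Back-of-the-envelope neutral expectation for the two diverging myosin copies,
# using the deliberately generous assumptions stated above (not measurements).
mutations_per_generation = 100     # genome-wide, human-like value
genome_size_bp = 3e9               # ~3 billion basepairs
locus_size_bp = 2 * 6000           # two diverging copies, ~6000 coding bp each
fraction_not_deleterious = 0.10    # assume 90% of coding mutations are deleterious
years = 1.6e9                      # time since the myosin split
generation_time_years = 20

generations = years / generation_time_years                                     # 8e7
per_generation_at_locus = mutations_per_generation * locus_size_bp / genome_size_bp
expected_neutral = per_generation_at_locus * fraction_not_deleterious * generations

print(f"~{expected_neutral:.0f} neutral mutations expected at the locus")  # ~3200
```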

Evidently the assumed neutral rate of mutation is higher than needed to explain their total divergence. Since the observed divergence, even at a thousand differences between the two genes, falls below that expectation, the two genes are to be considered well conserved, given how little change has occurred over their 1.6-billion-year shared history.

1 Like

Doesn’t that just validate what Rum has been saying???

1 Like