In fact, he openly admits that his GAE hypothesis represents a sort of Hegelian[1] synthesis between creationism and evolutionism.
The ‘Hegelian dialectic’ is a system of thinking that describes a *thesis*, which gives rise to a reaction (an *antithesis*), which leads to tension that is eventually resolved by a *synthesis* of the two positions. This approach is the basis of Marxist philosophy, although it is certainly not restricted to these writers, having roots in Kant, Goethe, Spinoza, and others.
I like how CMI doesn’t even assess the validity of the idea of Hegelian synthesis, and instead goes “scary atheist Marxism”. That’s kind of what I expected when clicking that footnote, but still disappointing.
Tell us you have no clue what you are talking about without saying you have no clue what you are talking about.
You don’t need any probability theory to calculate the total sequence space. That probability calculation is not “foundational” - it’s irrelevant. Clearly you don’t have any real understanding of combinatorics (this is very, very basic combinatorics) or probability theory. And no, that is not an ad hominem - and if you think otherwise you don’t understand that either.
My favorite part of MA is that beneficial mutations cannot have selection coefficients of positive magnitude equal to the negative magnitude of deleterious mutations. Mutations can reduce fitness by 90%, but only increase it by 10% at most (so reversals are impossible in Sanford’s alternate reality).
It’s worse than that. Fitness can only be increased by 1%, not 10%. The code sets maximum deleterious effect to 0.9, and maximum beneficial effect to 0.01.
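To make the asymmetry concrete, here is a minimal Python sketch. The cap constants mirror the 0.9/0.01 values quoted above, but the function itself is purely illustrative and is not Mendel’s Accountant’s actual code:

```python
import random

# Illustrative caps mirroring the asymmetry described above;
# these are the quoted values, not Mendel's Accountant's source code.
MAX_DELETERIOUS = 0.90   # a single mutation may cut fitness by up to 90%...
MAX_BENEFICIAL = 0.01    # ...but raise it by at most 1%

def mutation_effect(beneficial: bool, rng: random.Random) -> float:
    """Draw a fitness effect under the asymmetric caps."""
    if beneficial:
        return rng.uniform(0.0, MAX_BENEFICIAL)
    return -rng.uniform(0.0, MAX_DELETERIOUS)

# Under these caps no beneficial mutation can ever reverse a large
# deleterious one: best case is +1%, worst case is -90%.
rng = random.Random(0)
effects = [mutation_effect(b, rng) for b in (True, False) for _ in range(1000)]
assert max(effects) <= MAX_BENEFICIAL
assert min(effects) >= -MAX_DELETERIOUS
```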
My favorite part is the code that literally halts the simulation if there are a lot of beneficial mutations.
Gee. Now what legitimate purpose could that serve if Sanford genuinely believed in GE and wanted a program to accurately model it? Your thoughts, @Giltil? @colewd
Why don’t you show a proof that you can build the N^L calculation (where N is the number of units per set and L is the length of the sequence) without establishing P(a) × P(b), the probability of independent events a and b both occurring, using only the 3 probability axioms.
This may be true, but I have not seen this proof yet. You have asserted it is not foundational; now let’s see if you can back up your claim.
Why would anyone try to prove something stupid nobody actually believes, which would serve no purpose, and has nothing to do with evolution or common descent?
Because it’s so trivial[1] that Bill’s inability to do it himself is further evidence that he hasn’t passed high-school maths let alone university level courses?
Of course it has absolutely nothing to do with genetics, since genetic sequences and proteins aren’t fixed-length, among other things.
The number of possible sequences of l elements, each of which has n possibilities, is (n × n × n × … × n) = n^l. No probability required. ↩︎
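The footnote’s count is easy to confirm by brute-force enumeration; a quick sketch:

```python
from itertools import product

n, l = 4, 3  # e.g. an alphabet of 4 symbols, sequences of length 3

# List every possible sequence explicitly; no probabilities involved.
sequences = list(product(range(n), repeat=l))
assert len(sequences) == n ** l == 64
```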
Why on earth would you think that we need to bother with probability? We don’t. The only thing that matters is the number of elements and the number of options for each.
Example. Take a binary sequence defined as follows. The first element has P(1) = P(0) = 0.5
All succeeding elements have a probability of 0.4 of being the same as the immediately preceding element and a probability of 0.6 of being different.
Do you think this makes a difference to the number of possible sequences? Why would it? It mucks you up if you try dragging probability into it, but if you just stick with combinatorics - which it is - it makes no difference at all.
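The example above can be checked directly: enumerate every length-3 binary sequence, assign each its probability under the dependent scheme, and observe that the count is unchanged even though the probabilities are no longer uniform. A sketch:

```python
from itertools import product

def seq_probability(seq):
    """Probability of a binary sequence under the scheme above:
    the first element is 0 or 1 with probability 0.5; each later element
    matches its predecessor with probability 0.4 and differs with 0.6."""
    p = 0.5
    for prev, cur in zip(seq, seq[1:]):
        p *= 0.4 if cur == prev else 0.6
    return p

seqs = list(product((0, 1), repeat=3))
probs = [seq_probability(s) for s in seqs]

assert len(seqs) == 2 ** 3            # still 8 possible sequences...
assert abs(sum(probs) - 1.0) < 1e-12  # ...whose probabilities sum to 1...
assert len(set(probs)) > 1            # ...even though they are no longer equal
```

The probabilities shift (e.g. 000 has probability 0.5 × 0.4 × 0.4 = 0.08, while 010 has 0.5 × 0.6 × 0.6 = 0.18), but the sequence space stays at exactly 2^3 = 8.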
1) If you have a sequence of length 1, the claim is trivially true: there are N possible values, and N = N^1.
2) If a sequence can take k possible values and you append an element that can take m possible values, the total number of combinations is k × m: for each of the k possible values of the sequence there are m possible combinations, one for each possible value of the new element.
From these we may proceed by mathematical induction. We need only show that if a sequence of L elements, each of which may take N possible values, has N^L possible values, then such a sequence of L+1 elements has N^(L+1) possible values.
From 2) above we see that if such a sequence of L elements has N^L possible values, then a sequence of L+1 elements has N^L × N possible values, and N^L × N = N^(L+1).
From 1) above we know that for L = 1 there are N^L = N possible values.
Therefore, for all L ≥ 1, a sequence of L elements, each of which has N possible values, has a total of N^L possible values.
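The induction can be sanity-checked by brute force for small cases: count sequences by explicit enumeration, then verify the base case and that appending one element multiplies the count by N. A sketch:

```python
from itertools import product

def count_sequences(N, L):
    """Count sequences of L elements, each with N possible values,
    by explicit enumeration rather than by formula."""
    return sum(1 for _ in product(range(N), repeat=L))

for N in (2, 3, 4):
    assert count_sequences(N, 1) == N  # base case: N^1 = N
    for L in (1, 2, 3):
        # inductive step: adding one element multiplies the count by N
        assert count_sequences(N, L + 1) == count_sequences(N, L) * N
        assert count_sequences(N, L) == N ** L
```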