Perry Marshall: What is Random?

As a side note I will mention that one area in which the general public is becoming aware that mutations are NOT “all as equally unlikely as the other” is when they get their Y-DNA reports from companies like FamilyTreeDNA. The reports include percentage probabilities of any two persons in the studies sharing an MRCA within a given number of generations—and an asterisk after such calculations always reminds the reader that each STR has a different mutation likelihood. (In other words, some mutations are more likely than others. Thus, a customer might see a list of people on the report who all share a Genetic Distance of 2 from the subject—yet the probabilities of an MRCA within a given number of generations all differ.)

1 Like

Perhaps you are using a different definition of ‘pattern’. Just because someone uses a different definition than you does not mean they contradict themselves. Perhaps it is only that your definition contradicts theirs.

As @Perry_Marshall points out, in these sorts of discussions it is very helpful to have all definitions on the table, and as clear as possible. The field of mathematics has many such ready definitions. It might behoove us to use such definitions where we can be sure we are not talking past each other.

3 Likes

From what I understand, I’m using Shapiro’s definition. I’m using Perry’s definition, at least to the point they have specified it. Perry himself has acknowledged that there are patterns in white noise. The patterns I am pointing to are precisely the sort of patterns that Shapiro points to in mutations to declare they are not random. The “engineering” I’m pointing to in the True-RNG generator is routinely pointed to by IDists (like Axe) to dismiss in vitro evolutionary experiments (because the experiment is “designed”).

I’m going a bit afield here. Noble, Shapiro, and Perry are not working from a mathematical framework, so we can only guess what they mean at some level. They certainly are not consistent in their usage, as I think I have shown. If they are using a new definition of random and a new definition of pattern, it would be great for them to lay it out with precision, marking out which of their past arguments were overstated. I’d welcome that sort of clarity. Until that happens, all I can do is:

  1. I will explain what randomness is within science, math, and probability. It is not ontological randomness, it does not deny purpose, and it has patterns.

  2. I’ll apply their logic consistently to show that it leads to absurdity, such as concluding that hardware random number generators are “not random”.

  3. Of course, I will continue to hope they will clarify why they need to rework the definitions of randomness in science, math, and probability, rather than educate the public about them. However, I think part of their strategy is to be strategically unclear. So I’m not expecting much, though I would hope to be surprised.

I’m happy to see how they want to proceed from here.

3 Likes

Then why not use the mathematical definition of randomness, instead of that of a random variable, when talking about randomness?

Both Wikipedia pages are saying the same thing. One page was clearer than the other, but both are making the same point if you read them completely. Without doubt, randomness in mathematics always has “patterns” of one sort or another. There is no way around this fact.
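
A minimal sketch of what I mean, using nothing but the Python standard library: each individual draw is unpredictable, yet the collection of draws shows stable patterns (a converging mean and an approximately flat histogram).

```python
# Minimal sketch: draws from a random variable are individually unpredictable,
# yet the collection exhibits stable patterns (a law-of-large-numbers /
# distributional pattern).

import random
import statistics

random.seed(0)  # reproducible for the example
draws = [random.random() for _ in range(100_000)]  # i.i.d. Uniform(0, 1)

print("first five draws (unpredictable):", [round(x, 3) for x in draws[:5]])
print("sample mean  (pattern: -> 0.5):  ", round(statistics.mean(draws), 4))
print("sample stdev (pattern: -> 0.289):", round(statistics.pstdev(draws), 4))

# Count how many draws fall in each tenth of [0, 1): the histogram is
# approximately flat, another pattern that this kind of "randomness" guarantees.
bins = [0] * 10
for x in draws:
    bins[int(x * 10)] += 1
print("histogram counts:", bins)
```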

2 Likes

I’ve been lurking on this discussion, not sure where to jump in (or too busy to). A few observations:

  1. Whatever sort of randomness we are talking about, there should be a probability density function (PDF) or probability mass function (PMF). We have some empirical knowledge of this distribution, and from what I can tell, it is a mixture distribution of different types of events. Identifying those different event types would seem to be necessary before making claims about the mixture (see the first sketch after this list).

  2. Algorithmic randomness deals with sequences and compression. Randomness in this sense doesn’t apply to individual mutations, and would seem to be difficult to apply to sequences with mutations without a very high level of knowledge of biology. Any single event (or short series of events) will appear algorithmically random even if it is deterministic (see the second sketch after this list).

  3. I’m not sure where this fits in, but censoring of various sorts is often present in biological data. The events we observe are often a biased sample, as some events may be unobserved for cause. Natural selection is a good example; if a creature fails to reproduce, we may never observe the mutation* responsible for the failure.

* assuming mutations are involved for this discussion.
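
Here is a toy sketch of point 1. The event types and numbers are entirely made up; the point is only that pooled statistics can mislead when the data are a mixture.

```python
# Toy sketch of point 1 (hypothetical numbers, not real mutation data):
# if observed events are a mixture of different event types, the overall
# PDF/PMF is a weighted sum of the component distributions, and statements
# about "the" distribution depend on identifying those components.

import random
random.seed(1)

# Two made-up event types with different per-event "size" distributions.
def draw_event():
    if random.random() < 0.9:                 # 90%: e.g., small, common events
        return ("type_A", random.gauss(1.0, 0.2))
    else:                                     # 10%: e.g., larger, rarer events
        return ("type_B", random.gauss(10.0, 3.0))

events = [draw_event() for _ in range(10_000)]
sizes = [size for _, size in events]
sizes_A = [s for t, s in events if t == "type_A"]
sizes_B = [s for t, s in events if t == "type_B"]

overall_mean = sum(sizes) / len(sizes)
print(f"overall mean = {overall_mean:.2f}  (a property of the mixture)")
print(f"type_A mean  = {sum(sizes_A) / len(sizes_A):.2f}, "
      f"type_B mean = {sum(sizes_B) / len(sizes_B):.2f}  (the components)")
# The overall mean describes neither component well; conclusions drawn from
# the pooled data can mislead if the mixture structure is ignored.
```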
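
And a toy sketch of point 2, using compressed size as a rough stand-in for algorithmic randomness. The 16-byte sequence “looks” incompressible purely because it is short, even though it was produced by a trivial deterministic rule.

```python
# Toy sketch of point 2: compressibility as a (rough) proxy for algorithmic
# randomness. Short sequences barely compress no matter how they were
# generated, which is why the notion is hard to apply to single events.

import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

short_deterministic = bytes(range(16))          # 16 bytes from a trivial rule
long_repetitive     = b"ACGT" * 10_000          # highly structured
long_random_bytes   = os.urandom(40_000)        # incompressible in practice

print("16-byte deterministic sequence:", round(compressed_fraction(short_deterministic), 2))
print("40 kB repetitive sequence:     ", round(compressed_fraction(long_repetitive), 3))
print("40 kB (pseudo)random bytes:    ", round(compressed_fraction(long_random_bytes), 3))

# The 16-byte sequence comes out *larger* after compression (fraction > 1),
# i.e. it "looks" algorithmically random even though it came from a trivial
# deterministic rule; only long sequences reveal their structure.
```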

3 Likes

  1. The type of randomness has also not been defined. This blog post has a nice description of different types of randomness: A classification scheme for types of randomness « Probability and statistics blog

2 Likes

Notice that “real randomness” is defined as “unknown”, matching how I have been using it. He does not make the leap to call it ontological randomness, but rather is talking about unpredictability from a human point of view.

I also take some pleasure in this type of “random” offered by the author:

Type 0: Fixed numbers or known outcomes

Type 0 randomness is the special case of randomness where the data are already known. Any known outcome, regardless of the process that generated it, is Type 0 randomness. Once known, it has become a constant. In terms of information conveyed, all Type 0 randomness has zero informational entropy (a measure of uncertainty), and all messages with zero entropy are examples of Type 0 randomness.

Did you see this @Perry_Marshall? This is pretty clear evidence that I am not the only one who says fixed outcomes are random variables of a certain sort. I described them as the degenerate case too. Of note, I already pointed you to the Dirac delta function, which makes this clear. It is, essentially, the limit of a normal distribution as the variance goes to zero.
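
To make the degenerate case concrete, here is a small sketch: treat the fixed outcome as a distribution with all its mass on one value, and its entropy is zero; shrinking the variance of a discretized normal distribution drives its entropy toward that same limit.

```python
# Minimal sketch of the degenerate case: a "fixed outcome" can be treated as a
# random variable whose distribution puts all its mass on one value, and its
# Shannon entropy is zero. Shrinking the variance of a (discretized) normal
# distribution drives the entropy toward that same Dirac-delta limit.

import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0) + 0.0  # +0.0 avoids printing -0.0

print("fair coin:     ", shannon_entropy([0.5, 0.5]), "bits")
print("fixed outcome: ", shannon_entropy([1.0]), "bits   (Type 0 / degenerate)")

# Discretize a normal distribution on a grid and watch the entropy fall as the
# variance goes to zero (the Dirac-delta limit).
grid = [i * 0.01 for i in range(-500, 501)]
for sigma in (1.0, 0.1, 0.001):
    weights = [math.exp(-0.5 * (x / sigma) ** 2) for x in grid]
    total = sum(weights)
    probs = [w / total for w in weights]
    print(f"discretized normal, sigma={sigma}: entropy = {shannon_entropy(probs):.2f} bits")
```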

2 Likes

Suppose someone flips a coin which is known to be fair, looks at the result, but hides the result from you. For this person, the result is known, so it is Type 0. For you, it is not, so I understand it would be Type 1 (or higher).

Randomness seems to be in the eye of the beholder. Does that seem right to you?

ETA: In general, I think it is incorrect to try to define or classify “random” independently of the context in which the word is used. If the context is science and math, then I agree with the necessary link to a probability distribution and a sample space.

If the context is ordinary usage by some group of language speakers, then maybe we should consult with linguists or lexicographers.

But if the domain is philosophy or theology, then it would be wrong to impose the scientific definition without discussion of whether and why that might be justified. In particular, random mutation has a purely scientific application. But in theology (as I understand it), it requires, among other things, an analysis of what role God can play.

4 Likes

In context we are talking about scientists saying mutations are “random”, which seems to demand we understand what scientists mean by this, right?

1 Like

Yes.
When composing that reply to Dan, I think I had in mind the feedback I received in the related Garte thread. It’s also not clear to me that the categories posted by Dan are restricted to that context.

1 Like

Conditional Probability is still probability. I agree that context matters; it must be carefully defined, including prior knowledge or prior assumptions.

I am reminded of card tricks that depend on the magician knowing something the audience does not.
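
A small sketch of that point, reusing the coin example from above: the same flip gets different, but equally legitimate, probabilities depending on what information each observer conditions on.

```python
# Minimal sketch of observer-relative randomness via conditioning: the same
# coin flip is a known ("Type 0") outcome for the person who saw it, and a
# fair 50/50 variable for the person who did not.

import random
random.seed(2)

result = random.choice(["H", "T"])   # the flip actually happens once

# The flipper conditions on having seen the outcome:
p_heads_flipper = 1.0 if result == "H" else 0.0

# The audience conditions only on "the coin is fair":
p_heads_audience = 0.5

print("actual result (hidden from the audience):", result)
print("flipper's  P(heads | saw the result) =", p_heads_flipper)
print("audience's P(heads | coin is fair)   =", p_heads_audience)
```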

1 Like

Wouldn’t it be far easier, instead of flinging around all of these fuzzy words, to simply define “random with respect to fitness” as the result of the Luria-Delbruck experiment? That would keep everyone’s eyes on the empirical and also might be more educational.

It also shows the poverty of rhetorical approaches.

1 Like

@Mercer, this does not make much mathematical sense, nor does the LD experiment apply to all of biology. The experiment does not tell us that mutations are “random with respect to fitness,” which is not a well-defined concept in the first place. Rather, it tells us that, in this specific case, the mutations arose before the selective pressure was in place, and we know of exceptions to this rule. The experiment is designed to distinguish induced mutations from background mutations; it does not demonstrate anything broader than that.
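
To make the contrast concrete, here is a toy simulation of the fluctuation-test logic (not Luria and Delbrück’s actual analysis, and every parameter below is invented): mutations arising during growth, before selection, produce rare “jackpot” cultures and a variance far above the mean, while mutations induced at the moment of selection produce roughly Poisson counts.

```python
# Toy sketch of the Luria-Delbruck fluctuation-test logic. If resistance
# mutations are induced at selection, resistant counts across parallel
# cultures are roughly Poisson (variance ~ mean); if they arise during growth
# before selection, rare early "jackpot" mutations inflate the variance far
# above the mean. All numbers are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
GENERATIONS = 20          # each culture grows from 1 cell to 2**20 cells
MU = 5e-7                 # hypothetical per-cell, per-division mutation probability
CULTURES = 2000           # number of parallel cultures

def background_cultures():
    """Mutations occur during growth; mutant lineages keep doubling."""
    mutants = np.zeros(CULTURES, dtype=np.int64)
    nonmutants = np.ones(CULTURES, dtype=np.int64)
    for _ in range(GENERATIONS):
        mutants *= 2                                    # existing mutant clones double
        new_mutants = rng.binomial(nonmutants, MU)      # new mutations this generation
        mutants += new_mutants
        nonmutants = 2 * nonmutants - new_mutants
    return mutants

def induced_cultures():
    """Mutations occur only at exposure, independently in each final cell."""
    final_cells = 2 ** GENERATIONS
    # Per-cell probability chosen so the expected mutant count roughly matches
    # the background model above.
    return rng.binomial(final_cells, MU * GENERATIONS / 2, size=CULTURES)

for name, counts in [("pre-selection (LD-style)", background_cultures()),
                     ("induced at selection", induced_cultures())]:
    m, v = counts.mean(), counts.var()
    print(f"{name:25s} mean={m:6.2f}  variance={v:10.2f}  variance/mean={v/m:8.1f}")
```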

I agree, but countering rhetoric with “random with respect to fitness” is countering rhetoric with equally misleading rhetoric.

I am conceding that, hence this proposal.

I think it says more than that. It tells us that the ADAPTIVE mutations arose before any pressure was in place, and more importantly, that the pressure did not change the ratio of adaptive to nonadaptive mutations.

Do you disagree with my more detailed description, and if so, what exceptions exist?

Indeed. Those are the waters being muddied by much of EES rebranding rhetoric.

I’m proposing to counter rhetoric with a far more focused, empirical, educational, and clearer description.

1 Like

One exception you already acknowledge is the immune system, where the ratio of adaptive to nonadaptive mutations is increased.

This all misses the point anyway, because the LD experiment is not really the fundamental question here. The fundamental question is whether evolution relies upon ontological randomness, as some people claim it does. Those people are wrong. Many things are not fully predictable to us, both in practice and in principle, but this does not make them ontologically random to God.

1 Like

No, I think that there is a misunderstanding. I don’t know of any such measurements. Do you?

I agree completely.

1 Like

Alright, how about this then: how do you define not random?

2 Likes

In mathematical modeling, “not random” is the part of the model (not reality!) that is not modeled as a random variable. This often includes theory, structure, and other equations or dependency relationships between random variables.

In common usage, “not random” refers to the predictable observations we see, the things that fit what we expect. Earlier, when I was pointing to the Dirac distribution, I was pointing out that even fixed quantities can sometimes be modeled as “random,” but I also noted that this is a boundary/degenerate case. That boundary does not match reality (nothing is that certain) or the common usage of “random”.

Most real events have a degree of predictability (order) and a degree of variation or surprise (uncertainty). The more uncertain an event is, the more “random” we might colloquially say it is; the less order it has, the more “random” it is. Whatever the case, identifying a pattern (order) in data does not suggest there is NO uncertainty or randomness in the data. This is also true of random variables: they all have a degree of order and a degree of uncertainty. Showing uncertainty does not mean there is no order, and showing order does not mean there is no uncertainty. The two exist together in the vast majority of cases.
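
A small sketch of that last paragraph: one data-generating process with a deterministic structural part and a random part, where recovering the pattern does not remove the uncertainty, and the uncertainty does not erase the pattern.

```python
# Minimal sketch: a single data-generating process with both a deterministic
# ("not random") structural part and a random part. Fitting the structure
# recovers the order, while the residual variance shows the uncertainty that
# remains.

import random
random.seed(4)

# Structure (order): y = 3x + 2.   Uncertainty: additive Gaussian noise.
xs = [i / 10 for i in range(200)]
ys = [3 * x + 2 + random.gauss(0, 1.5) for x in xs]

# Recover the structural part with ordinary least squares.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
resid_var = sum(r * r for r in residuals) / n

print(f"estimated structure: y = {slope:.2f}x + {intercept:.2f}   (the order is recoverable)")
print(f"residual variance:   {resid_var:.2f}                      (the uncertainty remains)")
```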

2 Likes