In my opinion the Byers’ point™ receded in the rearview mirror about a hundred miles back.
Thank you very much for your detailed comments on the CSC video. I greatly appreciate the time and trouble you took to compose them. What a devastating takedown! Thanks once again.
Best wishes,
Vincent
I thought it was clear that my comment was not about pseudo-random numbers. We probably disagree on what we mean by “information.”
Thank you. I would add that on at least one point their criticisms are not far from accurate: the way research results are presented in the popular press and in university press releases really is far too grandiose and sensationalistic. I don’t think you’ll find many people who would disagree with that. Findings are exaggerated well beyond the data and presented as revolutionary, in dramatized and sensationalistic ways, to such an extent that the popular coverage is often quite misleading.
I will just note that this problem goes way beyond abiogenesis and evolutionary biology and basically affects all of science. I simply have not seen a field where this isn’t a problem. For that reason I think it’s nevertheless quite misleading to present this issue as symptomatic of some excessive naturalistic religiosity of atheist scientists or whatever, as opposed to the product of bad incentive structures between the media (trying to draw attention and generate clicks), universities (trying to draw students and donations), researchers (trying to get funding and citations), and so on.
Where can I get my amateur uniform?
It was not entirely clear, but I commented on both, just in case. Most random number generators used in practice are, of course, pseudo-random number generators. Those that are not rely on physical processes that exhibit fundamental randomness, such as the decay of unstable nuclei, the tunnelling of charge carriers through energy barriers in semiconductors, or other similar quantum phenomena.
At any rate, when I speak of information in contexts like these, I mean information as understood in information theory: the sort of information we can actually do maths with, the sort whose dynamics information entropy (be it Shannon’s, or Boltzmann’s, or von Neumann’s; mathematically they are all equivalent) describes. The abstract, mathematical concept maps in all relevant respects linearly (that is to say, bar at most a conversion factor and/or a sign) onto the analogous notion in statistical physics. Aside from choices of notation and vocabulary, information there is again broadly equivalent between the classical and the quantum mechanical perspective, and since the latter, aside from being the more general and fundamental theory, is in particular a linear and unitary theory, a conservation of entropy follows. And if entropy is conserved in a given system described by the theory, then said system cannot lose (or gain, because of time-reversal symmetry) information between any two moments in time.
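To put a number on what I mean, here is a minimal Python sketch (the coin distributions are made up for illustration). Shannon’s H = -\sum_i p_i \log_2 p_i and the Gibbs/Boltzmann entropy differ only by the base of the logarithm and a factor of k_B, which is the “conversion factor” alluded to above:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p * log2(p)), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with p = 0 contribute nothing (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit
print(shannon_entropy([0.9, 0.1]))  # biased coin: ~0.47 bits
# The Gibbs/Boltzmann form is the same expression with natural
# logarithms and a factor of k_B: S = -k_B * sum(p * ln(p)).
```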
There are ways to disagree with the theorem of information conservation:
One could reject the notion that quantum theory is an adequate means of describing physical systems in general. A theory’s adequacy, however, is measured by how reliably it makes quantitative predictions of observations, and the precision achieved by even the cruder and earlier variants of quantum mechanics is second to frankly no other known theory in all of science, by a margin of several orders of magnitude.
One could reject that quantum theory scales in such a way as to retain theorems like this on a large scale. This one is hard to refute, given the shortage of evidence to the contrary, but I’d suggest it is just as hard to substantiate, for a lack of evidence in the affirmative.
Lastly, one could reject these rigorous conceptions of information widely used in computer science, maths, and physics, or at least say that they are not what we are talking about here. In that case I must wonder just what it is that we are talking about here, and if that conception is defined well enough for us to intellectually assess questions of what, if anything, can generate this stuff we are talking about.
I did mention the use in cryptography. While cryptographers do use pseudo-random algorithms, they are wary of relying only on pseudo-random numbers. Encryption based on pseudo-random numbers is potentially hackable.
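To illustrate why (a minimal Python sketch, not a claim about any particular cryptosystem): a pseudo-random stream is a deterministic function of its seed, so anyone who recovers the seed recovers the whole stream; that is why cryptographic code reaches for an operating-system entropy source instead.

```python
import random
import secrets

# A pseudo-random generator is a deterministic function of its seed:
# the same seed always reproduces the exact same "random" stream.
a = random.Random(42)
b = random.Random(42)
print([a.randrange(256) for _ in range(5)])
print([b.randrange(256) for _ in range(5)])  # identical to the line above

# Cryptographic code therefore draws on an OS entropy source
# (hardware noise, interrupt timings, etc.) via the secrets module:
key = secrets.token_bytes(16)
print(key.hex())  # not reproducible from any user-visible seed
```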
I tend to think of information in terms of Shannon’s theory of communication. So I see information as what is transmitted in a communication channel. Information technology creates a lot of this.
But I think we may be drifting off-topic.
Another strategy one might use, of course, is to publicly claim that you are the victim of discrimination when decisions are made regarding research funding. The person making this claim might hope that funding organizations will be more inclined to give him money rather than risk being perceived as confirming his accusations.
Just a possibility…
If genuinely-random processes create information, how can there be a Law of Conservation of Information?
There is no such law.
I’m not sure. I think if anything can create information, then such a law - if useful at all - is not universal, at least to that extent. However, as far as we can tell, this seems not to be the case.
@Gisteron, @Paul_King, @Roy
Correct - There is no Law of Conservation of Information, at least not in the way there is Conservation of Energy. Random processes can create information very nicely.
I recall reading in an Information Theory text a paragraph describing the Information Theory equivalent, but I would need to look it up now.
Edit: I think this is the data-processing inequality (often stated as a bound on the Kullback-Leibler divergence), which says you cannot increase the mutual information between two variables by “processing” the information you already have.
In layman’s terms, no deterministic process (a function or program) can make two sources of information (two probability distributions) more distinguishable. A simple example, I think, is taking a standard normal variable Z and squaring it to get a chi-square distributed variable X = Z^2. In doing so, we lose the information about the sign of Z (positive or negative).
We can’t recover the lost information without adding a random sign (new information) to \sqrt{X}, and even then we don’t get the original variable back. This example doesn’t quite capture the two-variable situation, but that’s the best I can do at this time of night.
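For what it’s worth, here is a quick numerical sketch of that sign loss in Python (the variable names are mine): squaring a standard normal discards its sign, and bolting a fresh random sign onto the square root gives back the right distribution, but a variable essentially uncorrelated with the original.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)  # Z ~ N(0, 1)
x = z ** 2                        # X = Z^2 ~ chi-square(1); sign of Z is gone

# The only way to "undo" the squaring is to inject a fresh random sign:
s = rng.choice([-1.0, 1.0], size=z.size)
z_rebuilt = s * np.sqrt(x)

print(z.std(), z_rebuilt.std())         # both ~1.0: same distribution...
print(np.corrcoef(z, z_rebuilt)[0, 1])  # ...but ~0 correlation with the original
```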
By your definition of information - which is only going to be useful in some contexts - only a fully deterministic universe prevents any increase in information. In a universe that is not fully deterministic, the “Law” would come down to “only processes that create information create information”, which seems tautologous to the point of uselessness.
Sure, I can concede that. However, as far as we can tell, we have no examples of a less than fully deterministic universe. Any system’s wave function is a complete descriptor of its state, i.e. it contains all of the information about it. A demon that knows its shape at any one point in time with some certainty, along with the full Hamiltonian the system is subject to, automatically knows all past and future states (with respect to that reference time) with just that much certainty.
So yes, if anything can create information and if the universe is not fully deterministic, then there can be no conservation of information. Now all we have to do is demonstrate that either of these premises is actually correct. The second premise, I would go so far as to say, is in serious dispute only insofar as it is in dispute where, if anywhere, there could be limits to the applicability of the basic axioms of quantum theory. With the first, things are a bit more subtle, it would seem from this discussion.
When we say that random processes “create” information, what is meant isn’t that information that wasn’t there previously pops into being upon an instantiation of a random event/variable. Rather, as these things occur (and what that means exactly is a matter of some philosophical debate, known as the Measurement Problem), part of the function that describes the system “collapses”, effectively reducing the space of states the current state can henceforth evolve into. That is to say, while for our practical needs our instrument picked up a “new” random number, the number of parameters needed to fully characterize the system we obtained it from is now reduced. What information we “gained”, the system lost.
Aside from that, of course, if we merely start with some numerical data, wherever it may have come from, then, as @Dan_Eastwood points out, there is no processing we can do to it that would naturally increase the amount of information available to us in that data set, much like no thermodynamic process can reduce a closed system’s entropy. In terms of raw information volume, the best we can hope for is to not “lose” any, where losing denotes dissipation into a reservoir from which we could no longer recover the same information. That is not to say that processing cannot yield anything of use, of course. We could construct filters, for instance, that suppress surplus or unnecessary information (“noise”, if you will) in favour of emphasizing what we consider relevant (“signal”, as it were); see the sketch below. But at any rate, what counts as gain or loss of information in the context of data science is gain or loss of a subset of the total information that satisfies certain accessibility criteria, not at all unlike how the (Gibbs or Helmholtz) free energy is the subset of a system’s total energy that satisfies certain reusability criteria.
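As a toy illustration of that signal/noise point (a Python sketch; the choice of a moving-average filter is arbitrary): the filter is deterministic post-processing, so it cannot add information, but by discarding high-frequency content we have declared to be noise, it makes the part we care about more accessible.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 3 * t)                 # what we call "signal"
data = signal + 0.5 * rng.standard_normal(t.size)  # plus broadband "noise"

# A moving-average filter is deterministic post-processing: it cannot add
# information, it can only discard some (here, high-frequency content).
window = 25
smoothed = np.convolve(data, np.ones(window) / window, mode="same")

print(np.mean((data - signal) ** 2))      # raw mean-squared error (~0.25)
print(np.mean((smoothed - signal) ** 2))  # smaller after filtering (~0.01)
```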
We should be careful with this interpretation - without writing out the math it would be easy to get it wrong. I was attempting to express the meaning of “Conservation of Information” in this context. In terms of the main discussion, there is no law preventing new patterns of biological function from emerging from randomness. There might be an analogy here with entropy in an open system. There is definitely an energy requirement for creating new biological patterns, whether or not that constitutes “new” information.
I think that all that really means is that your definition of information has no practical use. If, that is, you are right about the universe being deterministic.
However, I’m not really convinced that the narrowing down of parameters represents a loss of information in the system.
That is fair. I suppose one could use a more biology-oriented notion of information, treat life as an open system, and leave aside whether or not there is conservation of information in some grander scheme, asserting instead that this “life” system is certainly one that information can flow into. If the outside is far enough outside our view, then one might just say that, within view, there is no rule against information spontaneously coming in or emerging. With more care, one might posit a black-box-like reservoir that can pump information in or out as necessary, analogous to the heat or particle reservoirs used to model thermodynamic systems.
If I were to place a wager, I’d bet that few creationists think of information in these most abstract terms to begin with. If they too, as we would for this argument, focus their view on some open and small system, and still maintain that information cannot spontaneously emerge, I agree it is a fair counter to challenge them to demonstrate exactly what would prevent information from being generated naturally. I chose instead to consider random events as transfers of information rather than production, and to grant that in some sufficiently broad sense there may indeed be no way to genuinely “create” information. My argument, then, is that if there is no way de novo information can come about, then either all of the information the universe currently has already existed, in one form or another, when the universe came to be as we know it (be that through a creator or not), with no need of a creator to produce this information post facto; or the creator chose to launch a universe with some different amount of information, only to have to change it sometime later due to what I can only assume must have been a lack of foresight on their part about how much information they wanted the universe to ultimately have.
This is a very odd claim since the video is only one in a series. This particular video is focused on RNA self-replication, but to claim that the authors pretend that this is the entire field of OOL research is comical.
It’s also curious that you would say: “[They pretend] that nobody in the field is aware of the unsolved problems” since they specifically refer to preeminent OOL researchers, such as Dr. Jack Szostak, when talking about the unsolved problems. In fact, they show that it is Szostak who is bringing up the problems.
Basically, two paragraphs of “Nuh-uh”.
This paper doesn’t help you out, for two reasons.
One is that the longer sequences found were adsorbed onto (adhering to) the mineral and were not available to be replicated. They were being protected from replication.
The second is that they weren’t even replicating the RNA. They used a recombinase ribozyme to join shorter RNA fragments (which they supplied) together. Spiegelman’s monster, by contrast, was produced by a replicase.
Simply joining random fragments together willy-nilly is not what you need in an OOL scenario. You need some sort of replication process.
You are misrepresenting the paper.
The paper is not saying that folded RNA can be unfolded and used for replication; it actually confirms the problem of folded RNA ribozymes being poor templates for replication.
When the paper talks about a lower melting point, they are talking about separating the two strands in the process of non-template directed replication.
And not only does the video not disregard the paper, it actually refers to it in a footnote.
You should also consider the implication of “non-heritable” in the title of the paper.
You are misrepresenting Tour. He isn’t saying that these RNAs wouldn’t work as ribozymes; he’s saying that the 2’-5’ linkage would be a problem for template-directed replication. It would “gum up” (my term) the works.
OK, so what functions do ribozymes perform (those found in living organisms)?
The “gish-gallop” of problems that you call “supposed” actually comes from Jack Szostak himself. At 7:00 of the video, they show (a cartoon of) Szostak and one of his papers. The video also links to another of Szostak’s papers, which is the second one you linked to. You know, the one you said they were disregarding. That paper mentions the problems as well. Did you really read your paper?
Not sure what you’re trying to say here, but if you think that the problems listed in the video are irrelevant, then you are just fooling yourself. Szostak certainly thinks that they are a problem.
The projection in this statement is simply staggering.
Mod edit: fixed quote formatting.
However, I’m not really convinced that the narrowing down of parameters represents a loss of information in the system.
This I do not quite understand. Suppose system A is fully characterized by α parameters, and system B by β, and, without loss of generality, let’s assume α < β. To fully describe the state of system B, I need to specify more individual values than I do to fully describe the state of system A. Or, put another way, I need to measure more values before I know all there is to know about system B than before I know all there is to know about system A. Or, perhaps, rather than “need”, one could say I can harvest more values out of B before I know all about it, thus exhausting its potential to tell me anything about itself, than I can harvest out of A before A is done revealing itself to me. If this does not amount to saying that system B is richer in information than system A, then what else could we mean by statements of that sort?
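A back-of-the-envelope version of that comparison (a toy Python sketch; the specific parameter counts and the 16-values-per-parameter resolution are assumptions of mine): if each independent parameter can take one of k distinguishable values, pinning down a state takes \log_2 k bits per parameter, so the system with more parameters simply has more bits to yield.

```python
import math

def bits_to_specify(n_params, values_per_param):
    """Bits needed to single out one state among
    values_per_param ** n_params equally likely states."""
    return n_params * math.log2(values_per_param)

alpha, beta = 3, 7  # hypothetical parameter counts, alpha < beta
print(bits_to_specify(alpha, 16))  # 12.0 bits to pin down system A
print(bits_to_specify(beta, 16))   # 28.0 bits to pin down system B
# Each measured parameter removes log2(16) = 4 bits of remaining
# uncertainty, so B has more to reveal before it is exhausted.
```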
You are inferring the cause or origin of the rRNA in the ribosome from its function or effect. The function of the rRNA doesn’t explain how the rRNA could have arisen.
Also worth mentioning is that rRNA is transcribed from DNA. I would love to hear how you think rRNA could have transitioned from being produced by some sort of RNA replication to being transcribed from DNA.
Is that really the best evidence for the RNA world?