@MrAnderson It’s worth noting that Genetic Entropy is a newer take on the old Second Law of Thermodynamics Prevents Evolution argument. The 2LoT argument fails because living things receive energy (ultimately from the sun) to maintain themselves and grow. The GE argument is very much the same; energy is expended to keep the genome from decaying.
I don’t think it’s the same at all.
It’s true that GE is not formulated in terms of physics or information theory, but Sanford didn’t call it Genetic Decay, he called it Genetic Entropy. I think it’s apparent he equates the loss of genetic information through deleterious mutations to entropy.
To make the connection between GE and the 2LoT argument would require Sanford to define what he means by “information”. He won’t do that, of course, because then GE automatically fails. You may recall this previous discussion on the unwillingness of Creationists to define information.
I should mention that Sanford probably doesn’t know how to define information in the context of GE. I’m pretty sure a definition in terms of Shannon Information would show the whole concept is invalid.
I wonder if there might be a proof of that …
I shuffled this to a side discussion to keep it from getting lost among other discussion.
It would seem to me to be a more specific version of the general Second Law argument. Where the Second Law argument speaks in broad, general terms of just “things” decaying into chaos (setting aside for the moment whether that even is a consequence of actual principles of thermodynamics in any circumstance, let alone if the evidence is consistent with that prediction), the Genetic Entropy argument says that, specifically, as random changes of a genome accumulate, its entropy increases with respect to some prior reference state and, presumably, that is a problem for the viability of organisms built from that altered genome.
While the Genetic Entropy argument references more a notion of entropy rooted in some stochastic considerations, rather than the traditional theory of heat and energy, thermodynamics in this latter sense can nowadays (i.e. for the last one-and-a-half centuries or thereabouts, depending on how much of the translation we would consider a sufficient coverage of the overall field) be completely reconstructed out of just those same foundations of probability theory. Still, that is not to say that the arguments are fundamentally related. For it is not necessarily the case that either interlocutor presenting or faced with one or both of these arguments would understand that the topics are so linked, and as such they could justifiably treat them as distinct.

the Genetic Entropy argument says that, specifically, as random changes of a genome accumulate, its entropy increases with respect to some prior reference state and, presumably, that is a problem for the viability of organisms built from that altered genome.
That’s just sticking entropy into the argument as a synonym for fitness, with no connection to actual, physical entropy. Further, this phenomenon, under whatever name you choose, is potentially a real thing, depending on the distribution of mutational effects and the way selection works. (We know it isn’t a real thing because species are still around after 4 billion years.)
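If it helps to see why the outcome hinges on those two things, here is a deliberately crude toy simulation (entirely my own construction with arbitrary parameters, nothing from Sanford’s model): when deleterious mutations land on a population with fitness-proportional reproduction, mean fitness holds up; switch selection off and it collapses.

```python
# Toy sketch: mutation accumulation with and without selection.
# All parameters (population size, mutation rate, effect size) are arbitrary.
import random

def generation(fitnesses, mut_rate, effect, select):
    """One generation: (optionally fitness-weighted) reproduction, then mutation."""
    n = len(fitnesses)
    if select:
        parents = random.choices(fitnesses, weights=fitnesses, k=n)
    else:
        parents = random.choices(fitnesses, k=n)
    offspring = []
    for f in parents:
        if random.random() < mut_rate:      # a new deleterious mutation
            f *= (1.0 - effect)
        offspring.append(f)
    return offspring

random.seed(1)
for select in (True, False):
    pop = [1.0] * 500
    for _ in range(2000):
        pop = generation(pop, mut_rate=0.1, effect=0.05, select=select)
    print(f"selection={select}: mean fitness after 2000 generations = {sum(pop) / len(pop):.3f}")
```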

It’s true that GE is not formulated in terms of physics or information theory, but Sanford didn’t call it Genetic Decay, he called it Genetic Entropy. I think it’s apparent he equates the loss of genetic information through deleterious mutations to entropy.
It’s perhaps a metaphor rather than a real claimed relationship. The supposed mechanism has nothing to do with physical entropy.

I should mention that Sanford probably doesn’t know how to define information in the context of GE. I’m pretty sure a definition in terms of Shannon Information would show the whole concept is invalid.
I wonder if there might be a proof of that …
I am not a scientist, so I won’t pretend to tell you what the right answer is on that stuff (ok, fine, I would and have, but I’m open to being corrected…) But what I do know a thing or two about is proving a point.
In this case, I think I do have something valuable to offer.
The way to go about proving this point in a cross examination or a court, or even a debate, is as follows:
Q: You can’t tell me what information is, but you have been able on a number of occasions to tell what new information isn’t, right? Lactase persistence, for example, is not an example of new information, right?
Q: And lactase persistence is a particular (whatever kind of mutation), and you would say that any mutation like that is not an example of the evolution of new information, right?
Q: Even though it adds a function? And even though it is beneficial in context? And mutations like this can also either increase or decrease the complexity of the genome, right?
Q: Similarly, you don’t think (pick another one, citrate-plus in the LTEE, for example) is an example of the evolution of new information, right? And citrate-plus (or whatever) is a (whatever kind) of mutation, and you would say that those kinds of mutations also are not the kind of thing that adds new information, right?
You would agree that it adds function though? And you would agree that in this particular circumstance, it has a benefit to the organism? And it could increase or decrease overall complexity?
…(go through all of the different kinds of mutations, giving examples and asking at each point whether that example qualifies as an example of the evolution of “new information”, whether it nonetheless adds function, whether it is beneficial in the organism’s environmental context, and whether it increases or decreases complexity)
Once done, you say: “Right, so none of these types of mutations that we know about can individually add ‘information’ as you define it. What combinations, if any, do you believe would qualify as adding ‘new information’?”
(There will be none he knows about)
"So, mutations can occur, in many different ways, which add functions, which are at least circumstantial beneficial, and which increase or decrease the complexity of the organism, without qualifying as new information, right?
“So what is it about ‘new information’ that in your view makes it in any way necessary for evolution?”
“It seems, based on what we just discussed, that a series of circumstantially beneficial mutations that increase complexity and add new functions could occur, and cause significant morphological changes over time, without any ‘new information’ as you define it being added. Isn’t that true?”
“It is possible, based on these mechanisms, for a single-celled organism to evolve into Homo sapiens without the addition of any ‘new information’ as you define it, simply by adding functions through changes like the ones we have just been discussing, isn’t that true?”
“You are aware of no mechanism which would require the addition of ‘new information’ in order for an organism to undergo morphological changes over time, are you?”
“You are further not aware of any fossil specimens of creatures which had ‘more information’ in their genomes such that they were clearly superior to their present-day descendants, are you?”
“So, sitting here today, not only can you not define ‘new information’, you are not aware of any examples of such information, and you are unable to show me any evidence that information as you define it exists, or that it plays any role in evolution. Isn’t that true?”
If you would like to see an example of me actually applying this kind of technique to a resisting opponent, and getting damaging admissions, see my debate with Kent Hovind.
Even he is unable to slip away when this method is applied carefully and properly. That being said, he is one of the slipperiest, most resistant guys out there, so it does take me over an hour to pin him down, but this does work, I assure you.
Apologies for my clumsy two-part attempt to move these comments to a separate thread. The mobile app makes it possible to move things around on my phone, but it’s a lot easier from the desktop.

Apologies for my clumsy two-part attempt to move these comments to a separate thread.
It’s understandable. Bite off one of your fingers as a form of penance, and then all is forgiven.

It would seem to me to be a more specific version of the general Second Law argument. Where the Second Law argument speaks in broad, general terms of just “things” decaying into chaos (setting aside for the moment whether that even is a consequence of actual principles of thermodynamics in any circumstance, let alone if the evidence is consistent with that prediction), the Genetic Entropy argument says that, specifically, as random changes of a genome accumulate, its entropy increases with respect to some prior reference state and, presumably, that is a problem for the viability of organisms built from that altered genome.
Yes. Sanford is equating function with information, but has not suggested any way of quantifying what this means.

While the Genetic Entropy argument references more a notion of entropy rooted in some stochastic considerations, rather than the traditional theory of heat and energy, thermodynamics in this latter sense can nowadays (i.e. for the last one-and-a-half centuries or thereabouts, depending on how much of the translation we would consider a sufficient coverage of the overall field) be completely reconstructed out of just those same foundations of probability theory.
Yes, this was my line of thinking. There is no 2nd Law of Information Theory. Instead there is an inequality (a form of the data processing inequality) stating that it is not possible to increase the mutual information between two variables X and Y by applying any deterministic function. Adding information from a random function is possible.
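To make that concrete, here is a minimal sketch (my own toy distribution and function, not anything from Sanford or from the literature): it computes I(X;Y) exactly for a small discrete joint distribution, then applies a lossy deterministic function to Y and checks that the mutual information does not go up.

```python
# Toy demonstration of the inequality described above.
# The joint distribution and the function g are made-up placeholders.
import math

def mutual_information(joint):
    """joint: dict mapping (x, y) -> probability. Returns I(X;Y) in bits."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Small joint distribution over X in {0,1} and Y in {0,1,2,3}
joint_xy = {(0, 0): 0.25, (0, 1): 0.15, (1, 2): 0.35, (1, 3): 0.25}

def g(y):
    # a lossy deterministic function of Y (parity: collapses {0,2} and {1,3})
    return y % 2

joint_xg = {}
for (x, y), p in joint_xy.items():
    joint_xg[(x, g(y))] = joint_xg.get((x, g(y)), 0.0) + p

print(f"I(X;Y)    = {mutual_information(joint_xy):.4f} bits")   # ~0.971
print(f"I(X;g(Y)) = {mutual_information(joint_xg):.4f} bits")   # ~0.001, never larger
```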
If Sanford tries to formulate GE in terms of Shannon Information, it immediately loses any connection to function. That makes it useless for Sanford’s purpose, and for any other useful purpose as far as I can see. (I’ve been thinking about this for a while.)
From what I’ve seen of Sanford’s argument, it has nothing to do with information. It’s all about fitness and about the distribution of fitnesses of mutations.

That’s just sticking entropy into the argument as a synonym for fitness, with no connection to actual, physical entropy.
Whether the logarithm of the size of the phase space volume corresponding to a macrostate is any “synonym for fitness” I am comfortable leaving to the good judgement of biologists. It is most assuredly a form (if not definition) of entropy, and a sequence getting randomly garbled (with respect to a given reference state) most assuredly increases it, whatever biological consequence this has or not. An argument that states that the subspace of sequences with lower fitness than a reference one is much larger than the subspace of sequences with equal or greater fitness, and that a random displacement from a reference state is therefore more likely to be towards a less fit one – again, setting aside whether this is an accurate modeling of mutation, and if anyone willing to use the argument is willing or able to articulate it so cleanly – is an argument structurally no different than what justifies the macroscopic formulation of the Second Law of thermodynamics.
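To show how purely combinatorial that structure is, here is a toy counting sketch of my own (not Sanford’s, and with “fitness” crudely stood in for by matches to a reference sequence): of all single-letter substitutions, far more move away from the reference than toward it, simply because more such neighbours exist.

```python
# Count the single-substitution neighbours of a sequence relative to a fixed
# reference. All numbers are arbitrary illustrative choices.
ALPHABET = 4   # e.g. four nucleotides
L = 100        # sequence length
m = 90         # positions currently matching the reference

# At each matching position, every one of the (ALPHABET - 1) substitutions breaks the match.
fewer_matches = m * (ALPHABET - 1)
# At each mismatched position, exactly one substitution restores the match...
more_matches = (L - m) * 1
# ...and the remaining (ALPHABET - 2) substitutions leave the match count unchanged.
unchanged = (L - m) * (ALPHABET - 2)

total = fewer_matches + more_matches + unchanged
print(f"fraction of neighbours with fewer matches: {fewer_matches / total:.3f}")  # 0.900
print(f"fraction with more matches:                {more_matches / total:.3f}")   # 0.033
print(f"fraction unchanged:                        {unchanged / total:.3f}")      # 0.067
```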
As I said, it may well be that either the presenter or receiver of the Genetic Entropy argument (or both, as it were) is unaware that this statistical definition of entropy liberally used in information theory is not meaningfully distinct from the entropy of thermodynamics, and that their accurate use of the vernacular is purely incidental. It may even be that the presenter equates entropy with unfitness or some other vague notion of badness, and insofar as they have no clue what they are talking about, indeed, their argument has nothing to do with thermodynamics, or the creationist arguments allegedly leveraging the Second Law thereof. Still, I submit that the participants’ ignorance, though herewith accounted for, does not erase the connection that by all means ought in principle exist, insofar as there is a very tight link indeed between Shannon’s entropy and Boltzmann’s.

Whether the logarithm of the size of the phase space volume corresponding to a macrostate is any “synonym for fitness” I am comfortable leaving to the good judgement of biologists.
It isn’t. But is Sanford talking about the logarithm of the size of the phase space volume?

Still, I submit that the participants’ ignorance, though herewith accounted for, does not erase the connection that by all means ought in principle exist, insofar as there is a very tight link indeed between Shannon’s entropy and Boltzmann’s.
But is Sanford talking about Shannon’s entropy at all? His use of “entropy” is strictly metaphorical.

But is Sanford talking about the logarithm of the size of the phase space volume?
If any part of his argument is that moving through the space of genetic sequences is more likely to change some observable (say, fitness) one way (say, decreasing) than the other, specifically because more genetic sequences exist with that observable relating in this way to the reference value, then, essentially, yes. If not, then I guess I misunderstood that by Genetic Entropy argument we mean specifically and only Sanford’s version that shares nothing with an argument like that but the misleading name.

If any part of his argument is that moving through the space of genetic sequences is more likely to change some observable (say, fitness) one way (say, decreasing) than the other, specifically because more genetic sequences exist with that observable relating in this way to the reference value, then, essentially, yes.
I don’t think that’s any part of his argument. At least it’s not an important part. If we reduced the argument to that sort of thing, we would have to ignore selection entirely.

Q: You can’t tell me what information is, but you have been able on a number of occasions to tell what new information isn’t, right? Lactase persistence, for example, is not an example of new information, right?
I started to answer this as if you were asking me, but realized you weren’t asking for a statistics lecture.

I am not a scientist, so I won’t pretend to tell you what the right answer is on that stuff (ok, fine, I would and have, but I’m open to being corrected…)
For your benefit, the mathematical definition of Shannon Information is the variability of messages that might be sent from a sender (a probability distribution). In this sense adding randomness is ADDING more information. Shannon Information has nothing to do with the meaning of the message; it’s a measure of the bandwidth needed to communicate.
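A minimal sketch of that sense of “information”, with toy probabilities of my own choosing: the entropy H(X) = −Σ p·log₂ p of a source goes up as the source gets more random, i.e. a noisier source needs more bits per symbol to communicate.

```python
# Shannon entropy H(X) = -sum_i p_i * log2(p_i), in bits per symbol.
# The two example distributions below are made up purely for illustration.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

predictable_source = [0.97, 0.01, 0.01, 0.01]   # almost always sends the same symbol
random_source      = [0.25, 0.25, 0.25, 0.25]   # maximally "random" over four symbols

print(shannon_entropy(predictable_source))  # ~0.24 bits/symbol
print(shannon_entropy(random_source))       # 2.0 bits/symbol -- more randomness, more "information"
```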
Sanford is using “information” in the sense of common usage where messages have meaning. Here adding random errors may interfere with the message being communicated.
There is also Functional Information (Szostak 2003), which has nothing to do with the physical or statistical meaning of information. Sanford could potentially use this to quantify degrees of function, but he doesn’t, and he won’t. Prevarication is to his benefit.
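For anyone curious, Szostak’s measure is easy to state, as I understand it: the functional information associated with an activity threshold is −log₂ of the fraction of sequences that meet or exceed it. A tiny sketch with made-up numbers:

```python
# Functional information (Szostak 2003), as I understand it:
# FI(E) = -log2( fraction of sequences with activity >= threshold E ).
# The example count below is an invented placeholder, not real data.
import math

def functional_information(n_meeting_threshold, n_total_sequences):
    return -math.log2(n_meeting_threshold / n_total_sequences)

# e.g. if 1 in 10**12 random sequences performed the function above threshold:
print(functional_information(1, 10**12))  # ~39.9 bits
```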

It’s understandable. Bite off one of your fingers as a form of penance, and then all is forgiven.
OUCH!

If you would like to see an example of me actually applying this kind of technique to a resisting opponent, and getting damaging admissions, see my debate with Kent Hovind.
I will look that up!

Even he is unable to slip away when this method is applied carefully and properly. That being said, he is one of the slipperiest, most resistant guys out there, so it does take me over an hour to pin him down, but this does work, I assure you.
The other dodge available to Sanford is to not answer questions, or to only respond to those he chooses.

I started to answer this as if you were asking me, but realized you weren’t asking for a statistics lecture.
Thank you for the information about information. Though you are right that I was not asking for a lecture on information theory in this instance, I very much do appreciate the wealth of knowledge that you and your colleagues are generously sharing with me, so in general (especially if I get something wrong) lecture away!
I hope you will not take it amiss if I lecture as well about the area in which I have some small knowledge, that is, pinning these guys down. There is technique to it, and it is a learnable skill. It is also not something that scientists are generally trained to do. While you guys are trained in argument (and very good at it, I might add), you are used to arguing with someone who is an honest interlocutor. That is, generally while scientists may disagree, they are all interested in finding the truth and understanding the world around them.
Creationists do not generally operate that way, to the endless frustration of their opponents. If something does not fit with their worldview, or it is “bad” for “their side” of the argument, they will not acknowledge it or admit it if they can get away with it. They will deliberately misunderstand you, or take you out of context, or change what they were saying if they think you are getting the better of them. They will try and change the subject if they are getting cornered, or distract you from areas where they are vulnerable.
In short, they behave like litigants in a lawsuit. That’s where my expertise comes in, because that is the kind of person that I am used to trying to get information out of. As I said, there is technique to it, and you are seeking a different goal than you would be in a scientific debate. You are not trying to prove that they are wrong. Everybody already knows YEC is wrong. The goal in these kinds of discussions is to force them to say things that they don’t want to say, and admit things that they don’t want to admit.
These can be facts that are inconvenient for their arguments, statements that are obviously ridiculous, or admissions that they “need more time to think about” a logical inconsistency or hole in their position.
Then, regardless of which one it is, you can use those statements against them, again and again in future discussions, to limit the degree to which they discuss these topics, or limit the degree to which they can rely on their simplified talking points.
For example, Kent Hovind is very fond of saying that “dogs produce dogs, without exception. There has never been an instance in the history of the world where a dog has produced a non-dog.”
Over the course of several hours, I got him to admit:
(1) If an animal changes enough, that is, if it develops significantly different morphological features and is unable to interbreed with the ancestral population, this can be an instance of an animal turning into another kind.
(2) The reason that it is ridiculous for me to think I am related to an amoeba is that single-celled creatures like amoebas and multicelled creatures like me are very morphologically different. It is silly to think one could turn into the other.
(3) If the cells in my arm were to decide they did not want to be a part of my body any more, and decided to start living as single-celled creatures, that would qualify as a new kind.
I then told him that this is what cancer is (at least some types of cancer). The genes within a cell that code for co-operation break, and the cell starts living as a single-celled creature on its own.
At that point, he said that this did not really count if the cell could not live on its own outside the body.
I then presented him with evidence that there existed a venereal disease in dogs called CTVT (canine transmissible venereal tumour), which is actually a form of transmissible cancer. A dog’s cancer cells evolved to become parasites and spread like a pathogen to other dogs.
I asked him whether he believed dogs and venereal diseases were different “kinds” of animals.
He says he needs time to think about it.
Now, that question can follow him forever, any time he says “dogs produce dogs”. Also, any time I speak to him again (no plans to, by the way) it’s one of the first things I will bring up. It becomes a millstone around his neck.
That is the goal in these kinds of interactions.

(We know it isn’t a real thing because species are still around after 4 billion years.)
Well, it could be a real thing because most species that evolved during those years are extinct. However, there’s no real evidence that it’s a real thing. The more relevant criterion for extinction is decreased genetic variation, not increased.
Why aren’t any IDcreationists arguing against preserving species by outcrossing if they truly believe Sanford’s notion? Shouldn’t they be arguing instead for more inbreeding and isolation?