Brian Miller: Thermodynamics and the Origin of Life

Design

#22

Hmm, this sentence puzzles me. In thermodynamic equilibrium at constant temperature and pressure, the probability of a process happening is proportional to \exp(-\Delta G/kT), where G is the Gibbs free energy, defined as G = H - TS, so that \Delta G = \Delta H - T\Delta S at constant temperature, with H the enthalpy, T the temperature, and S the entropy.

If an arrangement increases the enthalpy while also decreasing the entropy, it will be thermodynamically disfavored, because both the increase in enthalpy and the decrease in entropy drive \Delta G upward.

How do you get the hydrophobic effect then? Sorry if this is a stupid question, I have not done any biology or even chemistry for ~10 years.
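(A quick numeric sketch of how the hydrophobic effect escapes this apparent paradox: the entropy of the *water* increases when a hydrophobe's ordered hydration cage is released, so the total \Delta S can be positive even when \Delta H is. The values below are purely illustrative assumptions, not measured data.)

```python
# Illustrative sketch of the hydrophobic effect's thermodynamics.
# Assumed (not measured) values: aggregating a hydrophobe costs a little
# enthalpy (dH > 0), but releasing ordered "cage" water raises the total
# entropy (dS > 0), so dG = dH - T*dS can still be negative.
dH = 2_000.0  # J/mol, assumed unfavorable enthalpy change
dS = 20.0     # J/(mol*K), assumed favorable total entropy change

for T in (250.0, 300.0, 350.0):  # K
    dG = dH - T * dS
    print(f"T = {T:.0f} K: dG = {dG:+.0f} J/mol -> "
          f"{'favorable' if dG < 0 else 'unfavorable'}")
```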


(S. Joshua Swamidass) #23

Reading over my comment, I said part of it wrong! I’ll fix it later tonight, or maybe you will before I get to it.


(Arthur Hunt) #24

As we are seeing in this discussion, there are many ways to approach this subject. Many processes in living cells are thermodynamically unfavorable, in isolation. Usually, these are coupled to favorable processes so as to render things favorable overall. This also usually occurs in situations where there is constant flux through a system, and thus the system is not at thermodynamic equilibrium. This is what Morowitz talks about. Whether or not a particular reaction involves increases or decreases in entropy is beside the point.

I suspect that you probably know this, and that we are talking past each other because physicists and biochemists speak different languages. I apologize if I am stating the obvious and coming across as a bit impatient or condescending.

Maybe. I expect that someone with one of the first microscopes would have looked at Fox’s protocols and said otherwise. Moreover, if one starts by asking whether the properties I listed are associated with living cells, the answer would almost certainly be yes. It is just as defensible, IMO, to call these what an 18th-century microscopist might have as what a 21st-century geneticist would.

But this is really a different and interesting debate for another time.


(Arthur Hunt) #25

Great suggestion.


(S. Joshua Swamidass) #26

I am unconvinced by that paper, and it seems to overstate what it means for abiogenesis.


(Arthur Hunt) #27

I liked the unconventional ideas, and the fact that physicists are thinking along lines that have the potential to align more closely with how biologists think about these things.


(Matthew Pevarnik) #28

Granted a physicist is perfectly fine saying:

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

A more recent paper of his (under revision), on topics like the design of conditions for the emergence of self-replicators, could, I think, help the community make progress as a whole. Suppose the self-replicating step becomes an area he contributes to: abiogenesis has many steps that must occur before the first ‘living’ organism. There is definitely some potential to supplement and orient OOL researchers working on the topic. Does a physicist’s toy model of entropy solve, or even make a dent in, some outstanding research questions? Well, it would be cool if it did (though we’re a long way from that), and pop-sci articles tend to be a bit flashy in their titles.

Perhaps it better helps address the question of this thread, and I think it can be submitted as further evidence that abiogenesis does not have a problem with thermodynamics and entropy.


(S. Joshua Swamidass) #29

That is an example of absurdity that does more to create confusion, and even anger, than anything else. I’m no ID advocate, but I entirely understand why an ID advocate would be exasperated by that quote being embraced by other scientists and by the media.

It is nonsense, but nonsense with a flourish that flatters. It is not helpful.


(Matthew Pevarnik) #30

Was it a helpful comment for those who reject any scientific progress on the topic of abiogenesis? Certainly not. I think most scientists don’t really care what a community thinks when that community has contributed virtually nothing to our understanding of the natural world beyond ‘y’all can’t explain this and never will, therefore intelligent designer.’

My blood pressure started rising the other day when I casually did an internet search for ‘homochirality problem.’ Is the creationist/ID community working to solve anything related to outstanding problems in science? One could get the impression that the answer is no. Since they don’t seem to be interested in solving any unknowns, but rather in positing supernatural explanations, why should the scientific community even bother tailoring anything to their specific demands? Perhaps the only thing the scientific community could learn is how to speak to our culture more broadly. Maybe like this:
These Scientists Formed a Fortnite Squad to Teach Players About Climate Change (Gizmodo / Earther)


(Jordan Mantha) #31

I’m interested in hydrophobicity. My biology colleagues talk about it all the time, but I don’t teach it explicitly in my general chemistry or physical chemistry courses. I may need to think about this a bit more, as I work with a lot of biology majors. We do talk about intermolecular forces and entropy in forming solutions.

Is it really entropic? I would have imagined it would be the enthalpy difference, not the entropy difference, that dominates.

It’s really the entropic and not the enthalpic term that dominates? All liquid-to-solid phase transitions should lower the entropy. That water expands when it freezes seems to have more to do with the hexagonal network of hydrogen bonds that locks the water molecules into place than with entropy.

Okay, time for me to show my ignorance, I guess. I would have guessed that the perfectly separated mixture would have the lower entropy. I’m using the Boltzmann definition of entropy (S=k \ln\Omega), and I would think the number of microstates, \Omega, in a perfectly separated mixture would be less than in a perfectly mixed one. Can somebody help me out?

I can see how they would be massive generators of entropy, but it’s not clear to me that this is more entropy than the components would generate separately. Is that even desirable? I must admit, I normally think about free energy more than entropy specifically, because it’s the free energy that determines whether a chemical reaction will happen spontaneously.


(Arthur Hunt) #32

Maybe the illustrations in this short essay can help.


(Jordan Mantha) #33

@Art, thanks for the article. I’m gonna have to look at it in more detail, though. On first pass, they define the system as the hydrophobe in one part and then use that to talk about the entropy of a different system (the clathrate cage). That seems like a big mistake to me. They should use a consistent system, and it should be the hydrophobe + water.

Also, what’s with the emphasis on entropy? Whether a reaction will actually happen is determined by the Gibbs free energy, not the entropy alone. Usually the Gibbs free energy is dominated by the enthalpy; however, the article seems to indicate that with hydrophobic forces the entropy term is much larger. Again, I might be showing my ignorance of biological systems, but this just seems odd. It’s like when biologists tell me that they teach students that ionic bonds are weaker than covalent bonds. It makes me cringe a bit.


(Dr. Patrick Trischitta) #34

\Omega in a perfectly separated mixture would equal 1. \Omega in a perfectly mixed one would equal a large number.
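A tiny counting sketch of this point, using an assumed toy lattice (20 sites, 10 of each particle type; the sizes are arbitrary):

```python
import math

k_B = 1.380649e-23  # J/K, Boltzmann constant

# Toy lattice, sizes chosen arbitrarily: 20 sites holding 10 "A" and
# 10 "B" particles.  Perfectly separated (all A on the left half) is a
# single arrangement; free mixing allows any choice of 10 sites for A.
omega_separated = 1
omega_mixed = math.comb(20, 10)  # 184756 microstates

S_separated = k_B * math.log(omega_separated)  # S = k ln(1) = 0
S_mixed = k_B * math.log(omega_mixed)

print(f"Omega(mixed) = {omega_mixed}, S(mixed) - S(separated) = "
      f"{S_mixed - S_separated:.2e} J/K")
```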


(Arthur Hunt) #35

I think the clathrate cage = hydrophobe + water.

This goes back to Brian Miller’s assertion that the order seen in cells reflects low entropy, and how this leads to a conclusion of design in biology. I guess any discussion of entropy is going to wander into more general discussions of thermodynamics, then to Gibbs free energy, enthalpy, etc.


(Brian Miller) #36

Cell Membrane
The coalescing of lipids into a cell membrane is energetically favorable, so it happens spontaneously. However, the hard part is obtaining significant quantities of the fatty acids proposed for the initial cell in the first place. Since they are high-free-energy molecules, they will not be easily synthesized in any realistic prebiotic environment. In addition, fatty acids would normally precipitate out of solution in the presence of Mg^{2+}, which would have been present in the early oceans, so additional helper molecules would have been needed to allow stable membrane formation.

Compounding the challenge, the membrane would need to allow for the right molecules to enter and to block out contaminants. Such careful selection would require the right combination of lipids and other molecules. In addition, the membrane would have to be unidirectional. If molecules could enter and leave just as easily, the right molecules could not accumulate inside. Relatedly, active transport would be quickly required to maintain the protocell in a state of disequilibrium with the environment. In short, the practical challenges are far greater when examined in detail.

Free Energy Challenge
In terms of entropy, the article I mentioned by Davies estimates that the reduction in entropy associated with the concentration of key molecules in a yeast cell is around \Delta S = -14.4 \times 10^{-14} J/K. Additional reductions in entropy result from the formation of the macromolecules and other cellular structures. The formation of the cell membrane is not a challenge if the lipids are available in high concentration, but synthesizing and concentrating them also represents a reduction in entropy. I have never encountered an OOL researcher who claims that the formation of a cell does not represent a reduction in entropy.

The cell also represents an increase in enthalpy, which Morowitz estimated to correspond to an absorption of about 10^{-9} joules of energy from the environment. Scaled up, this would be like a bathtub of room-temperature water absorbing enough heat from its surroundings to start to boil. The entropy and the enthalpy (energy) are often combined into the free energy, and all spontaneous processes move from higher to lower free energy. However, OOL requires basic chemicals to move from lower to higher free energy, which essentially never happens spontaneously. Morowitz calculated the probability of such an event near equilibrium to be around 10^-11.
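(Plugging the figures quoted here into \Delta G = \Delta H - T\Delta S is straightforward back-of-envelope arithmetic; the combination below is my own illustration, and the 300 K ambient temperature is an assumption.)

```python
# Back-of-envelope combination of the quoted whole-cell figures
# (per cell, not per mole).  The temperature is an assumption.
k_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, assumed ambient temperature
dS = -14.4e-14       # J/K, Davies' entropy reduction estimate (as quoted)
dH = 1.0e-9          # J, Morowitz's enthalpy increase estimate (as quoted)

dG = dH - T * dS           # both terms push the free energy up
exponent = dG / (k_B * T)  # the Boltzmann weight would be exp(-exponent)

print(f"dG = {dG:.3e} J, dG/kT = {exponent:.2e}")
# exp(-exponent) underflows any float: near equilibrium such a
# fluctuation is hopelessly improbable, which is the point above.
```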

Morowitz hoped that the solution to OOL would lie in non-equilibrium dissipative systems, where energy and/or mass enters a system and then leaves. The flow results in cycles which can generate order, such as a tornado or the roll patterns in boiling water. Everyone in my graduate research group worked on such systems. The problem is that the order seen is completely different from the order required for life. In particular, the addition of energy into a system results in an increase in entropy:

\Delta S_t = Q/T + \Delta S_i

where \Delta S_t is the total entropy change, Q the heat entering the system, T the temperature, and \Delta S_i the entropy generated inside the system.

The addition of heat does increase the probability of higher-energy states becoming occupied, but it also dramatically increases the entropy, since out-of-equilibrium systems develop gradients in temperature, concentration, velocity, and other variables. And these gradients result in entropy production, moving the system away from a state conducive to life. This challenge was formalized by Jeremy England’s work, which does not help OOL but severely challenges it. Specifically, he demonstrated that systems driven far from equilibrium tend toward states of higher entropy and greater energy dissipation, which is the exact opposite of what is needed.


(Brian Miller) #37

Solution to Life: Engines and Information
The way life overcomes the free-energy challenge is that it contains highly sophisticated machinery which transforms one form of energy into high-energy molecules (e.g., ATP) that provide the energy needed to drive energetically unfavorable reactions. These driven reactions overcome the processes that constantly push the cell toward higher entropy. In addition, information-rich enzymes couple the breakdown of these high-energy molecules to the target reactions, maintaining the metabolism.
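As an illustration of this kind of energetic coupling, here is a sketch using standard textbook values for hexokinase (glucose phosphorylation driven by ATP hydrolysis); these are general biochemistry numbers, not specific to any OOL scenario:

```python
import math

# Standard textbook example of energetic coupling (hexokinase).
# Standard-state values at pH 7, from general biochemistry:
R = 8.314   # J/(mol*K), gas constant
T = 310.0   # K, body temperature

dG_phosphorylation = +13_800.0  # J/mol: glucose -> glucose-6-P, unfavorable
dG_atp_hydrolysis = -30_500.0   # J/mol: ATP -> ADP + Pi, favorable

dG_coupled = dG_phosphorylation + dG_atp_hydrolysis  # -16,700 J/mol
K_eq = math.exp(-dG_coupled / (R * T))               # equilibrium constant

print(f"coupled dG = {dG_coupled:.0f} J/mol, K_eq = {K_eq:.0f} (>> 1)")
```

The unfavorable reaction alone would have K_eq far below 1; coupled to ATP hydrolysis through a single enzyme, the net reaction is strongly favorable.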

Positive Case for Design
Beyond the challenges of forming a cell, an equally significant point is that life demonstrates positive evidence for design, particularly in the requirements for a self-replicating system. The question of self-replication was addressed by NASA engineers who analyzed the minimal functional requirements. Parallel to this effort, biologists have analyzed the minimal requirements for a functional cell. Both groups converged on common components:

  • Large repositories of information and information processing
  • Manufacturing centers that construct all of the essential pieces
  • Assembly and installation processes
  • Energy production and distribution machinery
  • Automated repair and replacement of parts
  • Global communication and coordination with feedback control systems
  • Sensing of environment and calculation of needed responses
  • Self-replication which directs the duplication, distribution, and installation of every part.

All of these functions demonstrate high levels of goal direction, coordination, and foresight which are unmistakable signs of intelligent design. By “high levels” I mean numerous components, complex interconnections, high specificity requirements, and multiple interacting hierarchical levels.

One cannot appeal to selection for assistance since these functions are required before self-replication is possible. Appeals to pre-cellular replication are futile since prebiotic selection always selects against life’s origin. Specifically, any self-replicating entity will be selected only for efficiency of self-replication and against any other function useful for OOL. This driving tendency away from functional outcomes has been demonstrated in experiments on replicating viral RNAs such as “Spiegelman’s monster.”

The key challenge is that the arrangements of molecules in life transcend their chemistry and physics in the same way the parts in a car transcend the chemical and physical properties of metal, glass, and rubber. The reactions in metabolism would never occur naturally at reasonable rates if enzymes did not force them to move in the right direction. Life is only explainable by the information which defines its makeup and operations.

In addition, the simplest possible cell demonstrates top-down design which transcends its physical makeup. The goal of self-replication demands that the aforementioned functions exist. Each of those functions demands that certain sub-functions exist. For instance, self-replication requires that DNA is replicated, that all of the cellular components are duplicated, and that everything is divided equally between two cells. The replication of DNA demands the existence of multiple components, such as machinery to separate the strands, replicate both strands, and relieve the overwinding (supercoiling) of the remaining DNA that results from the separation. Note that I have not mentioned any biochemical details, which happen to differ in different types of cells. The functions preexist their embodied structures.

Moving down the hierarchy, the loosening of DNA during replication demands the existence of a molecular machine which can cut the DNA, loosen it, and then bind it back together. This entity is known as topoisomerase. Please see this video to appreciate its genius. It performs the functions I mentioned plus the following:

  • Opening and closing gates
  • Passing one DNA strand through the broken strand
  • Accessing energy from ATP to power actions

This enzyme is required for DNA replication before DNA is even long enough to encode the enzyme. It is over 1000 amino acids long, and it must perform every step properly, or it is useless. Can we really consider the possibility that an ancient ocean just happened to be filled with trillions of trillions of 1000-amino-acid-long chains, and one just happened to stumble across the right sequence, and it just happened to drift into a cell membrane which just happened to have all of the other machinery needed for cellular replication?


(S. Joshua Swamidass) #38

@bjmiller thanks for the thoughtful engagement here.


(Joseph Akins) #39

I highly recommend his 2017 talk at MIT. As one greatly impressed by the thinking of Ilya Prigogine, and with a personal fondness for entropy, I find Jeremy England’s approach the most promising. However, as an empiricist, I find his offhand comment toward the end on in silico “experiments” offensive, but admittedly intriguing.


(Dan Eastwood) #40

I recall that information theorist Gregory Chaitin also advocates in silico experiments.


(S. Joshua Swamidass) #41

As do I. Simulation is a fundamental part of science now, and in many ways it is a new class of experiment: The Role of Simulation in Science. The key point is that even with the correct laws in mind, we do not know the consequences of those laws without computing them. Likewise, simulations provide an important cross-check on our mathematics.