Brian Miller: Thermodynamics and the Origin of Life

Continuing the discussion from The Origin of Life: Can Science Show Intelligence Was Required?:

@dga471 and @PdotdQ, I wonder what your thoughts are on this. Brian has a PhD in physics, but I couldn’t make sense of these arguments. Perhaps you can.

The original post by @vjtorley and the resulting thread might be helpful to read: The Origin of Life: Can Science Show Intelligence Was Required?. I’ll talk to @bjmiller, and perhaps he can join in.

I don’t have the time to read all the articles now, so I will comment only on the first article, “Thermodynamics of the Origin of Life”.

My understanding from talking to my astrobiologist colleagues is that the statistical unlikeliness of life forming is indeed a puzzle. I am not an astrobiologist myself, so my views are somewhat simplistic, but the main problem as I see it is the following:

In a thermodynamic process that could create either a living or a nonliving thing, the ratio of probabilities is given by the ratio of Boltzmann factors:

P(living)/P(nonliving) = (g_living/g_nonliving)*Exp[-(E_living - E_nonliving)/kT]

where E_living is the energy of a living thing, E_nonliving is the energy of a nonliving thing, g_living is the number of states that can be considered alive, g_nonliving is the number of states that are considered nonliving, k is the Boltzmann constant, and T is the temperature of the system in which these processes occur (i.e. the temperature of the primordial muck).
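The ratio above is easy to evaluate numerically. Here is a minimal sketch; the values of dE and the degeneracies are made up purely for illustration, since, as discussed below, the real ones are unknown:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def ratio(g_living, g_nonliving, dE, T):
    """P(living)/P(nonliving) = (g_living/g_nonliving) * exp(-dE/(k*T)),
    where dE = E_living - E_nonliving."""
    return (g_living / g_nonliving) * math.exp(-dE / (k * T))

# Illustrative, made-up numbers: even a modest energy penalty at
# T = 300 K suppresses the living state enormously.
print(ratio(1.0, 1.0, 1e-19, 300.0))
```

Even with equal degeneracies, an energy penalty of order 1e-19 J gives a suppression of many orders of magnitude at 300 K; the exponential is doing all the work.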

Putting it back to the language he used in the article:

1. The Exp[-(E_living - E_nonliving)/kT] term is the one that suppresses the possibility of living things, due to living things having a larger energy than nonliving things (E_living > E_nonliving). I don’t know if that is true, but it seems sensible.
2. The (g_living/g_nonliving) term is where entropy enters the equation. Supposedly there are fewer states that we would call living than nonliving, so g_living/g_nonliving < 1. This illustrates that higher entropy states (higher g) are more likely to occur than lower entropy states.

Here are my issues with the statement that “Science Show Intelligence Was Required” for life:

1. First, a lot of these terms are unknown. There are many states that could be considered living, and many states that could be considered nonliving, but in the end the ratio g_living/g_nonliving is unknown. I don’t think that the space of allowable states is even known, which makes it hard to properly evaluate g_living and g_nonliving. Further, these different states have different E_livings and E_nonlivings. The point is that no rigorous calculation has been made to compute P(living)/P(nonliving). I understand that P(living)/P(nonliving)<1, but how small is debatable.
2. Next, to get the ratio of the number of living things/number of nonliving things, P(living)/P(nonliving) needs to be multiplied by the number of interactions that could produce living things/nonliving things. I am not sure how many interactions can happen in a span of time - this is a question for the biologists, but the number could be very large. Even if P(living)/P(nonliving) is small, if the number of interactions is large, it is still possible to produce living things.
3. It is not true that adding energy to the system does not help. If the energy (say sunlight shining into a primordial pond filled with muck) heats the system, increasing the temperature of the system, then P(living)/P(nonliving) is going to be driven closer to its maximum value of g_living/g_nonliving. To see this, set T->infinity in the equation above.
4. There is the whole debate about non-equilibrium thermodynamics in the article, which I will skip because I know next to no non-equilibrium thermodynamics.
5. Moving to more philosophical objections: this is a God-of-the-gaps argument. Just because it is currently a puzzle that life is statistically unlikely does not mean that in the future we won’t find a mechanism that explains this neatly.
6. I don’t think something that pushes P(living) up necessarily has to be equated to an “Intelligence”. It might be some non-random but thoughtless thing.
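Points 2 and 3 above can be checked numerically. The sketch below uses the same made-up energy penalty and degeneracy ratio as before, plus an entirely hypothetical per-interaction probability and interaction count:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
g_ratio = 1e-6     # hypothetical g_living/g_nonliving
dE = 1e-19         # hypothetical energy penalty E_living - E_nonliving, J

# Point 3: as T grows, exp(-dE/(k*T)) -> 1, so the probability ratio
# saturates at its maximum value, g_living/g_nonliving.
for T in (300.0, 3e3, 3e6, 3e9):
    print(T, g_ratio * math.exp(-dE / (k * T)))

# Point 2: the expected number of "living" outcomes is roughly the
# per-interaction probability times the number of interactions.
p_living = 1e-20        # hypothetical per-interaction probability
n_interactions = 1e25   # hypothetical number of interactions over deep time
print(p_living * n_interactions)
```

Under these invented numbers, heating drives the ratio toward its ceiling of g_living/g_nonliving, and a vast number of trials can still yield many "living" outcomes despite a tiny per-trial probability. Whether the real numbers work out this way is exactly what is unknown.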

In the first article in the series, Miller makes this brief statement: "The simplest functional cell compared to its most basic building blocks has both lower entropy and higher energy." This is quite completely wrong. W.R.O.N.G. Wrong.

This misunderstanding seems to be the basis for many of Miller’s arguments. IMO, this renders much (most) of the series irrelevant, even useless.

Sorry for being so harsh. But sometimes an egregious error needs to be called out. This is, IMO, such a time.


@Art, can you explain a bit? It does seem that, for example, proteins have lower entropy and higher enthalpy than a solution of their constituent amino acids. Lipid bilayers might spontaneously form and be approximately net zero. Any differential between the inside and outside of a cell would also be a reduction in entropy. A large number of molecules in the cell also have very high enthalpy relative to their elemental states.

So it does seem cells are lower entropy and higher enthalpy than their constituent parts. Of course we do not know precisely what the first cell looked like, but it seems a reasonable extrapolation to think it would be similar.

What am I missing @Art?

Physicists, @dga471, @PdotdQ and @bjmiller, can you help make sense of this?


I am also interested in what @Art meant. That cells have lower entropy and higher energy than their parts seems sensible to me. Here is a non-expert argument from the physics point of view: if they do not, then it is thermodynamically more likely to have interactions that produce cells than ones that do not. This is a problem in the opposite sense: the creation of life would be too easy, and there would be too much life.


To understand my statement, it helps to consider an example. Posed as a question - which has the greater entropy, a mixture of salad oil and water that is perfectly separated (extremely highly ordered) or one that is completely mixed (and thus extremely disordered)? The answer, of course, is the former. This is a case wherein the higher entropy state is the more ordered state. AND it requires no energy flux to spontaneously assume such a state.

This is the basis of hydrophobic interactions, which in turn are the basis of protein-protein interactions (and other macromolecular interactions) in a cell. Stated briefly, the macromolecules in a cell do not form large assemblies and structures in spite of entropy (or the second law), but because of entropy. (If you are having trouble with this, remember that one needs to account for ALL of the components of a cell. This includes solvent and solutes. Bound or constrained solvent contributes as much to the overall entropy status as what may seem to be highly ordered macromolecular complexes.)

Let me know if this clears things up.


There is the separate issue of peptide bond formation between the amino acids. How do you go from single amino acids to a peptide polymer? It is interesting to note that high-energy impacts are capable of producing peptides from free amino acids:

As you say, this is a separate issue from the argument Miller is making. He is saying, basically:

1. The amazing complexity we see in a cell involves a dramatic reduction in entropy.
2. This reduction in entropy cannot be explained by energy flow, but rather requires information.
3. Therefore design.

Point 1 is wrong.

For what it is worth, I do not think entropic considerations come into play when it comes to prebiotic polypeptide formation, but this is something that might be highly context-dependent (I would imagine). That is another discussion altogether.

This may be beyond the scope of this discussion, but the essays here reveal fascinating, even exciting aspects of the discussion of the origin of life. Enjoy.


Nice article on entropy and OOL:

Essential resource on free energy issue:

James Tour’s discussion on challenge of cell membrane:
http://inference-review.com/article/an-open-letter-to-my-colleagues


That does help explain what you mean. It reminds me that entropy is often non-intuitive, especially when solvent is involved.

@bjmiller, thanks for joining us.


Brian, do you agree with this quote (especially the bolded bit) from the first paper you point us towards?

At the physiological temperature T = 310 K this results in an entropy rate of change for a single cell in the range of 10^-14 J/K s. This can be compared to only 0.7 × 10^-17 J/K s of entropy reduction due to DNA transmitted information, i.e. less than one thousandth as stated above. This is not surprising since many other processes are at work to keep the cell in its metastable (low entropy) state. First of all, the membrane itself consisting of phospholipids comprises some 60% of the cell’s mass and presents a highly ordered structure requiring an entropy reduction to be put in place.

I believe misconceptions such as this lie at the heart of the issue here.


@bjmiller, it seems @Art has a really good point that I had missed. I’m curious about your thoughts.

This is interesting. I understand that what might look more ordered macroscopically might have more entropy microscopically (as you pointed out in the example of hydrophobic forces).

However, if, as you claim, cells are more thermodynamically favourable than their constituents, then as a physicist this leaves me puzzled. This is because the generation of cells in the lab would then be trivial: one would just need to heat up a concoction filled with the cell constituents and a cell would emerge. As I understand it, this is not true; cells cannot yet be made in the lab, and probably not by just heating up their constituents.

One could say that cells are energetically penalized compared to their constituents (even though they are entropically favoured). However, this brings us back to the point that cells are thermodynamically disfavoured compared to their constituents.

Is it a timescale issue? i.e. cells will emerge eventually but it takes an extremely long time? If so, one can increase the rate just by increasing the density of cell constituents in the solution.


So one of the non-intuitive things here is the phase transitions of water and how they affect entropy calculations. Too hot, and the entropic bonus to hydrophobic interactions goes away because of changes to the solvent. Too cold, and we see the same thing. Solvent makes biology work in some very strange and nonintuitive ways.


Are you saying that the entropy decrease in the cell is compensated by the entropy increase in the solvent?

Let me ask another naive physicist question:

We can enlarge the system to be the cell+solvent instead of just the cell. Then, in thermodynamic equilibrium, the probability of getting cells from interaction becomes proportional to g_living/g_nonliving, where g_living is the number of states of cell+solvent where a cell is produced, and g_nonliving is the number of states of cell constituents+solvent where a cell is not produced. As pointed out, the solvent states are necessarily not the same.

If it is impossible to just heat up a concoction (cell constituents+solvent) to produce living cells, then this suggests to me that g_living/g_nonliving is < 1 if we enlarge the system to include the solvent (assuming that states that produce cells are still energetically disfavoured compared to states that do not produce cells), even though g_cell/g_cell-constituent > 1, where g_cell is the number of states that are cells and g_cell-constituent is the number of states that are just cell constituents.

Wouldn’t this bring us back to square one? i.e. that while cells by themselves are entropically favoured compared to their constituents, their production is not entropically favoured.
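A toy counting sketch, with completely made-up state counts, shows how the two inequalities in the post above can hold at once once the solvent is included in the bookkeeping:

```python
# Toy, made-up state counts; nothing physical about these numbers.
g_cell = 1_000                  # subsystem microstates that count as "a cell"
g_constituents = 100            # subsystem microstates of free constituents

# Solvent microstates compatible with each subsystem macrostate:
g_solvent_with_cell = 10
g_solvent_with_constituents = 1_000_000

g_living = g_cell * g_solvent_with_cell
g_nonliving = g_constituents * g_solvent_with_constituents

print(g_cell / g_constituents)  # > 1: the cell alone looks entropically favored
print(g_living / g_nonliving)   # < 1: the enlarged system still disfavors cells
```

Whether the real solvent counts actually go this way (rather than the hydrophobic-effect direction, where assembly frees solvent) is precisely the question under debate here.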


I guess if “thermodynamically favorable” were only determined by entropy, I would also be puzzled. Of course, this might be very unlikely, as the universe would probably be decidedly different, and we wouldn’t exist as we do. (More to the point, you should not think I am claiming that living cells are “thermodynamically” favorable in the sense that, at thermodynamic equilibrium, they would exist. I most definitely do not mean this, and my remarks about entropy driving macromolecular assembly should not be taken this way.)

As far as the last two sentences, what would your reaction be if I told you that cellular entities that were “Electrotactic, Protometabolic (Catalytic), Aggregative, Protomobile, Osmotic, Permselective, Fissive, Protoreproductive, Conjugative, Protocommunicative, (and) Excitable” actually are known to arise by heating up “a concoction filled with the cell constituents”? I pointed to an introduction to this topic before, and I would encourage a reading of this essay.

I don’t want to look like I am kicking the can down the road, but Morowitz’ book (see Brian’s post above) addresses this nicely. (And, contrary to what one might presume by the fact that Brian pointed us towards it, it is actually quite favorable to the arguments I am making.) The title is a useful clue that helps to resolve some of these seeming paradoxes.


Thank you for clarifying. It is true that entropically favored does not mean thermodynamically favored, which is why I added my comment on the energy penalty: one could have thermodynamically disfavored processes that are entropically favored, if the energy penalty is large enough. Perhaps I won’t have the answer unless I read the Morowitz book, but is this the case for cells?

I have read this essay, and my understanding is that the entity Sidney Fox created is not a true cell, and perhaps not even a true protocell.

I’m not sure we can compute this for “cells,” as that term is too imprecise to be useful.

What @Art did very helpfully is point out a key paradox that traces to one of the great mysteries of science: hydrophobicity. It is a very reliable macroscopic observation, but just as the term implies (hydro = water) it seems idiosyncratic to water in many ways. At the atomic scale, we see it operating too. This is a key “force” driving a great deal of “ordering” within cells, but for a long time we did not even know what it was.

It is because of hydrophobicity that:

1. Most protein-protein interactions take place.
2. Cell membranes made of lipid bilayers are such good insulators.
3. Lipid bilayers form spontaneously in solution with water from constituent parts.

It turns out that our mental image of proteins floating around in space is all wrong. Instead, everything is actually in tight association with water molecules. Note that I said molecules. It turns out that to understand the hydrophobic “force” we have to understand how discrete water molecules interact with one another. The hydrophobic “force” emerges from these interactions as an entropic penalty incurred by limiting the degrees of freedom of solute-associated water molecules.

At room temperature, water is a liquid, but it is an ordered liquid, because hydrogen bonds spontaneously form networks (an enthalpic bonus); in liquid form, though, the network is unstable (an entropic bonus). This is, as I understand it, important for explaining another paradox of water: remember, water expands when it freezes, unlike just about everything else. But that is a discussion for another day.

Regarding the hydrophobic force, we find that

water molecules can form an extended three-dimensional network. Introduction of a non-hydrogen-bonding surface disrupts this network. The water molecules rearrange themselves around the surface, so as to minimize the number of disrupted hydrogen bonds. This is in contrast to hydrogen fluoride (which can accept 3 but donate only 1) or ammonia (which can donate 3 but accept only 1), which mainly form linear chains.
https://en.wikipedia.org/wiki/Hydrophobic_effect

That special arrangement increases the enthalpy relative to other configurations, but it decreases the entropy substantially. Now the waters are stuck in a cage around the solute.

The water molecules that form the “cage” (or clathrate) have restricted mobility. In the solvation shell of small nonpolar particles, the restriction amounts to some 10%. For example, in the case of dissolved xenon at room temperature a mobility restriction of 30% has been found.[16] In the case of larger nonpolar molecules, the reorientational and translational motion of the water molecules in the solvation shell may be restricted by a factor of two to four; thus, at 25 °C the reorientational correlation time of water increases from 2 to 4-8 picoseconds. Generally, this leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of the free energy in the system.[17] By aggregating together, nonpolar molecules reduce the surface area exposed to water and minimize their disruptive effect.

The hydrophobic effect was found to be entropy-driven at room temperature because of the reduced mobility of water molecules in the solvation shell of the non-polar solute; however, the enthalpic component of transfer energy was found to be favorable, meaning it strengthened water-water hydrogen bonds in the solvation shell due to the reduced mobility of water molecules.
https://en.wikipedia.org/wiki/Hydrophobic_effect
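The “entropy-driven” language in the quote can be made concrete with the usual decomposition dG = dH - T*dS: a process goes forward when the entropy term dominates. A minimal sketch with made-up numbers (real dH and dS for the hydrophobic effect are strongly temperature-dependent, which connects to the “too hot / too cold” point above):

```python
def delta_g(dH, dS, T):
    """Gibbs free energy change: dG = dH - T*dS."""
    return dH - T * dS

# Illustrative, made-up numbers for aggregation of a nonpolar solute:
# a small enthalpy cost, outweighed near room temperature by the entropy
# gained when caged solvation-shell water is released.
dH = 2_000.0   # J/mol, slightly unfavorable
dS = 30.0      # J/(mol K), favorable: solvent entropy increases on aggregation

for T in (275.0, 298.0, 330.0):
    print(T, delta_g(dH, dS, T))   # negative: aggregation is favorable
```

With these invented values, |T*dS| far exceeds |dH| near room temperature, which is what “entropy-driven” means here; a fuller model would let dH and dS themselves vary with temperature through the heat capacity.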

This fact alone is enough to explain why it is easy to form a folded protein. Essentially, all you need is alternating blocks of hydrophobic and hydrophilic residues. Because the size of the blocks is not that important, this arises by random chance very easily.

Anyhow, I am very appreciative to @Art for reminding me of this.

How does this impact the discussion? Well, the issue is that our natural intuitions here are not helping us. It seems right that the cell is more ordered. However, that does not mean it has lower entropy. That claim requires accounting for all the components of the system, including the solvent.


There may also be some additional ideas from physics worth considering (still in early stages, and forgive me if this was mentioned already), related to the tendency of simple molecules to form more complex structures when under the influence of an external energy source. Part of the idea is that the cooperation of molecules leads to greater entropy production by the complex state. My brain is a bit fuzzy right now, but this article highlights some of these ideas and more, also providing some insight/summary of self-replication of non-living structures as well:
