Explaining the Cancer Information Calculation

Great question. This article might be helpful to you:

From a biological point of view, it is now clear that cancer is an evolutionary disease. Cancer biologists use evolutionary theory because it is useful and accurate, not because they are pushing an “evolutionary agenda.” In cancer, cells evolve a set of new functions. These functions are beneficial to the cancer cell, but ultimately lethal to its host. And cancer must do much more than just grow quickly. It must also…

  1. ignore signals to die,
  2. evade immune defenses,
  3. grow blood vessels to obtain nutrients,
  4. invade surrounding tissue,
  5. survive in the bloodstream,
  6. establish new colonies throughout the body,
  7. and even resist treatment.

Not every cancer acquires all these functions. Nonetheless, in all cases, more than just rapid growth is required for cancer to develop. Several new functions are required. Ultimately, many cancers will acquire more than ten beneficial (to the cancer cell) mutations that enable these new functions.

“Broken” functions can perhaps make sense of some of the changes, but not all of them. Some of these functions are quite complex. We can also quantify the information in bits.

From the same article:

One incorrect metaphor for cancer (and a misguided way of dismissing evolution) is that cancer is just cells “breaking down” or “gunk in the machine.” Superficially, the “breaking down” metaphor explains some changes in cancer. For example, some cells acquire the ability to divide uncontrollably by truncating, or “breaking,” specific proteins that normally control cell division.

The “breaking down” metaphor, however, is not adequate. When our technology breaks down, it never produces anything resembling cancer. Old cars, laptops, and watches do not grow tumors as they break down. In this way, cancer reminds us that biology is unlike any human design. Cancer is unique to biological systems, and we are afflicted with it because we are intrinsically capable of evolving.

2 Likes

Dr. Swamidass,

Thank you for taking the time to respond to my questions! I really appreciate that you are so accessible to laymen such as myself.

As someone who has found the evidence and arguments used by ID proponents such as Stephen Meyer and Michael Behe very compelling (and a big reason for my coming to faith), I have been challenging myself to find steel-man arguments against ID to keep my bias in check.

I’m at work now so I’ll have to consider this material later.

5 Likes

Thanks for taking the time to listen and understand.

Regarding Behe, you might want to see this: Which Irreducible Complexity?

Here is the thing. God uses even bad arguments to bring us to faith. The strength of ID arguments is not in their scientific rigor, but in that their conclusion is right: that God created (“designed”) us all. They rightly bring us into the awe and wonder of his creation; they usher us into worship, even when their scientific logic fails.

For me, I came to this conclusion:

Of course, God can use anything, including both good and bad scientific arguments. We should continue to let nature guide us into worship, but build our confidence in the One to whom our arguments point, the One who actively and personally reveals himself, the One who is greater than all arguments.

I wonder if my story might be helpful to you:

https://www.peacefulscience.org/swamidass-confident-faith.pdf.

3 Likes

Perhaps the following belongs in a new thread, but I recently watched your discussion with James Tour on Capturing Christianity.

During the discussion, you keep saying there is a non-zero chance that life formed naturalistically. Can you clarify the point you are trying to make there? Are you just saying it is possible life formed naturally, no matter how improbable?

In his book, The Design Revolution: Answering the Toughest Questions About Intelligent Design, Dr. William Dembski estimates a “Universal Probability Bound”. He argues that we can calculate the total probabilistic resources available in the universe by multiplying the estimated number of elementary particles (10^80) by the number of seconds since the Big Bang (he uses 10^25 to be conservative) by the number of physical states available per second (10^45, corresponding to the Planck time, the smallest meaningful unit of time). The figure he gets is 10^150. Of course, there are several assumptions built into that estimate, but they seem reasonable to me. He contends that anything with a probability less than the Universal Probability Bound will remain improbable even after all conceivable probabilistic resources are considered. He also argues that his estimate is the most conservative (largest) of anything in the literature. Dr. Dembski cites others such as Emile Borel, who calculated 10^50; the National Research Council, which uses 10^94 in securing cryptosystems; and computer scientist Seth Lloyd, who estimated 10^120 as a universal probability bound.
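Dembski’s arithmetic as summarized above is easy to make explicit. A minimal sketch (the three factors are his published estimates, not settled physical constants; the variable names are mine):

```python
# Dembski's "Universal Probability Bound" as described above.
# All three factors are his estimates; the other authors mentioned
# (Borel, the NRC, Lloyd) arrive at smaller bounds.
particles = 10**80          # estimated elementary particles in the universe
seconds = 10**25            # generous bound on seconds since the Big Bang
states_per_second = 10**45  # state changes per second, from the Planck time

resources = particles * seconds * states_per_second
print(resources == 10**150)  # True: 10^150 total probabilistic resources
```

Python’s integers are arbitrary precision, so the product is exact rather than a floating-point approximation.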

Sure, life might have happened naturally, but when it comes to plausibility, it seems to me that Intelligence is a superior explanation (for OOL), given the extraordinarily low probabilities from a combinatorial and chemical standpoint.

What I find odd about Dr. Tour’s stance is that it sounds like he wants to infer design given these small probabilities, yet he never states this conclusion, only that we’re collectively baffled from a naturalistic standpoint (I agree, but I also think a design inference is justifiable). He always says he doesn’t know of a reliable way of measuring Intelligent Design, but the inference seems implied, even if not stated. Would you agree? Regarding his charges of public and educational misinformation on OOL, I completely agree with him and think it’s a huge problem. It seems like you disagree? I used to think we had all this OOL stuff sorted out when I watched the Science Channel as a teenager, and it had a huge bearing on my worldview. I became an atheist nihilist.

Paraphrasing your comments later, you say “For ID to work you need to have a robust model of what the designer will and will not do. That’s the fundamental thing you need to have. ID currently doesn’t work that way. It wants us to recognize design without considering a designer and that’s just… I’ve never seen design recognized in science without considering who the designer is, and what it can or can’t do.”

I’m confused by this. It almost sounds like you are saying we must know design intent and constraints in order to infer design? Perhaps you can clarify what you mean here?

2 Likes

Thank you Dr. Swamidass,

Sorry for the slow response. I read the Cancer and Evolution article carefully and found it interesting (I still want to read your personal story but haven’t had a chance yet). For what it’s worth, I agree that perhaps ID proponents spend too much time debunking Darwinism. I think this tendency is in response to wildly out-of-date popular science education and public-school curriculum, which gives many laypersons the impression that science has proven Darwinian Evolution beyond any reasonable doubt, and that there is no controversy. I myself believed this.

Perhaps we might have different understandings as to what ID theory is and isn’t claiming. Could you summarize your understanding of ID? To summarize my position: I am skeptical that naturalistic mechanisms are in principle capable of constructing new body plans (phyla), new complex systems within multi-cellular organisms (for example, a digestive system), or the origin of life. I do not deny evolution in the sense of change over time; this is undeniable. I believe intelligent design is a more plausible explanation for such complexity. I also suspect that variability within a type (phylum) was intentionally pre-programmed into archetypal forms, which then vary (into species) based on various factors typically associated with evolution, such as climate, competition, predators, etc. I think the fossil record also supports this hypothesis.

To be clear, I think that some of your critiques of ID are valid, but I suspect others are simply the result of misunderstandings. Of course, there are varying views within ID theory as well. As always, a prescriptive set of beliefs packaged within generic labels can muddy the waters. I am sure that some claims of ID proponents are mistaken and am happy to see them corrected when they are. I suspect other claims may be more robust to criticism.

Going back to the article, I wish there was more detail on the specifics of what those novel functions are and how they work. From the article alone, it is not clear to me that truly novel functions were generated. How do we know these features weren’t present, but dormant? I don’t follow how neutral evolution explains novel functions. Are you arguing that the accumulation of neutral mutations eventually can generate new functions? If so, why are there drivers and passengers? It sounds like the driver mutations are the ones really imparting novel function, not passengers. Perhaps I am simply misunderstanding.

We’re going from a cell that is part of a harmonious multicellular organism to a homogeneous parasite which I presume kills its host (if left untreated). This isn’t exactly the building up of complexity that I think ID theory calls into question. If you could show that truly novel functions are being generated which were not there to begin with (perhaps in some dormant state), then yes, I would agree that this could be evidence against some claims made by ID proponents. But I think the article lacks sufficient detail to demonstrate this.

3 Likes

So you think common ancestry is true to the level of phyla? All chordates are related by descent for example?

1 Like

This argument is often brought up, so I’ll offer the short version of the rebuttal. If you name a specific event which has a probability on the order of 10^-150 and then sit around waiting for it to happen, I agree you will never see it. That does not mean events of this magnitude never happen. They are happening all around us all the time, but no one predicted them, and most are not particularly notable, so they pass unnoticed.

If we apply Dembski’s methods to common occurrences, and take into account the entire event history leading up to the occurrence, the probability of any event will fall below the “Universal Probability Bound”. It’s just a matter of taking a sufficiently long and detailed history into account until the multiplicative probability is close to zero. If I wished, I could show the impossibility of my having cantaloupe for breakfast this morning this way (but I did!).
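The same point can be checked numerically. A run of 500 fair coin flips (my illustration, not an example from this thread) always produces some specific sequence, and every such sequence has a probability below Dembski’s bound:

```python
from math import log10

# Any *specific* sequence of 500 fair coin flips has probability 2^-500.
n_flips = 500
p = 0.5 ** n_flips

# 2^-500 is about 10^-150.5, already below the 10^-150 bound...
print(p < 1e-150)  # True
print(n_flips * log10(0.5))  # about -150.5 (log10 of the probability)

# ...yet every run of 500 flips realizes exactly one such sequence,
# so an event "beyond the bound" happens on every run.
```

This is why naming the event in advance matters: the bound rules out hitting a prespecified target, not the occurrence of individually improbable outcomes.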

I am NOT saying anything about the possibility of life forming naturally. I think most will agree that something unlikely must have occurred. What I am saying is that Dembski makes a bad argument for ID, and bad arguments help no one.

This is not an unreasonable position in itself. I don’t agree, but that’s OK. :slight_smile:

4 Likes

But aren’t we talking about specific events? The origin of life requires very specific things to happen in order for the first cell to appear. It seems to me that the argument still holds. Perhaps you can clarify more?

My apologies. This question was directed specifically at Dr. Swamidass in regards to his discussion with Dr. Tour.

Fair enough! :wink:

2 Likes

Possibly. I don’t have a firm line-in-the-sand stance on where I think the limits to ancestry end. But, I do believe there are limits. I do not recall any ID proponents predicting specific limits of ancestry, and perhaps that’s a problem too, the lack of specificity in the theory.

2 Likes

What specific things are those?

I don’t think you’ll be able to say, because that would require knowing the specific events that actually happened, whether naturally or not.

3 Likes

I’m thinking of specific improbable events whereby the basic components like RNA, proteins, enzymes, molecular machinery, lipids, carbohydrates, etc., form. Perhaps I am mistaken, but I think we have a decent idea of some of the basic hardware a self-replicating cell will require. Not just any sequence of amino acids will do; we need relatively specific things to happen for the origin of life. I think this is James Tour’s point when he talks about OOL science: simply assuming all these building blocks for life will form through the necessary chemical reactions is naive about the chemistry involved. And from my perspective, the informational problem seems even more challenging for naturalistic OOL theories.

I don’t see how this “incredibly improbable events happen all the time” argument really deals with the fundamental chemical and informational problems facing OOL science.

1 Like

I don’t think you’re mistaken, but you are not being particularly specific. For example, which RNA sequences need to form, and from what? Which molecular machinery is needed? Why does it have to be that particular molecular machinery and not a completely different set?

1 Like

I think so too, but I just haven’t seen any compelling reason to believe they’re necessary prerequisites for life, as opposed to contingent outcomes of evolution.

I’m much less sure of that, but perhaps more importantly, I think much speculation on the origin of life is built on the premise that the first cells had to be capable of self-replication, as opposed to being replicated by environmental processes external to themselves. That is a crucial distinction.

A modern prokaryotic cell, take a chemoautotroph, will employ enzymes and active transport machinery to take up simple nutrients from the environment (super simple building blocks like CO2 or methane), convert them to its own constituents, slowly grow in size, and then, through all sorts of complex regulatory processes, expand and divide its lipid membrane into two, while also replicating its genome and sorting the contents of the cell more or less equally between the two daughter cells.

But experiments have shown that a super simple physical mechanism, involving nothing but a growing concentration of lipids and mechanical shear forces in solution, can produce the same effect. Here the lipid vesicle isn’t replicating itself through a regulated genetic and enzymatic process; it is being replicated by a very simple, dynamic environmental process. The local concentration of lipid increases due to continued (external, environmental) synthesis, resulting in vesicle growth (the lipid molecules tend to clump together and eventually form vesicles when the concentration gets high enough), and eventually the vesicle becomes so large and unstable that some mechanical disturbance makes it split into two. Here replication is driven externally by the environment, rather than regulated internally by genes.

3 Likes

The argument is not about the probability it is about if the mechanism identified is likely capable of doing the job that it is predicted to have done.

When you say it’s not about probability, but then go on to say it’s about whether the mechanism is likely capable of the job, you’re contradicting yourself. If you really mean to say it’s not about probability, you need to take the word ‘likely’ out of the latter part of your statement. But if you do that, you also discard Dembski’s argument, which is explicitly based on probabilities.

It’s about the probability of a mechanism, not about probability on its own or of the event itself happening. The event happened; the debate is about how it happened. Thanks for the correction.

Dr. Tour addresses experiments like this directly in many of his talks. Have you watched any of his lectures?

Some machinery is necessary for life, depending on how you want to define life. I think we would both agree that some random string of amino acids won’t do the trick. The specification problem has not been addressed by OOL research, and ID makes more sense of the data to me. It’s an inference.

I was questioning your claim:

Apparently you can’t support it.

I have watched one but I don’t remember him saying anything about this. I have read multiple articles of his and don’t remember this specifically addressed either. What does he say specifically about the subject I brought up?