Developing College-Level ID/Creation Courses

These features are eminently evolvable, as it were, quite within the grasp of normal means by which heritable variation is generated.

Not in a way that is coherent. I describe the problem in slides 99-110 of Part 1. Briefly, homologous motif duplication (or any duplication) will result in changes in binding specificity; worse, random mutation will cause the zinc finger to jump to different locations on the genome, so a DNA-binding zinc finger ends up binding essentially random locations. We wouldn’t want complexes like the one pictured in slide 95 docking to random locations on the DNA, recruiting who knows what kind of chromatin-modifying complexes, and making who knows what kind of chromatin modifications!

EDIT: Slide 112 shows my conclusion that the “phylogeny” is imaginary as a matter of principle.

I did not have in mind duplication and the like, although one attractive aspect of the modular nature of proteins is that different functionalities can be cobbled together in many, many ways. As far as changing binding specificities and what this might do to the gene expression program (this is, I believe, what you are talking about), it helps to keep in mind that moving around a single ZF protein is not going to have much of an effect on chromatin architecture, gene expression, and the like. This is because it is usually the case that these sorts of functionalities involve more than one transcription factor, and that a very important aspect of the process is the cooperative interactions of numerous DNA binding proteins. A single DNA binding protein sitting, as it were, off in the ozone of genome space isn’t going to have much of an impact.


It was supported - but you quote-mined away the support.


Thanks for your comments.

Amazing, we at least agree on one thing: CSI isn’t a way to make an ID argument. :smile:

@stcordova what would it take to convince you that you were wrong?

Sorry, Sal. PZ Myers has already beaten you to it:



What would convince me I’m wrong?

Experiments showing that things like abiogenesis can happen; specifically, experiments that show something as complex as the helicase/ring loader/ring breaking system can evolve spontaneously from a cell that didn’t have one. Same for the topoisomerase complexes.

Experiments showing evolution of eukaryotic forms from a prokaryote-like creature, involving the evolution of membrane-bound organelles with attendant transmembrane proteins and reformatting of localization signals, as well as evolution of the orphan/TRG genes that are part of the spliceosome.

Those are some starting things that would convince me that abiogenesis and evolution don’t require statistical miracles. Whether they are miracles in the theological sense is a separate question.

Similarity of the human and chimp genomes doesn’t answer the statistical miracles I just laid out.

I could list many more hypothetical experiments that would fill out a 12 semester hour course…

You misunderstood me. What would convince you that your interpretation of scripture was wrong?

The “graph” shown on slide 145 of Part 2 (actually, it first appears on slide 133 - oops) is wrong in so many ways.


Question posed on slide 143 of Part 2:

> Can 2600 transcription factors be enough for the reprogramming of the epigenome, much less the entire complexity of a human being?

We’ve been over this here on PS. The answer is not only yes; 2600 is at least an order of magnitude more than would be sufficient, starting from first principles.


How have you determined that these are reasonable requests? How would you find out if your assessment was wrong?

Not sure where at any point this claim has been supported.

No. The answer is 40. It follows pretty simply that the combinatorial possibilities resident in the thousands of genes percolating in the biosphere at the time of origination of animals are far more than sufficient to permit new avenues in evolution. To be sure, new combinations, new interactions, new networks are needed, but no new genes. (In principle, that is.)

How have you determined this?



Bill, could you at least think a little bit before you post? Think about how many different unique combinations of transcription factors would be possible with 40 such factors. Imagine a board with 40 holes. Each hole could have a peg in it or not. How many possible patterns of pegs are there? Given that the presence or absence of a peg is a binary choice, there would be 2^40 possibilities. 2^40 is a very big number.
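The peg-board count above is easy to verify. A minimal Python sketch of the arithmetic (the 40-factor figure is taken from the post above):

```python
# 40 transcription factors, each either bound (peg in) or not (peg out):
# the number of distinct on/off patterns is 2 raised to the 40th power.
n_factors = 40
patterns = 2 ** n_factors
print(patterns)  # 1099511627776 -- over a trillion combinations
```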


FWIW, the graph on slides 133/145 wasn’t in the actual live presentation; it’s material I put at the end. I thought the sugar code part wasn’t controversial. I don’t remember where I got that graph! I assumed it was common knowledge, so I didn’t provide a citation. If it’s wrong, it’s wrong, and I withdraw it.

Well I look forward to your editorial corrections. That made it worth my time to post here. Thanks for catching that.

Regarding the epigenome, I posed this question to epigenetics researchers at the NIH, some of whom have worked on the histone code: where is the reprogramming information stored? They said, “We don’t know.”

We know that the Yamanaka factors trigger the cascade of pluripotent reprogramming, but this is like saying the computer’s “on” button accounts for the complexity of the computer.

Is 2600 enough? Well is 4 enough (the Yamanaka factors)?

Four transcription factors might be enough to be a central switch point, but it hasn’t been established that they can manage the 100,000,000,000,000 epigenomes in the human body across different developmental stages and cell phases. (100 trillion epigenomes corresponds to 100 trillion cells.)

But to the specific guesstimate:
Each amino acid position carries about 4 bits (log2 of 20 = 4.3), so let’s say 1000 residues is roughly 4000 bits. 4000 bits times 2600 factors is about 10 million bits, which is 1.3 million bytes. A single epigenome holds 80 megabytes (see slide 83), but we have 100 trillion epigenomes. So I think it’s optimistic to expect 1.3 million bytes to provide information for 8 quadrillion locations, unless there’s a lot of compression; there is surely some, but we really don’t know. I left it as an open question.
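The back-of-the-envelope figures in that guesstimate can be checked directly. A short Python sketch, using the same round-number assumptions as the post (1000 residues per factor, rounded to 4000 bits):

```python
import math

# ~4.3 bits per amino acid position: log2 of the 20 possible residues
bits_per_residue = math.log2(20)
print(round(bits_per_residue, 2))   # 4.32

# The post rounds a 1000-residue protein to ~4000 bits.
bits_per_protein = 4000
n_factors = 2600

total_bits = bits_per_protein * n_factors
total_bytes = total_bits // 8
print(total_bits, total_bytes)      # 10400000 1300000
# ~10 million bits, i.e. ~1.3 million bytes, matching the estimate above
```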

But, even more poignant: if one thinks 90% of the human genome is junk, and ALL the information is in the genome (not the glycome or wherever), that’s about 80 megabytes. Personally, that seems like a piddly number.

Gabius suspects developmental information is partly in the glycome.

Sal, if we only consider a binary decision - present or absent (or on/off) - then 2000 transcription factors are enough for 10^600 states (cells, epigenomes, or whatever). Add in some biochemical nuance and this number fairly explodes (as if 10^600 isn’t already a big number).

That’s a whole lot more than 100 trillion.
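The binary-state claim is simple to verify. A quick Python sketch (the 2000-factor and 100-trillion figures are from the posts above):

```python
# 2000 transcription factors, each just present (1) or absent (0),
# give 2**2000 distinct combinations.
states = 2 ** 2000
cells = 100 * 10 ** 12              # 100 trillion cells / epigenomes

# Number of decimal digits minus one gives the order of magnitude.
print(len(str(states)) - 1)         # 602 -- roughly 10^602 states
print(states > cells)               # True, by an astronomical margin
```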


That they seem “remote” to you is not an argument.

Who says anything important is going to be broken?


Those would be arguments against UCA and evolution, not arguments for design. In addition, it is highly doubtful that your statistics actually make sense. We have seen more than our share of Sharpshooter fallacies here.