Who are all the great engineering computational biologist that you refer to? Reference at least a few papers of the best. Tell us where ID is already well established as a scientific discipline? Back up your claims.
This is a reasonable request, I think. I would like to see some examples too.
Is James Shapiro on the list?
I’m never sure where to place Shapiro.
I would object to him being called a “computational biologist.” Shapiro, as far as I can tell, has made zero contributions to computational biology and has zero training in computational biology. This is probably the most charitable explanation for why he has not yet figured out the meaning of “random.”
So, I am a computational biologist. I just received tenure in my department at a top five institution. Requirements for tenure here are to be within the top five assistant professors nationally in my field. I cleared that bar and was promoted with enthusiasm. You can find my publication record here: http://swami.wustl.edu/swamidass_cv.pdf.
I trained under one of the great ones, Pierre Baldi, at the University of California, Irvine. He is still an active scientist (with an H-index of 92) who has written textbooks on computational biology. He is one of the leading computational biologists in the country, and he is the person I learned information theory from. You can find his publication list here: https://scholar.google.com/citations?user=RhFhIIgAAAAJ&hl=en&oi=sra.
I would like to know @EricMH, how have we unjustly made use of ID theory? Please also show me which of our papers apply ID theory to make any important advances. I would very much like to know about this.
Here I’m just going off a cursory look at the paper titles in your CV, without digging into their contents. At a high level I see a couple of topics:
- Collaborative discovery. This involves ID theory because you are relying on halting oracles (humans) to discover structure. ID theory predicts you need such human-in-the-loop processes to increase mutual information (knowledge).
- You have a bit of general ML, like deep learning, but much of it involves very specific techniques and background knowledge. This implies a lot of algorithmic mutual information brought to bear on the problem.
- There is a bit of work on compression. Developing effective compression is an instance of Minimum Description Length, which in turn is a kind of mutual information.
- A whole lot of work determining the significance of biochemicals, which is another form of mutual information mining.
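To make the “mutual information mining” framing above concrete, here is a minimal sketch of estimating empirical mutual information between a discrete feature and an outcome label from paired observations. The function and setup are my own illustration, not code from any of the papers in the CV:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between two discrete
    variables, given a list of (x, y) observation pairs.
    MI = sum over (x, y) of p(x, y) * log2(p(x, y) / (p(x) * p(y)))."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts for x
    py = Counter(y for _, y in pairs)    # marginal counts for y
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Perfectly correlated binary variables carry 1 bit of MI;
# independent ones carry 0 bits.
print(mutual_information([(0, 0), (1, 1)] * 50))                  # 1.0
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```

Whether finding a feature with high empirical MI counts as “applied ID” is, of course, the point in dispute; the computation itself is standard statistics.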
The last three points all depend on the notion that there is generalizable structure to mine in biochemicals. This generalized structure would be called an independent semiotic language in CSI, or context in ASC. According to standard statistical theory, it is impossible to mine such information after the fact: we are always guilty of data snooping, which is an ML no-no. But ID theory states that we can mine this sort of information after the fact if the CSI/ASC scores are high enough, because chance + determinism cannot generate such structure, so it must be real structure and not an artifact of our interpretive grid on the world (i.e., like seeing a dog in the clouds).
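For the record, here is roughly how an ASC-style score is approximated in practice: the chance term is computed under an explicit chance hypothesis, and the incomputable Kolmogorov term is upper-bounded with a real compressor. This is my own illustrative sketch (uniform chance hypothesis over byte strings, zlib standing in as the compressor), not code from any ID paper:

```python
import zlib

def asc_estimate(x: bytes) -> int:
    """Crude ASC-style score: -log2 P(x) under a uniform chance
    hypothesis on byte strings (= 8 * len(x) bits), minus a
    compression-based upper bound on the description length K(x).
    Positive scores suggest structure beyond what the chance
    hypothesis explains; near-zero or negative scores do not."""
    surprise_bits = 8 * len(x)             # -log2 P(x) for the uniform model
    k_bits = 8 * len(zlib.compress(x, 9))  # compressor upper bound on K(x)
    return surprise_bits - k_bits
```

A highly repetitive string like `b"AB" * 500` scores far above zero, while a very short or incompressible string scores at or below zero, since zlib finds no structure and adds its own header overhead. Note the score is only as good as the chance hypothesis and the compressor chosen, which is exactly where the “interpretive grid” worry re-enters.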
So that is my first shot, but it should be enough to show what I mean when I say much of computational biology is applied ID.
My friend, none of this stuff is ID. None of it relies on ID theory, and most of it contradicts ID claims. Don’t you see why? It is almost obviously there in your paragraph…if you just take one or two steps further to engage what you are actually describing…
No, I don’t.