You’re more than welcome to make that argument. I’m not up for playing Twenty Questions, though.
The forum software has just reminded me that this is my third reply to you in a row. So I’m going to give this part of the discussion a rest.
Just click “esc.”
Are you giving it a rest because you are seeing my point?
There haven’t been 20 questions. There’s just one:
Is the start of translation intelligently designed in the same way that you claimed that stopping was, or is it not intelligently designed by the very same standard you endorsed?
We also infer non-human design all the time. That does not put you in the same camp either. We do not, and never have, inferred divine design.
Can you elaborate on this?
Neither do I.
Well, first, the optimality of the standard code would follow from its gradual assembly by adding amino acids: add a similar amino acid, then divide up the codons for the previous one (see the toy sketch at the end of this post). Or perhaps the current code evolved from a prior 2-base code during the addition of more amino acids.
Second, why should we find precursors of the current code in living organisms under the standard scenario?
Third, why does ID make either of your predictions? You haven’t really justified them.
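To make the first point concrete, here's a toy illustration of the subdivision step (my own sketch, not a published model): a chemically similar newcomer takes over part of an existing amino acid's codon block, so neighbouring codons keep coding for similar residues.

```python
# Toy illustration of codon-block subdivision (not a published model):
# a new, chemically similar amino acid takes over half of an existing block.

def add_amino_acid(code, old_aa, new_aa):
    """Reassign roughly half of old_aa's codons to new_aa."""
    block = [codon for codon, aa in code.items() if aa == old_aa]
    for codon in block[len(block) // 2:]:
        code[codon] = new_aa
    return code

# Start with one "primordial" amino acid owning the whole GAN codon family.
code = {"GA" + third: "Asp" for third in "UCAG"}

add_amino_acid(code, "Asp", "Glu")   # Glu is chemically similar to Asp
print(code)   # GAU/GAC -> Asp, GAA/GAG -> Glu, as in the standard code
```

Each such subdivision tends to leave similar amino acids on adjacent codons, which is where the error-minimizing pattern comes from without any foresight.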
Yes. In order to distinguish ID from natural evolution, you have to find something expected under ID but expected not to happen under natural evolution. That’s how science works. It isn’t evidence for ID unless it’s evidence against the alternative.
It would be better if your specifications were explicit rather than implicit.
Excellent. Of course those designers are not god, right? This raises the question of how those designers arose. You should consider that too.
Animals design things too. We infer creaturely design all the time.
SETI seeks to infer the design of hypothetical creatures. Incidentally, they also have an excellent track record of avoiding false positives (unlike ID).
We do not infer divine design because God is not susceptible to scientific inquiry.
Conventional evolutionary biology does not predict optimality. Just consider the panda’s pseudo-thumb or the giraffe’s recurrent laryngeal nerve, where a suboptimal solution was locked in early. Similarly, conventional evolutionary biology would be fine with a “frozen accident” perspective on the genetic code.
Intelligent designers, unlike the blind watchmaker of evolution, can act with foresight and choose the optimal solution. Therefore, my design conjecture leads me to expect that the first life was characterized by what can be termed “good engineering principles”. If the standard genetic code had turned out to be random, no better at minimizing errors than the next code, my conjecture would have a hard time getting off the ground. Furthermore, if further investigations into the error-minimizing properties of the code revealed “scars of history”, features that required a historical explanation, my views would be in trouble.
Conventional evolutionary biology does not predict that no precursors to the standard code should be found. As we have seen, the conventional view is fine whether the code is a frozen accident that was locked in at the beginning, or if life slogged through fitness space, looking for the optimal genetic code. If, among the millions of microbial species that are waiting to be discovered, precursors to the standard genetic code should be found, conventional evolutionary biology would have no problem.
My view, on the other hand, would be in trouble. If precursors to the code existed, the first life couldn’t have used the standard code. Especially if the precursor codes turned out to be suboptimal compared to the standard code. This would make it more likely that geochemistry and natural selection, not intelligent design, were behind the code.
Conventional evolutionary biology can adapt to multiple, mutually incompatible scenarios. For that reason alone, it would be a fool’s errand to base any model of intelligent design on finding things that “evolution can’t explain”.
Of course, if I were to go on such a fool’s errand, critics like yourself would criticize me for making a negative, anti-evolutionary argument.
I much prefer a different approach, inspired by Popper: Take my hunch and start fleshing out what I would expect to find if it was true. Adopt a systematic approach to analyzing data. Pursue avenues of research where my views give me firm expectations, while conventional evolutionary biology is silent or expects the opposite. Gradually build my case, or abandon the project if auxiliary hypotheses are constantly required to save it from falsification.
I don’t have to. As @swamidass has just reminded us, SETI is based on the assumption that non-human intelligent design can be inferred without having to explain the origin of the designers.
Depends on what you mean by “optimality”. Is it the best of all possible codes, or is it just a pretty good code compared to a randomly chosen code? Evolutionary biology would predict a better than average code.
Anyway, I don’t think you’ve thought through your own scenario. Were the seeded organisms designed from scratch? If the designers are organic beings from a world with life, why would they reinvent the wheel? They would just tweak existing microorganisms, keeping the code that’s already working for them.
But generally, they choose the solution that’s good enough, considering effort and expense. I submit that your designers would keep whatever code their life already had. Less trouble, less possibility of disaster in re-using an already-tested format.
It has been claimed that a former 2-base code can explain some features of the code we see. Would that be a “scar of history”?
It does if the precursors are suboptimal. Selection would dispose of them if they’re competing with standard-code organisms.
Not that I like Popper all that much, but I think you’re misconstruing him. Popper would want you to flesh out what you would expect to find if it was true but not if the alternative were true. “Silent” isn’t good enough. You need “expects the opposite”.
All fine when you aren’t talking about the origin of life. In your case, though, you’re merely transferring the problem to another planet. And as I mentioned above, the designers would most likely have used their own genetic code, so you haven’t even explained its optimality.
Krause, that’s precisely what I’m getting at and you’re running away from.
What’s the difference between start and stop codons? You agreed that being able to end a protein with any residue is an intelligent design feature.
How does the start codon work? Can any residue be used to start a protein?
I don’t see that you are doing anything of the sort.
I don’t see you as willing to do that.
Not true. SETI always tries to infer the origin of the designers.
Unless all of those planets had Norman invasions (as in 1066), there’s no chance of them coming up with the same English language.
Without the Norman invasion, it is unlikely that French would have so heavily gotten mixed into the Anglo-Saxon tongue.
Yes, language evolution makes an excellent analogy.
The claim that the genetic code is optimal hasn’t met its burden of proof.
I expect the standard genetic code to be an exceptionally good code, at or close to a global optimum. If the code turns out to be merely “pretty good” compared to a randomly chosen code, I would consider that very damaging to my views.
How would conventional evolutionary biology predict a better than average code? If the genetic code had turned out to be a frozen accident, would that have been a problem for the conventional view?
Does the conventional view predict how much better than average the genetic code should be? In 2000, Freeland et al. found that with respect to substitution mutations, the standard genetic code “appears at or very close to a global optimum for error minimization: the best of all possible codes”. Is this something the conventional view predicted?
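For concreteness, here is roughly the kind of comparison behind such claims. This is a simplified sketch of my own, not Freeland et al.'s exact procedure: it uses Biopython's standard codon table, approximate polar requirement values, and shuffles which amino acid each synonymous codon block is assigned to.

```python
# Simplified sketch of a code-optimality comparison (not Freeland et al.'s
# exact procedure): score codes by the mean squared change in an amino acid
# property (approximate Woese polar requirement values) over all single-base
# changes, and ask how often a randomly shuffled code beats the standard one.
# Assumes Biopython is installed.

import random
from itertools import product
from Bio.Data import CodonTable

standard = dict(CodonTable.unambiguous_rna_by_id[1].forward_table)  # codon -> amino acid

# Approximate polar requirement values, keyed by one-letter amino acid code.
polar = {"A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6,
         "E": 12.5, "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1,
         "M": 5.3, "F": 5.0, "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2,
         "Y": 5.4, "V": 5.6}

def score(code):
    """Mean squared change in polar requirement over all single-base changes."""
    diffs = []
    for codon, aa in code.items():
        for pos, base in product(range(3), "UCAG"):
            mutant = codon[:pos] + base + codon[pos + 1:]
            if mutant != codon and mutant in code:       # skip self and stop codons
                diffs.append((polar[aa] - polar[code[mutant]]) ** 2)
    return sum(diffs) / len(diffs)

def random_code():
    """Shuffle which amino acid each synonymous codon block is assigned to."""
    blocks = {}
    for codon, aa in standard.items():
        blocks.setdefault(aa, []).append(codon)
    reassigned = dict(zip(blocks, random.sample(list(blocks), len(blocks))))
    return {codon: reassigned[aa] for codon, aa in standard.items()}

s0 = score(standard)
n = 10_000                                               # published studies use far larger samples
better = sum(score(random_code()) < s0 for _ in range(n))
print(f"standard code score: {s0:.2f}; random codes that beat it: {better}/{n}")
```

As I understand it, Freeland et al. went further by weighting different kinds of substitutions and restricting the set of alternative codes considered, which is what pushes their result from "one in a million" toward "at or very close to a global optimum".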
It’s interesting that you advocate that I adopt a design hypothesis which you yourself do not share. It’s also interesting that you advocate that I adopt a hypothesis in which the designers acted in a way that mimics evolution (“modifying whatever existed”) while at the same time demanding that I produce predictions about things that evolution can’t possibly explain.
Yes. For my conjecture to remain viable, further research into the standard genetic code would have to show this “scar of history” to be an artifact of its error-minimizing properties. If further research underscored the need for a historical explanation (say, that the standard genetic code evolved from a 2-base code), my conjecture would look a lot less promising.
Note that I’m not claiming that the case for design has been made with any kind of certainty that requires belief. I’m not even convinced of it myself, much less demanding that others should be convinced.
I’m merely arguing that there exist hints that point to the suspicion of design, and that this suspicion can be used to generate testable predictions and avenues of further research. Whether those predictions actually come true remains to be seen.
An organism is rarely, if ever, in direct competition with another species in the exact same ecological niche. Pandas with suboptimal pseudo-thumbs haven’t been disposed of, as their pseudo-thumb allows them to inhabit an ecological niche not occupied by others. The same goes for giraffes with suboptimal recurrent laryngeal nerves.
Multiple variants of the genetic code exist, such as in ciliates. Does the conventional view predict that these variants are superior to the standard code? If they turn out to be suboptimal, would the conventional view predict that they should have been disposed of by ciliates using the standard code?
In other words, I need to adopt an anti-evolutionary approach and try to prove a negative before being allowed to contemplate design. I’m sorry, but this dog won’t hunt.
I don’t see the origin of life as a “problem” that needs to be transferred. I see it as figuring out a historical reality.
Take the bacterium JCVI-syn3.0. The historical reality is that this species was designed by researchers at J. Craig Venter’s lab who stripped down the genome of Mycoplasma mycoides. In attributing JCVI-syn3.0 to design, have I accounted for the origin of any of the other designers involved in producing JCVI-syn3.0? Of course I haven’t. Does that mean that I’ve merely “transferred the problem”?
I don’t have a problem with people attempting to infer the origin of the designers behind the origin of life. I just have a hard time understanding how exactly such an investigation would play out.
Suppose we found the fossil of a Precambrian rabbit clearly labelled “Made in Alpha Centauri”. Assuming critics would accept this as sufficient evidence of intelligent design, how would one go about inferring the origin of the designers? Would one have to travel to the Alpha Centauri system and reconstruct the primordial conditions on each planet?
This is too vague to be capable of falsification.
How good is an “exceptionally good code”, which is “close to global optimum”, and how much better than “pretty good” is that?
“At the global optimum” is unambiguous; “close to” isn’t.
It seems like we have two competing arguments on our hands:
Interesting.
Who claims that conventional evolutionary biology predicts that the standard genetic code should be optimal?
The problem with nailing this down rests with the complexity of biological reality.
Consider the study by Geyer & Mamlouk (2018). Comparing the standard genetic code with one million random codes, they found that when measuring for robustness against the effects of either point mutations or frameshift mutations, the standard genetic code is “competitively robust” but “better candidates can be found”. However, “it becomes significantly more difficult to find candidates that optimize all of these features – just like the SGC [standard genetic code] does.” The authors conclude that when considering the robustness against the effects of both point mutations and frameshift mutations, the standard genetic code is “much more than just ‘one in a million’.”
The genetic code is likely the result of a compromise between providing robustness against several types of mutations. As more analyses of the error-minimizing properties of the standard code are carried out, we should get a more fine-grained understanding of the optimality of the standard code.
If this improved, fine-grained understanding starts pulling towards the view of the genetic code as a frozen accident or merely a “better than average” code, it will be increasingly difficult for me to claim that my design conjecture is of any merit. On the other hand, if this improved understanding underscores the optimality of the standard code, I will feel encouraged to pursue my investigation.
I fully admit that this issue is difficult to quantify. That is often the case with studies involving a complex subject matter like living organisms. But I contend that my prediction is more specific than conventional evolutionary biology’s “Maybe we expect that the genetic code is better than average. Except if it’s not.”
That certainly seemed to be what @John_Harshman was arguing when he wrote that “the optimality of the standard code would follow from its gradual assembly by adding amino acids.” Of course, if he was merely talking about a local optimum, I contend that his claim about conventional evolutionary biology making the same predictions as my design conjecture should be dropped.
But that’s exactly the problem with your hypothesis. It’s not clear exactly why it predicts what you say it does.
We can of course just conjecture some super advanced alien species that ran extremely complicated supercomputer simulations to find the best possible compromise code optimized simultaneously for minimizing the fitness effects of mistranslation, point mutations, frameshift mutations, facilitating adaptive change, and protein folding, and through that predict that the genetic code should be the best of the bunch, because all possible codes were simulated extensively under trillions of circumstances and the best one was found.
And if the genetic code isn’t the best possible compromise between all these factors, at the very top of the global optimum, we can just rework the hypothesis to say that maybe the alien designers weren’t as technologically advanced, used simpler simulations to predict the fitness effects of different codes, and just selected one among the top 1% (say).
And so on. The problem here is there doesn’t seem to be a mechanism that really says why any particular effectiveness of the code should obtain. We can imagine all sorts of things designers might want, or be capable of, but it’s not really scientific.
That is often the case with historical sciences. But I contend that my prediction is more specific than conventional evolutionary biology’s “Maybe we expect that the genetic code is better than average. Except if it’s not.”
I don’t agree. With evolutionary biology you at least have a mechanism that can be simulated. If you set the initial conditions, you can calculate or simulate expectations. If codon rearrangements are possible, and if the code starts from a smaller alphabet, it should be possible to simulate how good the code should be able to get.
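For example, here's a deliberately crude toy of my own (not from any paper, and it ignores realistic constraints such as keeping all 20 amino acids encoded): random single-codon reassignments are accepted only when they reduce the average chemical difference between amino acids that are one base change apart. Even something this bare-bones gives you a calculable expectation for how much better than a random code the outcome should be.

```python
# Crude toy (my own illustration, not a published model): evolve a 64-codon
# code by random single-codon reassignments, keeping only changes that reduce
# the average "chemical distance" between amino acids one base change apart.

import random
from itertools import product

CODONS = ["".join(c) for c in product("UCAG", repeat=3)]
AMINO_ACIDS = list(range(20))                                 # stand-ins for 20 amino acids
PROPERTY = {aa: random.uniform(0, 10) for aa in AMINO_ACIDS}  # a hydrophobicity-like scale

def neighbours(codon):
    """All codons reachable by a single-base change."""
    return [codon[:i] + b + codon[i + 1:]
            for i, b in product(range(3), "UCAG") if b != codon[i]]

def cost(code):
    """Mean squared property difference across all single-base changes."""
    diffs = [(PROPERTY[code[c]] - PROPERTY[code[n]]) ** 2
             for c in CODONS for n in neighbours(c)]
    return sum(diffs) / len(diffs)

def evolve(steps=5000):
    """Greedy hill-climbing by single-codon reassignment (ignores the
    constraint that every amino acid must stay encoded)."""
    code = {c: random.choice(AMINO_ACIDS) for c in CODONS}    # random starting code
    best = cost(code)
    for _ in range(steps):
        codon = random.choice(CODONS)
        old = code[codon]
        code[codon] = random.choice(AMINO_ACIDS)              # try a reassignment
        new = cost(code)
        if new < best:
            best = new                                        # keep the improvement
        else:
            code[codon] = old                                 # revert
    return best

random_cost = cost({c: random.choice(AMINO_ACIDS) for c in CODONS})
print(f"typical random code: {random_cost:.2f}, after hill-climbing: {evolve():.2f}")
```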
Right, but that’s a prediction given certain premises. If the standard genetic code evolved gradually by adding amino acids. Of course there’s another hidden premise there about there having been time for such gradual change to occur, that the code didn’t “freeze” all that hard before this happened and so on. So it’s not just a prediction of evolutionary theory, it’s a prediction given certain initial conditions and a number of other assumptions. If you adjust the premises, you get other predictions. So it’s not really true to say conventional evolutionary theory, at least all by itself, predicts any particular level of optimality of the standard genetic code.
I promised to show the results of my ancestor inference and alignments from earlier, and was about to post them when I noticed the software I used to infer ancestral nodes in my phylogenetic tree apparently had some really strange bug where it would infer mixed DNA and protein sequences (seriously, wtf? Suddenly it showed letters like M, Y, N, L and so on in an ancestral DNA sequence) at the nodes of a tree inferred purely from DNA sequences.
Oddly enough it showed the result I had expected (that ancestral nodes from the bacterial and archaeal trees became more similar when subjected to pairwise alignment, compared to extant bacterial and archaeal sequences), so I didn’t look that closely at the alignments, just the overall score and similarity. Until I was about to post them and I looked it over again.
Anyway, since the results were obviously invalid due to the weird protein sequence corruption, I had to find other software to infer ancestral states with, which I have done. I’m in the process of re-doing the alignments now and they seem fine this time.
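For anyone who wants to follow along, the comparison itself is not complicated. Something like this Biopython snippet captures the gist (it's a sketch, not my exact pipeline: the file names are placeholders, the scoring values are arbitrary, and it assumes a recent Biopython plus FASTA exports of the inferred ancestral sequences):

```python
# Sketch of the ancestor-vs-ancestor comparison (placeholder file names;
# assumes a recent Biopython and FASTA exports of the inferred ancestors).

from Bio import SeqIO
from Bio.Align import PairwiseAligner

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.match_score = 2          # simple DNA scoring so gaps aren't free
aligner.mismatch_score = -1
aligner.open_gap_score = -2
aligner.extend_gap_score = -0.5

def percent_identity(seq_a, seq_b):
    """Percent identity over the best global alignment of two sequences."""
    aln = aligner.align(seq_a, seq_b)[0]
    a, b = aln[0], aln[1]        # the two aligned (gapped) sequences
    matches = sum(x == y for x, y in zip(a, b) if x != "-")
    return 100 * matches / len(a)

bact_anc = SeqIO.read("bacterial_ancestor.fasta", "fasta").seq
arch_anc = SeqIO.read("archaeal_ancestor.fasta", "fasta").seq
print(f"ancestral bacterial vs archaeal identity: {percent_identity(bact_anc, arch_anc):.1f}%")
```

The same function run on pairs of extant sequences is what lets me check whether the ancestral nodes really do come out more similar than the extant ones.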
Just out of curiosity, which software were you using previously, and what are you using now?