Then, why did you claim that my hypothesis was falsified already?
Anyhow, let me give you an example of why it is an empirical characteristic. In animals, injury can lead to long-lasting distress, whereby frequent exposure to pain-producing stimuli causes a progressively amplified response well after the injury has healed. This phenomenon has been referred to as “nociceptive sensitization.” Biomedical researchers have long viewed nociceptive sensitization as maladaptive because, in humans, it is associated with anxiety (Crook et al., 2014).
Dawkins (1995) coined the term “God’s utility function” to describe this maladaptive trait and to support the gene-centered view of evolution, equating the phrase with the purpose of life. He suggests that it is a mistake to assume that an ecosystem or a species as a whole exists for a purpose, or that individual organisms lead meaningful lives.
Correct, but they are testable claims that can lead to empirical evidence, which is what the Orch-OR theory’s predictions have been providing up to this point.
My mistake. I got disorganized again while talking with you and others during our discourse. I mixed up providing a final cause versus an efficient cause. My hypothesis involves the former.
Let me be as succinct and clear as I can, since we have already gone over this in previous posts. The scientific model I am providing here is of a teleological nature that involves the who, what, where, and when. As I said before, this is a must-have, or a prerequisite, for any intelligent design hypothesis. If you want a purely mechanistic explanation for how these things happened, then I refer you to Stuart Hameroff’s article that deals with that:
Directly speaking, no, it does not, but indirectly, yes. Remember, consciousness has already been demonstrated to be an empirical fact with studies like the one I just referenced above. Secular scientists just generally presume that it is epiphenomenal or an emergent property of classical physics, no different from a computer. But if the brain is fundamentally quantum mechanical, then it would link quantum phenomena to a known cause through a known mechanism. According to Stuart Hameroff, the mechanism and cause would be consciousness together with quantum entanglement and tunneling. Let me show you an example of what I mean from Wikipedia…
“Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena…
The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum mechanical about the brain.” [emphasis added] Quantum cognition - Wikipedia
My point is that the experiment on microtubules provided good evidence showing that there is something quantum-mechanical about the brain despite what the wiki article said. So, it does have something to do with consciousness, indirectly speaking.
I would note that the lede in the Wikipedia article on Quantum Cognition ends with the statement:
Since the use of a quantum-theoretic framework is for modeling purposes, the identification of quantum structures in cognitive phenomena does not presuppose the existence of microscopic quantum processes in the human brain.[11]
This is clearly disavowing any connection to Orch-OR’s microtubule hypothesis.
Following this to the cited article, I found this in that article’s introduction:
Notwithstanding, the perspective of quantum computation in the brain, a device that operates at relatively high temperatures, is tantalizing. For instance, Penrose and Hameroff (Hameroff, 1998, Penrose, 1989, Penrose, 1994) proposed that quantum computations might be feasible in protected environments of microtubules in the neurons. In a detailed analysis of different environmental sources of decoherence in the brain, Tegmark (2000) pointed out that the time scale for decoherence is orders of magnitude faster than those calculated by Penrose and Hameroff. Hagan, Hameroff and Tuszyński (2002) claimed that Tegmark’s work did not address correctly the model proposed by Penrose and Hameroff, and if you took into account the correct dimensions at play, the decoherence time computed by Tegmark could be of a bigger order of magnitude. However, Rosa and Faber (2004) showed that Hagan et al. (2002) did not use Tegmark’s equations under the correct assumptions, and thus the decoherence time would indeed be smaller than estimated by Hameroff (1998). In any case, as Davies (2004) points out, there seems to be a lot of wishful thinking on both sides of this discussion, and quantum processing in the brain will not be widely accepted until quantum superpositions are shown to exist in some special cases in the brain. As it stands, it seems that even for microtubules, environmental decoherence would happen so quickly as to render it improbable, though not impossible, that the brain uses any quantum computations. It is hard to imagine any protected region of the brain where quantum interference could occur without fast decoherence. Despite this, there is a large volume of research on quantum aspects of the brain.
This would seem to suggest a high degree of uncertainty and disagreement over whether Penrose and Hameroff’s claims hold water.
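To put rough numbers on that disagreement: Tegmark’s published estimate for microtubule decoherence is on the order of 10^-13 s, Hagan et al.’s corrected figure reaches roughly 10^-4 s at the upper end, and Orch-OR is usually said to need about 25 ms (one 40 Hz gamma cycle). Here is a minimal sketch comparing the gaps; the specific values are my paraphrase of those published estimates, not a calculation from first principles:

```python
import math

# Order-of-magnitude figures from the published exchange (my paraphrase):
tegmark_decoherence_s = 1e-13   # Tegmark (2000), microtubule estimate
hagan_corrected_s = 1e-4        # Hagan, Hameroff & Tuszynski (2002), upper end
orch_or_needed_s = 2.5e-2       # ~25 ms, one cycle of 40 Hz gamma activity

# How many orders of magnitude separate each estimate from what Orch-OR needs?
gap_tegmark = math.log10(orch_or_needed_s / tegmark_decoherence_s)
gap_hagan = math.log10(orch_or_needed_s / hagan_corrected_s)
print(f"Shortfall vs. Tegmark's estimate: ~{gap_tegmark:.0f} orders of magnitude")
print(f"Shortfall vs. Hagan et al.'s estimate: ~{gap_hagan:.1f} orders of magnitude")
```

Even on the estimate most favorable to Orch-OR, a gap of a couple of orders of magnitude remains, which is why the quoted authors treat the question as unresolved.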
Further in the article body, it goes on to state:
Though historically the connection between quantum mechanics and the brain started with the measurement problem, nowadays a lot of attention has been focused on the brain as a quantum computer. The brain’s extraordinary computational power led several scientists, Penrose included, to suggest that it uses quantum computation, as we mentioned in Section 1. We will not examine this topic in detail, but a good informal yet careful review of the reasons for scepticism about this claim has been given recently by Koch and Hepp (2006). Some detailed negative arguments based on the rapid decoherence process of entangled quantum particles in most environments are to be found in Tegmark (2000). So we shall not explore here the issue of decoherence. Instead, we approach the relation between quantum phenomena and the brain by asking what kinds of quantum computations are often proposed, and to see if those computations can be reproduced by classical processes. This is the main goal of the next section.
&
Though quantum mechanics received a lot of attention with Penrose’s proposal that quantum computation is related to consciousness, other researchers see quantum mechanics as a possible mechanism for other cognitive processes. For example, Khrennikov and Haven (2007) claim that quantum probability interference is present in social phenomena as well as in cognition. In a more detailed and, in our opinion, interesting paper, Busemeyer, Wang, and Townsend (2006) analyzed the dynamics of human decision-making, and showed that not only purely Markov models did not fit the data well, but a better fit could be achieved by using quantum dynamics. Because of its better fit to the data and the straightforward distinction between quantum and classical dynamics made by Busemeyer et al. (2006), we will discuss this work in some detail.
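As an aside, the “quantum dynamics” these cognition models invoke does not require anything quantum in the brain; the working ingredient is interference between incompatible questions, which violates the classical law of total probability. A minimal numpy sketch, with a toy state and projectors of my own choosing rather than Busemeyer et al.’s actual model:

```python
import numpy as np

# Toy 2-D Hilbert space; the standard basis encodes yes/no answers to question B.
psi = np.array([0.6, 0.8])                 # initial belief state (normalized)

# Question A's "yes" projector lives in a basis rotated 45 degrees from B's,
# so A and B are incompatible (their projectors do not commute).
a = np.array([1.0, 1.0]) / np.sqrt(2)
P_A = np.outer(a, a)
P_notA = np.eye(2) - P_A
P_B = np.outer(np.array([1.0, 0.0]), np.array([1.0, 0.0]))

# Probability of "yes" to B asked directly:
p_B_direct = np.linalg.norm(P_B @ psi) ** 2

# Probability of "yes" to B after first answering A (Lueders rule),
# summed over both possible A outcomes:
p_B_after_A = (np.linalg.norm(P_B @ P_A @ psi) ** 2
               + np.linalg.norm(P_B @ P_notA @ psi) ** 2)

print(p_B_direct, p_B_after_A)   # 0.36 vs 0.50: the interference term
```

A classical (Markov) model would force those two numbers to be equal; the mismatch is exactly the kind of question-order effect the quoted paper says quantum models fit better.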
This paragraph is wrong on so many levels that it would take far more time than I am willing to devote to it, and far more attention than it deserves, to document all its problems.
I will however draw attention to one particularly ludicrous claim:
More importantly, materialism in general has been officially disproven …
One has to ask what ‘official’ agency did this. Is there an International Philosophical Worldview Court that determines these things, that we are unaware of? One of the problems with philosophy is that it never seems to prove or disprove anything of any substance: worldviews just keep accreting arguments for and against them, going into and out of fashion for a time.
This claim would seem to be further indication that @Meerkat_SK5 does not understand the English language, as it is a blatant misuse of the word “official”.
This is technically a repost of post 234, but I made so many changes after I posted it that I decided to repost it to make sure everyone reads what I actually intended.
What? Did you mean this…
We would expect not to find animals with features that ONLY impede on the population or another animal’s ability to survive, reproduce, and fit an environmental niche.
If so, this is what I meant by sinister designs. Can you provide an example of this?
Yes, the universal common design hypothesis suggests that…
All living cells are organized by Objective reduction (OR) events in virus and microtubule pi stacks (or closely related structures)
Then, here is the origin of life model:
Around 3.8 billion years ago, pi electron resonance clouds in single-chain amphiphile molecules coalesced into geometric pi-stacks, forming viroids with quantum-friendly regions for OR events within the deep-sea hydrothermal vents of the Earth.
Through natural selection and OR events, groups of viroids formed into highly ordered local domains of the key biomolecules of a DNA virus or molecule, which later evolved into different species of unicellular organisms.
Through horizontal regulatory transfer (HRT), these unicellular organisms underwent extensive regulatory switching and rewiring in their non-coding regulatory regions, which led to the divergence of transcription start sites and gene expression levels in the formation of primitive multicellular organisms and beyond.
Now, I am going to include Stuart Hameroff’s origin of species model to finish the story:
"Within cell cytoplasm, centrioles and microtubules fostered mitosis, gene mixing, mutations (influenced by Penrose OR mediated Platonic influences in DNA pi stacks) and evolution, all in pursuit of more and more pleasurable qualia. Cells began to communicate, compete and/or cooperate, guided by feedback toward feeling good. Cells joined through adhesion molecules and gap junctions, resulting in multicellular organisms.
Specialization occurred through differentiation via gene expression through cytoskeletal proteins. In some types of cells, the cytoskeleton became asymmetric and elongated, taking on signaling and management roles as axonemes and neurons.
Neurons and other cells fused by gap junctions, and chemical signaling ensued at synapses between axons, and dendrites and soma within which microtubules became uniquely arranged in mixed polarity networks, optimal for integration, recurrent information processing, interference beats, and orchestration of OR-mediated feelings.
Neurons formed networks, E_G grew larger, τ grew shorter, and conscious experiences became more and more intense. At E_G of roughly 10^11 tubulins in ~300 neurons or axonemes in simple worms and urchins, τ became brief enough to avoid random interactions, prompting, perhaps, the Cambrian evolutionary explosion."
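For anyone unfamiliar with the notation in that quote: as I understand it, E_G and τ refer to Penrose’s objective-reduction criterion, in which the lifetime of a quantum superposition is set by its gravitational self-energy:

```latex
\tau \approx \frac{\hbar}{E_G}
```

So a larger E_G (more tubulins held in superposition) means a shorter τ, which is what the quote means by “E_G grew larger, τ grew shorter.”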
Are you referring to my common design hypothesis or the Orch-OR theory?
Sure, let me provide a better example of how it supports the theory…
Your writing is nearly impenetrable, so that is unlikely.
I did. I can provide an example, but not until you get much more empirical. The whole point of the scientific method is to overcome our biases, something you are avoiding like the plague.
“Suggests” is unscientific weaseling. It has to be mechanistic enough to predict direct observations. Try harder.
No direct observations there. Adding “or closely related structures” is more unscientific weaseling.
Mutations (including chromosomal changes) can increase the probability of surviving and reproducing.
Heart failure isn’t a feature, and wouldn’t apply to an entire species.
No. Your theory, you get to fill in the blanks.
We find lots of these, and we expect to find lots of these. If your ‘theory’ says we shouldn’t, it’s toast. Also, some parasites increase the host’s ability to survive and reproduce. Yucca moths, sloth moss and dead-flesh-eating maggots, for example.
I don’t think you know what this is, since it doesn’t fit your description.
We find lots of these, and we expect to find lots of these. If your ‘theory’ says we shouldn’t, it’s toast.
(A) Humans with originally designed features that only reduce the probability of surviving, reproducing, and/or adapting, such as …
Serum response factor
Human esophagus
(B) Animals with features that are originally designed to only impede on the population or another animal’s ability to survive, reproduce, and fit an environmental niche, such as …
Toxoplasma gondii and its toxoplasmosis
Excretory or digestive system of parasitic insects
Tongue-eating louse, a parasitic isopod
Carnivorous behavioral genes of parasitic vertebrates
The enzyme β-1,4-glucanase
Alright, here it goes again…
Objective reduction (OR) events in RNA virus pi stacks modify and rewire phylogenetic and functional sequences in the non-coding regions of all living cells
BTW….
Was my origin of life model, at least, specific and mechanistic enough? If not, can you please highlight where you feel it needs to be more specific and clear?
What I meant by “original design” is that God cannot be held responsible for a genuine design flaw or cruel design feature if it is due to the decaying effects of the second law of thermodynamics. Instead, such features should be considered vestigial.
Moreover, if a function or a sensible purpose for a design can be or has been discovered, the “cruel” or “inferior” design argument falls apart. I am referring to design trade-offs here.
I guess not. I don’t get why it is subjective when I provided you examples of design features that were deemed to be a design flaw or evil design by secular scientists.
According to experiments, only humans have been shown to construct viruses and design them to facilitate the evolutionary process through horizontal regulatory transfer (HRT).
This means that we can infer that…
All currently living organisms have a common design (i.e. HRT) that can be traced back to this universal common designer.
These are the direct observations:
(A) We would expect analogous traits to evolve separately in nested but unrelated taxa in response to similar needs.
(B) We expect to find functional ERVs and pseudogenes in nested but unrelated taxa
(C) We would expect phylogenetic trees for regulatory regions in nested but unrelated taxa to fit the data better than species trees do (a sketch of one way to test this follows the list)
(D) We expect to find adaptive convergent genes in the genomes of nested but unrelated taxa
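For prediction (C), here is a minimal sketch of how the tree-fit comparison might be run in practice, using DendroPy’s Robinson-Foulds (symmetric difference) distance; the four-taxon topologies are invented for illustration, not real data:

```python
import dendropy
from dendropy.calculate import treecompare

# Both trees must share one taxon namespace to be comparable.
taxa = dendropy.TaxonNamespace()
species_tree = dendropy.Tree.get(
    data="((A,B),(C,D));", schema="newick", taxon_namespace=taxa)
regulatory_tree = dendropy.Tree.get(
    data="((A,C),(B,D));", schema="newick", taxon_namespace=taxa)

species_tree.encode_bipartitions()
regulatory_tree.encode_bipartitions()

# Robinson-Foulds distance: 0 means identical topologies; larger values
# mean the regulatory-region tree departs further from the species tree.
rf = treecompare.symmetric_difference(species_tree, regulatory_tree)
print(f"RF distance, regulatory vs. species tree: {rf}")
```

Prediction (C) would then amount to the claim that trees inferred from regulatory regions show systematically better fit statistics than the species tree on the same data, something that can be scored tree by tree.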
Alright, I am asking now. Please explain your point.
Ohhhhhhhh! I think I see where the disconnect is located.
Keep in mind, the rationale behind my hypothesis is based upon a principle regarding causation from past events, which was popularized by Charles Lyell, who also influenced Charles Darwin and Stephen Meyer, of course. “Lyell argued that when scientists seek to explain events in the past, they should cite causes that are known from our uniform experience to have the power to produce the effect in question. Historical scientists should cite ‘causes now in operation’ or presently acting causes,” which would be humans in this case.
This is historical science we are dealing with here NOT observational science.
It was also a total non sequitur, as @Mercer’s point was not about “‘historical science’ vs ‘experimental science’” but about looking at which previous (i.e. historical) hypotheses have been successful in order to learn what features you need for a new hypothesis to be successful.
Hypothesis–prions are infectious proteins, nucleic acids are not involved.
Prediction–proteases will reduce infectivity, DNase and RNase will not.
Observation–no treatment, protease, RNase, and DNase yield 1x, <1x, 1x, 1x infectivity, respectively.
Note that all of the interpretation is baked in IN ADVANCE, before the test.
That’s how real science works. Pretending that something you already knew was predicted is not.
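To make “baked in IN ADVANCE” concrete, here is a minimal sketch of that pre-registered logic; the treatment names and the 1x / <1x infectivity values are copied from the example above, while the code itself is only my illustration:

```python
# Predictions are written down BEFORE the experiment is run.
PREDICTIONS = {
    "no treatment": "1x",
    "protease":     "<1x",  # protein degraded -> infectivity should drop
    "RNase":        "1x",   # hypothesis: nucleic acids are not involved
    "DNase":        "1x",
}

# Observations are filled in AFTER the experiment.
observations = {
    "no treatment": "1x",
    "protease":     "<1x",
    "RNase":        "1x",
    "DNase":        "1x",
}

# Scoring needs no interpretation: each outcome either matches the
# stated prediction or it doesn't.
for treatment, predicted in PREDICTIONS.items():
    verdict = "pass" if observations[treatment] == predicted else "FAIL"
    print(f"{treatment}: predicted {predicted}, "
          f"observed {observations[treatment]} -> {verdict}")
```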