What is pseudoscience?

I used to have a general idea of what I thought was pseudoscience (ID, ghosts, occult and “spooky” stuff, etc.), but after reading some work by Paul Feyerabend and Larry Laudan, I’m not too sure anymore. In order to have a definition of “pseudoscience,” we would need an agreed-upon definition of what “science” is.

While falsification seems like a solid guide, not all fields are equally open to the same types of falsification, and some philosophers of science I’ve talked to (like Michael Ruse) don’t think Popperian falsifiability is a necessary or sufficient condition for defining science.

Do you agree with Laudan that the word “pseudoscience” is a “hollow phrase” that has no cognitive meaning and is simply an emotivist term? Or is there something agreed-upon that the word refers to?

2 Likes

Science is the application of the scientific method.

[image: flowchart of the scientific method]

The best examples of pseudoscience can probably be seen in the field of medicine. More specifically, they are seen in fringe beliefs that surround the medical field. For example, homeopathy is the idea that you can dilute a medication nearly endlessly and the water will still be medicinal because it “remembers” the shape of the medicine. Its support consists almost entirely of anecdotal evidence, a common feature of medical pseudoscience.

This is perhaps one of the defining features of pseudoscience: it never really tries to challenge itself. Instead, its adherents are emotionally attached to it, so they only ever notice what they take to be supporting evidence. Science is trying your hardest to prove yourself wrong. Instead of anecdotal evidence, a scientific study would use something like a double-blind design and a placebo control. You try your hardest to remove any variables that might influence the outcome in a way that would give you a false positive.
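
For intuition, here is a minimal simulation sketch (the numbers are invented and the scenario is generic, not from any real study) of why before-and-after anecdotes can make an inert remedy look effective, while a placebo-controlled comparison shows nothing:

```python
# Minimal sketch with invented numbers: why before/after anecdotes can make an
# inert remedy look effective. People tend to start a remedy when symptoms are
# at their worst, so they usually improve afterwards regardless of treatment
# (regression to the mean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

baseline = rng.normal(50, 10, n)                # each person's usual symptom level
at_start = baseline + rng.normal(15, 5, n)      # symptoms when they reach for the remedy
follow_up = baseline + rng.normal(0, 5, n)      # symptoms later, drifting back to baseline

# Anecdotal view: "I took it and I got better."
print("mean 'improvement' on an inert remedy:", round((at_start - follow_up).mean(), 1))

# Controlled view: randomize people to remedy vs. placebo. The remedy is inert,
# so both groups regress toward baseline the same way and no difference shows up.
remedy, placebo = follow_up[: n // 2], follow_up[n // 2 :]
t_stat, p_value = stats.ttest_ind(remedy, placebo)
print("remedy vs. placebo p-value:", round(p_value, 3))
```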

Other features of pseudoscience can include made-up conspiracies to explain why the mainstream suppresses it, ad hoc rescue devices, misrepresentation of scientific results, and adherence to some long-falsified concept like the luminiferous ether. Deepak Chopra’s quantum woo is another great example of pseudoscience.

1 Like

I covered this topic back in this thread:

You might want to take a look at Michael Shermer’s ten questions, which I quote in that thread’s OP.

Short and sweet. Anyone not trying to disprove their ideas isn’t doing science.

1 Like

Your point about openness is true, but the delimiting characteristic is whether there’s any attempt at falsification.

IDcreationism is pure pseudoscience, and not because design can’t be studied scientifically: plenty of real science focuses on design (archaeology, for instance). It’s pseudoscience because it avoids advancing or testing any such hypotheses, even though obvious ones are staring it in the face.

Descriptive work done for the purpose of providing phenomena for hypothesis testing (the Human Genome Project being an obvious example) also qualifies as science IMO.

1 Like

Referencing @T_aquaticus’s diagram, the pseudo-scientist’s questions are really conclusions, their research only seeks confirming data, and any mismatch between the results and the initial conclusion is ignored or hand-waved away. Unlike more generic quackery, the pseudo-scientist will do experiments, but with poorly considered methods.

So it looks like this:
[image: the scientific-method flowchart redrawn for pseudoscience]

You have to apply the scientific method to produce reliable scientific data, so projects like the HGP and JWST certainly belong under the umbrella of science. Any worthwhile data in science requires things like negative controls, shared reference values, statistical analysis of noise/signal, and so forth. These are all “mini-hypotheses” if you will.
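
As a toy illustration of that last point (the assay, numbers, and thresholds here are invented, not from any real project), each of those “mini-hypotheses” can be written down explicitly, e.g. a measurement is only trusted if it can be distinguished from the negative control and clears the noise floor:

```python
# Toy sketch with invented numbers: a negative control as a "mini-hypothesis".
# The implicit null claim is "my signal is indistinguishable from the blank";
# the measurement is only reported if that claim can be rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

blank = rng.normal(0.05, 0.02, 12)     # negative control replicates (buffer only)
sample = rng.normal(0.30, 0.05, 12)    # sample replicates

t_stat, p_value = stats.ttest_ind(sample, blank, equal_var=False)
signal_to_noise = (sample.mean() - blank.mean()) / blank.std(ddof=1)

print(f"p-value vs. negative control: {p_value:.2g}")
print(f"signal-to-noise ratio: {signal_to_noise:.1f}")

# Only keep the data point if it beats the blank and clears an SNR threshold.
print("measurement usable:", p_value < 0.01 and signal_to_noise > 3)
```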

1 Like

It’s an interesting oddity that philosophers of science (such as Paul Feyerabend and Larry Laudan) find it difficult to distinguish science from pseudoscience, while the distinction is obvious to most actual scientists. This suggests that the philosophers do not have a good grasp of how actual scientists think and work.

3 Likes

To be fair, you also have to apply the scientific method to repair a car or solve a murder competently.

3 Likes

I’m no expert on philosophy of science, but

  1. I believe that philosophers of science call the “what is science” problem “the demarcation problem”, and it is difficult and controversial.
  2. Neat flow charts of how to do science are comforting, but not really comprehensive. When Galileo looked at the moons of Jupiter, was he doing science? Was he following that flow chart?
  3. For that matter I have been doing science for 63 years or so. Never once have I followed that flowchart. (I do not so much do experiments as develop methods and do mathematical work on the relationship between models and observations).
  4. Disproving things by doing critical experiments is often not possible. Instead probabilities are involved, at best. Making inferences about reconstruction of evolutionary trees is like that, as some parts of molecular sequences can contradict even a correct tree, by accident.
  5. Nevertheless many kinds of pseudoscience are easy to label as such. When someone comes along and says they have discovered that “life is like a flower” and lists ways in which that is true, it is easy to see that they haven’t discovered anything but a vague analogy to some things.

OK, I’ve stuck my neck out. Let me know why I don’t understand science …

5 Likes

At the fringes of science it can be difficult. String theory comes to mind. In the words of Wolfgang Pauli, “It’s not even wrong.” However, there were many scientists who thought string theory was worth pursuing. In the end, it is the scientific community that determines what science is.

The difficult demarcation is often between bad science and good science. As you mention, pseudoscience is so beyond the pale that no one even needs to debate where it lands on that spectrum.

2 Likes

*Wolfgang Pauli

Doh! Fixed.

1 Like

I agree with what @Joe_Felsenstein said up above.

This idea that scientists are all constantly working to prove themselves wrong just doesn’t work in practice. Usually they’re trying to prove themselves right and convince others.

I have worked in multiple research laboratories, and most of the research consisted of measuring various phenotypic effects of certain drugs, mutations, and so on. Collecting data by simply making observations is part of science, and then there’s all the statistics done on that data.

There’s an important distinction between naive and sophisticated falsificationism.

What I would say is that scientific hypotheses must be testable in principle (though usually the tests are statistical), but that doesn’t mean scientists are, or even should be, constantly trying to prove themselves wrong.

There isn’t a hard line separating science from pseudoscience, since one can shade gradually into the other. What I would say here is that one way pseudoscience can be identified is to consider the behavior of people engaging in pseudoscientific practices over longer periods of time.

There is rarely just one instance that separates one from the other, but a number of consistent behaviors showing bad scientific practice can accumulate into a case that the person is practicing pseudoscience. I would say someone like Douglas Axe is a pseudoscientist in practice when we consider the totality of his work (both practical and theoretical, and how he communicates his findings) purporting to show the impossibility/improbability of de novo protein evolution, along with all of the apologetics surrounding the few concrete experiments he has actually done on the subject.

When I said “Science is trying your hardest to prove yourself wrong,” the types of things I was talking about are probably found in the experiments you describe. For example, you will have a non-drug or non-mutant control group. You may also have a vehicle control group where you inject mice with just the vehicle for the drug (something I have done in mouse experiments). On the statistical side, you are estimating the probability of getting results like the observed ones if both groups were drawn from the same population (i.e. under the null hypothesis). I would assume you have also designed other parts of the experiment to limit other variables, such as differences in housing and diet. These are the types of things I had in mind when I said science tries to prove itself wrong.
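
To make that statistical step concrete, here is a minimal sketch (the phenotype scores and group sizes are made up, not from any real mouse study) of testing the null hypothesis that drug-treated and vehicle-treated animals come from the same population:

```python
# Minimal sketch with made-up numbers: drug vs. vehicle control in mice.
# Null hypothesis: both groups are drawn from the same population, i.e. the
# drug (over and above its vehicle) has no effect on the measured phenotype.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

vehicle = rng.normal(100, 15, 10)   # phenotype scores, vehicle-injected mice
drug = rng.normal(85, 15, 10)       # phenotype scores, drug-injected mice

t_stat, p_value = stats.ttest_ind(drug, vehicle)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value says data this extreme would be rare if the null were true,
# so the attempt to explain the result away as "same population" fails.
```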

1 Like

To Axe I would add another marker: he (under supervision) did an earlier study that showed that barnase is resilient in the face of many mutations:

https://www.pnas.org/doi/10.1073/pnas.93.11.5590

In that paper, he did enzyme assays.

In the crap 2004 paper, he didn’t bother with assays, despite the fact that they are cheap.

Axe and his IDcreationist colleagues cite the second all the time, but ignore the first, more rigorously performed study.

I have encountered homeopaths trying to do real science. They needed some help with designing experiments, and were serious about testing their ideas. I can’t talk about the details, but their efforts were harmless in the ethical sense and mostly meaningless in a medical sense. Also, they never published. BUT I’ll give them this much: negative results are also science. :slight_smile:

Speaking as a statistician, I will say that when I am performing an analysis on a dataset to assess a specific hypothesis, I am indeed quite disappointed when there is little evidence for a preconceived idea. That doesn’t mean null results can’t be interesting or elucidating, but I intrinsically want my idea (or a colleague’s idea) to show merit. Try as I might to avoid it, that is often what I feel.

My attitude is a bit different when performing an analysis that is more exploratory in nature to aid in hypothesis generation. Here, I am often more pleased to be shown to be “wrong” about preconceived notions because these results can be interesting, and I have no real horse in the race yet per se.

1 Like

Most people would be very fortunate to find themselves as uninformed about science as you. :wink:

Seconding this. Researchers generally don’t go to all the trouble of collecting data for a hypothesis they expect to be false, where any “significant” result would just be a Type I error. I think John Ioannidis has a paper on publication bias that considers this question.
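
For intuition, here is a generic simulation sketch (this is not Ioannidis’s actual model; the base rate and effect size are invented) of how publishing only “significant” results inflates the share of false positives among published findings:

```python
# Generic sketch (not Ioannidis's model): publication bias and Type I errors.
# Assume only a small fraction of tested hypotheses are true, and only results
# with p < 0.05 get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_studies, n_per_group = 5000, 20
prob_true, true_effect = 0.1, 1.0      # 10% of hypotheses are true

published_true = published_false = 0
for _ in range(n_studies):
    hypothesis_is_true = rng.random() < prob_true
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(true_effect if hypothesis_is_true else 0, 1, n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:                 # only "significant" findings get written up
        if hypothesis_is_true:
            published_true += 1
        else:
            published_false += 1

share_false = published_false / (published_true + published_false)
print(f"published findings that are false positives: {share_false:.0%}")
```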

It might be this one: