O/T, but I once had someone bring me PCR data from mice. There were 12 mice in the study, but the amount of tissue they could sample was so tiny that all the samples had to be combined before PCR. Their data was a single number, and I had to explain why the sample size was one instead of 12.
How does that work? The point of PCR is that you can start with a really tiny sample.
In this case the sample might be just a few cells, I think.
I’m not sure this is the same paper, but if not it is related.
Check the acknowledgement.
Well, could be the reverse transcription step that’s the limiting factor.
Can’t. Paywalled. But I’ll take your word.
Yikes!! That would require 60 mice per group for 5 biological replicates. Review boards would probably require a lot of justification for using that many animals in each group for just one experiment.
What I have noticed over the last 20 years is that power analyses are absolutely required as part of any grant application or IRB approval process. I think this is a really good development, since it requires scientists to think through their experimental design, hypotheses, and overall model.
Correct. They went back and did it four more times, for a total of 50-60 mice. Poor Mickey!
Over the course of my career (~20 years) there has been real change in biomedical journals requiring stats review. It used to be uncommon, and now it’s mandatory for high-impact journals, and very common even among low-impact journals (like the Wisconsin Medical Journal).
When I started, about half of my work began with (paraphrasing), “Here is my data, what do I do now?” Now every protocol gets attention from a statistician.
There is more improvement to be made, starting with p-values.
The statistician I have worked with in the past insists on being part of experimental design, to help ensure the data will be as statistically sound as possible. I know I am preaching to the preacher, but I do think the importance of statistics is something non-experts may not fully appreciate.
Preach the Word, Brother!
And if you have not seen this already, it causes howls of laughter at stat conferences: Biostatistics versus Lab Research
I’ve had this same discussion with researchers more than once.
OTOH, some statisticians I’ve worked with were hard to pin down on specific designs or numbers of repeats. Ask for a black & white decision, get grey answers. Injecting some information on $$$ per data point seems to quicken the convergence on ‘good enough’ numbers.
Entirely appropriate. Good experimental design is very important when using statistical methods.
If you have grey expectations then you get grey answers. In order to get clear answers you need to have a clear idea of what differences are biologically significant, what differences you expect to see between your control and test groups, the expected variability for technical and biological replicates, and various other factors. For example, if I tell the statistician that I am looking for a >3 fold difference in gene expression with 20% variability within groups then I can get a clear answer for how many mice to put in each group.
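As a rough illustration of how those inputs turn into a mouse count: a back-of-the-envelope, normal-approximation power calculation. This is a sketch only; the function name is mine, the assumption that the 20% CV maps onto the SD of log2 expression is mine, and a real statistician would use an exact noncentral-t calculation rather than this approximation.

```python
import math
from statistics import NormalDist

def n_per_group(fold_change, cv, alpha=0.05, power=0.80):
    # SD on the log2 scale; for small CVs, SD(ln X) is approximately CV.
    sd_log2 = cv / math.log(2)
    # Standardized effect size (Cohen's d) for the target fold change.
    d = math.log2(fold_change) / sd_log2
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    # Classic two-sample normal-approximation formula; floor of 2 so
    # within-group variance can still be estimated.
    return max(2, math.ceil(2 * ((z_alpha + z_beta) / d) ** 2))

print(n_per_group(3.0, 0.20))  # big effect, low noise: 2 mice per group
print(n_per_group(1.5, 0.40))  # subtle effect, noisy assay: 16 per group
```

The point the calculation makes concrete: a >3-fold difference with 20% variability is a huge standardized effect, so the clear answer really is just a handful of mice per group, while a subtler effect in a noisier assay balloons the group size.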
My response was made a bit tongue in cheek.
But to your point of experimental design… Many times the actual variability and influence of various factors are not known in advance. We can make reasonable guesses, hoping they hold up. And then it comes down to the level of confidence you need. That’s actually the part where long discussions with statisticians occur.

In early project stages at my work (drug discovery), we sometimes have an interesting time convincing statisticians of the relatively large amount of noise we can accept as a trade-off against cost. In fact, at early stages, when we test many compounds for effect and have tuned the assay windows to reasonable degrees, we tend to set cut-offs based on the available screening capacity. Later, for example in safety screening, animal and clinical work, we’ll “tighten the screws”, statistically speaking. That’s where our statisticians earn their keep. (In enzyme assays? Not so much, mostly because there we’re interested in quantities best measured on a logarithmic scale.)
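The capacity-driven cut-off is easy to sketch: instead of deriving a threshold from a significance test, rank compounds by effect and keep as many as downstream capacity allows. The names below are purely illustrative, not anyone’s actual screening pipeline.

```python
def capacity_cutoff(scores, capacity):
    """Rank compounds by activity score and keep the top `capacity`;
    the implied score threshold is simply whatever the last kept
    compound scored."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    hits = ranked[:capacity]
    threshold = hits[-1][1] if hits else None
    return [name for name, _ in hits], threshold

# Toy screen: four compounds, capacity to follow up on only two.
hits, thr = capacity_cutoff(
    {"cmpd_a": 0.91, "cmpd_b": 0.42, "cmpd_c": 0.77, "cmpd_d": 0.15},
    capacity=2,
)
print(hits, thr)  # ['cmpd_a', 'cmpd_c'] 0.77
```

The design choice is the whole point: the threshold falls out of how many follow-ups you can afford, not out of a p-value.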
Dan, I did not expect (as someone who is not in any way, shape, or form a statistician) to relate to this hilarious video—but I absolutely did. It brought back memories of when I was a lowly part-time “consultant” in a state university’s computer center. I had mentioned previously that I used to get a lot of basic “I can’t figure out how to use SPSS for my project/assignment/dissertation” questions. Most of these questions were resolved by proofreading their control codes for the big Control Data Corporation mainframe. [By the way, about half of the faculty and grad students were still using punch cards at the time—sometimes several shoe-box sized collections of them. Only the most well-funded ones had access to Scantron machines and OCR devices.] But the remaining inquirers were hopelessly confused by basic statistical terms and concepts—and that was particularly sad because (1) I was no great expert on such, and (2) even I felt amazed that they had reached tenure or PhD candidacy or whatever without really grasping the fundamentals of what they were supposed to do with statistics.
And that is exactly why I howled with laughter when the inquirer in the video kept asking if N=3 was a sufficient sample size. That is not much of an exaggeration! I am being totally truthful when I tell you that I had conversations even with doctoral candidates (especially from the School of Education and the School of Journalism) who would bring to me a stack of punch cards (as few as N=28 in one instance) and would have face-palm-inducing expectations for what their “groundbreaking” study (of two classes of undergrad volunteers filling out surveys to get some kind of classroom credit) was going to “statistically prove” and get published in some journal.
While I tried to diplomatically explain why the sample size was too small for what they were trying to achieve, I would soon lose their attention. While re-reading the list of questions they’d brought to ask me, they would suddenly interrupt my explanation by excitedly blurting out some of my favorite gems from that job:
(1) “Oh! I almost forgot! I need you to show me how to do kurtosis. My faculty advisor really likes kurtosis!”
(2) “Can you get me privileges on the computer center’s new Halographix?” [I probably can’t correctly recall the name brand of it after all these years but it was an electrostatic drum plotter. It cost something like $20,000. A lot of money in those days. It probably came from an NSF grant.] “A Halographix plot series in my appendices would be a huge advantage for my dissertation! That would be so cool!”
Two years later I was finally a genuine Assistant Professor at a midwestern university and the president of a nearby evangelical university calls me and asks for a “special favor”. (In truth, I did owe him multiple favors at the time.) He had just hired a new Dean of the College who had to complete and defend her Ed.D. dissertation before she could start her new post that coming fall. “She’s really stuck on her statistics. Could you make sure she gets finished in time?” I agreed. Believe it or not, she brought me one of those stereotypical N=30 types of survey-based “research studies” on educational methodologies. But this time, when I started to explain why what she was trying to accomplish would be statistically meaningless, she said, “Oh, I know that. But I just have to show that I know the basics of HOW to go about such a study if I were able to pull together a sufficiently large sample size and better methodologies.” Yes, I had learned years before that the rigor expected of Ed.D.'s in those days was not always impressive. Yet, lots of those dissertations got published in journals, nonetheless! Nevertheless, I helped this lady finish her work and she got her prized sheepskin in time to be the new Dean of the College.
My apologies to the moderators. I suppose this is a distracting subthread that should go elsewhere—but I guess I’m only piling onto an already existing fun subthread. (Yeah, lame excuse on my part.)