thegrim33
9 months ago
The results are based on a grand total of 25 people in the psilocybin group and 21 people in the SSRI group. The sample size is pretty small.
The methodology is also kind of strange: the psilocybin group got a total of 20 hours of in-person therapy during their 'treatment' plus 6 follow-up Skype calls, whereas the SSRI group didn't get anything other than the 6-month questionnaire. Those 20 hours of personalized therapy while they were dosing had no effect on their psychology? Any change was entirely a result of the psilocybin and not the 20 hours of therapy?
They also measured results with a self-administered 16-question "quick inventory" depression survey. To enter the study, participants had to be officially diagnosed with major depression by a doctor, but the results of the study were based entirely on a self-reported 16-question questionnaire?
drilbo
9 months ago
>The methodology is also kind of strange: the psilocybin group got a total of 20 hours of in-person therapy during their 'treatment' plus 6 follow-up Skype calls, whereas the SSRI group didn't get anything other than the 6-month questionnaire.
They got 'matched' support, which reads to me as 'equivalent': "Patients were randomly assigned (1:1) to receive either two 25 mg doses of the psychedelic drug psilocybin administered orally combined with psychological support (‘psilocybin therapy’ or PT) and book-ended by further support or a 6-week course of the selective serotonin reuptake inhibitor (SSRI) escitalopram (administered daily at 10 mg for three weeks and 20 mg for the subsequent three weeks) plus matched psychological support (‘escitalopram treatment’ or ET)."
>They also measured results with a self-administered 16-question "quick inventory" depression survey. To enter the study, participants had to be officially diagnosed with major depression by a doctor, but the results of the study were based entirely on a self-reported 16-question questionnaire?
This is only the follow-up portion, and a secondary measure at 6 weeks. The original study (https://clinicaltrials.gov/study/NCT03429075) says the primary measure was: "Change in blood oxygen level dependent (BOLD) signal during fMRI in response to emotional faces during an emotional faces paradigm done inside the fMRI scanner." at 6 weeks vs baseline.
fsckboy
9 months ago
>The results are based on a grand total of 25 people in the psilocybin group and 21 people in the SSRI group.
Statistical significance is based on sample size, and is independent of population size. Let that sink in: it doesn't matter if your population is 100K or 8 billion; when you sample, you are trying to understand the probabilities in your sample, not in the population.
Therefore, (think about the birthday paradox, doesn't matter how many people in the world, a few dozen in the room with you is an adequate sample), it should not surprise you that statistical significance is achieved through a much smaller sample size than most non-statisticians have intuition for.
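The birthday-paradox intuition can be checked in a few lines of Python. A minimal sketch, assuming 365 equally likely birthdays and ignoring leap years; `shared_birthday_prob` is my own helper name:

```python
# Birthday paradox: probability that at least two of n people share a
# birthday, assuming 365 equally likely birthdays. Illustrates how a
# small group already makes a "surprising" event near-certain.
def shared_birthday_prob(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(shared_birthday_prob(23), 3))  # ~0.507: 23 people already pass 50%
print(round(shared_birthday_prob(50), 3))  # ~0.970
```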
TeaBrain
9 months ago
>Therefore, (think about the birthday paradox, doesn't matter how many people in the world, a few dozen in the room with you is an adequate sample), it should not surprise you that statistical significance is achieved through a much smaller sample size than most non-statisticians have intuition for.
This response on the supposed lack of importance of sample size is wrong on just about every claim. The parent comment had a valid point. Just because a population fits a certain distribution does not mean that any given sample will also fit that distribution. Samples are ideally meant to create a representative group of a population that is smaller than the population, but the sample size required to come close to a representative distribution varies between populations and between the variables being examined. Also, the birthday paradox is a terrible example and has nothing to do with statistical significance; the so-called paradox is just a simple probability calculation.
braiamp
9 months ago
Except that sample size doesn't matter much when the effect on the dependent variable is very large. Someone pointed out that you can demonstrate that alcohol impairs executive function, balance, etc. with a very small sample size, because the effects would be so large and evident that your statistical power would be high anyway. Even with large variance, studies whose results are dichotomous in nature (can a subject walk a straight 10-meter line without stepping outside it: yes/no) can use a very small sample size.
Use this calculator, set options to: two independent groups, dichotomous, group 1 = 90%, group 2 = 10%, incidence, enrollment ratio = 1, alpha = 0.05 and power = 80%. The sample size is 10, 5 for each group. https://clincalc.com/stats/samplesize.aspx
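The calculator's answer can be reproduced with the standard normal-approximation formula for comparing two proportions. A sketch; `n_per_group` is my own helper name, and the formula is the usual two-sided two-proportion one:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided test comparing
    two proportions (normal approximation, pooled variance under the null)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.9, 0.1))  # 5 per group, matching the calculator
print(n_per_group(0.6, 0.4))  # a subtler effect needs ~97 per group
```

The second call shows the flip side of the argument: shrink the effect from 90%-vs-10% to 60%-vs-40% and the required sample balloons.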
TeaBrain
9 months ago
That other comment on alcohol is by the same guy who made the comment I responded to here. The issue there is the same: result reliability can be influenced by the effect size of the variables being tested, but that is still far from a guarantee that the results will generalize, which is more likely to be a problem with a smaller sample.
An issue with the comment I previously responded to, as I mentioned above, is that it made it out as if a small sample size could reliably establish statistical significance irrespective of the variables under study, and tried to prove this with an absurd analogy between statistical significance and the birthday paradox. The problem with that attempted point is that it didn't even respond to the comment above it, which pointed out that high statistical significance is not a guarantee of reproducibility, especially with a low sample size.
braiamp
9 months ago
Sample size only matters for statistical power, and after a certain point the returns fall off a cliff. Laypeople seem to mistrust seemingly small sample sizes, but they need to understand that doubling the size of the study may only improve statistical power by 5-10% for dichotomous studies. If we mandated bigger sample sizes without taking into account why sample size is relevant and how it's calculated, it would make every study cost-prohibitive.
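The diminishing returns are easy to sketch with a back-of-envelope power curve. This is my own toy illustration (unpooled normal approximation, arbitrarily chosen 60%-vs-40% effect), not the thread's study design:

```python
from math import sqrt
from statistics import NormalDist

def power_two_props(p1: float, p2: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-proportion z-test with n subjects
    per group (unpooled normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
    return NormalDist().cdf(abs(p1 - p2) / se - z_a)

# Each doubling of n buys less and less power once you're past ~80%.
for n in (100, 200, 400, 800):
    print(n, round(power_two_props(0.6, 0.4, n), 3))
```

The first doubling buys a meaningful jump; after that, power is already saturated near 1.0 and extra subjects buy almost nothing.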
Wytwwww
9 months ago
> few dozen in the room with you is an adequate sample
It's not, though? Unless you're fine with a very high margin of error... Also the sample in studies like this is hardly ever close to being random anyway.
> it should not surprise you that statistical significance is achieved through a much smaller sample size
Sure... just with a very low confidence level.
braiamp
9 months ago
Margin of error only becomes a real concern when you expect high variability and a very large set of possible measurements. When the set of potential measurements is of size 2, enough statistical power can be achieved with a very low sample size. Anyway, many results have survived phase 4 analysis (real-world, observational, sourced from administrative documents), even when their original sample size was relatively small.
shakna
9 months ago
Statistical analysis in psychology is traditionally very poor, because there is an exceptionally high amount of variability. This is a known problem.
braiamp
9 months ago
No, psychology studies have traditionally been very poor because most of the theory was literally pulled out of someone's ass, without proper design. Newer studies have been shown to be very robust, public freakout notwithstanding.
shakna
9 months ago
Newer studies [0] show us that
a) p-hacking is still alive and well in the psychological community.
b) p-curves are not sufficient for detecting this.
That isn't a lack of proper design. It's a case of statistics being abused to show significance when there is none.
seeknotfind
9 months ago
The issue your parent comment raises is that this random group may not represent someone who's trying to evaluate what techniques can help them, regardless of the statistical significance within that group. It makes me sad that they introduce other random variables, since incontrovertible evidence of efficacy could help a lot of people!
fsckboy
9 months ago
except it was completely non-specific about what those flaws might be, simply saying "sample is too small" when the sample is not a priori too small for a properly designed study of something with a "noticeable" effect. For example, "does alcohol get people drunk" is not hard to show with a sample of 10 people.
AbstractH24
9 months ago
A larger sample size isn't inherently better, but a large sample drawn from a diverse enough pool of people can be used to eliminate and/or identify confounding variables and distortions.
risenshinetech
9 months ago
This comment is a shining example of the phrase "not even wrong".
TeaBrain
9 months ago
Just the first line alone, "Statistical significance is based on sample size, and is independent of population size", was bizarrely silly as a description of the use of sample sizes, even before they got to their nonsensical analogy to the birthday paradox.
notfed
9 months ago
If that's true, I'm confused how this is a "double-blind, randomized, controlled trial"?
Also, what's up with drug studies always having such a low sample size? Is it really that hard to find people who'd volunteer to get free drugs?
taurath
9 months ago
> Also, what's up with drug studies always having such a low sample size? Is it really that hard to find people who'd volunteer to get free drugs?
They try to select for no comorbidities, and for MDD that is pretty rare. It also means that we’re often studying rare configurations compared to those commonly seen in actual practice. Statistics doesn’t like confounding elements, and humans are very confounding. So either you get “bad” statistics, or you get “bad” data. That’s why you have front-line drugs that only have a helpful effect for 33% of people.
nkmnz
9 months ago
Just to add one minor detail to this very good explanation: even if you could find 500 participants matching the criteria instead of 50, you wouldn’t want to “waste” all of them on your first study design.
braiamp
9 months ago
> If that's true, I'm confused how this is a "double-blind, randomized, controlled trial"?
Those are design descriptions.
Double-blind means that neither the patient nor the one administering the treatment knows which is which. Randomized only means that each subject is randomly assigned to a treatment group. Controlled trial just means that the study design made sure that other variables not under experiment are also under control. Nothing about this precludes actions done to the subjects, nor constrains sample sizes.
andoando
9 months ago
It's expensive. Statistically speaking, it's really not that small. You can always argue p-hacking, but these studies are always useful as a means to motivate further research.
Affric
9 months ago
Often you’re not allowed on your medication for any of your other health issues.
londons_explore
9 months ago
We need a new approach to randomly controlled trials.
I propose a new approach: Rather than given treatment vs not given treatment, we instead vary the dose slightly, and we include the whole world in the trial.
I.e. instead of taking 100mg of Advil, you will instead receive somewhere between 95mg and 105mg of Advil. You won't be told how much you got, but the barcode on the box will encode that info. That might already be the case due to allowed inaccuracies, but now we're gonna measure and record it.
Later, the data of which box was dispensed is combined with any other relevant medical records, and across the hundreds of millions of people involved, any benefit/disadvantage of a small increase or decrease in dose will become apparent.
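A toy simulation (my own sketch, with entirely made-up numbers) shows the statistical idea behind the proposal: a dose effect buried far below individual noise still shows up in a least-squares fit once you have millions of records:

```python
import random

# Toy model of the proposal: dose varies uniformly in 95-105 mg, and
# outcome = true_slope * dose + large per-person noise. With a million
# records, ordinary least squares recovers even a tiny slope.
random.seed(0)
true_slope = 0.01                      # hypothetical effect per mg
n = 1_000_000
doses = [random.uniform(95, 105) for _ in range(n)]
outcomes = [true_slope * d + random.gauss(0, 5) for d in doses]

# One-variable least squares: slope = cov(dose, outcome) / var(dose)
mx = sum(doses) / n
my = sum(outcomes) / n
cov = sum((x - mx) * (y - my) for x, y in zip(doses, outcomes))
var = sum((x - mx) ** 2 for x in doses)
print(cov / var)  # close to 0.01 despite noise dwarfing the signal
```

This only addresses statistical detectability; it says nothing about the confounding and measurement problems the replies below raise.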
malfist
9 months ago
That's not generally how drugs work. There are wide margins in how much dose is required for clinically significant results. Not only is the 110-pound petite woman taking the same dose of aspirin as a 400lb elite powerlifter, but the effects can't be quantified in high resolution: the woman can't say her pain was 5% less than the powerlifter's, and they can't say their pain was 5% more.
refulgentis
9 months ago
This doesn't sound very helpful
advael
9 months ago
I mean, it sounds like a way to trick people into letting a giant company build a surveillance system, and I imagine aliens observing humans at this moment in history might conclude that's what we want. But for science or the benefit of the people "surveyed," it's mostly downsides.
reissbaker
9 months ago
TBH there's no way to have a double blind trial of drugs like psilocybin (Scott Alexander has written a little bit about this [1] with respect to controlled trials for MDMA), for any reasonable dose size of psilocybin. Both the patient and the person administering the drug will become very aware, very quickly, if they're in the psilocybin group.
1: https://slatestarcodex.com/2017/06/05/is-pharma-research-wor...
notduncansmith
9 months ago
This is wrong. You can absolutely dose psilocybin at levels that make a meaningful difference in experience (certainly enough to have an anti-depressant effect) without “tripping” or being in any way impaired. For many adults, this will be around 100-200mg of psilocybin.
llamaimperative
9 months ago
It's not a question of impairment, it's a question of whether you come to some determination of whether you are in the control group or not.
dymk
9 months ago
Microgram, not milligram, of psilocybin. 100mg of psilocybin would be about 50-200 grams of dried cubensis mushroom.
There have been studies on microdosing, but it’s also valuable to know what happens at perceptible doses as well. It’s certainly a different experience.
AstralStorm
9 months ago
Unsurprisingly, the way around it is to introduce a third or fourth treatment arm with classical psychedelic 5-HT2 action. There are a few such choices, and some are also registered antidepressants: one being DXM/bupropion, another an ergoloid derivative or a psychedelic amphetamine like DOM.
But then you need to run a trial with an N of one or two hundred.
luckydata
9 months ago
Therapy is such a wash statistically that I'm not particularly confused or concerned by that, and everyone who has ever taken psilocybin knows the results are typical.
arcticbull
9 months ago
It’s kinda interesting you say that, because studies show that SSRIs are not much better than placebo for treating depression, and that therapy plus SSRIs is the best treatment available right now.
napoleongl
9 months ago
The article states that both groups received psychological support though. The only mention of 20 hours I find in the article is as an option to psilocybin. Does the original research article say something else?