
A prospective, randomized double-blind, placebo-controlled study of safety and efficacy of a high-concentration full-spectrum extract of ashwagandha root in reducing stress and anxiety in adults

Chandrasekhar K, Kapoor J, Anishetty S · Indian Journal of Psychological Medicine
Grade: C
Weak evidence · 58/100
Several methodological limitations. The finding is suggestive, not confirmatory. Don't change behavior based on this alone.
What this study found

A 60-day double-blind RCT (n=64) of 300 mg KSM-66 ashwagandha twice daily significantly reduced perceived stress (PSS) and serum cortisol vs. placebo.

Original paper
Open on PubMed
Read the paper ↗
PMID: 23439798 · DOI: 10.4103/0253-7176.106022

Formulate methodology review

Strengths
No notable design strengths identified.
Critique

Often cited as proof that ashwagandha reduces stress, but the methodology limits how much weight this single trial can carry. n=64 is modest for detecting the reported effect size. 60 days is a short window for chronic stress claims. Outcomes were a subjective scale (PSS-14) plus a single morning serum cortisol draw; morning cortisol is highly variable day-to-day and is not a robust biomarker of chronic stress compared with diurnal curves or hair cortisol. The KSM-66 extract is manufactured by Ixoreal Biomed, whose involvement appears across many subsequent ashwagandha trials, complicating independent replication. The direction of the results has been echoed in later trials, but any one small trial of this design is weak evidence on its own.

What would be more convincing

A 12-week pre-registered RCT with n≥200, independently funded, using diurnal or hair cortisol alongside the PSS-14, with a pre-specified MCID on the PSS, would be materially more convincing.
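As a back-of-envelope check on that n≥200 figure, the standard normal-approximation sample-size formula, n ≈ 2(z_alpha/2 + z_beta)^2 / d^2 per arm, can be sketched in a few lines. The helper name and the effect sizes below are illustrative assumptions, not numbers from the paper:

```python
import math
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm trial powered to
    detect a standardized mean difference d (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A moderate effect (d = 0.5) needs ~63 per arm at 80% power;
# a smaller, more realistic effect (d = 0.3) needs 175 per arm,
# i.e. 350 participants total, well above this trial's n=64.
print(n_per_arm(0.5), n_per_arm(0.3))
```

The takeaway: unless the true effect is quite large, a 64-person trial is underpowered, which is exactly why a replication with a few hundred participants would carry more weight.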

Opinion based on the published paper's methodology. Reviewed 2026-04-21. See our methodology rubric for scoring conventions. Not medical advice.

What these flags mean for you

Each flag on this study comes with a plain-English breakdown of why it matters and how it should change the confidence you place in the result.

⚠
Small sample size
What it means

The study enrolled too few participants for its results to be statistically reliable on their own.

Why it matters

With a small sample, random variation can look like a real effect. A positive finding in 20 people may vanish when the trial is repeated in 200.
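That claim is easy to demonstrate with a quick simulation (a sketch; the group sizes, threshold, and trial counts are illustrative): draw both "treatment" and "placebo" groups from the same distribution, so the true effect is exactly zero, and count how often a sizable apparent difference shows up anyway.

```python
import random
import statistics

def simulated_mean_diff(n, rng):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    return statistics.mean(a) - statistics.mean(b)

rng = random.Random(42)
small = [abs(simulated_mean_diff(20, rng)) for _ in range(2000)]
large = [abs(simulated_mean_diff(200, rng)) for _ in range(2000)]

# How often does pure noise produce an apparent "effect" of 0.5 SD or more?
big_at_20 = sum(d >= 0.5 for d in small) / len(small)
big_at_200 = sum(d >= 0.5 for d in large) / len(large)
print(f"|diff| >= 0.5 SD from noise alone: n=20 -> {big_at_20:.1%}, n=200 -> {big_at_200:.1%}")
```

At n=20 per group, noise alone produces a half-standard-deviation "effect" roughly one run in ten; at n=200 it essentially never does. That is the sense in which a positive finding in a small trial can vanish on replication.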

How to read around it

Treat this as a signal, not proof. Look for larger replications before changing your behavior based on the result.

⚠
Short duration
What it means

The trial ran for weeks when the outcome it claims to affect usually takes months or years to change.

Why it matters

Short trials catch early biomarker shifts but miss tolerance, plateaus, side effects that appear later, and whether the benefit sustains.

How to read around it

Useful for acute effects (sleep, mood, energy). Weak evidence for chronic claims (bone density, cardiovascular risk, aging).

⚠
Surrogate endpoint
What it means

The study measured a blood marker or proxy rather than something you'd actually notice or care about.

Why it matters

Lowering LDL on paper doesn't always translate to fewer heart attacks. Surrogate outcomes don't always track with real-world health outcomes.

How to read around it

Ask: has this marker been shown to predict the outcome I care about in other trials? If not, the finding is mechanistically interesting but practically unproven.

⚠
Industry-adjacent
What it means

Authors have consulting arrangements, equity, or other financial ties to the industry even if the trial itself wasn't directly sponsored.

Why it matters

Conflict of interest operates through question selection and interpretation, not just funding. A researcher with equity has incentive to frame findings favorably.

How to read around it

Read the disclosures section. Consistent replication by unaffiliated groups matters more than a single favorable trial from a conflicted team.

⚠
Single-center
What it means

All participants were recruited and treated at one clinic or institution.

Why it matters

Single-center trials reflect one practice pattern, one population, and one set of local confounders. Effects often shrink in multi-center replications.

How to read around it

Consistent with a real effect, but the magnitude is probably optimistic. Multi-center replications give better generalizability.

How to read a study like this

These are the same questions worth asking about any research paper, not just this one. Worth a minute even if you trust the grade.

Who was studied, and do you resemble them?

Supplement effects often depend on baseline status. Vitamin D helps people who are deficient; iron helps people who are anemic. A result in people unlike you may not apply to you.

What was measured, and does it matter in daily life?

A study that shows a blood marker moved isn't the same as a study that shows people felt or functioned better. Ask what the outcome means in practice.

How large was the effect, not just whether it was significant?

'Statistically significant' only means the effect is unlikely to be zero. It doesn't tell you the effect is large enough to notice. Look for effect sizes, not just p-values.
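One way to see why significance and size are different questions is to compute the test statistic for a fixed, tiny effect at two sample sizes (a sketch using the normal approximation for a standardized mean difference; the effect size and group sizes are illustrative):

```python
import math

def z_and_p(effect_sd, n_per_group):
    """z-statistic and two-sided p-value (normal approximation) for a
    standardized mean difference with n participants per group."""
    se = math.sqrt(2.0 / n_per_group)
    z = effect_sd / se
    p = math.erfc(z / math.sqrt(2))  # two-sided tail probability
    return z, p

# The SAME negligible 0.05-SD effect, at two different sample sizes:
z_small_n, p_small_n = z_and_p(0.05, 100)      # not significant
z_big_n, p_big_n = z_and_p(0.05, 20000)        # highly "significant"
print(f"n=100:   z={z_small_n:.2f}, p={p_small_n:.3f}")
print(f"n=20000: z={z_big_n:.2f}, p={p_big_n:.1e}")
```

The effect never changed; only the sample size did. A p-value can be driven arbitrarily low by recruiting more people, which is why the effect size, not the asterisk, tells you whether the result would be noticeable in daily life.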

Who paid for the trial, and what did they stand to gain?

Industry-funded trials are several times more likely to report positive results than independent ones. It's not usually fraud; it's subtle design and reporting choices. Weight accordingly.

Has anyone else replicated this?

Single positive trials are hypotheses. Replication by independent groups is what turns a hypothesis into reliable evidence. If the only positive trial is the one you're reading, wait.

Does the dose in the trial match what's being sold?

Supplement marketing routinely cites trials that used 5–10× the dose in the product. If the effective dose was 2 g/day and the capsule has 200 mg, expect roughly no effect.
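The arithmetic for that check is trivial but worth making a habit (the numbers below are the hypothetical ones from the example above, not from any specific product):

```python
# Hypothetical numbers from the example: trial dose vs. label dose.
trial_dose_mg = 2000   # 2 g/day used in the trial
capsule_mg = 200       # per-capsule dose on the product label

capsules_per_day = trial_dose_mg / capsule_mg
print(f"Capsules needed to match the trial dose: {capsules_per_day:.0f}/day")
```

If matching the studied dose would mean swallowing ten capsules a day, the one-a-day serving on the label is not the product the evidence is about.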

Not medical advice. This breakdown is for educational purposes. Nothing here constitutes an allegation of fraud or misconduct by any researcher or sponsor. Reasonable scientists can grade the same paper differently; we show our rubric and link every claim to the original study.