Vitamin D · Meta-analysis · 2017

Vitamin D supplementation to prevent acute respiratory tract infections: systematic review and meta-analysis of individual participant data

Martineau AR, Jolliffe DA, Hooper RL, et al. · BMJ
Grade: B
Moderate evidence · 76/100
Reasonable design with one or two notable limitations. The finding is credible but less certain than an A-grade result.
What this study found

Individual-participant meta-analysis of 25 RCTs (n=10,933) found vitamin D supplementation reduced risk of acute respiratory tract infection, with greatest benefit in baseline-deficient participants and daily/weekly (not bolus) dosing.

Original paper
PMID: 28202713 · DOI: 10.1136/bmj.i6583


Limitations
No major methodological limitations flagged.
Critique

Individual-participant meta-analysis across 25 RCTs is a strong design: it re-analyses raw participant data rather than pooling summary statistics, which handles heterogeneity better than a conventional meta-analysis. Still, the overall effect is modest (NNT ~33), and the pooled estimate hides substantial variation between trials in dose, population, seasonal timing, and baseline vitamin D status. A cleaner reading of the data is that vitamin D meaningfully helps people who are deficient; the evidence of benefit in already-replete individuals is weaker.
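The NNT figure above is just the reciprocal of the absolute risk reduction. A quick sketch of the arithmetic (the event rates below are hypothetical placeholders chosen to illustrate the calculation, not figures from the paper):

```python
# Number needed to treat (NNT) from two event rates.
# These rates are hypothetical, not taken from the paper.
control_rate = 0.42   # proportion with >=1 respiratory infection, no supplement
treated_rate = 0.39   # proportion with >=1 infection, supplemented

arr = control_rate - treated_rate   # absolute risk reduction
nnt = 1 / arr                       # people to treat to prevent one event

print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")  # prints: ARR = 0.03, NNT = 33
```

An absolute risk reduction of about three percentage points is what an NNT near 33 implies, regardless of how impressive the relative risk reduction sounds.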

What would be more convincing

A single large pre-registered RCT restricting enrollment to participants with baseline 25(OH)D below 20 ng/mL (50 nmol/L), using daily rather than bolus dosing, would resolve the heterogeneity question and settle the 'does deficiency matter' framing that the current evidence only hints at.

Opinion based on the published paper's methodology. Reviewed 2026-04-21. See our methodology rubric for scoring conventions. Not medical advice.

What these flags mean for you

Each flag on this study comes with a plain-English breakdown of why it matters and how it should change the confidence you place in the result.

✓
Large sample size
What it means

The trial enrolled enough participants to detect realistic effect sizes with high statistical power.

Why it matters

Large samples shrink the role of chance. A positive finding in thousands of people is much less likely to be a fluke than the same finding in dozens.

How to read around it

It gives you more confidence that the reported effect size is close to the true effect, but it still doesn't prove the study is well designed in other ways.

✓
Real-world outcome
What it means

The trial measured something a patient would actually notice: symptoms, function, quality of life, hospitalization, mortality.

Why it matters

Real-world outcomes skip the surrogate-endpoint problem entirely. If symptoms improved, symptoms improved.

How to read around it

Higher translational value than biomarker trials. What the trial measured is closer to what you'd get from taking the supplement.

✓
Independent funding
What it means

Funded by a public agency, university, or philanthropic grant with no commercial stake in the outcome.

Why it matters

Removes the financial incentive that can skew industry-funded results. Independent trials historically report smaller effect sizes, which are likely closer to the truth.

How to read around it

Weight independent results more heavily. When independent and industry-funded trials disagree, the independent result is usually closer to reality.

How to read a study like this

These questions are worth asking about any research paper, not just this one. They're worth a minute even if you trust the grade.

Who was studied, and do you resemble them?

Supplement effects often depend on baseline status. Vitamin D helps people who are deficient; iron helps people who are anemic. A result in people unlike you may not apply to you.

What was measured, and does it matter in daily life?

A study that shows a blood marker moved isn't the same as a study that shows people felt or functioned better. Ask what the outcome means in practice.

How large was the effect, not just whether it was significant?

'Statistically significant' only means the effect is unlikely to be zero. It doesn't tell you the effect is large enough to notice. Look for effect sizes, not just p-values.
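The significance-versus-size distinction can be made concrete with a toy calculation: a generic two-proportion z-test on made-up numbers (not data from any trial). With 100,000 people per arm, even a one-percentage-point difference is overwhelmingly "significant", yet almost certainly too small for any individual to notice:

```python
import math

# Two hypothetical groups of 100,000 with event rates of 50% vs 51%.
# The 1-point difference is tiny, yet highly statistically significant.
n = 100_000
p1, p2 = 0.50, 0.51

pooled = (p1 + p2) / 2
se = math.sqrt(pooled * (1 - pooled) * (2 / n))  # standard error of the difference
z = (p2 - p1) / se
p_value = math.erfc(z / math.sqrt(2))            # two-sided p-value

print(f"difference = {p2 - p1:.2f}, z = {z:.1f}, p = {p_value:.1e}")
```

The p-value comes out far below 0.001 while the absolute difference stays at a single percentage point, which is exactly why effect sizes deserve a look of their own.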

Who paid for the trial, and what did they stand to gain?

Industry-funded trials are several times more likely to report positive results than independent ones. It's usually not fraud but subtle design and reporting choices. Weight accordingly.

Has anyone else replicated this?

Single positive trials are hypotheses. Replication by independent groups is what turns a hypothesis into reliable evidence. If the only positive trial is the one you're reading, wait.

Does the dose in the trial match what's being sold?

Supplement marketing routinely cites trials that used 5–10× the dose in the product. If the effective dose was 2 g/day and the capsule has 200 mg, expect roughly no effect.
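The dose check is simple arithmetic. A one-liner using the figures from the example above:

```python
# Compare a product's per-capsule dose to the dose used in the cited trial.
trial_dose_mg = 2000     # 2 g/day, the effective dose in the example
capsule_dose_mg = 200    # what the product actually delivers

fraction = capsule_dose_mg / trial_dose_mg
print(f"capsule delivers {fraction:.0%} of the studied dose")  # prints: capsule delivers 10% of the studied dose
```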

Not medical advice. This breakdown is for educational purposes. Nothing here constitutes an allegation of fraud or misconduct by any researcher or sponsor. Reasonable scientists can grade the same paper differently; we show our rubric and link every claim to the original study.