Vitamin D supplementation to prevent acute respiratory tract infections: systematic review and meta-analysis of individual participant data
Individual-participant meta-analysis of 25 RCTs (n=10,933) found vitamin D supplementation reduced risk of acute respiratory tract infection, with greatest benefit in baseline-deficient participants and daily/weekly (not bolus) dosing.
Methodology review
Individual-participant meta-analysis across 25 RCTs is a strong design: it re-analyses raw participant data rather than just pooling summary statistics, which handles heterogeneity better than a conventional meta-analysis. The overall effect is modest (NNT ~33), and the pooled estimate hides substantial variation between trials in dose, population, seasonal timing, and baseline vitamin D status. A cleaner reading of the data is that vitamin D meaningfully helps people who are deficient; the evidence of benefit in already-replete individuals is weaker.
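The NNT figure follows directly from the absolute risk reduction (NNT = 1 / ARR). As a sketch, with illustrative event rates chosen to land in the paper's ballpark (not the trial's exact figures):

```python
# Sketch: how an NNT of ~33 falls out of an absolute risk reduction.
# The 42% vs 39% event rates below are illustrative assumptions,
# not the study's exact numbers.

def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction; NNT is undefined.")
    return 1.0 / arr

# e.g. 42% of controls vs 39% of supplemented participants get an infection
print(round(nnt(0.42, 0.39)))  # -> 33
```

A 3-percentage-point absolute reduction means treating roughly 33 people for one of them to avoid an infection, which is why "modest" is the right word even when the relative risk reduction sounds impressive.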
A single large pre-registered RCT restricting enrollment to baseline 25(OH)D below 20 ng/mL, using daily (not bolus) dosing, would resolve the heterogeneity question and settle the 'does deficiency matter' framing that the current evidence only hints at.
Opinion based on the published paper's methodology. Reviewed 2026-04-21. See our methodology rubric for scoring conventions. Not medical advice.
What these flags mean for you
Each flag on this study comes with a plain-English breakdown of why it matters and how it should change the confidence you place in the result.
The trial enrolled enough participants to detect realistic effect sizes with high statistical power.
Large samples shrink the role of chance. A positive finding in thousands of people is much less likely to be a fluke than the same finding in dozens.
Gives you more confidence that the reported effect size is close to the true effect, but it still doesn't prove the study is well designed in other ways.
The trial measured something a patient would actually notice: symptoms, function, quality of life, hospitalization, mortality.
Real-world outcomes skip the surrogate-endpoint problem entirely. If symptoms improved, symptoms improved.
Higher translational value than biomarker trials. What the trial measured is closer to what you'd get from taking the supplement.
Funded by a public agency, university, or philanthropic grant with no commercial stake in the outcome.
Removes the financial incentive that skews industry-funded results. Independent trials historically show smaller effect sizes, closer to the truth.
Weight independent results more heavily. When independent and industry-funded trials disagree, the independent result is usually closer to reality.
How to read a study like this
The same questions worth asking about any research paper, not just this one. Worth a minute even if you trust the grade.
Supplement effects often depend on baseline status. Vitamin D helps people who are deficient; iron helps people who are anemic. A result in people unlike you may not apply to you.
A study that shows a blood marker moved isn't the same as a study that shows people felt or functioned better. Ask what the outcome means in practice.
'Statistically significant' only means the effect is unlikely to be zero. It doesn't tell you the effect is large enough to notice. Look for effect sizes, not just p-values.
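The point above can be shown with arithmetic: the same tiny difference is "not significant" in a small trial and "significant" in a huge one, while the effect itself never changes. A minimal sketch using a hand-rolled two-proportion z-test (standard library only; the sample sizes and rates are made up for illustration):

```python
# Sketch: a fixed, tiny effect becomes 'statistically significant'
# once the sample is large enough. Two-proportion z-test by hand.
import math

def two_prop_p(p1: float, p2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided p-value from the normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Same one-point difference (42% vs 41%), two sample sizes:
print(two_prop_p(0.42, 0.41, 200, 200) < 0.05)        # small trial: False
print(two_prop_p(0.42, 0.41, 50_000, 50_000) < 0.05)  # huge trial: True
```

The effect size (one percentage point) is identical in both lines; only the p-value moved. That is why a significance star tells you nothing about whether the effect is big enough to matter.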
Industry-funded trials are several times more likely to report positive results than independent ones. It's not usually fraud; it's subtle design and reporting choices. Weight accordingly.
Single positive trials are hypotheses. Replication by independent groups is what turns a hypothesis into reliable evidence. If the only positive trial is the one you're reading, wait.
Supplement marketing routinely cites trials that used 5-10× the dose in the product. If the effective dose was 2 g/day and the capsule contains 200 mg, expect roughly no effect.
Not medical advice. This breakdown is for educational purposes. Nothing here constitutes an allegation of fraud or misconduct by any researcher or sponsor. Reasonable scientists can grade the same paper differently; we show our rubric and link every claim to the original study.