
The effect of coenzyme Q10 on morbidity and mortality in chronic heart failure (Q-SYMBIO trial)

Mortensen SA, Rosenfeldt F, Kumar A, et al. · JACC: Heart Failure
A
Strong evidence · 87/100
Well-designed study that answers the question it set out to ask. Safe to treat the central finding as reliable, though edge cases may still vary.
What this study found

A 2-year double-blind multi-center RCT (n = 420) in chronic heart failure found that 300 mg/day CoQ10 (ubiquinone) reduced cardiovascular mortality by 43% and major adverse cardiac events by 50% vs placebo.

Original paper
PMID: 25282031 · DOI: 10.1016/j.jchf.2014.06.008

Formulate methodology review

Strengths
Limitations
No major methodological limitations flagged.
Critique

Q-SYMBIO is among the stronger supplement trials in the literature: multi-center, double-blind, placebo-controlled, 2-year duration, and a hard clinical endpoint (cardiovascular mortality). The 43% mortality reduction represents a clinically meaningful effect size, not just statistical significance. The key interpretive nuance is that enrolled patients were already receiving optimal guideline-directed heart failure therapy, so the benefit represents an additive effect on top of standard care, applicable to a specific clinical context rather than general prevention in healthy adults.

What would be more convincing

Given the strength of this trial, what's missing is independent large-scale replication. A second 1,000+ patient multi-center RCT conducted in a different healthcare system would move CoQ10 in heart failure from 'likely beneficial' to 'guideline-recommended.' Smaller supporting trials exist but a definitive replication would close the case.

Opinion based on the published paper's methodology. Reviewed 2026-04-21. See our methodology rubric for scoring conventions. Not medical advice.

What these flags mean for you

Each flag on this study comes with a plain-English breakdown of why it matters and how it should change the confidence you place in the result.

✓
Multi-center
What it means

Participants were recruited across several independent clinics or regions.

Why it matters

Multi-center design spreads out local confounders (practice style, population genetics, seasonal variation) and makes the result more generalizable.

How to read around it

More trustworthy than single-center evidence at the same sample size. Effect sizes from multi-center trials are usually closer to what you'd see in practice.

✓
Long duration
What it means

The trial ran long enough to observe sustained effects, tolerance, and delayed side effects.

Why it matters

Many supplement effects fade (tolerance), reverse (rebound), or only emerge months in. Long trials catch patterns short trials miss entirely.

How to read around it

More trustworthy for chronic claims. A 12-week trial for something you'll take for 20 years is evidence, but not much.

✓
Real-world outcome
What it means

The trial measured something a patient would actually notice: symptoms, function, quality of life, hospitalization, mortality.

Why it matters

Real-world outcomes skip the surrogate-endpoint problem entirely. If symptoms improved, symptoms improved.

How to read around it

Higher translational value than biomarker trials. What the trial measured is closer to what you'd get from taking the supplement.

How to read a study like this

The same questions worth asking about any research paper, not just this one. Worth a minute even if you trust the grade.

Who was studied, and do you resemble them?

Supplement effects often depend on baseline status. Vitamin D helps people who are deficient; iron helps people who are anemic. A result in people unlike you may not apply to you.

What was measured, and does it matter in daily life?

A study that shows a blood marker moved isn't the same as a study that shows people felt or functioned better. Ask what the outcome means in practice.

How large was the effect, not just whether it was significant?

'Statistically significant' only means the effect is unlikely to be zero. It doesn't tell you the effect is large enough to notice. Look for effect sizes, not just p-values.
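The relative-versus-absolute distinction can be made concrete with a few lines of arithmetic. The sketch below uses the trial's reported 43% relative risk reduction, but the baseline (placebo-arm) mortality rate is a hypothetical assumption for illustration, not a figure taken from the paper:

```python
# Illustration: a relative risk reduction means different things
# depending on the baseline risk it applies to.

def absolute_effect(baseline_risk, relative_risk_reduction):
    """Return (absolute risk reduction, number needed to treat)."""
    treated_risk = baseline_risk * (1 - relative_risk_reduction)
    arr = baseline_risk - treated_risk
    nnt = 1 / arr
    return arr, nnt

# Assume (hypothetically) a 9% two-year CV mortality in the placebo arm,
# and apply the trial's reported 43% relative reduction.
arr, nnt = absolute_effect(0.09, 0.43)
print(f"Absolute risk reduction: {arr:.1%}")   # 3.9%
print(f"Number needed to treat:  {nnt:.0f}")   # about 26
```

The same 43% relative reduction applied to a 1% baseline risk would yield an absolute reduction of only 0.43 percentage points, which is why effect size always has to be read against the population's baseline risk.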

Who paid for the trial, and what did they stand to gain?

Industry-funded trials are several times more likely to report positive results than independent ones. It's not usually fraud; it's subtle design and reporting choices. Weight accordingly.

Has anyone else replicated this?

Single positive trials are hypotheses. Replication by independent groups is what turns a hypothesis into reliable evidence. If the only positive trial is the one you're reading, wait.

Does the dose in the trial match what's being sold?

Supplement marketing routinely cites trials that used 5–10× the dose in the product. If the effective dose was 2 g/day and the capsule has 200 mg, expect roughly no effect.
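The dose check above is simple arithmetic; a minimal sketch, using the text's own hypothetical numbers (2 g/day trial dose, 200 mg capsule):

```python
# Sanity check: what fraction of the trial's dose does the product deliver?

def dose_coverage(product_mg_per_day, trial_mg_per_day):
    """Fraction of the trial dose the product actually delivers."""
    return product_mg_per_day / trial_mg_per_day

# Hypothetical example from the text: trial used 2 g/day, capsule has 200 mg.
coverage = dose_coverage(200, 2000)
print(f"Product delivers {coverage:.0%} of the trial dose")  # 10%
```

Anything well under 100% coverage means the cited trial is weak evidence for the product as sold, unless a lower-dose trial also exists.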

Not medical advice. This breakdown is for educational purposes. Nothing here constitutes an allegation of fraud or misconduct by any researcher or sponsor. Reasonable scientists can grade the same paper differently; we show our rubric and link every claim to the original study.