About this site

Healthspanner is a reading library for healthy adults — what actually works for healthspan, what doesn't, and how strong the evidence is. Distilled from peer-reviewed literature, graded by evidence strength, and rewritten for clarity.

The source corpus

The site is built on top of roughly two dozen long-form research reviews — each one a 40–80 KB synthesis of a single domain (sleep, VO₂ max, protein and mTOR, sauna, alcohol, geroprotectors, and so on). Those reviews in turn cite thousands of primary sources: randomized controlled trials, Mendelian-randomization analyses, prospective cohorts, meta-analyses, and the occasional landmark mechanistic paper.

Wherever a claim on this site is meaningful enough to anchor a recommendation, you should be able to find its source named in the running text — trial, journal, and year (e.g. “PREDIMED, NEJM 2018”) — with a hyperlink to the paper or trial registry where one exists.

The method: AI-assisted deep research

Each of the underlying research reviews was produced using agentic deep-research models that read and cross-checked hundreds of papers per topic, surfaced the load-bearing citations, and reconciled conflicting findings. A human editor then distilled those reviews into the hierarchical structure you see here — homepage, pillar overview, deep dive — and tightened the prose for an average reader.

AI is good at breadth and recall; it is less reliable at calibration. So the editorial pass focuses on the things that actually matter: that effect sizes are quantified, that the study design behind each claim is named, that observational associations are not dressed up as causation, and that commercial bias (industry-funded trials, surrogate endpoints, short follow-up) is flagged.

Evidence grading

Every substantive health claim carries one of four tags:

  • Strong — multiple large RCTs or meta-analyses with hard endpoints (mortality, MI, dementia diagnosis).
  • Moderate — consistent RCTs but with heterogeneity, surrogate endpoints, or limited populations.
  • Weak / preliminary — small or single trials, mostly animal data, or only observational associations.
  • Caution — documented harm signal at common doses.

The grade lives next to the claim, not at the bottom of the page. A pillar overview tells you the rating up front; the deep-dive article behind it walks through the underlying studies.

Editorial principles

  • Plain language at the top, depth deeper down. Teasers and pillar pages are written for an average reader; acronyms and trial names appear once the context is set.
  • Quantify when possible. “30% reduction in CVD events” beats “significantly reduces.”
  • Acknowledge uncertainty. Observational data, Mendelian-randomization caveats, and industry funding are called out rather than hidden.
  • Don't moralize. Especially around alcohol, weight, or behavioral choices, the evidence is presented; readers decide.
  • No supplement, brand, or program promotion. Generic recommendations only.

What this site is not

  • Not medical advice. Nothing here is a prescription. Discuss any meaningful change with a clinician who knows your history.
  • Not exhaustive. The deep-dive articles are distillations. If a claim matters to you, follow the citation through to the primary source.
  • Not static. The evidence base moves — the alcohol J-curve has largely collapsed since 2019, and the attributable fraction of dementia risk keeps climbing as new modifiable risk factors are identified. Pages are updated when meaningful new evidence lands.

Corrections and feedback

If you find a factual error, a missing citation, or a study that should overturn a current claim, the easiest way to reach the editor is the address listed in the site footer. Corrections are merged quickly; structural changes take longer.