McTaggart's "Power of Eight" rests on:
- Zero peer-reviewed publications of her own experiments
- Zero independent replications
- A regulatory ruling that her medical claims are "not substantiated and misleading" (British ASA, 2022/2023)
- Foundational studies that have all either failed replication, been exposed for outcome switching, or produced effects that track with experimenter beliefs rather than any genuine phenomenon
The most concise verdict comes from the British ASA: "not substantiated and misleading."
1. Nothing published in peer-reviewed journals
2. Zero independent replications
3. Pervasive post-hoc analysis and data dredging
4. N=1 or near-N=1 designs with no statistical power
5. Pervasive conflicts of interest
6. An inverse correlation between rigor and effect size
7. Uncontrolled confounds
8. Self-report without medical verification
| Action | Detail |
|---|---|
| ASA Ruling (2022/2023) | British Advertising Standards Authority ruled Power of Eight medical claims "not substantiated and misleading"; required cessation of medical claims |
| Ig Nobel Prize (1994) | Awarded to John Hagelin for the DC crime study — satirical prize mocking the methodology |
| STEP Trial (2006) | Largest prayer study ever: 1,802 patients, $2.4M, triple-blind — prayer did NOT help; patients who KNEW they were prayed for had WORSE outcomes |
| PEAR Consortium | PEAR's own multi-lab replication attempt FAILED |
| Targ Replication (2006) | Larger follow-up by sympathetic researchers: NO effect of distant healing |
| Braud-Wiseman | Effects tracked experimenter beliefs, vanished with tightened controls |
| Fairfield, Iowa | 20-25x the TM threshold, less safe than 83% of US cities |
McTaggart's "Power of Eight" rests on zero peer-reviewed publications of her own experiments, zero independent replications, and a citation strategy that systematically misrepresents legitimate science. Every foundational study she relies on has either failed replication, been exposed for outcome switching, or produced effects that track with experimenter beliefs rather than any genuine phenomenon.
The benefits workshop participants report are real, but they are the well-documented benefits of community, social support, ritual, meaning-making, placebo, and expectation. These require no appeal to consciousness-based healing, quantum fields, or intention affecting matter at a distance.
18 deep research investigations covering all of McTaggart's experiments and supporting studies. Each investigation used Perplexity's sonar-deep-research model with 30+ web searches.
| Category | Most Damning Fact |
|---|---|
| Biological intention | Leaf experiment unpublished for 18+ years — if real, publication would be the single most important validation |
| Peace experiments | Violence INCREASED 224% during Sri Lanka experiment; war ended by military force, not intention |
| PTSD healing | McTaggart herself writes: "it is impossible to declare categorically that changes were due to intention, rather than his own brain training" |
| Workshop healing | British ASA formally ruled claims "not substantiated and misleading" (2022/2023) |
| GDV/Biofields | Kirlian "phantom leaf" disappears when surface cleaned; GDV measures conductivity, not biofields |
| TM precedent | Fairfield, Iowa (20-25x threshold) — less safe than 83% of US cities |
| Parapsychology | Braud-Wiseman: identical protocol, believer gets results, skeptic gets nothing |
| Neuroscience | 50,000 hours of monk training ≠ 10 minutes at a workshop; the mirror-neuron field has abandoned its grand claims |
| Anchor studies | Every legitimate study McTaggart cites provides the conventional explanation that makes her claims unnecessary |
Each report examines a specific claim from the book, including what was claimed, what was actually found, methodology assessment, and key discrepancies.
Each investigation used Perplexity's sonar-deep-research model with 30+ web searches and extensive reasoning across 18 queries.
Master synthesis of all 18 research investigations — recurring patterns, methodological problems, and overall conclusions.
Deep research via Consensus.app (250M+ peer-reviewed papers). PRO_RESEARCH mode: 1,047 papers identified, 832 screened, 242 eligible, 50 included. Note: the vast majority of the 1,047 identified papers were false positives (e.g., water conservation research, industrial water treatment) — only 50 were actually relevant to intention-on-water claims.
A handful of controlled experiments reported statistically significant but modest changes. In every case, methodological problems undermine the findings:
| Claim | Evidence Strength (0-10) | Key Finding |
|---|---|---|
| Focused intention changes water pH/conductivity/spectra | 3/10 | Small effects undermined by methodological flaws and artifacts |
| Effects disappear when artifacts controlled | 6/10 | Electrode replacement eliminates observed anomalies |
| No robust evidence for reproducible effects | 8/10 | Systematic reviews find no reliable replication under blinded conditions |
| Quantum/nonlocal mechanisms lack support | 2/10 | Mechanisms remain speculative without testable predictions |
| Positive findings only in fringe journals | 7/10 | Zero results in mainstream physics or chemistry journals |
| Confounds explain reported anomalies | 8/10 | Instrument drift and environmental factors better account for data |
Coverage of the intention-water hypothesis across study types. Most gaps exist for blinded replication studies in mainstream journals.
View the complete literature review including methodology, detailed results, discussion, and all cited papers.
research/consensus-water-intention-deep.md
Extracted claims from each chapter of the book, including scientific studies cited, researcher profiles, quantitative claims, and theoretical assertions.
Each chapter of the book with its key studies, associated fact-checks, and an evidence assessment. Click a chapter to expand, then click any fact-check to read the full report.
Complete bibliography from the book. References were collected using a 3-tier pipeline: CrossRef/Unpaywall/Sci-Hub, Perplexity AI search, and Anna's Archive/Libgen.
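The 3-tier retrieval pipeline can be sketched as a simple fallback chain: try each tier in order and stop at the first hit. The tier functions below are hypothetical stand-ins for the CrossRef/Unpaywall, Perplexity, and archive lookups, not the project's actual implementation:

```python
from typing import Callable, Optional

def resolve_reference(ref_id: str,
                      tiers: list[Callable[[str], Optional[str]]]) -> Optional[str]:
    """Try each retrieval tier in order; return the first successful result."""
    for tier in tiers:
        result = tier(ref_id)
        if result is not None:
            return result
    return None  # all tiers failed -> reference counted as unavailable

# Hypothetical tier functions (placeholders for the real lookups):
def crossref_lookup(ref_id): return None          # tier 1: miss
def perplexity_search(ref_id): return None        # tier 2: miss
def archive_lookup(ref_id): return f"pdf:{ref_id}"  # tier 3: hit

url = resolve_reference("doi:10.1000/example",
                        [crossref_lookup, perplexity_search, archive_lookup])
```

A reference that falls through all three tiers ends up in the "unavailable" bucket reported above.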
Interactive network graph showing the interconnections between researchers McTaggart cites. Node size reflects the number of chapters a researcher appears in. Edges connect researchers who collaborate on the same studies. Hover/tap nodes for details. Drag to rearrange.
McTaggart uses a sophisticated six-layer strategy to create an illusion of scientific legitimacy. Each layer builds on the previous, making the overall argument appear more credible than any individual claim.
Cites real, peer-reviewed research (Pennebaker, Kaptchuk, Davidson/Lutz, Poulin, Koenig) then makes analogical leaps to extraordinary claims.
Features real academic credentials (Schwartz/Harvard PhD, Hagelin/Harvard physics, Roy/Penn State NAE) — but in each case the researcher had migrated to the fringe.
Biophotons exist (metabolic byproducts). Mirror neurons exist (motor neurons). Water coherent domains are a real theoretical proposal.
Journal of Scientific Exploration (UFology journal). Quantum University ("NOT equivalent to an MD"). Russian Ministry of Health approval.
"10 million to one" (post-hoc data dredging). "790% decrease" (mathematically impossible). "38 of 42 experiments positive" (unverifiable).
Russian Ministry of Health "approval" of GDV. Korotkov's Bio-Well listed as a "medical device." Quantum University awards degrees with "Dr." titles. Life University cited as institutional validation.
Separating the legitimate science McTaggart references from the distortions and fabrications she builds on top of it.
Analysis of what the project is missing, what could be strengthened, and what new dimensions could be added.
Added as the "Executive Summary" tab — a concise, shareable one-page summary for lay audiences.
The 83% retrieval rate is good, but 24 references remain unavailable: some sit behind paywalls, others were never digitized.
There's no mapping between the 74 fact-checks and the 151 references they cite. A cross-reference would show which references are cited most often and which fact-checks rely on which source papers.
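Building that mapping could be a small scan over the fact-check markdown for citation keys. The `[R123]`-style key format is an assumption for illustration; the real files may use a different convention:

```python
import re
from collections import defaultdict

CITE = re.compile(r"\[R(\d+)\]")  # assumed citation-key format

def map_citations(fact_checks: dict[str, str]):
    """fact_checks: filename -> markdown text.
    Returns (ref_id -> files citing it, file -> ref_ids cited)."""
    by_ref, by_file = defaultdict(set), defaultdict(set)
    for name, text in fact_checks.items():
        for ref_id in CITE.findall(text):
            by_ref[ref_id].add(name)
            by_file[name].add(ref_id)
    return by_ref, by_file

by_ref, by_file = map_citations({
    "sri-lanka.md": "Violence data [R12] and [R47].",
    "ptsd-eeg.md": "EEG claims [R12].",
})
```

Sorting `by_ref` by citing-file count would immediately surface the most-leaned-on sources.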
The project doesn't systematically document McTaggart's own rebuttals to criticism. Including and addressing her counter-arguments would strengthen the analysis and preempt "you didn't consider her side" objections.
Some fact-checks are 4+ pages with detailed methodology assessments (e.g., Sri Lanka peace, PTSD/EEG), while others in batch 3-4 are shorter with less rigorous analysis. A quality pass to bring all reports to a consistent standard would strengthen the whole project.
The verdicts use ~15 different labels (MISLEADING, UNVERIFIABLE, PARTIALLY ACCURATE, ANECDOTAL, DISCREDITED, MISAPPLIED, SPECULATIVE, etc.). A standardized 5-6 verdict taxonomy with clear definitions would make comparisons easier and the analysis more rigorous.
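A standardized taxonomy could be enforced mechanically with a lookup that collapses the ~15 legacy labels into a small canonical set. The five categories below are one plausible grouping, not a decided scheme:

```python
# Map each legacy verdict to a canonical category (illustrative grouping)
CANONICAL = {
    "MISLEADING": "MISREPRESENTED",
    "MISAPPLIED": "MISREPRESENTED",
    "DISCREDITED": "REFUTED",
    "UNVERIFIABLE": "UNSUPPORTED",
    "ANECDOTAL": "UNSUPPORTED",
    "SPECULATIVE": "UNSUPPORTED",
    "PARTIALLY ACCURATE": "MIXED",
}

def normalize(verdict: str) -> str:
    # Unmapped legacy labels surface loudly instead of being silently dropped
    return CANONICAL.get(verdict.strip().upper(), "NEEDS REVIEW")
```

Running every existing verdict through `normalize` would also flag any label the taxonomy forgot.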
The 23 chapter claim extraction files list claims but don't include verdicts or link to the fact-check reports that evaluate them. Adding cross-links would connect the claim extraction to the analysis.
The /research/ directory has paired .md and .json files, but the JSON files are raw Perplexity API responses. Extracting structured data (sources, key findings, confidence levels) would make them machine-parseable and enable richer dashboard visualizations.
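Extraction could be a thin pass over each raw JSON file. The field paths below assume an OpenAI-style response with a top-level `citations` list; they are an assumption about the actual response shape and would need adjusting against a real file:

```python
import json
from pathlib import Path

def extract(raw: dict) -> dict:
    """Pull the fields the dashboard needs from one raw API response.
    Field paths are assumed, not verified against the real JSON shape."""
    return {
        "answer": raw.get("choices", [{}])[0].get("message", {}).get("content", ""),
        "sources": raw.get("citations", []),
        "model": raw.get("model", ""),
    }

# Smoke check against a minimal synthetic response:
sample = {
    "model": "sonar-deep-research",
    "citations": ["https://example.org/source"],
    "choices": [{"message": {"content": "finding text"}}],
}
record = extract(sample)

# Usage over the research directory:
# for path in Path("research").glob("*.json"):
#     record = extract(json.loads(path.read_text()))
```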
A chronological timeline showing when each experiment was conducted, when it was (or wasn't) published, and when it was debunked or failed replication. This would visually demonstrate the pattern of delayed/absent publication.
Added as the "Researcher Network" tab — interactive force-directed graph with 22 researchers and 25 collaboration edges.
Several key figures sell products or services related to their research (Korotkov sells GDV devices, McTaggart sells workshops, Schwartz runs paid programs). Documenting the commercial interests would add an important analytical dimension.
McTaggart isn't alone in this space. Comparing her methodology and claims with Dispenza, Braden, Lipton, and others in the "consciousness affects matter" ecosystem would show common patterns and shared weaknesses.
The research is thorough enough to support a video or podcast episode. A structured script walking through the strongest findings (Sri Lanka data, ASA ruling, Fairfield Iowa, Braud-Wiseman) would be a powerful derivative product.
Auto-generate a formatted PDF report from the markdown files for offline reading, sharing, or archival. The data is all there; it just needs a compilation pipeline.
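A minimal compilation pipeline could concatenate the markdown sources and hand the result to pandoc. This sketch assumes pandoc (plus a LaTeX engine) is on PATH and orders files alphabetically for illustration:

```python
import subprocess
from pathlib import Path

def combine(parts: list[str]) -> str:
    """Join individual reports with a horizontal rule between them."""
    return "\n\n---\n\n".join(p.strip() for p in parts)

def build_report(md_dir: str, out_pdf: str = "report.pdf") -> None:
    parts = [p.read_text() for p in sorted(Path(md_dir).glob("*.md"))]
    Path("combined.md").write_text(combine(parts))
    # Requires pandoc (and a LaTeX engine) installed:
    subprocess.run(["pandoc", "combined.md", "-o", out_pdf], check=True)
```

A curated ordering file (rather than alphabetical glob order) would likely be wanted for the real report.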