Quantum Outpost

Series

QML Reality Check

Quantum machine learning is two things at once: a real research area with open questions, and a marketing category routinely oversold against weak baselines. This series benchmarks named QML methods against seriously tuned classical ones on real datasets, and publishes whatever the numbers say.

Pre-registration

For each entry below, we publish the dataset, the contenders, the evaluation metric, and the random seeds before running the benchmark — committed to git so you can verify we didn't choose the flattering configuration after the fact. The notebook produces the number; we publish whichever direction it points.
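As a concrete illustration of the mechanism (not the series' actual file format — the field names and values here are made up), a pre-registered benchmark spec can be pinned by committing its canonical serialization and recording a content hash, so any after-the-fact edit to the configuration is detectable:

```python
import hashlib
import json

# Hypothetical pre-registration spec; dataset, contenders, metric,
# and seeds are all fixed before any benchmark run.
spec = {
    "dataset": "openml/adult",           # illustrative dataset id
    "contenders": ["xgboost", "vqc"],    # tuned classical vs. quantum
    "metric": "roc_auc",
    "seeds": [0, 1, 2, 3, 4],
}

# Canonical JSON (sorted keys) makes the serialization deterministic;
# the SHA-256 digest is what a git commit effectively pins.
canonical = json.dumps(spec, sort_keys=True).encode()
digest = hashlib.sha256(canonical).hexdigest()
print(digest)
```

Re-running the hash over the committed file after the results are in, and getting the same digest, is the verification step: it shows the configuration was not quietly swapped for a more flattering one.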

Published

Pre-registered, coming soon

Why this exists

Open the most-cited QML papers from 2020–2024. Most benchmark on toy datasets (MNIST 0/1) where a 3-layer classical MLP already gets 99%, and compare only against weak baselines like logistic regression. The "quantum advantage" claim survives only against the weak baseline.

Vendor blogs amplify this. They have to: their quantum-cloud business depends on the "QML works" narrative being directionally true. They are structurally incapable of publishing a series like this, because the first finding is "for tabular classification, XGBoost wins."

We can publish it. So we do. If the next finding is "but on this molecular simulation, VQE wins by 8 mHa" — we publish that too, with the same rigor. The point isn't to bash QML. It's to give working developers an honest answer to "should I reach for quantum here?"

For the methodology behind these benchmarks and our broader editorial stance, see Editorial Independence.

Weekly dispatch

Quantum, for people who already code.

One serious tutorial per week, plus the industry moves that actually matter. No hype, no hand-waving.

Free. Unsubscribe anytime. We will never sell your email.