Dequantizing Short-Path Quantum Algorithms
François Le Gall, Suguru Tamaki · arXiv:2604.12131 · April 2026 · Score: 8/10
Abstract
The short-path quantum algorithm introduced by Hastings (Quantum 2018, 2019) is a variant of adiabatic quantum algorithms that enables an easier worst-case analysis by avoiding the need to control the spectral gap along a long adiabatic path. Dalzell, Pancotti, Campbell, and Brandão (STOC 2023) recently revisited this framework and obtained a clear analysis of the complexity of the short-path algorithm for several classes of constraint satisfaction problems (MAX-k-CSPs), leading to quantum algorithms with complexity 2^{(1−c)n/2} for some constant c > 0. This suggested a super-quadratic quantum advantage over classical algorithms.
In this work, we identify an explicit classical mechanism underlying a substantial part of this line of work, and show that it leads to clean dequantizations. As a consequence, we obtain classical algorithms that run in time 2^{(1−c')n}, for some constant c' > c, for the same classes of constraint satisfaction problems. This shows that current short-path quantum algorithms for these problems do not achieve a super-quadratic advantage. On the positive side, our results provide a new “quantum-inspired” approach to designing classical algorithms for important classes of constraint satisfaction problems.
Executive Summary
Le Gall and Tamaki dequantize the short-path quantum algorithms of Dalzell et al. (STOC 2023) for MAX-Ek-LIN2 and MAX-k-CSP, showing that purely classical conditioning-and-search algorithms match or exceed the quantum exponents. The classical improvement exponent c' is strictly larger than the quantum exponent c from Dalzell et al., definitively ruling out super-quadratic quantum advantage for these problems under current analyses. For Yuan's programme, this paper is directly relevant to understanding the limits of quantum speedups in combinatorial optimisation: it shows that the “short-path” mechanism for CSPs is essentially classical in nature, and the quantum algorithms can themselves be improved by applying amplitude amplification to the new classical algorithms.
Main Contribution
The paper isolates a classical conditioning-and-search framework that captures the core mechanism behind the quantum short-path algorithm. The key insight is that the quantum speedup claimed by Dalzell et al. rests on two structural inputs—a thin near-optimal threshold set and a large “successful” subset from which local search recovers an optimum—and both can be exploited classically. The classical exponent is c = min{γ, κ}, where γ is the threshold exponent (measuring thinness of the near-optimal region) and κ is the successful-set exponent (measuring the size of the set from which exhaustive local search succeeds). By constructing explicit successful sets via correlated-pair analysis (for MAX-Ek-LIN2) and local-Lipschitz analysis (for weighted MAX-k-CSP), they show that c_cl > c_q in both settings, refuting the super-quadratic quantum advantage. Furthermore, applying standard amplitude amplification to their classical algorithms yields improved quantum algorithms as well.
Key Theorems / Lemmas / Algorithms
- Theorem 2.1 (Abstract conditioning-and-search principle): If a uniform sample lands in a successful subset S of the conditioning set T with probability μ(S) ≥ β^n, and local search from any point in S finds an optimum, then the expected running time is O((τ_samp + τ_search)/β^n).
- Theorem 2.3 (Black-box exact optimisation from (γ, κ)): Under threshold exponent γ and successful-set exponent κ, an optimum assignment can be found in expected time 2^{(1−min{γ,κ})n+o(n)}.
- Theorem 3.1 (Successful-set bound, correlated-pair regime): For homogeneous degree-k objectives, the successful set S_η^{ns} has size ≥ 2^{h(q_η)n − o(n)}, where q_η = (1 − (1−η)^{1/k})/2.
- Theorem 3.5 (Successful-set bound, local-Lipschitz regime): For weighted MAX-k-CSP, the successful set S_η^{lip} has size ≥ 2^{½h(θ_η)n − o(n)}, where θ_η = η|H_min|/(Λ_max Σ).
- Theorem 4.1 (Main theorem for MAX-Ek-LIN2): Under the threshold condition |T_η| ≤ 2^{(1−γ)n}, an optimum is found classically in time 2^{(1−c)n+o(n)} with c = min{γ, h(q_η)} ≥ γη/k, beating Dalzell et al.’s quantum exponent c_q ≈ 0.1856γη/k.
- Theorem 4.3 (Main result for weighted MAX-k-CSP): An optimum is found classically in time 2^{(1−c)n+o(n)} with c = min{γ_η, ½h(θ_η)}, where γ_η depends quadratically on |H_min|/Σ, beating Dalzell et al.’s cubic quantum exponent.
- Corollary 4.4 (Explicit bound for weighted MAX-k-CSP): c ≥ 0.7213 · Δ^2/(2^{2k} k^2 D), strictly larger than the quantum c_q = 0.0578 · Δ^3/(2^{3k} k^3 D).
- Proposition 4.2 (McDiarmid threshold-set bound): |T_η| ≤ 2^{(1−γ_η)n} with γ_η = (2(1−η)^2/ln 2) · (|H_min|/Σ)^2 / (Λ_max^2 D).
- Theorems 1.5, 1.6 (Improved quantum algorithms): Applying amplitude amplification to the new classical algorithms gives quantum algorithms with running times 2^{(1−c_cl)n/2}, improving over Dalzell et al.’s quantum algorithms.
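The exponent bookkeeping behind Theorems 2.3 and 1.5–1.6 reduces to elementary arithmetic; a small sketch with illustrative values of γ and κ (not taken from the paper):

```python
import math

def runtime_exponents(gamma, kappa):
    """Per-n, base-2 runtime exponents under threshold exponent gamma and
    successful-set exponent kappa:
      brute force 1; Grover over the full cube 1/2;
      classical conditioning-and-search (Theorem 2.3) 1 - min(gamma, kappa);
      its amplitude-amplified version (Theorems 1.5-1.6) half of that."""
    c = min(gamma, kappa)
    return {"brute": 1.0, "grover": 0.5,
            "classical": 1.0 - c, "quantum": (1.0 - c) / 2}

exps = runtime_exponents(gamma=0.06, kappa=0.04)
# The quantum exponent is exactly half the classical one: quadratically,
# not super-quadratically, faster than the matching classical baseline.
assert math.isclose(exps["quantum"], exps["classical"] / 2)
assert exps["classical"] < exps["brute"] and exps["quantum"] < exps["grover"]
```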
Detailed Walkthrough
Section 1: Introduction and motivation
The paper is motivated by a central question: do short-path quantum algorithms (Hastings 2018; Dalzell et al. STOC 2023) achieve super-quadratic speedups over classical algorithms for MAX-k-CSP problems? Dalzell et al. gave quantum algorithms running in time 2^{(1−c)n/2}, which would be a super-quadratic speedup over brute-force 2^n classical search. Le Gall and Tamaki answer this negatively by providing classical algorithms running in time 2^{(1−c')n} with c' > c, which means the quantum algorithm is at most quadratically faster than their new classical baseline.
Section 2: The conditioning-and-search framework
The core abstraction is a two-parameter framework. Given a near-optimal threshold set T_η = {x : H(x) ≤ (1−η)H_min} of size at most 2^{(1−γ)n}, and a “successful set” S ⊆ T_η of size at least 2^{κn − o(n)} from which local exhaustive search recovers an optimum, the black-box theorem (Theorem 2.3) yields an algorithm with running-time exponent c = min{γ, κ}. The framework is conceptually simple: repeatedly sample uniformly from {−1,1}^n, accept if the sample lands in T_η, then search a neighbourhood of the accepted point. The key is that “successful” samples (those in S) guarantee recovery of an optimum from their local neighbourhood.
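Stripped of the exponent bookkeeping, the loop above fits in a few lines. A minimal sketch on a toy degree-2 instance (the instance, the η value, and the search radius are illustrative choices, not from the paper):

```python
import itertools
import random

# Toy objective: H(x) = -(sum of pairwise products) over a hypothetical
# instance; the all-ones assignment attains the minimum H_MIN = -5.
PAIRS = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
N = 4

def H(x):
    return -sum(x[i] * x[j] for i, j in PAIRS)

# Brute-force optimum, affordable only because the toy instance is tiny.
H_MIN = min(H(x) for x in itertools.product([-1, 1], repeat=N))

def conditioning_and_search(eta=0.5, radius=2, rng=random.Random(0)):
    """Repeat: sample x uniformly from {-1,1}^N; accept only if x lies in the
    threshold set T_eta = {x : H(x) <= (1-eta)*H_MIN}; then exhaustively
    search the Hamming ball of the given radius around x for an optimum."""
    while True:
        x = [rng.choice([-1, 1]) for _ in range(N)]
        if H(x) > (1 - eta) * H_MIN:  # reject: x is outside T_eta
            continue
        for r in range(radius + 1):   # local exhaustive search around x
            for flips in itertools.combinations(range(N), r):
                y = list(x)
                for i in flips:
                    y[i] = -y[i]
                if H(y) == H_MIN:
                    return y
```

The loop terminates with probability 1 because an optimal assignment itself is always accepted and found at radius 0; the point of the paper's analysis is that a thin threshold set (γ) and a large successful set (κ) make the expected running time 2^{(1−min{γ,κ})n+o(n)} rather than 2^n (Theorem 2.3).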
Section 3: Two constructions of successful sets
The correlated-pair analysis (Section 3.1) handles MAX-Ek-LIN2. Starting from an optimum x*, a random correlated copy X = x* ⊙ t (each bit flipped independently with probability q) satisfies E[H(X)] = ρ^k H_min with ρ = 1 − 2q. By Markov’s inequality, X lands in T_η with inverse-polynomial probability. Restricting to a narrow Hamming shell around the expected distance yields a successful set of size 2^{h(q_η)n − o(n)}. The local-Lipschitz analysis (Section 3.2) handles weighted MAX-k-CSP. Flipping a “light” variable (one with degree ≤ 2d_avg) changes H by at most 2Λ_max d_avg. A restricted Hamming ball of appropriate radius around x* therefore stays within the threshold set, yielding a successful set of size 2^{½h(θ_η)n − o(n)}.
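The correlated-pair identity E[H(X)] = ρ^k·H(x*) with ρ = 1 − 2q can be checked empirically on a toy homogeneous degree-3 objective (the instance and parameters below are illustrative, not from the paper):

```python
import random

# Hypothetical homogeneous degree-3 objective over {-1,1}^N.
TRIPLES = [(0, 1, 2), (1, 2, 3), (0, 3, 4), (2, 3, 4)]
N, K = 5, 3

def H(x):
    return -sum(x[i] * x[j] * x[l] for i, j, l in TRIPLES)

x_star = [1] * N   # optimal here: every triple product equals 1, so H = -4
q = 0.1            # independent per-bit flip probability
rho = 1 - 2 * q    # correlation parameter: E[X_i] = rho * x*_i

rng = random.Random(0)
samples = 200_000
total = 0.0
for _ in range(samples):
    # Correlated copy X = x* with each bit flipped independently w.p. q.
    X = [-xi if rng.random() < q else xi for xi in x_star]
    total += H(X)

empirical = total / samples
# Each degree-3 monomial has expectation rho^3 times its value at x*,
# so by linearity E[H(X)] = rho^K * H(x*).
predicted = rho ** K * H(x_star)
```

With these parameters `predicted` ≈ −2.048 and the empirical mean agrees to within sampling error.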
Section 4: Main results
For MAX-Ek-LIN2 (Theorem 4.1), the classical exponent is c = min{γ, h(η/(2k))} ≥ γη/k, which strictly exceeds the quantum exponent c_q ≈ 0.1856γη/k from Dalzell et al. Moreover, the entropic dependence h(η/(2k)) ∼ (η/(2k))·log(2k/η) as η/k → 0 is asymptotically stronger than the linear dependence in c_q. For MAX-k-CSP (Theorem 4.3, Corollary 4.4), the classical exponent depends quadratically on the normalised optimum scale Δ = |H_min|/m, versus cubically for the quantum algorithm. Since Δ < 1, this immediately gives c_cl > c_q. Notably, the paper also shows (Section 4.3) that amplitude-amplifying these classical algorithms yields improved quantum algorithms (Theorems 1.5–1.6), beating Dalzell et al. on the quantum side as well.
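Plugging representative numbers into the Corollary 4.4 bounds makes the quadratic-versus-cubic gap concrete (the parameter values k = 3, D = 2, Δ = 0.5 below are illustrative assumptions, not from the paper):

```python
# Corollary 4.4 bounds: classical c >= 0.7213 * Delta^2 / (2^(2k) * k^2 * D)
# versus quantum    c_q  = 0.0578 * Delta^3 / (2^(3k) * k^3 * D).
def c_classical(delta, k, d):
    return 0.7213 * delta ** 2 / (2 ** (2 * k) * k ** 2 * d)

def c_quantum(delta, k, d):
    return 0.0578 * delta ** 3 / (2 ** (3 * k) * k ** 3 * d)

delta, k, d = 0.5, 3, 2          # illustrative instance parameters
c_cl = c_classical(delta, k, d)
c_q = c_quantum(delta, k, d)

# Quadratic beats cubic whenever Delta <= 1, and the constant and 2^{-2k}
# factors also favour the classical bound, so c_cl > c_q.
assert c_cl > c_q
# Super-quadratic quantum advantage over the new baseline would need
# (1 - c_q)/2 < (1 - c_cl)/2, i.e. c_q > c_cl -- the opposite inequality.
```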
Section 5: Technical details
This section provides the deferred proofs: shell-counting estimates for the correlated-pair regime, binomial entropy lower bounds for the restricted ball in the local-Lipschitz regime, and the full proofs of Theorems 3.1 and 3.5.
Appendices
Appendices A–C address removing the known-optimum assumption: Appendix A gives a general discussion, Appendix B handles the given-(γ,η) regime via a ranking-based reduction (Theorem B.2), and Appendix C handles the parameter-derived regime via a one-sided reduction over candidate bounds (Theorem C.3). Appendix D provides a notation guide. The known-optimum assumption can be removed in all cases with at most 2^{o(n)} overhead.
Section 6: Concluding remarks
The authors note that their framework may extend to dequantize quantum algorithms for Max Bisection, Max Independent Set, and ground-state problems of the Ising and SK models (following Chakrabarti et al. 2024). They identify concrete barriers: instances with small normalised optimum scale Δ = o(1) or large irregularity D = ω(1) resist the local-Lipschitz construction. They suggest connections to fine-grained complexity (SETH, QSETH) as directions for future work.
Figures
No figures extracted from source. This is a pure theory paper with no included graphics.
Citations to Yuan's Papers
Overlap with Y1–Y6
- Y1 (warm-started QAOA): Tangential. Y1 studies iterative warm-starting for QAOA on MaxCut, while this paper addresses exact optimisation via short-path algorithms. Both concern quantum optimisation of CSPs, but the techniques and algorithmic paradigms are distinct.
- Y2 (quasi-binary QAOA for portfolio optimisation): Minimal overlap. Y2 targets portfolio problems with hard constraints via a constraint-preserving ansatz; this paper works with unconstrained MAX-k-CSP and focuses on worst-case complexity exponents rather than variational ansätze.
- Y3 (QAOA for DGMVP): Moderate conceptual overlap. Y3 quantifies when QAOA achieves quantum advantage for portfolio problems and finds shot-noise-limited favourable scaling. This paper’s negative result on super-quadratic advantage for short-path algorithms is relevant context: it constrains what speedups are possible from adiabatic-inspired quantum algorithms on CSPs, potentially informing Y3’s assessment of realistic quantum advantage.
- Y4 (Grover for cardinality-constrained binary optimisation): Significant thematic overlap. Y4 uses Grover-based search with O(√(C(n,k)/M)) rotations for cardinality-constrained problems. This paper shows that the conditioning-and-search mechanism (which can be Grover-amplified) provides the right classical baseline for such exact optimisation. The black-box theorem (Theorem 2.3) and its quantum speedup (Section 4.3) are directly relevant to understanding the quantum advantage landscape for Y4’s problems.
- Y5 (quantum SDP relaxations via Pauli-sparse Gibbs states): Limited overlap. Y5 targets SDP relaxations using Gibbs-state preparation, a different algorithmic paradigm. However, the concluding remark that the conditioning-and-search framework extends to Gibbs samplers (Section 6) suggests a potential future connection.
- Y6 (PBR test on IBM Heron2): No overlap. Y6 is an experimental quantum foundations paper.
Recommended Action for Yuan
- Read Sections 2 and 4.3 carefully. The black-box conditioning-and-search theorem (Theorem 2.3) and the improved quantum algorithms (Theorems 1.5–1.6) provide a new lens on what constitutes the “right” classical baseline for quantum optimisation algorithms. This is directly relevant to Y3’s and Y4’s claims about quantum advantage—any speedup claim should be benchmarked against conditioning-and-search rather than brute force.
- Assess implications for Y4. Y4’s Grover-based algorithm for cardinality-constrained binary optimisation is structurally similar to the quantum algorithms in this paper (amplitude amplification over a structured search space). Check whether the conditioning-and-search framework can be instantiated for cardinality-constrained problems, which would either provide a stronger classical baseline against which to measure Y4’s speedup, or suggest improvements to Y4’s algorithm.
- Consider citing in future work. This paper establishes that short-path quantum speedups for CSPs are at most quadratic. Any future manuscript from Yuan’s programme claiming quantum advantage for combinatorial optimisation should cite this result as context for the current understanding of quantum vs. classical complexity for exact CSP optimisation.