
Noisy Quantum Entanglement: How 75 Qubits Revealed Critical Hardware Limits

Researchers recently entangled 75 qubits on IBM hardware using lightweight error detection, but fundamental noise limits impose hard mathematical constraints on how far quantum entanglement can scale without error correction.

[Image: superconducting quantum processor chip]

When the boss suggested we look into noisy quantum entanglement, the timing felt right: researchers reported a 75-qubit error-detected GHZ-state experiment on IBM hardware, yet the very noise that makes these machines “noisy” imposes hard mathematical limits on how far entanglement can scale. Understanding this tension is essential for anyone following the quantum computing race toward practical applications.

Quantum entanglement is the phenomenon where two or more particles become correlated in ways that classical physics cannot explain. Measure one entangled particle and you instantly know something about its partner, regardless of distance. This property underpins nearly every proposed quantum advantage: faster algorithms, quantum key distribution, and simulations of molecules too complex for classical computers.[s]

The Problem With Noisy Quantum Entanglement

Today’s quantum computers operate in what physicists call the NISQ era: Noisy Intermediate-Scale Quantum. The “noisy” part is not a design flaw to be patched in the next software update. It reflects fundamental physics. Every qubit interacts with its environment, and those interactions cause errors. Gate operations introduce mistakes. Measurements disturb states. And entanglement, being exquisitely fragile, can degrade quickly.

The inherent limitations of NISQ hardware, including restricted qubit numbers, imperfect gate fidelity, and limited qubit connectivity, impose fundamental obstacles to achieving genuine quantum advantage.[s] This is not pessimism; it is the starting point for serious engineering.

Hard Mathematical Limits

A 2025 theoretical paper in npj Quantum Information proved something uncomfortable: under strictly contractive unital noise, there are hard caps on how much entanglement noisy circuits can generate. For one-dimensional qubit arrays, the maximum achievable entanglement scales as O(log n), where n is the number of qubits. For two-dimensional arrays, the limit rises to O(√n log n), ruling out efficient creation of some highly entangled states without error correction.[s]

In plain terms: doubling your qubits does not double your usable entanglement; the noise eats most of the gain. The same analysis showed that n-qubit devices become indistinguishable from random coin flips once circuit depth exceeds a logarithmic threshold.[s]
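To make these growth rates concrete, here is a minimal sketch in Python (the published bounds are asymptotic, so the constants below are illustrative assumptions, not values from the paper):

```python
import math

# Illustrative growth of the entanglement caps from the npj Quantum
# Information bounds: O(log n) for 1D arrays, O(sqrt(n) * log n) for 2D.
# Constants are set to 1 purely for illustration; only the growth
# rates are meaningful.

def cap_1d(n: int) -> float:
    """Entanglement cap for a 1D noisy qubit array (up to a constant)."""
    return math.log2(n)

def cap_2d(n: int) -> float:
    """Entanglement cap for a 2D noisy qubit array (up to a constant)."""
    return math.sqrt(n) * math.log2(n)

for n in (75, 150, 300, 600):
    print(f"n={n:4d}   1D cap ~ {cap_1d(n):6.2f}   2D cap ~ {cap_2d(n):8.2f}")

# Doubling n adds only ~1 unit to the 1D cap, while a noiseless device
# could in principle grow entanglement entropy linearly in n.
```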

The 75-Qubit Experiment

Despite these limits, experimentalists keep pushing. A team working on IBM superconducting processors achieved genuine multipartite entanglement for up to 75 qubits, verified in terms of multiple-quantum coherence fidelity.[s] A later IBM result posted in October 2025 extended this to a 120-qubit GHZ state with fidelity 0.56(3) at a 28% post-selection rate, the largest GHZ state prepared to date.[s]

The trick was not brute force. Instead of deploying full quantum error correction (which demands hundreds or thousands of physical qubits per logical qubit), the team used lightweight error-detection primitives: sparse stabilizer measurements with no more than 9 ancilla qubits. The same paper also reported long-range CNOT gates with over 85% fidelity across up to 40 lattice sites, while its 75-qubit GHZ-state routine discarded no more than about 78% of samples, far below the discard rates above 99.9% reported in some fully encoded logical-qubit experiments.[s]
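Those discard rates translate directly into sampling cost. A back-of-the-envelope sketch, taking the rates quoted above at face value (real acceptance probabilities vary with circuit depth and layout):

```python
# Sampling-cost arithmetic for post-selected experiments: if a fraction
# `discard` of shots fails the error-detection checks, roughly
# 1 / (1 - discard) raw shots are needed per accepted shot.

def shots_per_accepted(discard: float) -> float:
    return 1.0 / (1.0 - discard)

for label, discard in [
    ("75-qubit GHZ, sparse stabilizer checks", 0.78),
    ("120-qubit GHZ (28% post-selection rate)", 1 - 0.28),
    ("fully encoded logical qubits (>99.9% discarded)", 0.999),
]:
    print(f"{label}: ~{shots_per_accepted(discard):,.1f} raw shots per kept shot")

# Roughly 4.5x overhead for the 75-qubit protocol versus 1000x or more
# for full encoding: the case for lightweight error detection, in numbers.
```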

Net computational improvements can be achieved on current-generation quantum computers using low-overhead error-correction primitives, without the need for full logical encoding.[s] This matters because every qubit spent on error correction is a qubit not available for computation.

Logical Qubits and Dual-Rail Encoding

A parallel approach targets the error structure itself. Dual-rail superconducting qubits encode information in pairs of transmons, exploiting the fact that the dominant error mode (leakage) manifests as erasure that can be detected mid-circuit. A 2026 Nature Physics paper demonstrated logical CNOT gates with 98.1% process fidelity at a 13% erasure rate, enabling the creation of a three-logical-qubit GHZ state at 93.9% fidelity.[s]

Each dual-rail qubit maintains millisecond-scale coherence times and logical single-qubit gate error rates on the order of 10⁻⁵ by using post-selection to mitigate erasure errors.[s] These numbers represent a significant step toward the error rates needed for fault-tolerant computation, though scaling remains the challenge.

A Hidden Measurement Problem

There is a methodological wrinkle that complicates interpretation of many noisy quantum entanglement experiments. Standard quantum readout error mitigation (QREM), a widely used technique for correcting measurement mistakes, conflates state preparation errors with measurement errors. A recent theoretical analysis proved that for all stabilizer states, if entanglement exists, the state fidelity achieved through stabilizer expectation value measurement is overestimated after conventional QREM correction.[s]

The bias grows exponentially with qubit count, and the QREM-induced error can mask gate operation errors and yield false-positive conclusions.[s] This does not invalidate all published results, but it demands careful scrutiny of how fidelity claims are verified, especially in large-scale experiments.

Machine Learning to the Rescue

One response to noisy measurements is to build detection methods that expect noise. A machine-learning approach using support vector machines trained on Pauli measurements constructs what researchers call a robust optimal entanglement witness (ROEW). This method maintains high classification accuracy even when measurement errors exceed 10%.[s]

Training the model with only 20% of the typical dataset suffices to achieve high accuracy and substantial error reduction.[s] For laboratories with limited hardware access, this data efficiency matters as much as the noise resilience.

What This Means for Quantum Computing

The current state of noisy quantum entanglement research is simultaneously encouraging and sobering. Records keep falling, and techniques for error detection without full error correction show real promise. But the theoretical limits stand, and current NISQ platforms remain incapable of achieving genuine quantum advantage on practical problems.[s]

For applications like breaking classical encryption, quantum computers would need large fault-tolerant machines, not today’s noisy devices. The cryptographic community is already responding with post-quantum cryptography standards designed to resist quantum attacks, essentially assuming quantum computers may eventually scale.[s] The race continues, but the finish line keeps moving.

Understanding noisy quantum entanglement is not optional for anyone tracking this field. The physics constrains what engineering can achieve, and pretending otherwise leads to hype cycles that damage credibility and waste resources.

A Closer Technical Look

The headline results reward a more rigorous pass. The 75-qubit error-detected GHZ-state experiment shows what current hardware can do, while theoretical work proves hard bounds on entanglement scalability under specific noise models. This tension defines the current frontier, and the sections below examine each side of it in more detail.

Quantum entanglement is the non-classical correlation structure exploited by essentially all proposed quantum advantages: Shor’s algorithm, quantum key distribution, variational quantum eigensolvers, and quantum simulation. The operational question is whether current hardware can generate and preserve entanglement at scales useful for these applications. The answer involves both experimental records and theoretical impossibility results.

Theoretical Limits on Noisy Quantum Entanglement

A 2025 npj Quantum Information paper analyzed strictly contractive unital noise channels and derived tight bounds on entanglement generation in noisy circuits. The key result: under such noise, n-qubit devices become polynomial-time indistinguishable from random coin flips when circuit depth exceeds Ω(log n).[s]

For spatially constrained architectures, the limits tighten further. One-dimensional noisy qubit circuits have maximal entanglement bounded by O(log n). Two-dimensional circuits scale as O(√n log n).[s] These results rule out super-polynomial quantum advantages for 1D circuits at any depth without error correction, and severely constrain 2D architectures.

The mathematical structure is instructive. Strictly contractive unital channels can be decomposed as Λ₁ = V ∘ D ∘ U, where U and V are unitary channels and D contracts each Pauli matrix σᵢ by a factor qᵢ < 1. The depolarizing channel exemplifies this: Λ₁(ρ) = (1-p)ρ + pI/2, with contraction rate μ₁ = (1-p)². After t layers, the relative entropy to the maximally mixed state σ₀ diminishes as D(ρ(t)∥σ₀) ≤ nμ₁ᵗ. Information loss is exponential in depth.
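This contraction can be checked numerically. Below is a minimal sketch that applies per-qubit depolarizing layers to a 3-qubit GHZ state and prints the relative entropy to the maximally mixed state, D(ρ∥I/2ⁿ) = n − S(ρ), next to the quoted bound nμ₁ᵗ (the Kraus form used here is the standard Pauli rewriting of Λ₁):

```python
import numpy as np
from functools import reduce

# Apply per-qubit depolarizing noise L(rho) = (1-p)*rho + p*I/2, written
# in Kraus form (1 - 3p/4)*rho + (p/4)*(X rho X + Y rho Y + Z rho Z),
# to a 3-qubit GHZ state, and track the relative entropy to the maximally
# mixed state against the quoted bound n * mu^t with mu = (1-p)^2.

n, p = 3, 0.1
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

def depolarize_qubit(rho, i):
    """One application of the depolarizing channel to qubit i."""
    out = (1 - 3 * p / 4) * rho
    for P in (X, Y, Z):
        K = kron_all([P if j == i else I2 for j in range(n)])
        out = out + (p / 4) * (K @ rho @ K.conj().T)
    return out

def rel_entropy_to_mixed(rho):
    """D(rho || I/2^n) = n - S(rho), with entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return n + float(evals @ np.log2(evals))

ghz = np.zeros(2**n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())

mu = (1 - p) ** 2
for t in range(8):
    print(f"layer {t}: D = {rel_entropy_to_mixed(rho):.4f}   "
          f"bound n*mu^t = {n * mu**t:.4f}")
    for i in range(n):
        rho = depolarize_qubit(rho, i)
```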

Experimental State of the Art

Against this theoretical backdrop, experimental groups continue setting records. A major superconducting-processor result reported a 75-qubit GHZ state, achieving genuine multipartite entanglement verified via multiple-quantum coherence fidelity.[s] A subsequent IBM result, posted in October 2025, scaled this to a 120-qubit GHZ state with fidelity 0.56(3) at a 28% post-selection rate, the largest GHZ state prepared to date.[s]

The protocol is notable for its resource efficiency. Rather than full quantum error correction (which demands hundreds or thousands of physical qubits per logical qubit), the team employed error detection using sparse stabilizer measurements with no more than 9 ancilla qubits. The 75-qubit routine discarded no more than about 78% of samples, compared with discard rates above 99.9% reported in some approaches using fully encoded logical qubits.[s]

A novel unitary entangle-disentangle protocol for long-range CNOT gates achieved over 85% fidelity across up to 40 lattice sites, significantly outperforming measurement-based alternatives.[s] The disentangled intermediate qubits serve as flag qubits, detecting bit-flip and amplitude-damping errors that occurred during gate execution.
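A classical toy model conveys the flavor of error detection by post-selection (this is a deliberate caricature: independent bit flips on measurement outcomes, with sparse pair parities standing in for stabilizer checks, not a simulation of the actual protocol):

```python
import random

# Ideal GHZ measurements return all-zeros or all-ones. Inject independent
# bit flips, then post-select on a few sparse parity checks. The checked
# pairs and the flip probability are arbitrary illustrative choices.

N_QUBITS, FLIP_P, SHOTS = 20, 0.02, 20_000
CHECKS = [(i, i + 1) for i in range(0, N_QUBITS - 1, 4)]  # sparse pairs

def sample_shot():
    bits = [random.random() < 0.5] * N_QUBITS               # ideal: all equal
    return [b ^ (random.random() < FLIP_P) for b in bits]   # add bit flips

kept = kept_ok = raw_ok = 0
for _ in range(SHOTS):
    shot = sample_shot()
    ok = shot.count(shot[0]) == N_QUBITS                    # perfect outcome
    raw_ok += ok
    if all(shot[i] == shot[j] for i, j in CHECKS):          # checks pass
        kept += 1
        kept_ok += ok

print(f"raw success rate:      {raw_ok / SHOTS:.3f}")
print(f"kept fraction:         {kept / SHOTS:.3f}")
print(f"post-selected success: {kept_ok / max(kept, 1):.3f}")
```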

Dual-Rail Erasure Qubits

An orthogonal approach exploits error bias. Dual-rail encoding in superconducting circuits uses pairs of tunable transmons, where the dominant error mode (leakage) manifests as detectable erasure rather than undetectable bit flips. A 2026 Nature Physics paper demonstrated logical multi-qubit entanglement using this scheme.

Each dual-rail qubit achieves millisecond-scale coherence and single-qubit gate error rates of order 10⁻⁵ via post-selection on erasure detection.[s] The team synthesized a logical CNOT with 98.1% process fidelity at a 13% erasure rate, enabling a three-logical-qubit GHZ state at 93.9% fidelity.[s]
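The quoted numbers compose in a simple way under an independence assumption the real device need not satisfy; a rough sketch:

```python
# Back-of-the-envelope composition of the quoted dual-rail figures,
# assuming independent errors per gate (a simplification).

cnot_fidelity = 0.981   # logical CNOT process fidelity
erasure_rate = 0.13     # per-CNOT erasure rate (detected, post-selected away)

n_cnots = 2             # a 3-qubit GHZ circuit needs two CNOTs
survival = (1 - erasure_rate) ** n_cnots     # shots passing erasure checks
naive_fidelity = cnot_fidelity ** n_cnots    # if infidelities just compounded

print(f"shots surviving erasure checks: {survival:.3f}")        # ~0.757
print(f"naive GHZ fidelity estimate:    {naive_fidelity:.3f}")  # ~0.962
# The reported 93.9% GHZ fidelity sits a little below this naive estimate,
# leaving room for state-preparation and idling errors.
```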

Systematic Bias in Fidelity Estimation

A critical methodological issue affects interpretation of noisy quantum entanglement experiments. Conventional quantum readout error mitigation (QREM) calibrates the measurement error matrix by preparing computational basis states and profiling outcomes. This conflates state preparation (initialization) errors with readout errors.

A recent analysis proved that for all stabilizer states, if entanglement exists, the state fidelity estimated from stabilizer expectation values is overestimated after conventional QREM correction.[s] The paper expresses the overestimate through product factors involving initialization error rates and fᵢ,I, the fraction of stabilizers acting as the identity on qubit i.

For large GHZ states and graph states, this overestimation grows exponentially with qubit count, and the QREM-induced error can mask gate operation errors and yield false-positive conclusions.[s] The paper derives safety bounds: using its first-order approximation Δ ≈ 2nq̄, keeping the relative error below δ at n qubits requires an average initialization error q̄ ≲ δ/(2n).
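Plugging numbers into that bound shows how demanding it becomes at scale (a minimal sketch; since Δ ≈ 2nq̄ is only a first-order approximation, these are leading-order estimates):

```python
# Safety bound from the text: keeping the relative fidelity error below
# delta at n qubits requires average initialization error q <= delta / (2n).

def max_init_error(n_qubits: int, delta: float) -> float:
    return delta / (2 * n_qubits)

for n in (10, 75, 120, 1000):
    q = max_init_error(n, delta=0.01)   # tolerate 1% overestimation
    print(f"n={n:5d}: average initialization error must stay below {q:.1e}")

# At n=75 the requirement is ~6.7e-5, orders of magnitude tighter than
# typical initialization error rates on current superconducting hardware.
```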

Robust Entanglement Detection

Machine learning offers partial mitigation. A robust optimal entanglement witness (ROEW) uses support vector machines trained on Pauli measurement features with distributionally robust optimization against worst-case measurement noise. The method maintains high classification accuracy even when measurement errors exceed 10%.[s]

Data efficiency is notable: training with only 20% of the typical dataset suffices for high accuracy and substantial error reduction.[s] The approach bridges machine learning and quantum information science, offering a practical tool for noise-robust characterization.
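The pipeline shape is easy to sketch with off-the-shelf tools. The following is a stripped-down stand-in, not the published ROEW: a plain support vector machine (no distributionally robust optimization) on two-qubit Pauli-correlator features, with ground-truth labels from the PPT criterion and Gaussian noise injected into the features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy SVM entanglement classifier for two qubits. Features: 15 Pauli
# correlators; labels: PPT criterion (exact for two qubits); noise:
# Gaussian perturbation of features, mimicking measurement error.

rng = np.random.default_rng(0)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
PAULIS = [I, X, Y, Z]

def random_state():
    """Random two-qubit density matrix (Ginibre ensemble)."""
    G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def is_entangled(rho):
    """Negative partial transpose <=> entangled, for two qubits."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min() < -1e-9

def features(rho):
    """Expectation values of all Pauli pairs, skipping the identity pair."""
    return np.real([np.trace(rho @ np.kron(P, Q))
                    for P in PAULIS for Q in PAULIS])[1:]

states = [random_state() for _ in range(3000)]
feats = np.array([features(r) for r in states])
labels = np.array([is_entangled(r) for r in states], dtype=int)
feats += rng.normal(scale=0.1, size=feats.shape)   # ~10% measurement noise

Xtr, Xte, ytr, yte = train_test_split(feats, labels, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"test accuracy under noisy features: {clf.score(Xte, yte):.2f}")
```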

Implications for the Quantum Computing Race

The synthesis of these results clarifies the current state of noisy quantum entanglement. Theoretical limits establish that without error correction, polynomial-time quantum advantages are impossible for super-logarithmic-depth circuits under strictly contractive unital noise. Experimental records demonstrate that lightweight error detection can extend genuine entanglement to at least 75 qubits, with later IBM work reaching a 120-qubit GHZ state, though at significant post-selection and discard rates. Methodological work reveals that commonly reported fidelities may be systematically inflated.

Current NISQ platforms remain incapable of achieving genuine quantum advantage on practical problems.[s] The path forward requires either dramatic reductions in physical error rates, or scalable fault-tolerant architectures that are years away. The cryptographic community has already responded: NIST post-quantum cryptography standards are designed to protect against quantum computers that may eventually break many widely used cryptographic systems.[s]

Understanding noisy quantum entanglement is not optional for serious work in quantum computing. The constraints are physical, not just engineering hurdles, and the gap between current capabilities and application requirements remains substantial.

