Quantum Computing for Natural Sciences: Technology and Applications Agenda

Introduction and welcome remarks by chairs

30-minute talk with 5-minute Q&A

Abstract: Quantum chaos explores the behavior of quantum systems that exhibit chaotic dynamics, bridging quantum mechanics and classical chaos. This field is essential for understanding complex systems, the spread of information, and thermalization, with impacts on quantum computing, condensed matter physics, and even astrophysics.

Recently, dual unitary (DU) systems have garnered attention as rare, exactly solvable models that exhibit maximum chaos. These systems are “dual” because their unitarity holds in both time and space, allowing for the analytical computation of typically intractable properties, such as correlation functions. DU circuits serve as exceptional benchmarks for quantum simulations, helping us study critical phenomena like quantum scrambling, entanglement growth, and the decay of correlations—all fundamental to quantum chaos and many-body physics. Their analytic tractability also aids in designing and testing quantum algorithms for real-world applications.

In this talk, I will present our pioneering work, done in collaboration with IBM and Trinity College Dublin, leveraging state-of-the-art error mitigation techniques to simulate the decay of auto-correlation functions in quantum chaotic many-body systems. These functions are vital for understanding transport properties, such as conductivity and diffusion, and for addressing fundamental questions in non-equilibrium quantum dynamics.

Our simulations achieved unprecedented accuracy on a cloud-based 127-qubit Eagle processor, using up to 91 active qubits, 91 entangling gate layers, and 4,095 CNOT gates. This marks the largest-scale digital simulation of correlation functions in an interacting quantum many-body system to date. This breakthrough demonstrates that we can already benefit from this technology by advancing scientific understanding today.
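As a quick consistency check on the resource counts quoted above, the numbers match a standard brickwork circuit layout (an assumption on our part; the abstract does not state the circuit topology): on a line of 91 qubits, each entangling layer applies 45 nearest-neighbour CNOTs, so 91 layers give 91 × 45 = 4,095 CNOT gates.

```python
# Sketch: checking the quoted gate count against a brickwork layout,
# i.e. alternating layers of nearest-neighbour CNOTs on a 1D chain.
# The brickwork topology is an assumption, not stated in the abstract.

def brickwork_cnot_count(n_qubits: int, n_layers: int) -> int:
    """Total CNOTs in a brickwork circuit on a line of n_qubits."""
    even_pairs = n_qubits // 2        # gates on pairs (0,1), (2,3), ...
    odd_pairs = (n_qubits - 1) // 2   # gates on pairs (1,2), (3,4), ...
    full_periods = n_layers // 2      # layers alternate between the patterns
    total = full_periods * (even_pairs + odd_pairs)
    if n_layers % 2:                  # one leftover even-pattern layer
        total += even_pairs
    return total

print(brickwork_cnot_count(91, 91))  # → 4095
```

For odd qubit counts the two layer patterns happen to contain the same number of gates (45 each here), so the total is simply layers × 45.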

15-minute talks with 2-minute Q&A each

  • Lewis Anderson (IBM Research) – Quantum simulation of mixed systems of fermions and bosons

Abstract: Hamiltonians containing both bosonic and fermionic degrees of freedom are regularly used to represent physical systems such as lattice gauge theories and electronic systems interacting with complex environments. We will discuss methods for digital quantum simulation of interacting bosonic and fermionic degrees of freedom within the context of a lattice Hamiltonian, namely the lattice Schwinger model, and present a novel methodology for reducing the resources required for quantum simulation of systems interacting with structured baths represented by bosonic modes.

 

  • Koichi Miyamoto (Osaka University) – Quantum algorithm for the Vlasov simulation of the large-scale structure formation with massive neutrinos

Abstract: Investigating the cosmological implications of the fact that neutrinos have a finite mass is of importance for fundamental physics. In particular, massive neutrinos affect the formation of the large-scale structure (LSS) of the universe, and conversely, observations of the LSS can place constraints on the neutrino mass. Numerical simulations of LSS formation including massive neutrinos along with conventional cold dark matter are thus an important task. For this, calculating the neutrino distribution in phase space by solving the Vlasov equation is a suitable approach, but it requires solving a partial differential equation (PDE) in $(6+1)$-dimensional space and is thus computationally demanding: configuring $n_\mathrm{gr}$ grid points in each coordinate and $n_t$ time grid points leads to $O(n_\mathrm{gr}^6)$ memory space and $O(n_tn_\mathrm{gr}^6)$ queries to the coefficients in the discretized PDE. We propose a quantum algorithm for this task. Linearizing the Vlasov equation by neglecting the relatively weak self-gravity of the neutrinos, we perform Hamiltonian simulation to produce quantum states that encode the phase space distribution of neutrinos. We also propose a way to extract the power spectrum of the neutrino density perturbations as classical data from the quantum state by quantum amplitude estimation, with accuracy $\epsilon$ and query complexity of order $\widetilde{O}((n_\mathrm{gr} + n_t)/\epsilon)$. Our method also reduces the space complexity to $O(\mathrm{polylog}(n_\mathrm{gr}/\epsilon))$ in terms of the qubit number, while using quantum random access memories with $O(n_\mathrm{gr}^3)$ entries. As far as we know, this is the first quantum algorithm for LSS simulation that outputs a quantity of practical interest with guaranteed accuracy.
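To make the space-complexity gap quoted in the abstract concrete, here is an illustrative bit of bookkeeping: a classical solver stores the 6D phase-space distribution on a grid of $n_\mathrm{gr}^6$ values, whereas an amplitude encoding of that grid needs roughly $6\log_2 n_\mathrm{gr}$ qubits for the grid register. The sketch below is our own illustration of those two scalings only; constants, ancilla registers, and the QRAM entries mentioned in the abstract are deliberately omitted.

```python
import math

# Sketch: classical vs. quantum memory scaling for a 6D phase-space grid,
# following the complexities quoted in the abstract. Illustrative
# bookkeeping only -- ancillas and constant factors are omitted.

def classical_grid_cells(n_gr: int) -> int:
    """Classical storage: one value per cell of the 6D grid, O(n_gr^6)."""
    return n_gr ** 6

def grid_register_qubits(n_gr: int) -> int:
    """Qubits to index the grid in an amplitude encoding: 6*ceil(log2 n_gr)."""
    return 6 * math.ceil(math.log2(n_gr))

n_gr = 1024
print(classical_grid_cells(n_gr))   # ~1.15e18 grid values classically
print(grid_register_qubits(n_gr))   # 60 qubits for the grid register
```

Even a modest per-axis resolution of 1,024 points puts the classical grid beyond exascale memory, while the corresponding quantum grid register stays at tens of qubits.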

Lunch break

30-minute talk with 5-minute Q&A

Abstract: TBC

15-minute talks with 2-minute Q&A each

  • Edoardo Altamura (The Hartree Centre) – Quantum optimization beyond gradients

Abstract: Many quantitative problems in industry and science today can be tackled using optimization techniques, whereby an algorithm defined by a set of rules suggests the adjustments to be made to minimize (or maximize) an objective function subject to constraints. Some of these problems, often NP-hard, become intractable on classical computers because the computational cost of finding a solution grows rapidly with problem size. However, studies have identified quantum algorithms that can provide significant speed-ups when executed on quantum hardware, opening a line of research focused on quantum utility and quantum advantage. Fortunately, Variational Quantum Algorithms and Quantum Neural Networks, both gradient-based, can be implemented efficiently on today’s quantum devices. However, they cannot be easily trained due to the barren plateau problem. Indeed, gradients are central to the non-trainability in barren plateaus, but even gradient-free methods are affected, and more so in the presence of hardware noise and increased qubit counts. Starting with an overview of the challenges facing traditional gradient-based and gradient-free methods, this talk will outline the debate around classes of optimization algorithms that may circumvent or mitigate the barren plateau problem. I will then focus on evolutionary techniques as one representative class of such promising candidates, explaining the implications of recent results and the path forward.

 

  • Katie Klymko (NERSC/Lawrence Berkeley National Laboratory) – Exploring the Lieb Lattice Phase Diagram on a Neutral Atom Array

Abstract: In this presentation, I will provide an overview of the National Energy Research Scientific Computing Center’s (NERSC) partnership with QuEra Computing, a leader in neutral atom hardware. I will discuss the scientific applications we’ve been exploring in collaboration with QuEra’s team, and how these efforts are shaping our understanding of computational areas that could be accelerated by access to quantum technologies. I will focus on our work exploring the ground state phases and phase transitions of Rydberg atoms on the Lieb lattice with both homogeneous and inhomogeneous detuning. We study the system numerically via DMRG simulations on NERSC’s classical supercomputer Perlmutter and experimentally via Aquila, the Rydberg atom quantum simulator from QuEra Computing. The homogeneous detuning phase diagram hosts a zoo of quantum phases and provides an interesting playground for exploring multicritical phenomena. We map out the phase diagram under two distinct boundary conditions experimentally and in both cases find qualitative and quantitative agreement with simulations. With inhomogeneous detuning, we further find that the system exhibits an analog of the liquid-vapor and electroweak phase transitions in a quantum setting. Our work highlights the rich equilibrium physics that can be probed in analog quantum simulators.

 

  • Stefano Barison (EPFL) – Joint quantum and classical algorithms to simulate quantum systems (online presentation)

Abstract: Quantum simulation is one of the most promising applications of quantum devices. In particular, the study of the electronic structure of molecules and materials is fundamental to computational quantum chemistry, condensed matter physics and materials science. After a decade of initial demonstrations on small physical systems, quantum computers have finally reached a qubit count that could enable calculations with practical use cases exceeding classical computing power. However, in the absence of error correction, current devices are still limited by gate errors and qubit coherence times. This makes it unlikely that a quantum device alone can carry out the entire simulation of a quantum system of practical relevance in the near future. Motivated by these limitations, several steps have been taken towards combining quantum devices with the capabilities of classical computers. In this talk, we will present our results on integrating quantum devices into classical workflows. We will explain how this led to the more general theory of variational embeddings that we discussed in a preprint earlier this year. Finally, we will showcase our steps towards implementation on real quantum hardware.

Afternoon break.

30-minute talk with 5-minute Q&A

Abstract: Electronic structure calculations on quantum computers (QC) have the potential to revolutionize computational chemistry, but simulations on present-day noisy intermediate-scale quantum (NISQ) computers still represent a significant computational challenge. Many papers in the field remain largely focused on quantum circuit simulators rather than calculations on actual quantum hardware. In studies using QC hardware, electronic structure problems are typically solved using the variational quantum eigensolver (VQE). However, VQE is confined to low qubit counts (<16), limiting its ability to take advantage of current-generation hardware with over 100 qubits. Recently, a subspace algorithm based on sampling and classical post-processing using a quantum-centric supercomputing (QCSC) architecture allowed electronic structure simulations to be extended to 77-qubit experiments, enabling simulations of active spaces with up to 36 orbitals. This represents the largest so-called complete active space (CAS) calculation ever performed on actual quantum hardware. The resultant workflow has been named Sample-based Quantum Subspace Diagonalization (SQSD). In my talk I will briefly describe the SQSD approach and its application to intermolecular interactions and conformational spaces of complex organic molecules. I will also discuss future applications of this technology to problems in health and life sciences.

15-minute talks with 2-minute Q&A each

  • Nathan Fitzpatrick (Quantinuum) – Quantum Signal Processing in Quantum Chemistry

Abstract: We adapt a recent advance in resource-frugal quantum signal processing — the Quantum Eigenvalue Transform with Unitary matrices (QET-U) — to explore non-unitary imaginary time evolution on early fault-tolerant quantum computers using exactly emulated quantum circuits. We test strategies for optimising the circuit depth and the probability of successfully preparing the desired imaginary-time evolved states. For the task of ground state preparation, we confirm that the probability of successful post-selection is quadratic in the initial reference state overlap $\gamma$ as $O(\gamma^2)$. When applied instead to thermal state preparation, we show QET-U can directly estimate partition functions at exponential cost. Finally, we combine QET-U with Trotter product formula to perform non-normal Hamiltonian simulation in the propagation of Lindbladian open quantum system dynamics. We find that QET-U for non-unitary dynamics is flexible, intuitive and straightforward to use, and suggest ways for delivering quantum advantage in simulation tasks.

 

  • Declan Millar (IBM Research) – Quench spectroscopy on a digital quantum computer

Abstract: We propose a novel approach to spectroscopy using digital quantum computers. By combining quench spectroscopy with compressed quantum circuits, we can obtain high-resolution data without extensive experimental resources. The goal is to enable materials scientists to rapidly characterize candidate materials for target applications, allowing them to quickly iterate and tune the quantum mechanical properties of their models to achieve desired features.
