Benchmarking for Quantum Machine Learning
PI: Mahmut Taylan Kandemir (Computer Science and Engineering)
Team Member (co-PI): Mehrdad Mahdavi, Associate Professor, Department of Computer Science and Engineering. Both PIs will serve as mentors for the Junior Researcher.
Context:
As machine learning (ML) systems continue to scale, they confront fundamental computational constraints, particularly in tasks involving high-dimensional optimization, kernel methods, and modeling quantum-mechanical systems—areas where classical algorithms often suffer from exponential complexity and/or memory overhead. Quantum computing provides a fundamentally different computational paradigm based on the manipulation of quantum states, enabling operations such as superposition, entanglement, and interference. These properties allow quantum systems to explore vast solution spaces in parallel and represent complex correlations that are difficult to capture classically. This gives rise to Quantum Machine Learning (QML), an emerging field that explores quantum-enhanced models capable of offering polynomial or even exponential speedups in tasks like feature space embeddings via quantum kernels, efficient sampling from probability distributions (e.g., in generative models), and variational optimization using quantum circuits.
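To make the quantum-kernel idea above concrete, the following is a minimal sketch (not a prescribed design) of one kernel entry k(x, y) = |⟨φ(x)|φ(y)⟩|² for a simple angle-encoding feature map, computed by classical statevector simulation with NumPy; the feature map and sample data are hypothetical illustrations:

```python
import numpy as np

def feature_map(x):
    """Angle-encode a real vector x as a product state: one RY(x_j)|0> per qubit."""
    state = np.array([1.0])
    for theta in x:
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x, y):
    """Kernel entry k(x, y) = |<phi(x)|phi(y)>|^2, from the simulated statevectors."""
    return abs(np.dot(feature_map(x), feature_map(y))) ** 2

# Hypothetical data; the resulting Gram matrix K is symmetric with unit diagonal
# and could be handed to a classical kernel method such as an SVM.
X = np.array([[0.1, 0.5], [1.2, -0.3], [0.0, 2.0]])
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
```

On real hardware the overlap would instead be estimated from measurement shots, which is where benchmarking of accuracy versus shot budget becomes relevant.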
As QML emerges as a rapidly growing research field [1], there is a critical need for standardized benchmark suites and systematic benchmarking methodologies to evaluate and compare QML algorithms, hardware platforms, and hybrid quantum-classical workflows. Such benchmarks are essential not only for assessing performance, accuracy, and resource utilization across diverse quantum devices and problem instances, but also for guiding the co-design of quantum algorithms and hardware. In fact, we believe that the QML community can draw inspiration from well-established benchmarking initiatives in classical ML—such as MLCommons Benchmarks [2], which are already playing a pivotal role in standardizing evaluation protocols, datasets, and workload definitions across models and accelerators. Similar efforts in QML would enable fair comparisons, reproducibility, and meaningful progress tracking across the landscape of quantum hardware backends, variational circuit designs, and application domains. Moreover, a well-defined QML benchmark suite would support the identification of performance bottlenecks, inform compiler and runtime optimizations, and foster collaboration between algorithm designers, system architects, and experimental physicists.
Expertise/Skills of Interest:
• Basic quantum computing knowledge including linear algebra, Hilbert spaces, linear transformations, qubits, quantum gates, and circuits.
• Basic knowledge of quantum algorithms, in particular Grover’s search, quantum phase estimation (QPE), the variational quantum eigensolver (VQE), and the quantum approximate optimization algorithm (QAOA).
• Basic knowledge of core ML concepts including supervised/unsupervised learning, overfitting, regularization, and backpropagation.
• Basic programming skills (preferably Python and C++).
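As one concrete instance of the variational algorithms listed above, a toy VQE can be sketched entirely with a classical NumPy simulation; the one-qubit Hamiltonian H = Z + 0.5·X and the single-parameter RY ansatz here are hypothetical choices for illustration only:

```python
import numpy as np

# Pauli matrices and a toy one-qubit Hamiltonian (hypothetical choice).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X

def ansatz(theta):
    """Single-parameter ansatz: RY(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi|H|psi> -- the quantity VQE minimizes."""
    psi = ansatz(theta)
    return psi @ H @ psi

# Crude classical outer loop (a parameter sweep standing in for a real optimizer).
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
# energy(best) approaches the smallest eigenvalue of H, i.e., -sqrt(1.25).
```

On a real device the energy would be estimated from measured expectation values, with the classical optimizer running outside the quantum processor, which is exactly the hybrid loop such a benchmark would exercise.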
Expectations:
• A PhD student with at least some experience and/or training in computer science, IST, math, or a related field.
• Being able to write QML code for basic ML algorithms that are amenable to quantum computing.
• Being able to put such quantum algorithms together as a benchmark suite (along with evaluation results) and maintain it.
• Weekly meetings and regular project updates with faculty advisors.
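One way the benchmark-suite expectation above could be organized is sketched below; the registry/decorator structure, field names, and the placeholder workload are all hypothetical, intended only to show how workloads and their execution statistics might be packaged and maintained:

```python
import time
from dataclasses import dataclass, field

@dataclass
class BenchmarkResult:
    """Execution statistics one benchmark run might record (fields are illustrative)."""
    name: str
    backend: str
    wall_time_s: float
    metrics: dict = field(default_factory=dict)

REGISTRY = {}

def benchmark(name):
    """Decorator registering a QML workload under a stable benchmark name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

def run_all(backend="simulator"):
    """Run every registered workload on `backend` and collect results."""
    results = []
    for name, fn in REGISTRY.items():
        start = time.perf_counter()
        metrics = fn(backend)
        results.append(BenchmarkResult(name, backend, time.perf_counter() - start, metrics))
    return results

@benchmark("toy-workload")
def toy(backend):
    # Placeholder standing in for an actual QML circuit execution on `backend`.
    return {"accuracy": 1.0}
```

A structure like this keeps each algorithm, its target backend, and its recorded statistics decoupled, which makes the suite easier to extend and maintain over time.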
Mid-Range Goal:
The goal of QML benchmarking is to establish a rigorous, standardized, and practical (easy-to-use) framework for systematically evaluating and comparing QML systems—spanning algorithms, hardware platforms, and application domains. This framework aims to accelerate scientific discovery in both quantum computing and ML, inform the co-design of quantum algorithms and hardware, and provide clear evidence for when and how quantum advantage can be realized in ML tasks.
Specific Objectives:
• Produce at least five QML algorithms and port them to at least two quantum machines (IBM and Quantinuum).
• Collect detailed execution statistics from runs of these algorithms on the target quantum machines, varying the algorithms’ parameters as needed and performing a sensitivity study.
• Prepare detailed documentation explaining how to use the benchmarks, how to interpret results, and how to port them to other (and possibly different types of) quantum machines.
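The sensitivity study mentioned above could be driven by a simple parameter sweep; in this sketch the swept parameters (shot count, circuit depth), the `run_workload` stand-in, and its fake score are all hypothetical placeholders for a real execution on a quantum machine:

```python
import itertools
import statistics

def run_workload(shots, depth):
    """Hypothetical stand-in for one benchmark execution on a quantum machine;
    it returns a deterministic fake score so the sweep itself is runnable."""
    return 1.0 / (1.0 + depth / shots)

def sensitivity_study(shot_grid, depth_grid, repeats=3):
    """Sweep the parameter grid and record per-configuration statistics."""
    records = []
    for shots, depth in itertools.product(shot_grid, depth_grid):
        scores = [run_workload(shots, depth) for _ in range(repeats)]
        records.append({
            "shots": shots,
            "depth": depth,
            "mean": statistics.mean(scores),
            "stdev": statistics.stdev(scores) if repeats > 1 else 0.0,
        })
    return records

# Each record captures one (shots, depth) point of the sensitivity study.
table = sensitivity_study([1024, 4096], [2, 8])
```

Recording mean and spread per configuration is what allows the documented results to distinguish genuine parameter sensitivity from run-to-run noise on real hardware.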
Engagement:
Kandemir is an ICDS co-hire and one of its associate directors. Mahdavi is the director of AI Hub. Kandemir and Mahdavi collaborate with several quantum computing experts from CSE and math departments.