Once your quantum generative model, whether a Quantum Circuit Born Machine (QCBM) or the generator component of a Quantum Generative Adversarial Network (QGAN), has been trained, the next step is to generate new data points. This process relies fundamentally on the measurement postulate of quantum mechanics. Executing the trained quantum circuit prepares a specific quantum state ψ(θ), and measuring this state yields outcomes according to a probability distribution determined by the state itself.
Sampling from a quantum generative model involves the same core steps, preparing the trained state and measuring it, though the details differ slightly by model type:
For a QCBM preparing the state ∣ψ(θ)⟩, the probability of measuring the computational basis state ∣x⟩ (represented by the bitstring x) is given by Born's rule:
P(x) = ∣⟨x∣ψ(θ)⟩∣²

Therefore, repeatedly preparing ∣ψ(θ)⟩ and measuring in the computational basis directly produces samples x distributed according to P(x).
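As a quick illustration, Born-rule sampling can be mimicked classically when the amplitudes are known. The sketch below uses a hypothetical 2-qubit state vector `psi` (chosen for this example only); on real hardware the amplitudes are never read out directly, and only the measurement outcomes are observed:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical 2-qubit state |psi(theta)> as an amplitude vector.
# Index i corresponds to the basis bitstring of i, e.g. 3 -> '11'.
psi = np.array([0.6, 0.0, 0.0, 0.8], dtype=complex)
assert np.isclose(np.linalg.norm(psi), 1.0)  # state must be normalized

# Born's rule: P(x) = |<x|psi(theta)>|^2
probs = np.abs(psi) ** 2

# Simulate repeated preparation and computational-basis measurement
n_qubits = 2
outcomes = rng.choice(len(psi), size=1000, p=probs)
samples = [format(i, f"0{n_qubits}b") for i in outcomes]

print(samples[:5])  # only '00' and '11' can appear for this state
```

With these amplitudes, roughly 36% of the samples are '00' and 64% are '11', matching P(x).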
For a QGAN, the generator G(θ) is designed to transform an initial state (often ∣0⟩⊗n or sometimes conditioned on a random input) into a state ∣ψG(θ)⟩ whose measurement probabilities approximate the target data distribution. Sampling involves executing G(θ) and measuring its output qubits. The discriminator is not involved in the sampling process itself, only in training the generator.
Flowchart illustrating the process of generating a single sample from a trained quantum generative model and repeating for multiple samples.
Generating samples efficiently and accurately involves several practical points:
The number of times you execute the circuit and measure (the number of "shots") determines how well your collection of samples approximates the true underlying distribution p_model(x). More shots generally lead to a better approximation but increase the computational cost (time and resources). Choosing the number of shots often involves a trade-off based on the requirements of the downstream task and the available quantum resources.
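The shot trade-off is easy to see in a small classical simulation. The sketch below assumes a hypothetical 3-qubit model distribution `p_model` and tracks how the total variation distance of the empirical estimate shrinks as the number of shots grows:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical target distribution over 3-qubit outcomes (8 basis states)
p_model = np.array([0.30, 0.05, 0.10, 0.05, 0.25, 0.05, 0.15, 0.05])

def empirical_tvd(shots):
    """Total variation distance between p_model and a shot-based estimate."""
    outcomes = rng.choice(8, size=shots, p=p_model)
    p_emp = np.bincount(outcomes, minlength=8) / shots
    return 0.5 * np.abs(p_emp - p_model).sum()

for shots in (100, 1000, 10000):
    print(f"shots={shots:6d}  TVD={empirical_tvd(shots):.4f}")
```

The error shrinks roughly as 1/√shots, which is why halving the statistical error requires about four times as many circuit executions.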
While quantum measurements can be performed in various bases, sampling for generative modeling almost always uses the computational (Z) basis. This is because the goal is typically to generate classical data samples (like bitstrings representing images or numerical data), and the computational basis provides a direct mapping from quantum measurement outcomes to classical bitstrings.
When running on actual quantum hardware, the sampling process is affected by noise: imperfect gates, decoherence during circuit execution, and readout (measurement) errors can all corrupt the measured bitstrings.
These errors mean the empirically observed distribution of samples p_empirical(x) obtained from hardware might differ significantly from the ideal target distribution p_model(x). Readout error mitigation techniques are often applied post-measurement to correct the sample counts and obtain a more accurate estimate of the intended distribution.
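As a rough sketch of one common readout-mitigation idea, the following inverts a calibration matrix for a single qubit. The calibration probabilities `p0g0` and `p1g1` and the noisy counts are hypothetical placeholders; in practice you would measure the calibration matrix on the device and typically use a library's mitigation tooling:

```python
import numpy as np

# Hypothetical single-qubit readout calibration:
# p0g0 = P(read 0 | prepared 0), p1g1 = P(read 1 | prepared 1)
p0g0, p1g1 = 0.97, 0.94
# Column-stochastic calibration matrix A: A[i, j] = P(read i | true j)
A = np.array([[p0g0, 1 - p1g1],
              [1 - p0g0, p1g1]])

# Hypothetical noisy counts observed on hardware
noisy_counts = {'0': 530, '1': 470}
shots = sum(noisy_counts.values())
p_noisy = np.array([noisy_counts['0'], noisy_counts['1']]) / shots

# Invert the calibration matrix to estimate the true distribution,
# then clip and renormalize to keep a valid probability vector.
p_est = np.linalg.solve(A, p_noisy)
p_est = np.clip(p_est, 0, None)
p_est /= p_est.sum()

print(f"mitigated estimate: P(0)={p_est[0]:.3f}, P(1)={p_est[1]:.3f}")
```

For many qubits the full calibration matrix grows exponentially, so practical schemes assume independent per-qubit errors or use more scalable mitigation methods.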
The samples generated through this process are crucial for evaluating the performance of the quantum generative model. Metrics discussed in the previous section, such as comparing the distribution of generated samples to the original data distribution using classical divergence measures or specific tests like the Maximum Mean Discrepancy (MMD), rely directly on having a representative set of samples drawn from the trained model.
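For instance, a biased squared-MMD estimate with an RBF kernel can be computed directly from two sample sets. The arrays below are hypothetical stand-ins for bitstring samples mapped to 0/1 vectors; the kernel bandwidth `sigma` is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate with an RBF (Gaussian) kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Hypothetical samples: 4-bit strings as 0/1 feature vectors
data_samples = rng.integers(0, 2, size=(200, 4)).astype(float)
model_close  = rng.integers(0, 2, size=(200, 4)).astype(float)
model_far    = np.ones((200, 4))  # degenerate model stuck on '1111'

print(f"MMD^2 (similar):  {rbf_mmd2(data_samples, model_close):.4f}")
print(f"MMD^2 (mismatch): {rbf_mmd2(data_samples, model_far):.4f}")
```

A well-trained model should drive the MMD between its samples and the data samples toward zero, while a collapsed model (like `model_far` here) shows a clearly larger value.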
Here's a conceptual code snippet illustrating sampling using a Qiskit-like structure:
# Conceptual example using Qiskit-like syntax
# Assume 'trained_circuit' is the quantum circuit for QCBM or QGAN Generator
# with parameters theta already optimized.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator # Using a simulator
# Or from qiskit_ibm_provider import IBMProvider for real hardware
# Define the number of samples required
num_samples = 2048
# Assume trained_circuit is a QuantumCircuit object
# Add measurements to all qubits
trained_circuit.measure_all()
# Choose a backend (simulator or real hardware)
# simulator = AerSimulator()
# backend = simulator
# Alternatively, for hardware:
# provider = IBMProvider()
# backend = provider.get_backend('ibm_brisbane') # Example backend
# For simulation
simulator = AerSimulator()
compiled_circuit = transpile(trained_circuit, simulator)
job = simulator.run(compiled_circuit, shots=num_samples)
result = job.result()
counts = result.get_counts(compiled_circuit)
# For hardware (conceptual)
# compiled_circuit = transpile(trained_circuit, backend)
# job = backend.run(compiled_circuit, shots=num_samples)
# result = job.result() # This call may block until the job completes
# counts = result.get_counts(compiled_circuit)
# Optional: Apply measurement error mitigation if using hardware
# 'counts' is a dictionary: {'0010': 150, '1101': 45, ...}
# Keys are the measured bitstrings (samples), values are frequencies.
print(f"Raw counts from backend: {counts}")
# Convert counts dictionary to a list of samples if needed
generated_samples = []
for bitstring, count in counts.items():
    generated_samples.extend([bitstring] * count)
print(f"Generated {len(generated_samples)} samples.")
# These samples can now be used for evaluation or downstream tasks.
# e.g., convert bitstrings '0110' to data points like images or features.
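Once you have the counts dictionary, the bitstring samples can be mapped to numeric form for evaluation or downstream use. The counts below are hypothetical placeholders in the same format Qiskit returns:

```python
import numpy as np

# Hypothetical counts, in the same dictionary format as above
counts = {'0010': 150, '1101': 45, '0110': 60}

# Expand counts into individual samples, then map each bitstring to a
# 0/1 feature vector (one entry per qubit) and to an integer label.
samples = [b for b, c in counts.items() for _ in range(c)]
features = np.array([[int(bit) for bit in b] for b in samples])
integers = np.array([int(b, 2) for b in samples])

print(features.shape)  # (255, 4)
print(integers[:3])    # first three samples as integers
```

One caveat: Qiskit orders bitstrings with qubit 0 as the rightmost character, so reverse each string before conversion if your data encoding assumes qubit 0 on the left.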
This process of executing the circuit and measuring is the standard way to extract the information learned by quantum generative models, turning the quantum state representation back into classical data samples that mimic the original dataset. Understanding the nuances of this sampling process, especially concerning noise and the number of shots, is important for effectively using and evaluating these quantum models.
© 2025 ApX Machine Learning