As we move from the general principles of variational algorithms discussed previously, we now examine how to structure parameterized quantum circuits (PQCs) to mimic the layered organization found in classical neural networks. This involves defining analogous concepts for "neurons" and "layers" within the quantum domain, forming the fundamental components of Quantum Neural Networks (QNNs).
Recall a classical neuron: it receives inputs, computes a weighted sum, and applies a non-linear activation function to produce an output. Translating this directly to quantum mechanics presents challenges. Quantum evolution is inherently linear (described by unitary operators), and measurement introduces non-linearity but also collapses the quantum state.
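To make the classical baseline concrete, here is a minimal sketch of the neuron just described: a weighted sum followed by a non-linear activation (sigmoid is used here purely for illustration). The function name and values are illustrative, not from a particular library.

```python
import math

def classical_neuron(inputs, weights, bias):
    """Weighted sum of inputs, then a non-linear activation (sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The non-linearity below is exactly what has no direct unitary analogue.
    return 1.0 / (1.0 + math.exp(-z))

print(classical_neuron([0.5, -1.2], [0.8, 0.3], 0.1))
```

The weighted sum is linear and maps naturally onto quantum operations; it is the activation step that a unitary circuit cannot reproduce directly, which motivates the measurement-based designs below.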
The most prevalent approach treats a parameterized quantum circuit itself as the core processing unit, acting somewhat like a complex neuron or even a small layer. Let's break down this PQC-based model:
We can visualize this fundamental unit as follows:
A conceptual diagram of a PQC-based processing unit in a QNN. Classical data x is encoded into ∣ϕ(x)⟩, processed by the parameterized circuit U(θ), and measured via observable M to yield a classical output ⟨M⟩.
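The pipeline in the diagram (encode x, apply U(θ), measure ⟨M⟩) can be sketched for a single qubit using only standard-library Python. Here the encoding and the trainable circuit are both RY rotations and the observable M is Pauli-Z; these specific choices are illustrative assumptions, not the only option.

```python
import math

def ry(angle, state):
    """Apply an RY rotation to a single-qubit state [amp0, amp1]."""
    c, s = math.cos(angle / 2), math.sin(angle / 2)
    a, b = state
    return [c * a - s * b, s * a + c * b]

def quantum_neuron(x, theta):
    """Encode x into |phi(x)>, apply U(theta), and return <Z>."""
    state = ry(x, [1.0, 0.0])     # data encoding |phi(x)> = RY(x)|0>
    state = ry(theta, state)      # parameterized circuit U(theta) = RY(theta)
    a, b = state
    return a * a - b * b          # <Z> = |amp0|^2 - |amp1|^2

# For this particular circuit, <Z> works out to cos(x + theta).
print(quantum_neuron(0.3, 0.5))
```

Note that the classical output ⟨M⟩ is a smooth function of both the data x and the parameters θ, which is what makes the unit trainable.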
The specific structure of the PQC U(θ), often referred to as the ansatz, is a significant design choice. It dictates the types of transformations the quantum "neuron" can perform and heavily influences the model's expressibility and trainability (including susceptibility to issues like barren plateaus, discussed in Chapter 4).
Just as classical neurons are grouped into layers, these PQC-based units can be arranged to form quantum layers. A quantum layer typically consists of multiple PQCs acting on the system's qubits.
Key aspects of quantum layer design include:
- How the system's qubits are partitioned among the PQC units in the layer.
- Whether the units carry distinct parameters or share a common set.
- Where entangling operations are placed: within each PQC unit, between units, or both.
- Which observables are measured to derive the layer's classical outputs.
Here's a conceptual view of a simple quantum layer composed of two PQC units acting on four qubits, possibly with shared parameters or entanglement between them:
Conceptual structure of a quantum layer applying PQCs (potentially with distinct parameters θ1, θ2 or shared parameters) to subsets of input qubits. Entangling operations might exist within or between the PQCs. Outputs are typically derived from measurements.
These quantum layers often form components within larger hybrid quantum-classical models. A typical pattern involves using input classical layers, followed by one or more quantum layers performing the core feature transformation, and finally output classical layers for post-processing and prediction.
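The classical-quantum-classical pattern can be sketched end to end. In this toy version, the quantum layer is replaced by its analytic expectation value cos(angle + θ) (the single-qubit RY circuit's ⟨Z⟩) so the hybrid structure stays visible without a simulator; all names and parameter choices are illustrative.

```python
import math

def classical_pre(x, w, b):
    """Input classical layer: affine map producing an encoding angle."""
    return w * x + b

def quantum_layer(angle, theta):
    """Quantum layer: encode angle, apply RY(theta), measure <Z> = cos(angle + theta)."""
    return math.cos(angle + theta)

def classical_post(z, v, c):
    """Output classical layer: sigmoid post-processing for a binary prediction."""
    return 1.0 / (1.0 + math.exp(-(v * z + c)))

def hybrid_model(x, params):
    w, b, theta, v, c = params
    return classical_post(quantum_layer(classical_pre(x, w, b), theta), v, c)

print(hybrid_model(0.5, (1.0, 0.0, 0.2, 2.0, 0.0)))
```

Every trainable parameter, classical (w, b, v, c) and quantum (θ), can then be updated jointly by the optimization loop.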
The design of these quantum neurons and layers, specifically the choice of PQC ansatz and measurement observables, directly impacts the function the QNN can learn. We optimize the parameters θ using gradient-based or gradient-free methods (as covered in Chapter 4) to minimize a cost function derived from the final measurement outcomes, aiming to solve tasks like classification or regression. We will now explore specific QNN architectures built from these fundamental units.
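For gradient-based optimization, one common option is the parameter-shift rule: for a gate generated by a Pauli operator (such as an RY rotation), the exact derivative of the measured expectation is obtained from two circuit evaluations at shifted parameter values. The sketch below uses the analytic expectation ⟨Z⟩ = cos(x + θ) of the single-qubit RY circuit as a stand-in for running the circuit; on hardware, each call would be an actual circuit execution.

```python
import math

def expectation(x, theta):
    """Stand-in for circuit evaluation: <Z> after RY(x) then RY(theta)."""
    return math.cos(x + theta)

def parameter_shift_grad(x, theta):
    """Exact gradient d<Z>/dtheta via the parameter-shift rule."""
    return 0.5 * (expectation(x, theta + math.pi / 2)
                  - expectation(x, theta - math.pi / 2))

# The rule recovers the analytic derivative -sin(x + theta).
print(parameter_shift_grad(0.3, 0.5))
```

Unlike finite differences, the shift of π/2 is not a small perturbation, so the rule is robust to the statistical noise of measurement-based expectation estimates.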
© 2025 ApX Machine Learning