arXiv:2603.11131v1 Announce Type: new
Abstract: While Quantum Convolutional Neural Networks (QCNNs) offer a theoretical paradigm for quantum machine learning, their practical implementation is severely bottlenecked by barren plateaus (the exponential vanishing of gradients) and by poor empirical accuracy compared to classical counterparts. In this work, we propose a novel QCNN architecture that combines localized cost functions with a hardware-efficient tensor-network initialization strategy to provably mitigate barren plateaus. We evaluate our scalable QCNN on the MNIST dataset and demonstrate a significant performance gain: by resolving the vanishing-gradient issue, our optimized QCNN achieves a classification accuracy of 98.7%, a substantial improvement over the 52.32% accuracy of the unmitigated baseline QCNN. Furthermore, we provide empirical evidence of a parameter-efficiency advantage, requiring $\mathcal{O}(\log N)$ fewer trainable parameters than equivalent classical CNNs to reach $>95\%$ accuracy at convergence. This work bridges the gap between theoretical quantum utility and practical application, providing a scalable framework for quantum computer vision tasks without succumbing to loss-landscape concentration.
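
As a rough illustration of the local-cost idea the abstract describes, below is a minimal sketch (assuming PennyLane; this is not the authors' implementation): a toy QCNN whose loss measures a single-qubit observable rather than a global projector, which is the property local-cost barren-plateau arguments rely on. The circuit details, the conv_layer/pool_layer decomposition, and the small-angle random initialization standing in for the paper's tensor-network strategy are all illustrative assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def conv_layer(theta, wires):
    # "Convolution": a shared rotation on every active qubit followed by
    # nearest-neighbour entanglers (translationally invariant weights).
    for w in wires:
        qml.RY(theta, wires=w)
    for w0, w1 in zip(wires[:-1], wires[1:]):
        qml.CNOT(wires=[w0, w1])

def pool_layer(theta, wires):
    # "Pooling": a controlled rotation from each discarded qubit onto its
    # kept neighbour, then halve the active register.
    kept, dropped = wires[::2], wires[1::2]
    for w_keep, w_drop in zip(kept, dropped):
        qml.CRZ(theta, wires=[w_drop, w_keep])
    return kept

@qml.qnode(dev)
def qcnn(params, x):
    # Angle-encode a downsampled feature vector (stand-in for MNIST input).
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    wires, layer = list(range(n_qubits)), 0
    while len(wires) > 1:
        conv_layer(params[layer, 0], wires)
        wires = pool_layer(params[layer, 1], wires)
        layer += 1
    # LOCAL cost: a single-qubit observable on the last surviving qubit,
    # rather than a global projector over all qubits.
    return qml.expval(qml.PauliZ(wires[0]))

def local_cost(params, x, label):
    # Squared margin loss against a binary label in {-1, +1}.
    return (qcnn(params, x) - label) ** 2

# Hypothetical usage with random data standing in for MNIST features.
n_layers = 3  # log2(n_qubits) conv/pool stages
params = np.array(0.01 * np.random.randn(n_layers, 2), requires_grad=True)
x = np.random.rand(n_qubits)
print(local_cost(params, x, label=1))
```

Note the design choice this sketch highlights: because each conv/pool stage halves the active register, the parameter count grows with the circuit depth, roughly $\mathcal{O}(\log N)$ in the number of input qubits, which is the scaling behind the parameter-efficiency claim.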
