LogHD: Robust Compression of Hyperdimensional Classifiers via Logarithmic Class-Axis Reduction

2025-11-06 20:00 GMT · aimagpro.com

arXiv:2511.03938v1 Announce Type: new
Abstract: Hyperdimensional computing (HDC) suits memory-, energy-, and reliability-constrained systems, yet the standard “one prototype per class” design requires $O(CD)$ memory (with $C$ classes and dimensionality $D$). Prior compaction reduces $D$ (feature axis), improving storage/compute but weakening robustness. We introduce LogHD, a logarithmic class-axis reduction that replaces the $C$ per-class prototypes with $n \approx \lceil \log_k C \rceil$ bundle hypervectors (alphabet size $k$) and decodes in an $n$-dimensional activation space, cutting memory to $O(D \log_k C)$ while preserving $D$. LogHD uses a capacity-aware codebook and profile-based decoding, and composes with feature-axis sparsification. Across datasets and injected bit flips, LogHD attains competitive accuracy with smaller models and higher resilience at matched memory. Under equal memory, it sustains target accuracy at roughly $2.5\times$-$3.0\times$ higher bit-flip rates than feature-axis compression; an ASIC instantiation delivers $498\times$ energy efficiency and $62.6\times$ speedup over an AMD Ryzen 9 9950X and $24.3\times$/$6.58\times$ over an NVIDIA RTX 4090, and is $4.06\times$ more energy-efficient and $2.19\times$ faster than a feature-axis HDC ASIC baseline.
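The class-axis idea, $n \approx \lceil \log_k C \rceil$ bundle hypervectors instead of $C$ prototypes, can be illustrated with a toy sketch. The code below is an assumption-laden reading of the abstract, not the authors' implementation: it fixes a binary alphabet ($k=2$), uses the naive bit pattern of the class index as the codebook (the paper's capacity-aware codebook is more sophisticated), and decodes by matching the $n$-dimensional activation profile against each class's signed codeword.

```python
import numpy as np

rng = np.random.default_rng(0)

C, D = 8, 2048  # classes, hypervector dimensionality
k = 2           # code alphabet size (assumption: binary for simplicity)
n = int(np.ceil(np.log(C) / np.log(k)))  # n = ceil(log_k C) bundles, here 3

# Toy codebook: class c -> its n binary digits, mapped {0,1} -> {-1,+1}.
codes = np.array([[(c >> i) & 1 for i in range(n)] for c in range(C)])
signs = 2 * codes - 1  # shape (C, n)

# Toy data: each class is a random bipolar prototype plus bit-flip noise.
protos = rng.choice([-1, 1], size=(C, D))
def sample(c, flip=0.1):
    mask = rng.random(D) < flip
    return protos[c] * np.where(mask, -1, 1)

# Training: every sample is bundled into all n bundle hypervectors,
# signed by the digits of its class's codeword. Memory is O(nD), not O(CD).
bundles = np.zeros((n, D))
for c in range(C):
    for _ in range(20):
        bundles += signs[c][:, None] * sample(c)

# Decoding: project the query onto the n bundles to get an activation
# profile, then pick the class whose signed codeword best matches it.
def classify(x):
    act = bundles @ x                   # n-dimensional activation profile
    return int(np.argmax(signs @ act))  # profile-based decode

acc = np.mean([classify(sample(c)) == c
               for c in range(C) for _ in range(25)])
```

With $C=8$ classes the model stores only $n=3$ bundle hypervectors (a $C/n \approx 2.7\times$ class-axis reduction at unchanged $D$), and on this easy synthetic task the profile decode still recovers the class labels.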