Interpretable ML Under the Microscope: Performance, Meta-Features, and the Regression-Classification Predictability Gap

2026-03-26 19:00 GMT

arXiv:2601.00428v2 Announce Type: replace
Abstract: As machine learning models are increasingly deployed in high-stakes domains, the need for interpretability has grown to meet strict regulatory and accountability constraints. Despite this interest, systematic evaluations of inherently interpretable models for tabular data remain scarce and often focus solely on aggregated performance. To address this gap, we evaluate sixteen interpretable methods, including Explainable Boosting Machines (EBMs), Symbolic Regression (SR), and Generalized Optimal Sparse Decision Trees, across 216 real-world tabular datasets. We assess predictive accuracy, computational efficiency, and generalization under distributional shifts. Moving beyond aggregate performance rankings, we further analyze how model behavior varies with dataset meta-features and operationalize these descriptors to study algorithm selection. Our analyses reveal a clear dichotomy: in regression tasks, models exhibit a predictable performance hierarchy dominated by EBMs and SR that can be inferred from dataset characteristics. In contrast, classification performance remains highly dataset-dependent with no stable hierarchy, showing that standard complexity measures fail to provide actionable guidance. Furthermore, we identify an “interpretability tax”, showing that models explicitly optimizing for structural sparsity incur significantly longer training times. Overall, these findings provide practical guidance for practitioners seeking a balance between interpretability and predictive performance, and contribute to a deeper empirical understanding of interpretable modeling for tabular data.
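To make the idea of operationalizing dataset meta-features for algorithm selection concrete, here is a minimal sketch in Python. The descriptors computed and the selection rule are illustrative assumptions, not the paper's actual meta-features or procedure; it only mirrors the abstract's finding that regression admits a predictable hierarchy while classification does not.

```python
import numpy as np

def meta_features(X, y):
    """Compute a few simple dataset descriptors (illustrative choices only)."""
    n_samples, n_features = X.shape
    return {
        "n_samples": n_samples,
        "n_features": n_features,
        # Dimensionality relative to sample size
        "dim_ratio": n_features / n_samples,
        # Crude standardized skewness of the target
        "target_skew": float(np.mean(((y - y.mean()) / y.std()) ** 3)),
    }

def suggest_model(mf, task):
    """Toy selection rule: per the abstract, regression performance is
    predictable from dataset characteristics (dominated by EBMs and SR),
    while classification is dataset-dependent with no stable hierarchy."""
    if task == "regression":
        # Hypothetical threshold on dimensionality, purely for illustration
        return "EBM" if mf["dim_ratio"] < 0.1 else "SymbolicRegression"
    return "no stable recommendation"

# Synthetic tabular data standing in for one of the benchmark datasets
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = X[:, 0] + 0.1 * rng.normal(size=500)

mf = meta_features(X, y)
print(suggest_model(mf, "regression"))       # dim_ratio = 12/500 -> "EBM"
print(suggest_model(mf, "classification"))   # -> "no stable recommendation"
```

The point of the sketch is the shape of the pipeline (descriptors in, recommendation out), not the specific rule: the abstract's result is precisely that such rules work for regression but that standard complexity measures fail to give actionable guidance for classification.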