In-Context Learning as Nonparametric Conditional Probability Estimation: Risk Bounds and Optimality

arXiv:2508.08673v2 Announce Type: replace Abstract: This paper investigates the expected excess risk of in-context learning (ICL) for multiclass classification. We formalize each task as a sequence of labeled examples followed by a query input; a pretrained model then estimates the query's conditional class probabilities. The expected excess risk is defined as the average truncated Kullback-Leibler (KL) divergence between the predicted and true conditional class distributions over a specified family of tasks. We establish a new KL-divergence-based oracle inequality for this risk in multiclass classification. This yields tight upper and lower bounds for transformer-based models, showing that the ICL estimator achieves the minimax optimal rate (up to logarithmic factors) for conditional probability estimation. On the technical side, our analysis introduces a novel method for controlling the generalization error via uniform empirical entropy. We further demonstrate that multilayer perceptrons (MLPs) can also perform ICL and attain the same optimal rate (up to logarithmic factors) under suitable assumptions, suggesting that effective ICL need not be exclusive to transformer architectures.
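For concreteness, a minimal sketch of the risk functional described in the abstract, in notation of our own choosing (the exact form of the truncation and of the task distribution is not given in the abstract): let $p_\tau(\cdot \mid x)$ be the true conditional class distribution for task $\tau$, let $\hat p_\theta(\cdot \mid x; S_\tau)$ be the pretrained model's estimate given the in-context sample $S_\tau$, and suppose the KL divergence is truncated at a level $T$. The expected excess risk would then take a form such as

$$
\mathcal{R}(\theta) \;=\; \mathbb{E}_{\tau}\,\mathbb{E}_{(S_\tau,\,x)}\!\left[\,\min\!\Big\{\mathrm{KL}\big(p_\tau(\cdot \mid x)\,\big\|\,\hat p_\theta(\cdot \mid x; S_\tau)\big),\, T\Big\}\right],
$$

with the outer expectation over the specified family of tasks and the inner expectation over the in-context examples and the query input.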

Original: https://arxiv.org/abs/2508.08673