Archives AI News

Black-Box On-Policy Distillation of Large Language Models

arXiv:2511.10643v1 Announce Type: cross Abstract: Black-box distillation creates student large language models (LLMs) by learning from a proprietary teacher model’s text outputs alone, without access to its internal logits or parameters. In this work, we introduce Generative Adversarial Distillation (GAD),…
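The abstract is truncated before any method details, so the following is only a generic illustration of the adversarial-distillation idea the name suggests, not the paper's algorithm: a "student" distribution over a toy vocabulary is trained to fool a per-token discriminator that tries to separate teacher outputs from student outputs. The whole setup (teacher distribution, learning rates, update rules) is hypothetical.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def kl(p, q):
    # KL(p || q), used only to monitor how close the student gets to the teacher
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical black-box teacher: a fixed distribution over a 4-token vocabulary.
# (Exact probabilities stand in for "text outputs" to keep the toy deterministic.)
p_teacher = [0.5, 0.3, 0.15, 0.05]

student_logits = [0.0] * 4   # student starts uniform
disc_logits = [0.0] * 4      # per-token discriminator score d(x)
lr = 0.2

kl_start = kl(p_teacher, softmax(student_logits))

for _ in range(2000):
    p_s = softmax(student_logits)
    # Discriminator step: ascend E_teacher[log sigma(d)] + E_student[log(1 - sigma(d))]
    for k in range(4):
        s = sigmoid(disc_logits[k])
        disc_logits[k] += lr * (p_teacher[k] * (1 - s) - p_s[k] * s)
    # Student step (non-saturating, policy-gradient form):
    # ascend J = E_student[log sigma(d(x))]; grad wrt logit k is p_s[k]*(r_k - baseline)
    rewards = [math.log(sigmoid(d)) for d in disc_logits]
    baseline = sum(p * r for p, r in zip(p_s, rewards))
    for k in range(4):
        student_logits[k] += lr * p_s[k] * (rewards[k] - baseline)

kl_end = kl(p_teacher, softmax(student_logits))
```

At the adversarial equilibrium the discriminator scores flatten (sigma = 1/2) and the student matches the teacher, which is why only the teacher's outputs, never its logits, enter the updates.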

ProbLog4Fairness: A Neurosymbolic Approach to Modeling and Mitigating Bias

arXiv:2511.09768v1 Announce Type: new Abstract: Operationalizing definitions of fairness is difficult in practice, as multiple definitions can be incompatible even though each is arguably desirable. Instead, it may be easier to directly describe algorithmic bias through ad hoc assumptions specific to a…

Rebellion: Noise-Robust Reasoning Training for Audio Reasoning Models

arXiv:2511.09682v1 Announce Type: new Abstract: Instilling reasoning capabilities in large models (LMs) using reasoning training (RT) significantly improves their performance. Thus, Audio Reasoning Models (ARMs), i.e., audio LMs that can reason, are becoming increasingly popular. However, no work has studied…

Echoing: Identity Failures when LLM Agents Talk to Each Other

arXiv:2511.09710v1 Announce Type: new Abstract: As large language model (LLM) based agents interact autonomously with one another, a new class of failures emerges that cannot be predicted from single-agent performance: behavioral drifts in agent-agent conversations (AxA). Unlike human-agent interactions,…

Cogent argument extensions are weakly admissible but not vice versa

arXiv:2511.09600v1 Announce Type: new Abstract: In this research note, we show the relationship between two non-admissible argumentation framework semantics: cogent and weakly admissible semantics. We prove that, while cogent extensions are weakly admissible, the converse is not true.
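The stated result can be written compactly as a one-way inclusion between the two semantics. The notation is assumed here, not taken from the note: cog(F) and wadm(F) denote the cogent and weakly admissible extensions of an argumentation framework F.

```latex
\forall F\ \forall E:\; E \in \mathrm{cog}(F) \implies E \in \mathrm{wadm}(F),
\qquad
\exists F\ \exists E:\; E \in \mathrm{wadm}(F) \,\wedge\, E \notin \mathrm{cog}(F).
```

That is, cogent semantics is a (strict) refinement of weakly admissible semantics.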