Quantized but Deceptive? A Multi-Dimensional Truthfulness Evaluation of Quantized LLMs
arXiv:2508.19432v1 Announce Type: new
Abstract: Quantization enables efficient deployment of large language models (LLMs) in resource-constrained environments by significantly reducing memory and computation costs. While quantized LLMs often maintain performance…
