SpecPipe: Accelerating Pipeline Parallelism-based LLM Inference with Speculative Decoding

arXiv:2504.04104v2 Announce Type: replace Abstract: The demand for large language model inference is rapidly increasing. Pipeline parallelism offers a cost-effective deployment strategy for distributed inference but suffers from high service latency. While incorporating speculative decoding into pipeline parallelism improves performance, it still faces the challenges of low hardware utilization and a narrow speculative window. Inspired by branch prediction in instruction pipelining, we introduce SpecPipe, which fills the pipeline with speculative tokens of a request step by step. By maximizing hardware utilization, SpecPipe ideally decodes one token per pipeline step. Specifically, SpecPipe comprises a dynamic speculative token tree and a pipelined inference framework. The tree dynamically accepts tokens from a speculative token source and outputs them to the inference pipeline. Since the speculative window is relaxed in our framework, a high-accuracy draft model is integrated without fine-tuning. The pipelined inference framework proceeds in node-wise computation, pruning propagation, and inter-node communication stages. We implement SpecPipe and a variant, SpecPipe-DB, with dynamic batching for single- and multi-request inference, respectively. On an 8-stage pipeline, SpecPipe improves time between tokens on diverse single-request workloads by $4.19\times$-$5.53\times$ over standard pipeline parallelism and by $2.08\times$-$2.38\times$ over prior tree-based speculative decoding methods. For multi-request workloads, SpecPipe-DB achieves $1.64\times$-$2.08\times$ higher throughput and $1.61\times$-$2.06\times$ lower time between tokens than vLLM.
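
To make the abstract's central data structure concrete, here is a minimal Python sketch of a dynamic speculative token tree: draft tokens are attached as they arrive from a speculative source, rejected branches are pruned, and the surviving frontier supplies candidates for the next pipeline step. All class and method names (`SpeculativeTokenTree`, `accept_draft`, `prune`, `next_batch`) are illustrative assumptions, not the paper's implementation or API.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TreeNode:
    """One speculative token plus links to its alternative continuations."""
    token: int
    parent: Optional["TreeNode"] = None
    children: list["TreeNode"] = field(default_factory=list)


class SpeculativeTokenTree:
    """Toy dynamic speculative token tree (illustrative only).

    Draft tokens are appended as a draft model proposes them; a whole
    subtree is pruned as soon as the target model rejects its root, so
    the pipeline is (ideally) never starved of candidate tokens.
    """

    def __init__(self, root_token: int):
        self.root = TreeNode(root_token)
        self.frontier = [self.root]  # leaves eligible to enter the pipeline

    def accept_draft(self, parent: TreeNode, candidates: list[int]) -> list[TreeNode]:
        """Attach draft candidates under `parent` and return the new leaves."""
        new_leaves = [TreeNode(t, parent=parent) for t in candidates]
        parent.children.extend(new_leaves)
        self.frontier = [n for n in self.frontier if n is not parent] + new_leaves
        return new_leaves

    def prune(self, rejected: TreeNode) -> None:
        """Drop a rejected branch and all of its descendants."""
        stack, doomed = [rejected], set()
        while stack:
            node = stack.pop()
            doomed.add(id(node))
            stack.extend(node.children)
        if rejected.parent is not None:
            rejected.parent.children.remove(rejected)
        self.frontier = [n for n in self.frontier if id(n) not in doomed]

    def next_batch(self, width: int) -> list[TreeNode]:
        """Candidate tokens to feed into the next pipeline step."""
        return self.frontier[:width]


if __name__ == "__main__":
    # Tiny walkthrough: propose two continuations, reject one, then see
    # which tokens would be fed to the next pipeline step.
    tree = SpeculativeTokenTree(root_token=101)
    a, b = tree.accept_draft(tree.root, candidates=[7, 9])
    tree.prune(b)  # the target model rejected token 9
    print([n.token for n in tree.next_batch(width=4)])  # -> [7]
```

The sketch only captures the tree bookkeeping; the paper's node-wise computation, pruning propagation, and inter-node communication stages would sit around calls like `next_batch` and `prune` in an actual pipelined deployment.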

2025-09-01 04:00 GMT · arxiv.org

Original: https://arxiv.org/abs/2504.04104