High Performance Inferencing for LLMs
Inferencing has become ubiquitous across cloud, regional, edge, and device environments, powering a wide spectrum of AI use cases spanning vision, language, and traditional machine learning applications. In recent years, Large Language Models (LLMs), initially developed for natural language tasks, have expanded to multimodal applications including vision, speech, reasoning, and planning, each demanding distinct service-level objectives (SLOs). Achieving high-performance inferencing for such diverse workloads requires both model-level and system-level optimizations.
This talk focuses on system-level optimization techniques that maximize token throughput and minimize cost per token while meeting user-experience metrics and maintaining inference service-provider efficiency. We review several recent innovations, including KV caching, Paged/Flash/Radix Attention, Speculative Decoding, and KV Routing, and explain how these mechanisms enhance performance by reducing latency, memory footprint, and compute overhead. These techniques are implemented in leading open-source inference frameworks such as vLLM, SGLang, Hugging Face TGI, and NVIDIA NIM, which form the backbone of large-scale public and private LLM serving platforms.
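As a minimal sketch of how these ideas surface in practice, the snippet below uses vLLM's offline Python API to batch a few prompts; the model name, prompts, and sampling parameters are illustrative placeholders, and actual deployment settings depend on hardware and SLOs. vLLM transparently applies paged KV caching and continuous batching behind this interface.

```python
# Minimal sketch: offline batched generation with vLLM.
# Model name and sampling parameters are illustrative, not a recommended config.
from vllm import LLM, SamplingParams

prompts = [
    "Explain KV caching in one sentence.",
    "Why does speculative decoding reduce latency?",
]
sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=64)

# Loading the model allocates the paged KV-cache pool from available GPU memory.
llm = LLM(model="facebook/opt-125m")

# generate() schedules the prompts as a continuous batch over the shared KV cache.
for output in llm.generate(prompts, sampling):
    print(output.prompt, "->", output.outputs[0].text)
```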
The use of GPU training, inference, and analysis clusters with Multi-Instance GPUs (MIG), and of federated models with QML applications, has now become practical.
Attendees will gain a practical understanding of the challenges in delivering scalable, low-latency LLM inference, and of the architectural and algorithmic innovations driving next-generation high-performance inference systems.
Co-sponsored by: eMerging Open Tech Foundation
Speaker(s): Dr. Ravishankar Ravindran
Agenda:
– Introduction to INGR with AIML & QIT working groups (Baw Chng + Prakash Ramchandran) – 10 min
– High Performance Inferencing for LLMs – by Dr. Ravishankar Ravindran (Tech. Director, eOTF – Advisory) – 60 min
– Q&A – 20 min
Virtual: https://events.vtools.ieee.org/m/508671
