Search All 2024 Events
97 Results

Page 6 of 9
Workshop
Probabilistic Proof State Compression: Optimizing LLM-Guided Formal Verification
Noor Rahim · Ali Rahim
Workshop
Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression
Lucas Relic · Roberto Azevedo · Yang Zhang · Markus Gross · Christopher Schroers
Workshop
MCUCoder: Adaptive Bitrate Learned Video Compression for IoT Devices
Ali Hojjat · Janek Haberer · Olaf Landsiedel
Workshop
Communication Compression for Tensor Parallel LLM Inference
Jan Hansen-Palmus · Alok Verma · Michael Truong Le
Workshop
The Trichromatic Strong Lottery Ticket Hypothesis: Neural Compression With Three Primary Supermasks
Ángel López García-Arias · Yasuyuki Okoshi · Hikari Otsuka · Daiki Chijiwa · Yasuhiro Fujiwara · Susumu Takeuchi · Masato Motomura
Workshop
Losslessly Compressible Neural Network Parameters
Matthew Farrugia-Roberts
Workshop
Perception Loss Function Adaptive to Rate for Learned Video Compression
Sadaf Salehkalaibar · Truong Buu Phan · João Atz Dick · Ashish Khisti · Jun Chen · Wei Yu
Workshop
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy
Rongzhi Zhang · Kuan Wang · Liyuan Liu · Shuohang Wang · Hao Cheng · Chao Zhang · Yelong Shen
Workshop
Rethinking LLM Memorization through the Lens of Adversarial Compression
Avi Schwarzschild · Zhili Feng · Pratyush Maini · Zachary Lipton · J. Zico Kolter
Workshop
GEAR: An Efficient Error Reduction Framework for KV Cache Compression in LLM Inference
Qingru Zhang · Souvik Kundu · Geonhwa Jeong · Zaoxing Liu · Tushar Krishna · Tuo Zhao
Workshop
NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks
Yongchang Hao · Yanshuai Cao · Lili Mou