Search All 2024 Events
 

10 Results

Poster
Wed 11:00 Mitigating Object Hallucination via Concentric Causal Attention
Yun Xing · Yiheng Li · Ivan Laptev · Shijian Lu
Workshop
Sat 15:45 Mitigating LLM Hallucinations via Conformal Abstention
Yasin Abbasi Yadkori · Ilja Kuzborskij · David Stutz · András György · Adam Fisch · Arnaud Doucet · Iuliya Beloshapka · Wei-Hung Weng · Yao-Yuan Yang · Csaba Szepesvari · Taylan Cemgil · Nenad Tomasev
Workshop
Sat 15:45 Mitigating Hallucination in Large Language Models with Explanatory Prompting
Alexander Braverman · Weitong Zhang · Quanquan Gu
Workshop
Mitigating Hallucination in Large Vision-Language Models via Modular Attribution and Intervention
Tianyun Yang · Ziniu Li · Juan Cao · Chang Xu
Workshop
Mitigating Object Hallucination in Large Vision-Language Models via Image-Grounded Guidance
Linxi Zhao · Yihe Deng · Weitong Zhang · Quanquan Gu
Workshop
HSCL-RL: Mitigating Hallucinations in Multimodal Large Language Models
Zichen Song · Sitan Huang
Poster
Thu 16:30 Hallo3D: Multi-Modal Hallucination Detection and Mitigation for Consistent 3D Content Generation
Hongbo Wang · Jie Cao · Jin Liu · Xiaoqiang Zhou · Huaibo Huang · Ran He
Workshop
Mitigating Hallucinations in LVLMs via Summary-Guided Decoding
Kyungmin Min · Minbeom Kim · Kang-il Lee · Dongryeol Lee · Kyomin Jung
Workshop
Incorporating Generative Feedback for Mitigating Hallucinations in Large Vision-Language Models
Ce Zhang · Zifu Wan · Zhehan Kan · Martin Q. Ma · Simon Stepputtis · Deva Ramanan · Ruslan Salakhutdinov · Louis-Philippe Morency · Katia Sycara · Yaqi Xie
Workshop
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
Mengfei Liang · Archish Arun · Zekun Wu · CRISTIAN VILLALOBOS · Jonathan Lutch · Emre Kazim · Adriano Koshiyama · Philip Treleaven