All 2024 Events
1272 Results (Page 2 of 106)
Workshop
Sun 10:21 The Future of Large Language Model Pre-training is Federated
Lorenzo Sani · Alexandru-Andrei Iacob · Zeyu Cao · Bill Marino · Yan Gao · Tomas Paulik · Wanru Zhao · William Shen · Preslav Aleksandrov · Xinchi Qiu · Nicholas Lane
Workshop
Sat 10:55 Expertise-Centric Prompting Framework for Financial Tabular Data Generation using Pre-trained Large Language Models
Subin Kim · Jungmin Son · Minyoung Jung · Youngjun Kwak
Workshop
Generalized Prompt Tuning: How to Use a Frozen Pre-Trained Univariate Time Series Foundation Model for Multivariate Time Series Prediction
Mingzhu Liu · Angela Chen · George H Chen
Workshop
Improving generalisability of 3D binding affinity models in low data regimes
Julia Milena Buhmann · Ward Haddadin · Alan Bilsland · Lukáš Pravda · Hagen Triendl
Poster
Wed 16:30 How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers
Xin Lu · Yanyan Zhao · Bing Qin · Liangyu Huo · Qing Yang · Dongliang Xu
Workshop
Measuring Pre-training Data Quality without Labels for Time Series Foundation Models
Songkang Wen · Vasilii Feofanov · Jianfeng Zhang
Workshop
Pre-Training Multimodal Hallucination Detectors with Corrupted Grounding Data
Spencer Whitehead · Jacob Phillips · Sean Hendryx
Workshop
Sat 10:50 Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare
Emre Can Acikgoz · Osman Batur İnce · Rayene Bech · Arda Boz · Ilker Kesen · Aykut Erdem · Erkut Erdem
Poster
Thu 16:30 Extracting Training Data from Molecular Pre-trained Models
Renhong Huang · Jiarong Xu · Zhiming Yang · Xiang Si · Xin Jiang · Hanyang Yuan · Chunping Wang · Yang Yang
Workshop
From One to Zero: RAG-IM Adapts Language Models for Interpretable Zero-Shot Predictions on Clinical Tabular Data
Sazan Mahbub · Caleb Ellington · Sina Alinejad · Kevin Wen · Yingtao Luo · Ben Lengerich · Eric Xing
Workshop
Sat 11:00 Optimizing Data Use for Efficient Pre-training
Danqi Chen