
Workshop: Table Representation Learning Workshop

Generating Data Augmentation Queries Using Large Language Models

Christopher Buss · Jasmin Mousavi · Mikhail Tokarev · Arash Termehchy · David Maier · Stefan Lee

Keywords: [ Heterogeneous DBMS ] [ Information Integration ] [ Applied ML and AI for data management ] [ data integration ] [ Federated DBMS ] [ Large language models ]


Users often want to augment entities in their datasets with relevant information from external sources. As many external sources are accessible only via keyword-search interfaces, a user usually has to formulate a keyword query manually to extract relevant information for each entity. This is challenging, as many data sources contain numerous tuples, only a small fraction of which may be relevant. Moreover, different datasets may represent the same information in distinct forms and under different terms. In such cases, it is difficult to formulate a query that precisely retrieves information relevant to a specific entity. Current methods for information enrichment rely mainly on resource-intensive manual effort to formulate queries that discover relevant information. However, it is often important for users to get initial answers quickly and without a substantial investment of resources (such as human attention). We propose a progressive approach to discovering entity-relevant information from external sources with minimal expert intervention. It leverages end users' feedback to progressively learn how to retrieve information relevant to each entity in a dataset from external data sources. To bootstrap performance, we use a pre-trained large language model (LLM) to produce rich representations of entities. We evaluate the use of parameter-efficient techniques for aligning the LLM's representations with our downstream task of online query-policy learning.
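The core loop the abstract describes can be sketched as follows. This is a hypothetical, minimal illustration, not the authors' method: it uses a toy bag-of-words featurizer where the paper uses rich pre-trained LLM representations, and a simple perceptron-style update where the paper evaluates parameter-efficient alignment techniques. All names (`VOCAB`, `featurize`, `OnlineQueryPolicy`) are invented for this sketch.

```python
# Hypothetical sketch of online query-policy learning from user feedback.
VOCAB = ["jaguar", "car", "price", "habitat", "diet", "os"]

def featurize(query):
    """Toy stand-in for an entity/query representation: a bag-of-words
    vector over a tiny vocabulary. (The paper proposes pre-trained LLM
    embeddings instead.)"""
    words = set(query.split())
    return [1.0 if w in words else 0.0 for w in VOCAB]

class OnlineQueryPolicy:
    """Perceptron-style policy learned online from binary user feedback
    on whether a keyword query retrieved entity-relevant results."""
    def __init__(self, dim):
        self.w = [0.0] * dim

    def score(self, query):
        return sum(wi * xi for wi, xi in zip(self.w, featurize(query)))

    def choose(self, candidates):
        # Issue the candidate query the policy currently scores highest.
        return max(candidates, key=self.score)

    def update(self, query, relevant):
        # Mistake-driven update: only adjust the weights when the
        # prediction disagrees with the user's relevance feedback.
        predicted = self.score(query) > 0
        if predicted != relevant:
            sign = 1.0 if relevant else -1.0
            x = featurize(query)
            self.w = [wi + sign * xi for wi, xi in zip(self.w, x)]

# Toy feedback loop: the simulated user marks a query relevant iff it
# targets the animal sense of the ambiguous entity "Jaguar".
policy = OnlineQueryPolicy(dim=len(VOCAB))
candidates = ["jaguar car price", "jaguar habitat diet", "jaguar os"]
for _ in range(3):                      # a few rounds of feedback
    for q in candidates:
        policy.update(q, relevant=("habitat" in q))
print(policy.choose(candidates))        # -> jaguar habitat diet
```

After a few rounds of feedback, the policy separates queries that retrieved relevant results from those that did not, without any expert-written query templates, which is the "progressive, minimal-intervention" property the abstract emphasizes.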
