

Poster in Workshop: NeurIPS 2023 Workshop on Machine Learning for Creativity and Design

Contextual Alchemy: A Framework for Enhanced Readability through Cross-Domain Entity Alignment

Simra Shahid · Nikitha Srikanth · Surgan Jandial · Balaji Krishnamurthy

Sat 16 Dec 1:30 p.m. PST — 2:30 p.m. PST

Abstract:

Prior to the development of Large Language Models (LLMs), the pursuit of creative writing or content adjustment mainly focused on tailoring tonality, style, and lexicon to suit reader preferences. In addition, there have been frameworks aimed at simplification, like 'Explain it to me like I'm five', and at targeted explanation, like 'Explain it to me like I'm a scientist'. In this work, we present Contextual Alchemy, a framework that identifies examples and their context in a document and suggests alternate examples for different topics of interest, times, and regions. Consider that you are reading a document that mentions the Magnavox Odyssey. Such an example does not resonate with all readers, and it may lose relevance over time. Our framework aims to retrieve other replaceable entities that fit a similar context; for example, in the sports domain, Reebok has faced a similar outcome to the Magnavox Odyssey. In this manner, our work utilises LLMs to enhance readability by adapting entities and context within a document to align closely with varied reader interests, ensuring reading is more engaging, relatable, and factually consistent for diverse readers.
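To make the idea concrete, below is a minimal sketch of the kind of LLM prompting the abstract describes: given a document, a source entity, and a target domain of interest, ask a chat-style model for a context-matched replacement entity. The `call_llm` helper, the `suggest_replacement` function, and the prompt wording are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: replace an entity in a document with a
# context-matched entity from a reader's domain of interest via an LLM.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-style LLM client.

    Plug in your preferred API call here; this sketch deliberately avoids
    committing to a specific provider or SDK.
    """
    raise NotImplementedError("connect to an LLM client of your choice")


def suggest_replacement(document: str, entity: str, target_domain: str) -> str:
    """Ask the LLM for an entity from `target_domain` that played a similar
    role, with a similar outcome, to `entity` in the document's context."""
    prompt = (
        f"The document below mentions '{entity}'.\n\n"
        f"Document:\n{document}\n\n"
        f"Suggest one well-known entity from the {target_domain} domain that "
        f"faced a similar outcome in a similar context, so it could replace "
        f"'{entity}' for a reader interested in {target_domain}. "
        f"Answer with the entity name only."
    )
    return call_llm(prompt).strip()


# Example usage mirroring the abstract's scenario (result shown is the
# abstract's own example, not a guaranteed model output):
# suggest_replacement(doc_text, "Magnavox Odyssey", "sports")  # e.g. "Reebok"
```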
