Poster

Turning Indirect Knowledge into Direct Demonstrations for Computer Agents at Scale

Tianyue Ou · Frank F. Xu · Aman Madaan · Jiarui Liu · Robert Lo · Abishek Sridhar · Sudipta Sengupta · Dan Roth · Graham Neubig · Shuyan Zhou


Abstract:

LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting). However, acquiring large-scale, direct demonstrations for these agents through exploration or reinforcement learning is costly, and the resulting datasets often lack comprehensive coverage, mainly due to the difficulty of setting up environments for each task. On the other hand, there is abundant knowledge that may indirectly assist task completion, such as online tutorials created for human consumption. In this work, we present Synatra, an approach that effectively transforms this indirect knowledge into direct supervision at scale. We define types of indirect knowledge and carefully study the available sources for obtaining it, methods to encode the structure of direct demonstrations, and finally methods to transform indirect knowledge into direct demonstrations. We use 50k such synthetically created demonstrations to fine-tune a 7B CodeLlama, and demonstrate that the resulting agent surpasses all comparably sized models on three web-based task benchmarks (Mind2Web, MiniWoB++, and WebArena), and also surpasses GPT-3.5 on MiniWoB++ and Mind2Web. In addition, each synthetic demonstration costs only $0.025, roughly 3% of the cost of a human demonstration, yet we show that the synthetic demonstrations can be more effective than an equal number of human demonstrations. Data and code are available in the OpenReview supplementary material.
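To make the transformation step more concrete, the sketch below shows one plausible way to prompt an LLM to rewrite a human-oriented tutorial into a structured agent demonstration. This is a minimal illustration, not the paper's actual pipeline: the function name `tutorial_to_demo`, the prompt wording, the JSON output schema, and the model choice are all assumptions made for the example.

```python
# Hypothetical sketch: converting indirect knowledge (a how-to tutorial written
# for humans) into a direct demonstration (a task intent plus a stepwise action
# trajectory). All names, prompt text, and the model choice are illustrative
# assumptions, not the authors' actual Synatra pipeline.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """You are given a web tutorial written for humans.
Rewrite it as a demonstration for a web agent: a short task intent followed
by a list of low-level browser actions (click, type, select), each grounded
in a plausible page element.

Tutorial:
{tutorial}

Return only JSON with keys "intent" and "actions"."""


def tutorial_to_demo(tutorial_text: str, model: str = "gpt-3.5-turbo") -> dict:
    """Convert one tutorial into a synthetic demonstration trajectory."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": PROMPT_TEMPLATE.format(tutorial=tutorial_text)}
        ],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    demo = tutorial_to_demo(
        "To schedule a meeting, open the calendar, click 'New event', "
        "enter a title, pick a time slot, and press Save."
    )
    print(json.dumps(demo, indent=2))
```

Running such a prompt over a large corpus of tutorials would yield demonstrations in a uniform, machine-consumable format suitable for fine-tuning; the paper's method additionally encodes the structure of direct demonstrations, which this sketch omits.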
