
Workshop: Instruction Tuning and Instruction Following

Knowledge Augmented Instruction Tuning for Zero-shot Animal Species Recognition

Zalan Fabian · Zhongqi Miao · Chunyuan Li · Yuanhan Zhang · Ziwei Liu · Andres Hernandez · Pablo Arbelaez · Andrés Link · Andrés Montes-Rojas · Rafael Escucha · Laura Siabatto · Rahul Dodhia · Juan Lavista Ferres

Keywords: [ Instruction Tuning ] [ vision-language models ] [ AI conservation ] [ zero-shot classification ]


Due to deteriorating environmental conditions and increasing human activity, conservation efforts directed towards wildlife are crucial. Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe. Supervised learning techniques have been successfully deployed to analyze such imagery; however, training them requires annotations from experts. Reducing the reliance on costly labelled data therefore has immense potential for developing large-scale wildlife tracking solutions with markedly less human labor. In this work, we propose a novel zero-shot species classification framework that leverages multimodal foundation models. In particular, we instruction-tune vision-language models to generate detailed visual descriptions of camera trap images using terminology similar to that of experts. Then, we match the generated caption against an external knowledge base of descriptions to determine the species in a zero-shot manner. We investigate techniques for building instruction tuning datasets for detailed animal description generation and propose a novel knowledge augmentation technique to enhance caption quality. We demonstrate the performance of our proposed method on a new camera trap dataset collected in the Magdalena Medio region of Colombia.
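The caption-matching step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the species entries, the example caption, and the bag-of-words cosine similarity (a stand-in for the learned text embeddings a real system would use) are all hypothetical, chosen only to show how a generated description is matched against a knowledge base of expert-style descriptions.

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity (stand-in for a text embedding model)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical external knowledge base: species -> expert-style visual description.
knowledge_base = {
    "ocelot": "medium-sized spotted cat with dark rosettes and a long ringed tail",
    "collared peccary": "pig-like mammal with coarse dark fur and a pale collar across the shoulders",
}

def classify(caption: str) -> str:
    """Zero-shot label: the species whose description best matches the caption."""
    return max(knowledge_base, key=lambda sp: cosine_sim(caption, knowledge_base[sp]))

# Caption as it might be generated by the instruction-tuned vision-language model.
caption = "a spotted cat with rosettes and a long tail walking at night"
print(classify(caption))  # prints "ocelot"
```

No training is needed to add a species: appending a new description to the knowledge base immediately makes that species a candidate label, which is what makes the framework zero-shot.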
