Poster
in
Workshop: NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems

How to Recycle: General Vision-Language Model without Task Tuning for Predicting Object Recyclability

Eliot Park · Eddy Pan · Shreya Johri · Pranav Rajpurkar


Abstract:

Waste segregation and recycling play a crucial role in fostering environmental sustainability. However, discerning whether a material is recyclable poses a formidable challenge, primarily because recycling guidelines are inadequate for the diverse spectrum of objects and their varying conditions. We investigated the role of vision-language models in addressing this challenge. We curated a dataset consisting of more than 1,000 images across 11 disposal categories for optimal discarding and assessed the applicability of general vision-language models for recyclability classification. Our results show that the Contrastive Language-Image Pre-training (CLIP) model, which is pretrained to understand the relationship between images and text, demonstrated remarkable performance on the zero-shot recyclability classification task, with an accuracy of 89%. Our results underscore the potential of general vision-language models in addressing real-world challenges, such as automated waste sorting, by harnessing the inherent associations between visual and textual information.
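The zero-shot classification described above works by embedding the image and one text prompt per category into a shared space, then picking the category whose text embedding is most similar to the image embedding. A minimal sketch of that scoring step is shown below; the label names and the 4-dimensional toy embeddings are purely illustrative (a real pipeline would obtain embeddings from CLIP's image and text encoders, e.g. via the `transformers` or `open_clip` libraries), and are not taken from the paper.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """CLIP-style zero-shot scoring: L2-normalize the embeddings,
    compute cosine similarity between the image and each label's
    text prompt, and return the best-matching label."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb  # one cosine similarity per label
    return labels[int(np.argmax(sims))]

# Toy example with hypothetical 4-d embeddings (real CLIP uses 512+ dims).
labels = ["recyclable", "not recyclable"]
text_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0]])
image_emb = np.array([0.9, 0.1, 0.0, 0.0])  # lies closest to "recyclable"
print(zero_shot_classify(image_emb, text_embs, labels))  # prints "recyclable"
```

Because the text prompts are ordinary strings, new disposal categories can be added by writing new prompts, with no task-specific fine-tuning, which is what makes the zero-shot setting attractive for waste sorting.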
