Deep learning architectures lead the state of the art in several computer vision, natural language processing, and reinforcement learning tasks thanks to their ability to extract multi-level representations without human engineering. A model's performance is strongly affected by the amount of labeled data used in training, so novel approaches such as self-supervised learning (SSL) extract the supervisory signal from unlabeled data instead. Although SSL reduces the dependency on human annotations, two main drawbacks remain. First, training a large-scale model from scratch requires high computational resources. Second, knowledge from an SSL model is commonly transferred to a target model through fine-tuning, which forces both models to share the same parameters and architecture and makes the transfer task-dependent. This paper explores how SSL can benefit from knowledge distillation to build an efficient, architecture-independent training framework. The experimental design compares an SSL algorithm trained from scratch against one boosted by knowledge distillation in a teacher-student paradigm, using the video-based human action recognition dataset UCF101. Results show that knowledge distillation accelerates the convergence of the network and removes the reliance on a specific model architecture.
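The abstract refers to distilling knowledge from a teacher to a student network. As a minimal illustrative sketch (not the authors' implementation, and independent of the video models or UCF101 setup used in the paper), the standard soft-target distillation loss compares temperature-softened teacher and student predictions:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence from the teacher's softened predictions to the student's,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    eps = 1e-12  # avoid log(0)
    kl = np.sum(p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps)),
                axis=-1)
    return (T ** 2) * kl.mean()
```

Because the loss only consumes logits, the teacher and student are free to have different architectures, which is the architecture-independence the abstract highlights. When the student matches the teacher exactly, the loss is zero; any mismatch yields a positive penalty.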
Author Information
Fernando Camarena (Tecnologico de Monterrey)
Miguel Gonzalez-Mendoza (Tecnologico de Monterrey)
Miguel González Mendoza holds a PhD and a Postdoc in Artificial Intelligence from INSA and LAAS-CNRS Toulouse, France (2003 and 2004, respectively). Since 2004 he has worked as a research professor at Tecnologico de Monterrey, Mexico. His research focuses on machine learning, the semantic web, and big data applications, areas in which he has supervised 9 PhD and 21 MSc theses, published more than 100 peer-reviewed scientific publications, participated in and led more than 20 national (CONACYT-funded) and international (European-funded) research and innovation projects, and chaired 4 international congresses. He served as President of the Mexican Society for Artificial Intelligence (2017-2018) and has been a member of the Mexican National Research System (SNI) since 2006, rank II since January 2016. He headed the Graduate Programs in Computer Science at Tecnologico de Monterrey, Mexico, from 2005 to 2016, and was invited as a Young Scientist to the World Economic Forum for New Champions in Tianjin, China, in September 2012.
Leonardo Chang (Tecnologico de Monterrey)
Neil Hernandez-Gress (Tecnologico de Monterrey)