Self-supervised learning is a great way to extract training signals from massive amounts of unlabelled data and to learn good representations that facilitate downstream tasks where it is expensive to collect task-specific labels. This tutorial will focus on two major approaches to self-supervised learning: self-prediction and contrastive learning. Self-prediction refers to self-supervised training tasks where the model learns to predict a portion of the available data from the rest. Contrastive learning aims to learn a representation space in which similar data samples stay close to each other while dissimilar ones are far apart, by constructing similar and dissimilar pairs from the dataset. This tutorial will cover methods on both topics across various applications, including vision, language, video, multimodal learning, and reinforcement learning.
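To make the contrastive objective concrete, below is a minimal PyTorch sketch of an InfoNCE-style (NT-Xent) loss of the kind used by methods such as SimCLR. The function name, the temperature default, and the two-augmented-views batch setup are illustrative assumptions for this sketch, not material taken from the tutorial itself.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_i, z_j, temperature=0.1):
    """InfoNCE / NT-Xent contrastive loss over a batch of positive pairs.

    z_i, z_j: (N, D) embeddings of two augmented views of the same N samples;
    (z_i[k], z_j[k]) is a positive pair, every other pairing is a negative.
    """
    n = z_i.shape[0]
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)  # (2N, D), unit length
    sim = z @ z.t() / temperature                          # (2N, 2N) scaled cosine sims
    sim.fill_diagonal_(float("-inf"))                      # exclude self-similarity
    # Row k's positive sits N rows away: view-i row k pairs with view-j row k.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```

In a SimCLR-style loop one would feed two random augmentations of the same batch through the encoder (and projection head) to get z_i and z_j, then backpropagate this loss; the temperature controls how sharply hard negatives are weighted. The encoder and augmentation functions are placeholders left to the reader.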
Schedule
Mon 5:00 p.m. - 5:08 p.m. | Intro to self-supervised learning (Intro) | Lilian Weng
Mon 5:08 p.m. - 5:17 p.m. | Early Work (Talk) | Jong Wook Kim
Mon 5:17 p.m. - 5:35 p.m. | Methods (Talk) | Lilian Weng
Mon 5:35 p.m. - 5:45 p.m. | Q&A
Mon 5:45 p.m. - 5:55 p.m. | Break
Mon 5:55 p.m. - 6:18 p.m. | Pretext tasks (vision) (Talk) | Jong Wook Kim
Mon 6:18 p.m. - 6:28 p.m. | Q&A
Mon 6:28 p.m. - 6:38 p.m. | Break
Mon 6:38 p.m. - 6:46 p.m. | Pretext tasks (Talk) | Jong Wook Kim
Mon 6:46 p.m. - 7:11 p.m. | Techniques and Conclusion (Talk) | Lilian Weng · Jong Wook Kim
Mon 7:11 p.m. - 7:21 p.m. | Q&A