1. PART 1 - Introduction to Unsupervised Learning: Alex Graves (00:00)
2. Types of Learning (00:24)
3. Why Learn Without a Teacher? (03:10)
4. Transfer Learning (05:46)
5. The Cherry on the Cake (07:23)
6. Example (09:00)
7. Supervised Learning (11:25)
8. Unsupervised Learning (12:18)
9. Density Modelling (13:34)
10. Where to Look (14:28)
11. Problems with Density Modelling (16:19)
12. Generative Models (18:37)
13. Autoregressive Models (19:53)
14. The Chain Rule for Probabilities (20:10)
15. Autoregressive Networks (22:12)
16. Recurrent Neural Network Language Models (24:14)
17. Advantages of Autoregressive Models (25:25)
18. Disadvantages of Autoregressive Models (26:01)
19. Language Modelling (28:06)
20. WaveNets (29:06)
21. PixelRNN - Model (30:18)
22. PixelRNN - Samples (31:48)
23. Conditional PixelCNN (32:18)
24. Autoregressive over slices, then pixels within a slice (33:04)
25. Video Pixel Network (VPN) (34:44)
26. Handwriting Synthesis (34:55)
27. Autoregressive Mixture Models (35:10)
28. Representation Learning (35:55)
29. Unsupervised Representations (38:41)
30. Reading the Latent Language (39:25)
31. Autoencoder (40:12)
32. Variational AutoEncoder (40:43)
33. Learn the Dataset, Not the Datapoints (41:44)
34. Binarized MNIST linear classification results (43:22)
35. Mutual Information (43:45)
36. Contrastive Predictive Coding (44:39)
37. Speech - LibriSpeech (46:13)
38. Images - ImageNet (46:29)
39. Unsupervised Reinforcement Learning (46:38)
40. Auxiliary Tasks (46:49)
41. UNREAL Agent (47:25)
42. Unsupervised RL Baselines (47:27)
43. Intrinsic Motivation (48:01)
44. Curious Agents (48:28)
45. Curiouser and Curiouser... (49:03)
46. Empowered Agents (49:53)
47. Conclusions (51:01)
48. PART 2 - Marc'Aurelio Ranzato (52:20)
49. Overview (52:58)
50. Learning Representations: Continuous Case (53:28)
51. Learning Visual Representations (54:48)
52. Unsup. Feature Learning in Vision (55:13)
53. The Vision Architecture (58:15)
54. Self-Supervised Learning (58:38)
55. SSL on Static Images: Example (59:30)
56. Pascal VOC Detection (1:00:39)
57. SSL on Static Images: Other Examples (1:01:22)
58. SSL on Videos: Example (1:01:50)
59. UCF101 Action Recognition (1:02:59)
60. SSL: Other Examples (1:03:16)
61. Learning by Clustering (1:03:46)
62. ImageNet Classification (1:05:19)
63. Conclusions on Unsupervised Learning of Visual Features (1:05:46)
64. Vision <-> NLP (1:06:46)
65. Unsup. Feature Learning in NLP (1:07:45)
66. The Meaning of a Word Is Determined by Its Context (1:11:16)
67. Linguistic Regularities in Word Vector Space (1:12:06)
68. Recap word2vec (1:12:32)
69. Representing Sentences (1:12:57)
70. BERT (1:13:17)
71. BERT: Training (1:15:30)
72. GLUE Benchmark (11 tasks) (1:16:32)
73. Conclusions on Learning Representation from Text (1:17:11)
74. Generative Models: Vision (1:18:24)
75. Generative Models: Text (1:19:34)
76. Learning to Map (1:21:06)
77. Why Learning to Map (1:21:24)
78. Vision: Cycle-GAN (1:21:42)
79. Unsupervised Machine Translation (1:24:09)
80. Unsupervised Word Translation (1:24:57)
81. Results on Word Translation (1:27:04)
82. Naive Application of MUSE (1:27:38)
83. Method (1:28:06)
84. Adding Language Modeling (1:29:34)
85. NMT: Sharing Latent Space (1:30:46)
86. Experiments on WMT (1:31:21)
87. Distant & Low-Resource Language Pair: En-Ur (1:32:11)
88. Conclusion on Unsupervised Learning to Translate (1:32:48)
89. Challenge #1: Metrics & Tasks (1:33:22)
90. Challenge #2: General Principle (1:35:04)
91. Challenge #3: Modeling Uncertainty (1:39:51)
92. The Big Picture (1:40:32)
93. Thank You (1:42:57)
94. Q & A (1:43:47)