CHAPTERS
1. Invited Talk by Dr Noah Goodman (12:08)
2. [Invited Talk: How we come to know] (13:37)
3. The problem of induction (13:58)
4. Concept learning (14:38)
5. A powerful learning mechanism to go from examples to concepts (15:55)
6. Bayes' rule (16:56)
7. Rule hypotheses (17:17)
8. Grammar gives "language of thought" for rules (18:25)
9. Example: concept learning (19:15)
10. The problem of induction (20:09)
11. "Ratchet" (22:16)
12. Learning from language (23:08)
13. Results (24:16)
14. Modeling (26:53)
15. Modeling results (29:15)
16. What language? (30:37)
17. Baseline comparison (31:32)
18. Reference games are a tremendous source of grounded language data (34:10)
19. Hypothesis (35:23)
20. Generics (36:48)
21. Formalizing generics (39:46)
22. Prior elicitation (42:13)
23. Interpretation data (43:36)
24. Model comparison (44:24)
25. Endorsement data + model (46:06)
26. Language for transfer (47:34)
27. Q & A (50:12)
28. ORAL PRESENTATIONS (58:48)
29. [Paper: Meta-Learning Update Rules for Unsupervised Representation Learning] (59:11)
30. Unsupervised representation learning (59:28)
31. Hand Design (1:00:19)
32. Objective Mismatch (1:00:31)
33. Deep learning, Meta learning (1:02:17)
34. Design goal (1:02:44)
35. Supervised learning loop (1:03:22)
36. Unsupervised learning loop (1:03:45)
37. Meta-learning (1:04:01)
38. Meta-learning for unsupervised learning (1:04:40)
39. Learned Update Rule Architecture (simplified) (1:05:36)
40. Meta-loss: generalization from few-shot training (1:06:59)
41. Gradients through unrolled training (1:07:22)
42. Meta-Optimize (1:07:49)
43. Results (1:07:56)
44. Evaluating the Learning Rule (1:09:16)
45. Generalization to New Architectures (1:09:26)
46. Generalization to New Modalities (1:10:47)
47. Conclusion (1:11:38)
48. Q & A (1:12:19)
49. [Paper: Temporal Difference Variational Auto-Encoder] (1:16:59)
50. Agent's model of the world (1:17:12)
51. Agent's model of temporal data (1:18:32)
52. Desiderata of agent models of data (1:19:19)
53. Forward Latent Model (1:20:23)
54. Belief State (1:21:35)
55. Intuitively building the algorithm (1:22:31)
56. Training the belief state network (1:22:39)
57. Temporal Difference (1:23:57)
58. Relation between t1 and t2 (1:24:40)
59. Grounding the state (1:25:02)
60. Model components in practice (1:25:38)
61. DeepMind Lab (1:26:49)
62. Autoregressive model (1:27:05)
63. 3D mazes (1:28:23)
64. Conclusion (1:30:17)
65. Q & A (1:31:14)
66. [Paper: Transferring Knowledge across Learning Processes] (1:33:52)
67. An AI should exploit prior knowledge maximally (1:34:04)
68. Transfer learning uses a source model (1:34:24)
69. Transfer knowledge across learning processes (1:34:59)
70. Meta-learning beyond few-shot learning (1:35:11)
71. Leap: inductive bias over learning processes (1:35:57)
72. Leap takes trajectory length into account... (1:36:15)
73. Converges to a locally Pareto optimal solution (1:37:25)
74. Leap pulls initialization forward along trajectories (1:37:39)
75. First-order meta-gradient for free (well, almost) (1:38:27)
76. Multi-shot Omniglot (1:39:03)
77. Multi-CV (1:40:42)
78. Atari (1:41:07)
79. Leap (1:42:05)
80. Q & A (1:42:42)