CHAPTERS
1. Welcome back to ICML 2019 presentations. This session on Deep Learning Architectures... (00:00)
2. Paper: Graph Matching Networks for Learning the Similarity of Graph Structured Objects (05:42)
3. Graph-structured data appear in many applications (05:48)
4. Finding similar graphs (07:06)
5. The binary function similarity search problem (07:55)
6. Most existing approaches (10:12)
7. Different graph similarity estimation paradigms (11:05)
8. Graph similarity learning (11:44)
9. Learning graph embeddings with Graph Neural Nets (12:36)
10. Graph embedding model details (13:23)
11. Graph Matching Networks (14:41)
12. Other variants (17:14)
13. Experiments (18:06)
14. Synthetic task: graph edit distance learning (19:20)
15. Results on binary function similarity search (19:53)
16. More ablation studies (21:05)
17. Learned attention patterns (21:30)
18. Conclusions and future directions (22:28)
19. Q&A (23:26)
20. Paper: BayesNAS: A Bayesian Approach for Neural Architecture Search (24:49)
21. What? (25:43)
22. Why? (26:13)
23. How? (27:20)
24. Byproduct (28:39)
25. Experiment (28:49)
26. Conclusion and future work (29:26)
27. Paper: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (30:40)
28. Set-input problems and Deep Sets (Zaheer et al., 2017) (30:51)
29. Attention-based set operations (31:47)
30. Set Transformer - building blocks (32:31)
31. Set Transformer - architecture (34:02)
32. Experiments (34:47)
33. Conclusion (35:28)
34. Paper: Shallow-Deep Networks: Understanding and Mitigating Network Overthinking (36:05)
35. What is overthinking? (36:19)
36. Do deep neural networks overthink too? (36:40)
37. How do we detect overthinking? (37:07)
38. The SDN modification (37:30)
39. The wasteful effect of overthinking (38:16)
40. The destructive effect of overthinking (38:55)
41. The destructive effect causes disagreement (39:14)
42. Implications (39:48)
43. Thank you! (40:07)
44. Paper: Graph U-Nets (40:36)
45. IMAGE VS. GRAPH (40:45)
46. U-NET ON GRAPH (41:19)
47. GRAPH POOLING LAYER (GPOOL) (41:57)
48. GRAPH UN-POOLING LAYER (GUNPOOL) (42:52)
49. GRAPH U-NET (43:14)
50. NETWORK REPRESENTATION LEARNING RESULTS (43:27)
51. Paper: SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver (45:00)
52. Integrating deep learning and logic (45:56)
53. This talk is not about... (48:19)
54. This talk is about (49:09)
55. Review of SAT problems (49:34)
56. MAXSAT Problem (50:27)
57. SATNet: MAXSAT SDP as a layer (51:55)
58. Fast solution to MAXSAT SDP approximation (53:42)
59. Differentiate through the optimization problem (54:58)
60. SATNet: MAXSAT SDP as a layer (56:00)
61. Other ingredients in SATNet (56:42)
62. Illustration: Learning Parity from single bit supervision (57:50)
63. Illustration: Learning Sudoku (58:42)
64. Illustration: MNIST Sudoku (1:00:29)
65. Code and Colab (1:01:51)
66. Conclusion (1:02:03)
67. Q&A (1:03:24)
68. Paper: Area Attention (1:08:23)
69. Neural Attentional Mechanisms (1:08:31)
70. Neural Machine Translation (1:08:47)
71. Image Captioning (1:09:01)
72. Attention-Based Architectures (1:09:15)
73. Limitations (1:09:37)
74. Research Goal (1:10:08)
75. 1D Area Attention (1:10:52)
76. 2D Area Attention (1:11:28)
77. Features of Each Area (1:12:05)
78. Area Attention Consistently Improves upon Transformer & LSTM (1:12:35)
79. Paper: The Evolved Transformer (1:13:18)
80. Motivation (1:13:38)
81. Methods (1:14:05)
82. Evolved Transformer Performance (1:15:49)
83. Architecture Comparison (1:17:02)
84. Summary (1:17:53)
85. Paper: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs (1:18:19)
86. Dropout has a few Drawbacks... (1:18:38)
87. Jumpout Modification I - Encourage Local Smoothness (1:19:26)
88. Jumpout Modification II - Better Control of Regularization (1:20:35)
89. Jumpout Modification III - Synergize well with Batchnorm (1:21:22)
90. Results (1:22:18)
91. Thank you! (1:22:34)
92. Paper: Stochastic Deep Networks (1:22:52)
93. Deep Architectures on Density Inputs (1:23:14)
94. Proposed Layer: Elementary Block (EB) (1:24:02)
95. Proposed Architectures (1:25:22)
96. Approximation Property (1:25:56)
97. Applications (1:26:14)
98. Conclusion / Open Problems (1:27:16)