1. [PAPER: HYPERBOLIC NEURAL NETWORKS] 07:14
2. Difficulties 08:52
3. Use gyrovector spaces to generalize basic operations and neural networks 09:48
4. Our contributions 10:13
5. Experiments 12:15
6. [PAPER: NORM MATTERS: EFFICIENT AND ACCURATE NORMALIZATION SCHEMES IN DEEP NETWORKS] 12:53
7. Batch normalization 13:21
8. Batch-norm leads to norm invariance 14:00
9. Weight decay before BN is redundant 14:39
10. Improving weight-norm 15:10
11. Replacing batch-norm: switching norms 15:53
12. Low-precision batch-norm 17:16
13. With a few more tricks... 17:29
14. Thank you for your time! 17:44
15. [PAPER: CONSTRUCTING FAST NETWORK THROUGH DECONSTRUCTION OF CONVOLUTION] 17:59
16. Goal 18:24
17. Deconstruction of Convolution 19:36
18. Example of learned Shift 21:19
19. [PAPER: A SIMPLE UNIFIED FRAMEWORK FOR DETECTING OUT-OF-DISTRIBUTION SAMPLES AND ADVERSARIAL ATTACKS] 22:06
20. Motivation: Detecting Abnormal Samples 22:33
21. Experimental Results 25:39
22. Conclusion 25:54
23. [PAPER: DISCOVERY OF LATENT 3D KEYPOINTS VIA END-TO-END GEOMETRIC REASONING] 26:54
24. Semantic Correspondence 27:52
25. Problem 2: Choosing what to choose 28:23
26. Traditional Pipeline 29:13
27. End-to-end Geometric Reasoning 30:17
28. Pose Estimation 31:31
29. Angular Distance Error 36:12
30. Preliminary Results on Real Images 37:55
31. Conclusion & Discussions 38:21
32. [PAPER: LEARNING LIBRARIES OF SUBROUTINES FOR NEURALLY-GUIDED BAYESIAN PROGRAM INDUCTION] 42:05
33. Library Learning 43:18
34. Subset of 38 learned library routines for list processing 43:53
35. Domain: Text Editing 45:29
36. Programs with numerical parameters: Symbolic regression from visual input 46:05
37. New domain: Generative graphics programs (Turtle/LOGO) 46:30
38. Generative graphics programs: Samples from DSL 46:41
39. [PAPER: LEARNING LOOP INVARIANTS FOR PROGRAM VERIFICATION] 47:37
40. Program verification 47:54
41. Difficulties of learning loop invariants 48:58
42. Solution to sparsity and non-smoothness 49:35
43. Solution to generalization 50:33
44. Poster session 52:21
45. [PAPER: DEEPPROBLOG: NEURAL PROBABILISTIC LOGIC PROGRAMMING] 52:35
46. Real-life problems involve two important aspects 52:57
47. The neural predicate 53:35
48. Perception 54:06
49. Related work 54:24
50. Example task: MNIST addition 54:49
51. Combined reasoning 55:30
52. Conclusion 56:36
53. [PAPER: LEARNING TO INFER GRAPHICS PROGRAMS FROM HAND-DRAWN IMAGES] 57:38
54. Parsing images into LaTeX TikZ commands 58:57
55. Synthesizing high-level programs from specs (spec = drawing commands) 59:49
56. Learning to quickly synthesize programs: learn a search policy 1:00:21
57. Example programs 1:01:13
58. Application: Error correction 1:01:28
59. Application: Extrapolating drawings 1:01:53
60. [PAPER: LEARNING TO RECONSTRUCT SHAPES FROM UNSEEN CLASSES] 1:02:49
61. Motivation 1:03:22
62. Formulation 1:03:57
63. What is the proper inductive bias of fθ(I)? 1:05:14
64. Depth to Shape? 1:06:01
65. Our approach: Generalizable Reconstruction (GenRe) 1:06:31
66. Supervision: Object- vs. Viewer-Centered 1:08:19
67. Results 1:09:31
68. Effect of Viewpoints on Generalization 1:12:59
69. Conclusion 1:14:09
70. [PAPER: IMPROVING NEURAL PROGRAM SYNTHESIS WITH INFERRED EXECUTION TRACES] 1:16:23
71. Background 1:16:47
72. Karel the Robot 1:18:06
73. Summary of approach 1:18:24
74. Trace the Code 1:19:07
75. Evaluation 1:19:28
76. Thanks for Listening 1:21:17
77. [PAPER: RESNET WITH ONE-NEURON HIDDEN LAYERS IS A UNIVERSAL APPROXIMATOR] 1:21:34
78. In the '90s: Universal approximation theorem 1:21:59
79. Deep Learning 1:22:21
80. Classifying the unit ball distribution 1:22:56
81. ResNet: residual network 1:24:26
82. ResNet with one-neuron hidden layers 1:24:47
83. [PAPER: TOWARDS UNDERSTANDING LEARNING REPRESENTATIONS: TO WHAT EXTENT DO DIFFERENT NEURAL NETWORKS REPRESENT THE SAME REPRESENTATION] 1:25:46
84. Motivation 1:26:11
85. Two Groups of Neurons Learning the Same Representation: Exact Matches 1:27:43
86. Exact/Approximate Matches between Two Groups of Neurons 1:29:22
87. Maximum Matches and Simple Matches 1:29:51
88. Experimental Findings: Few Matches in Intermediate Layers 1:30:13
89. Thank you! 1:30:32
90. [PAPER: GENERALIZED CROSS ENTROPY LOSS FOR NOISY LABELS] 1:30:48
91. Motivation 1:31:07
92. Symmetric loss 1:31:31
93. Limitations of MAE 1:32:00
94. Generalized Cross Entropy (Lq Loss) 1:32:50
95. Truncated Lq Loss 1:34:27
96. Experiments 1:34:57
97. Thank you very much for your attention! 1:35:16