1. [PAPER: DIFFERENTIALLY PRIVATE TESTING OF IDENTITY AND CLOSENESS OF DISCRETE DISTRIBUTIONS] 04:47
2. Hypothesis Testing 05:35
3. Modern Challenges 06:00
4. Previous Results 07:51
5. Our Results 08:05
6. Upper Bound 08:49
7. Lower Bound - Coupling Lemma 09:08
8. The End 09:51
9. [PAPER: LOCAL DIFFERENTIAL PRIVACY FOR EVOLVING DATA] 10:04
10. High-Level Motivation 10:38
11. What is "private"? 11:04
12. Example 11:23
13. Oranges or Pears? 11:37
14. New Problem: Evolving Data 13:35
15. Old Solution vs. New Problem 13:52
16. This Paper: New Solution For New Problem 14:28
17. [PAPER: DIFFERENTIALLY PRIVATE K-MEANS WITH CONSTANT MULTIPLICATIVE ERROR] 15:27
18. What is k-Means Clustering? 15:43
19. Why is that a good privacy definition? 17:00
20. Differentially Private k-Means Clustering 17:37
21. Previous and New Bounds 19:20
22. [PAPER: A SPECTRAL VIEW OF ADVERSARIALLY ROBUST FEATURES] 20:18
23. What are adversarial examples? 20:49
24. More Questions than Answers 21:07
25. Simpler Objective: Adversarially Robust Features 21:57
26. Connections to Spectral Graph Theory 23:06
27. Takeaways 24:44
28. [PAPER: MODEL AGNOSTIC PRIVATE LEARNING] 25:33
29. This Paper 26:39
30. Framework for Privacy-Preserving Learning 28:52
31. Differential Privacy 29:25
32. Related Prior Work 30:07
33. Our Results 30:43
34. Private Algorithm for Classification Queries 31:44
35. Generic Transformation of Misclassification Rate 34:28
36. From Private Predictions to a Private Classifier 35:36
37. Summary 37:14
38. Q/A 37:52
39. [PAPER: BOUNDED-LOSS PRIVATE PREDICTION MARKETS] 40:11
40. Prediction Markets 40:49
41. Prior Work 42:37
42. This Paper 43:31
43. [PAPER: CPSGD: COMMUNICATION-EFFICIENT AND DIFFERENTIALLY-PRIVATE DISTRIBUTED SGD] 45:43
44. Distributed Learning with Mobile Devices 46:02
45. Server Sends Model to Clients... 46:19
46. Clients Send Updates Back... 46:28
47. Challenge I: Uplink Communication is Expensive 46:45
48. How to Design the Quantization? 47:07
49. Challenge II: User Privacy is Important 47:50
50. Attempt 1: Add Gaussian Noise on the Server 48:23
51. Attempt 2: Add Gaussian Noise on the Client 48:46
52. cpSGD 49:26
53. [PAPER: ADVERSARIALLY ROBUST GENERALIZATION REQUIRES MORE DATA] 50:39
54. Adversarial Examples 51:00
55. Standard vs. Robust Generalization 51:42
56. State of the Art in ℓ∞-Robustness 53:10
57. Robust Generalization 54:29
58. Conclusions 55:04
59. [PAPER: ATTACKS MEET INTERPRETABILITY: ATTRIBUTE-STEERED DETECTION OF ADVERSARIAL SAMPLES] 55:55
60. Understanding Adversarial Samples 56:22
61. Challenges 58:03
62. Attribute Witness Extraction 58:49
63. Experimental Results 59:40
64. Thank you! 1:00:12
65. [PAPER: LEARNING TO SOLVE SMT FORMULAS] 1:00:27
66. SMT Formula 1:01:05
67. SMT Solvers 1:01:59
68. Solving SMT Formulas is Hard 1:02:58
69. SMT Formula Solving 1:04:33
70. Learn to Solve Formula 1:06:12
71. SMT Formula Solving 1:07:03
72. Neural Network Policy 1:08:51
73. Training 1:09:49
74. Evaluation 1:11:07
75. Q & A 1:12:59
76. [PAPER: TOWARDS ROBUST DETECTION OF ADVERSARIAL EXAMPLES] 1:15:55
77. We Detect Adversarial Examples, and How? 1:16:49
78. Reverse Cross Entropy 1:17:16
79. The RCE Training Method 1:17:56
80. Theoretical Analysis 1:18:16
81. Experiments 1:18:47
82. For more results and analyses, please come 1:19:13
83. [PAPER: NEURAL ARCHITECTURE SEARCH WITH BAYESIAN OPTIMISATION AND OPTIMAL TRANSPORT] 1:19:30
84. Neural Architecture Search 1:20:03
85. Prior Work in Neural Architecture Search 1:20:38
86. Bayesian Optimisation 1:20:49
87. OTMANN Correlates with Cross-Validation Performance 1:23:25
88. Optimising the Acquisition 1:23:54
89. Test Error on 7 Datasets 1:24:17
90. Architectures Found 1:24:25
91. [PAPER: DATA-DRIVEN CLUSTERING VIA PARAMETERIZED LLOYD'S FAMILIES] 1:24:52
92. Data-Driven Clustering 1:25:10
93. Learning Model 1:25:58
94. Lloyd's Method 1:26:56
95. Initial Centers are Important! 1:27:39
96. The (α, β)-Lloyd's Family 1:28:10
97. Results 1:29:02
98. [PAPER: SUPERVISING UNSUPERVISED LEARNING] 1:30:03
99. Contributions 1:30:28
100. General Approach 1:31:49
101. Number of Clusters 1:32:42
102. Clustering Algorithm (assume fixed for simplicity) 1:33:34
103. Fraction of Outliers 1:33:51
104. Deep Learning Binary Similarity Function 1:34:18
105. See you... 1:35:01