1. WELCOME 07:33
2. [PAPER: ON THE DIMENSIONALITY OF WORD EMBEDDING] 08:42
3. Embedding Method in NLP 09:01
4. Dimensionality of Word Embedding 09:34
5. Understand Embedding Dimensionality 10:29
6. Word Embeddings 10:50
7. A Motivating Example 11:25
8. Loss Function for Word Embeddings 12:32
9. Geometric Interpretation 14:04
10. Property of the PIP Loss 14:45
11. Matrix Factorization 15:21
12. Embedding Algorithms and Matrix Factorization 15:38
13. Main Results 16:05
14. Dimensionality Selection 17:10
15. Cross-Validation with Empirical Results 17:36
16. Dimensionality Selection Results 18:12
17. Other Discoveries 18:42
18. Tool for Dimensionality Selection 19:02
19. Q/A 19:49
20. [PAPER: UNSUPERVISED CROSS-MODAL ALIGNMENT OF SPEECH AND TEXT EMBEDDING SPACES] 24:33
21. Machine Translation (MT) 25:08
22. Framework 25:49
23. Advantages 28:07
24. Usage 28:36
25. [PAPER: DIFFUSION MAPS FOR TEXTUAL NETWORK EMBEDDING] 30:07
26. Textual Information Network Embedding 30:47
27. Motivation 31:10
28. Diffusion Process 31:52
29. Model 32:46
30. Poster: 34:45
31. [PAPER: A RETRIEVE-AND-EDIT FRAMEWORK FOR PREDICTING STRUCTURED OUTPUTS] 35:03
32. A Retrieve-and-Edit Framework for Predicting Structured Outputs 35:31
33. Sequence to sequence models 35:54
34. Direct generation can be difficult 36:41
35. A retrieve-and-edit framework 38:09
36. Two challenges 39:07
37. Decoupling the training procedure 39:55
38. Optimizing for the ideal editor 40:28
39. Decomposing the oracle loss 41:14
40. Learning the encoder 42:08
41. Retrieval on the sphere 42:47
42. Overall training procedure 43:50
43. Base evaluation: GitHub data 44:34
44. Performance: GitHub data 45:18
45. Evaluation: Hearthstone cards 45:53
46. Performance: Hearthstone cards 46:19
47. Example and error analysis 47:12
48. Discussion and conclusion 47:22
49. Q/A 47:56
50. SESSION END 49:06