8:30-9:10: Gathering
9:10-9:30: Opening
9:30-11:30: Yann LeCun (Facebook, NYU)
The Unreasonable Effectiveness Of Deep Learning
11:30-12:00: Break
12:00-13:00: Yaniv Taigman (Facebook)
Web-Scale Training for Face Identification
Scaling machine learning methods to massive datasets has attracted considerable attention in recent years, thanks to easy access to ubiquitous sensing and data from the web. Face recognition is a task of great practical interest for which (i) very large labeled datasets exist, containing billions of images; (ii) the number of classes can reach billions; and (iii) complex features are necessary in order to encode subtle differences between subjects while maintaining invariance to factors such as pose, illumination, and aging. In this talk I will present an elaborate pipeline and several customized deep architectures that learn representations generalizing well to the tasks of face verification and identification.
13:00-14:00: Lunch break
14:00-15:00: Amnon Shashua (HUJI/ICRI-CI)
SimNets: A Generalization of Convolutional Networks
We present a deep layered architecture that generalizes classical convolutional neural networks (ConvNets). The architecture, called SimNets, is driven by two operators: one is a similarity function whose family contains the convolution operator used in ConvNets, and the other is a new "soft max-min-mean" operator called MEX that realizes classical operators like ReLU and max-pooling, but has additional capabilities that make SimNets a powerful generalization of ConvNets. Two interesting properties emerge from the architecture: (i) the basic input-to-hidden-units-to-output-nodes machinery contains a kernel machine as a special case, and (ii) initializing networks using unsupervised learning is natural. Experiments demonstrate the capability of achieving state-of-the-art accuracy with networks that are 1/8 the size of comparable ConvNets.
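The "soft max-min-mean" behavior described in the abstract can be illustrated with a log-mean-exp parameterized by a temperature β: as β → +∞ it approaches max, as β → −∞ it approaches min, and as β → 0 it approaches the arithmetic mean. This is a minimal sketch for intuition only; the function name, the β parameterization, and the exact form are assumptions, not the paper's definitive formulation.

```python
import numpy as np

def mex(x, beta):
    """Illustrative soft max-min-mean: (1/beta) * log(mean(exp(beta * x))).

    beta -> +inf approximates max(x); beta -> -inf approximates min(x);
    beta -> 0 recovers mean(x). (Hypothetical sketch of the MEX idea.)
    """
    x = np.asarray(x, dtype=float)
    if beta == 0:
        return float(np.mean(x))
    b = beta * x
    m = b.max()  # log-sum-exp shift for numerical stability
    return float((m + np.log(np.mean(np.exp(b - m)))) / beta)
```

For example, on the input `[1, 2, 3]`, a large positive β yields a value close to 3, a large negative β a value close to 1, and β = 0 exactly 2, which is how a single operator can interpolate between max-pooling-like and average-pooling-like behavior.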
15:00-15:30: break
15:30-17:30: Open student panel