Hi all,
At the upcoming AIRG, I will be discussing an interesting work on neural network landscape design.
Abstract:
Have you ever been confused about which optimization algorithm you should use to solve your non-convex optimization problem? We have so many to choose from!! - GD, SGD, RMSProp, Adam, etc. Well, this interesting
line of research attempts to resolve our confusion. Their work redesigns the optimization landscape so that, no matter which optimization algorithm you choose, you are guaranteed to converge to the global minimum. On top of this, they also
provide theoretical guarantees, so their techniques are not a black box. So, come let's see how these techniques apply to the cutest beast out there - a one-hidden-layer neural network!
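As a warm-up, here is a minimal sketch of the setting the paper studies: a one-hidden-layer ReLU network trained with plain gradient descent in a teacher-student setup. This is my own toy illustration (the dimensions, activation, and loss are my assumptions), not the paper's redesigned landscape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher-student setup (illustrative assumptions, not the paper's construction):
# teacher network y(x) = sum_i relu(w*_i . x); student learns W by plain GD.
d, k, n = 5, 3, 512                           # input dim, hidden units, samples
W_star = rng.normal(size=(k, d))              # ground-truth (teacher) weights
X = rng.normal(size=(n, d))                   # Gaussian inputs
y = np.maximum(X @ W_star.T, 0).sum(axis=1)   # teacher labels

W = 0.5 * rng.normal(size=(k, d))             # student initialization
lr = 0.05

def mse(W):
    """Mean squared error of the student network on the training set."""
    return np.mean((np.maximum(X @ W.T, 0).sum(axis=1) - y) ** 2)

mse0 = mse(W)                                 # loss before training
for _ in range(2000):
    H = X @ W.T                               # pre-activations, shape (n, k)
    pred = np.maximum(H, 0).sum(axis=1)       # student outputs
    err = pred - y
    # gradient of 0.5 * mean((pred - y)^2) with respect to W
    grad = ((err[:, None] * (H > 0)).T @ X) / n
    W -= lr * grad

final = mse(W)
```

Running plain GD here usually drives the loss down, but on such non-convex objectives it can stall at spurious critical points - exactly the failure mode the paper's landscape design aims to remove.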
Time: 4 pm, Wednesday, September 26
Location: CS 3310
Presenter: Vishnu Lokhande (lokhande@xxxxxxxxxxx)
Paper:
Learning One-hidden-layer Neural Networks with Landscape Design.
Rong Ge, Jason D. Lee, and Tengyu Ma.
ICLR 2018.
https://arxiv.org/abs/1711.00501
Thanks,
Vishnu Lokhande