About the Event
In both stochastic optimization and statistical learning, the goal is to find a solution minimizing a certain population risk on average. Classical methodologies assume the collected data is clean and light-tailed. Beyond that setting, approaches based on the sample average can be severely degraded by even a single outlier. Algorithmically "robust" solutions to these problems aim at designing estimators that remain reliable under corrupted or heavy-tailed samples while still being computationally tractable. Separately, one workhorse algorithm in machine learning and stochastic optimization is stochastic gradient descent (SGD). SGD is well known to suffer from instability when the step-size is not tuned properly. Designing efficient robust SGD-type methods that are adaptive, i.e., do not require careful step-size tuning, remains a major challenge in the area, both theoretically and practically. This talk introduces the audience to our research projects on algorithmic robust statistics and adaptive SGD-type methods. While these topics traditionally belong to different fields, it is now a common trend to unify techniques from the Statistics/Machine Learning and Optimization communities. Both research programs share a unifying theme: (1) weaken traditional assumptions on the data, e.g., contamination, heavy tails, or non-uniformly bounded variance; (2) achieve statistical and optimization optimality; and (3) design practical methods (i.e., stable and adaptive) with theoretical guarantees.
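To make the fragility of the sample average concrete, here is a minimal sketch (not part of the talk materials) contrasting the empirical mean with a median-of-means estimator, one standard robust alternative from the literature; the sample size, outlier magnitude, and block count below are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean sample with true mean 0, plus a single adversarial outlier.
n = 1000
x = rng.normal(loc=0.0, scale=1.0, size=n)
x[0] = 1e6  # one corrupted observation is enough to ruin the sample mean

def median_of_means(sample, k=20):
    """Split the shuffled sample into k blocks, average each block,
    and return the median of the block means; a single outlier can
    contaminate at most one block, so the median is barely affected."""
    blocks = np.array_split(rng.permutation(sample), k)
    return np.median([b.mean() for b in blocks])

print("sample mean:    ", x.mean())            # pulled to roughly 1000 by the outlier
print("median of means:", median_of_means(x))  # stays close to the true mean 0
```

The same blocking idea extends to gradients, which is one way robust estimation ideas enter SGD-type methods.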
Speakers
Philip Thompson
Philip Thompson has been, since January 2020, an Assistant Professor at Purdue University's Krannert School of Management, Quantitative Methods Area. Before that, he was a Research Associate at the Centre for Mathematical Sciences of the University of Cambridge in 2019, hosted by Prof. Richard Samworth; a post-doc fellow at ENSAE/Ecole Polytechnique, funded by the Fondation Mathématique Jacques Hadamard, during 2017-2019, hosted by Prof. Arnak Dalalyan; and a post-doc fellow at CMM, Chile, during 2016-2017, hosted by Prof. Alejandro Jofre. He obtained his PhD in Mathematics from IMPA (2011-2015), advised by Alfredo Iusem and Alejandro Jofre. His research interests concern the theory and implementation of algorithms in (stochastic) optimization, high-dimensional statistics, and machine learning. His current focus is on robust high-dimensional estimation and adaptive variants of the stochastic gradient descent method. He has published in venues such as SIAM Journal on Optimization, Mathematical Programming, NeurIPS, and Mathematics of Operations Research. He has served as a reviewer for the Annals of Statistics, COLT, Electronic Journal of Statistics, Journal of Machine Learning Research, Journal of the American Statistical Association, Journal of the Royal Statistical Society Series B, Management Science, Operations Research, SIAM Journal on Optimization, and Mathematical Programming. In 2019, he won the Dupačová-Prékopa Student Paper Prize of the Stochastic Programming Society.
Location
Link: https://fgv-br.zoom.us/j/98906353931
Meeting ID: 989 0635 3931
Information: emap@fgv.br – 3799-5917