Computer Science Seminar: Yan Yan
This event is open to all Illinois Tech faculty and students.
Abstract
Machine learning has been used extensively in many application areas. Real-world data is usually noisy, which poses a great challenge to the modeling and optimization processes of machine learning. How can we learn a predictive model that is robust to data noise and generalizes well? In this presentation, Yan Yan will introduce his recent research on developing robust machine (deep) learning algorithms with improved generalization performance by linking minimization problems to min-max and min-min (or inf-projection) problems. He will begin by presenting an analysis of stagewise stochastic gradient descent (SGD), which is commonly used in practice to train deep neural networks (minimization problems). Through both theoretical analysis and empirical evidence, his work justifies the efficacy of stagewise SGD in achieving a better trade-off between optimization and generalization than standard SGD. He will then present his work on distributionally robust optimization (min-max problems) and variance regularization (min-min problems), both of which target better generalization performance. For min-max problems, his research makes significant progress by establishing a faster convergence rate. For min-min problems, his work casts variance regularization as an inf-projection formulation and establishes the first comprehensive theoretical analysis of the proposed stochastic algorithms.
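To make the stagewise schedule concrete, here is a minimal sketch of stagewise SGD on a toy least-squares problem. The problem setup, the halving decay factor, and the restart-from-stage-average rule are illustrative assumptions for this sketch, not details taken from the talk:

```python
import numpy as np

# Hypothetical toy problem: noisy linear regression, used only to
# illustrate the stagewise schedule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

def grad(w, i):
    """Stochastic gradient of 0.5 * (x_i @ w - y_i)^2 at one sample."""
    return (X[i] @ w - y[i]) * X[i]

def stagewise_sgd(num_stages=5, iters_per_stage=400, eta0=0.1):
    """Run SGD in stages: the step size is constant within a stage,
    halved between stages, and each new stage restarts from the
    averaged iterate of the previous stage (one common variant)."""
    w = np.zeros(5)
    eta = eta0
    for s in range(num_stages):
        avg = np.zeros_like(w)
        for t in range(iters_per_stage):
            i = rng.integers(len(y))
            w = w - eta * grad(w, i)
            avg += w
        w = avg / iters_per_stage  # restart from the stage average
        eta *= 0.5                 # geometrically decrease the step size
    return w

w_hat = stagewise_sgd()
print(np.linalg.norm(w_hat - w_true))
```

In contrast, standard SGD decays the step size at every iteration; the stagewise variant keeps it piecewise constant, which is the schedule whose optimization-generalization trade-off the talk analyzes.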
Bio
Yan received his Ph.D. in computer science from the University of Technology Sydney, Australia, in 2018 and his B.E. in computer science from Tianjin University, China, in 2013. He is now a postdoctoral research associate at the University of Iowa. His research interests span the theoretical methodology of statistical machine learning and its applications in computer vision and data mining. His current focus is on large-scale robust machine (deep) learning; in particular, he designs optimization algorithms that enhance generalization performance for machine-learning problems with various mathematical formulations, including minimization, min-max, and inf-projection structures. His research has also covered online learning for imbalanced data, matrix factorization for recommender systems, and related topics.