Seminars in Applied Mathematics

Organizer: Chenglong Bao (包承龙)
Time: 2021-08-19, 10:00-11:00
Venue: Online

Abstracts

Title: A2DR: Open-Source Python Solver for Prox-Affine Distributed Convex Optimization
Date: 2021-08-19, 10:00-11:00 (Beijing time)
Tencent Meeting ID: 839333395
Speaker: Junzi Zhang (Applied Scientist, Amazon)
Short Bio: Junzi Zhang is currently working at Amazon Advertising as an Applied Scientist. He received his Ph.D. in Computational Mathematics from Stanford University, advised by Prof. Stephen P. Boyd of the Stanford Department of Electrical Engineering, and he has also worked closely with Prof. Xin Guo and Prof. Mykel J. Kochenderfer. Before coming to Stanford, he obtained a B.S. in applied mathematics from the School of Mathematical Sciences, Peking University, where he conducted undergraduate research under the supervision of Prof. Zaiwen Wen and Prof. Pingwen Zhang. His research focuses on the design and analysis of optimization algorithms and software, and extends broadly into machine learning, causal inference, and decision-making systems (especially reinforcement learning). More recently, he has been extending his research to federated optimization, predictive modeling, and digital advertising. His research was partly supported by a Stanford Graduate Fellowship. More information can be found on his personal website at https://web.stanford.edu/~junziz/index.html.

Abstract: We consider the problem of finite-sum non-smooth convex optimization with general linear constraints, where the objective summands are accessible only through their proximal operators. To solve it, we propose an Anderson accelerated Douglas-Rachford splitting (A2DR) algorithm, which combines the scalability of Douglas-Rachford splitting with the fast convergence of Anderson acceleration. We show that A2DR either converges globally or provides a certificate of infeasibility/unboundedness under very mild conditions. We describe an open-source implementation (https://github.com/cvxgrp/a2dr) and demonstrate its outstanding performance on a wide range of examples. The talk is mainly based on joint work [SIAM Journal on Scientific Computing, 42.6 (2020): A3560-A3583] with Anqi Fu and Stephen Boyd.
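For readers who want to try the solver, below is a minimal usage sketch modeled on the examples in the a2dr repository README. It solves a basis-pursuit problem written in the prox-affine form (minimize a sum of prox-accessible terms subject to a linear constraint) with a single block; the soft-thresholding prox and the "x_vals" result key follow the README, but exact API details may differ across versions.

```python
import numpy as np
from scipy import sparse
from a2dr import a2dr

# Basis pursuit: minimize ||x||_1 subject to Ax = b, a one-block instance
# of the prox-affine form: minimize sum_i f_i(x_i) s.t. sum_i A_i x_i = b.
np.random.seed(0)
m, n = 50, 100
A = np.random.randn(m, n)
b = A @ np.random.randn(n)

# f(x) = ||x||_1 enters only through its proximal operator,
# prox_{t f}(v) = sign(v) * max(|v| - t, 0) (soft-thresholding).
prox_list = [lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)]
A_list = [sparse.csr_matrix(A)]

result = a2dr(prox_list, A_list, b)
x = result["x_vals"][0]  # per-block solutions, keyed as in the README
print("constraint residual:", np.linalg.norm(A @ x - b))
```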




Title: Landscape analysis of non-convex optimizations in phase retrieval

Speaker: Jian-Feng Cai, Hong Kong University of Science and Technology

Time: 2020-07-17, 10:00-11:00 AM

Platform: Zoom

Meeting ID: 13320196942

Abstract: Non-convex optimization is a ubiquitous tool in scientific and engineering research. For many important problems, simple non-convex optimization algorithms often provide good solutions efficiently and effectively, despite possible local minima. One way to explain the success of these algorithms is through global landscape analysis. In this talk, we present some results along this direction for phase retrieval. The main results show that, for several non-convex formulations of phase retrieval, every local minimizer is also global, and all other critical points have a direction of negative curvature. These results not only explain why simple non-convex algorithms usually find a global minimizer for phase retrieval, but are also useful for developing new efficient algorithms with theoretical guarantees, by applying methods that provably converge to a local minimum.
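To make the setting concrete, the sketch below implements one standard non-convex formulation of phase retrieval, least squares on intensity measurements, minimized by plain gradient descent from a random start. The Gaussian data, step size, and iteration count are illustrative assumptions, and this is not necessarily one of the specific formulations analyzed in the talk.

```python
import numpy as np

# Recover x from phaseless measurements y_i = (a_i^T x)^2 by minimizing the
# non-convex quartic loss f(x) = (1/m) * sum_i ((a_i^T x)^2 - y_i)^2.
rng = np.random.default_rng(0)
n, m = 20, 200                       # signal dimension, number of measurements
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))          # Gaussian measurement vectors a_i (rows)
y = (A @ x_true) ** 2

def grad(x):
    r = A @ x                        # r_i = a_i^T x
    return (4.0 / m) * (A.T @ ((r**2 - y) * r))

x = rng.normal(size=n)               # random initialization, no spectral step
for _ in range(5000):                # step size/iterations may need tuning
    x -= 1e-3 * grad(x)

# The global sign is unrecoverable: x and -x give identical measurements.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```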






Title: A preliminary study of robust deep learning methods for training/test data bias
Speaker: Prof. Deyu Meng, Xi'an Jiaotong University
Time: 2020-07-10, 9:00-10:00 AM
Platform: Tencent Meeting
Meeting ID: 304 179 559
Abstract: In complex real-world settings, training labels often contain substantial noise (incorrect annotations). Re-weighting the training data is a common remedy for this noisy-label problem; examples include self-paced learning, which emphasizes easily classified samples, and boosting algorithms, which emphasize hard ones. However, data re-weighting still lacks a unified learning paradigm and generally involves hyperparameter selection. This talk presents a new meta-learning method which, guided by a small set of unbiased meta-data, effectively adjusts and controls training on biased, noisily labeled data, thereby largely avoiding hyperparameter tuning and realizing adaptive weight assignment in a data-driven way. Tests on a variety of datasets with corrupted annotations preliminarily verify the effectiveness and stability of the method.
Speaker Bio: Deyu Meng is a professor and doctoral advisor at Xi'an Jiaotong University, where he heads the machine learning group of the National Engineering Laboratory for Big Data Algorithms and Analysis Technology. His research interests are fundamental problems in machine learning, computer vision, and artificial intelligence. He has published more than 100 papers, including 36 full-length IEEE Transactions papers and 37 CCF Class-A conference papers.
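As a concrete illustration of the meta-learning idea in the abstract above, the sketch below performs one-step-lookahead example re-weighting for noisy-label logistic regression, in the spirit of learning-to-reweight methods (cf. Ren et al., 2018). The model, data, and step sizes are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

# Noisy-label logistic regression with meta-learned example weights.
rng = np.random.default_rng(0)
n, n_meta, d = 200, 20, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_clean = (X @ w_true > 0).astype(float)
flip = rng.random(n) < 0.3                    # 30% of training labels flipped
y_noisy = np.where(flip, 1 - y_clean, y_clean)
X_meta = rng.normal(size=(n_meta, d))         # small clean, unbiased meta set
y_meta = (X_meta @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta, lr = np.zeros(d), 0.5
for _ in range(100):
    # Per-example gradients of the logistic loss at the current parameters.
    g = (sigmoid(X @ theta) - y_noisy)[:, None] * X        # shape (n, d)

    # One-step lookahead with uniform weights, then the meta-set gradient.
    theta_la = theta - lr * g.mean(axis=0)
    g_meta = ((sigmoid(X_meta @ theta_la) - y_meta)[:, None] * X_meta).mean(axis=0)

    # Up-weight examples whose gradients align with the meta gradient
    # (larger eps_i means up-weighting example i decreases the meta loss).
    eps = lr * (g @ g_meta)
    w = np.maximum(eps, 0.0)
    w = w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)

    # Weighted gradient step on the noisy training data.
    theta -= lr * (w[:, None] * g).sum(axis=0)

print("agreement with clean labels:",
      ((sigmoid(X @ theta) > 0.5) == y_clean.astype(bool)).mean())
```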




Title: The power of depth in deep Q-learning
Speaker: Prof. Shao-Bo Lin, Xi'an Jiaotong University
Time: 2020-07-10, 10:00-11:00 AM
Platform: Tencent Meeting
Meeting ID: 304 179 559
Abstract: With the help of massive data and rich computational resources, deep Q-learning has been widely used in operations research and management science, achieving great success in numerous applications including recommender systems, games, and robotic manipulation. In contrast to this intense practical activity, solid theoretical verification and interpretability of deep Q-learning's success are lacking, leaving it somewhat of a mystery. The aim of this talk is to discuss the power of depth in deep Q-learning. Within the framework of statistical learning theory, we rigorously prove that deep Q-learning outperforms its traditional counterpart by establishing a good generalization error bound for it. Our results show that the main reason for the success of deep Q-learning is the excellent performance of deep neural networks (deep nets) in capturing special properties of rewards, such as spatial sparseness and piecewise constancy, rather than their large capacity. In particular, we answer why and when deep Q-learning performs better than the traditional approach, and we characterize the generalization capability of deep Q-learning.
Speaker Bio: Shao-Bo Lin is a professor and doctoral advisor at Xi'an Jiaotong University. His research interests are distributed learning theory, deep learning theory, and reinforcement learning theory. He has directed, or participated as a core member in, nine National Natural Science Foundation of China projects, and has published more than 60 papers in leading journals such as JMLR, ACHA, IEEE-TSP, and SIAM-JNA.
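To ground the discussion, the sketch below runs one round of fitted Q-iteration with a small two-layer ReLU Q-network on synthetic transitions whose reward is piecewise constant, the kind of reward structure the abstract says deep nets capture well. The environment, architecture, and step sizes are illustrative assumptions, not the construction analyzed in the talk.

```python
import numpy as np

# One round of fitted Q-iteration with a two-layer ReLU Q-network on
# synthetic transitions (s, a, r, s') whose reward is piecewise constant.
rng = np.random.default_rng(0)
d_s, n_a, H, gamma, n = 3, 4, 32, 0.9, 512

S = rng.uniform(-1, 1, size=(n, d_s))          # states
A = rng.integers(n_a, size=n)                  # actions taken
R = np.sign(S[:, 0]) + (S[:, 1] > 0)           # piecewise-constant reward r(s)
S2 = np.clip(S + 0.1 * rng.normal(size=S.shape), -1, 1)  # next states

# Q-network parameters: Q(s) is a vector in R^{n_a}.
W1 = rng.normal(scale=0.5, size=(d_s, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, n_a)); b2 = np.zeros(n_a)

def q_net(S):
    Z = np.maximum(S @ W1 + b1, 0.0)           # hidden ReLU features
    return Z, Z @ W2 + b2

# Fitted-Q regression target: y = r + gamma * max_a' Q(s', a').
_, Q2 = q_net(S2)
y = R + gamma * Q2.max(axis=1)

lr = 0.05
for _ in range(500):                           # gradient descent on the
    Z, Q = q_net(S)                            # empirical squared Bellman error
    err = Q[np.arange(n), A] - y
    G = np.zeros_like(Q)
    G[np.arange(n), A] = 2.0 * err / n         # dLoss/dQ, taken actions only
    dW2, db2 = Z.T @ G, G.sum(axis=0)
    dZ = (G @ W2.T) * (Z > 0)                  # backprop through the ReLU
    dW1, db1 = S.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

_, Q = q_net(S)
print("mean squared Bellman error:", np.mean((Q[np.arange(n), A] - y) ** 2))
```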