Tsinghua-BIMSA Seminars in Applied Mathematics

Organizers: Chenglong Bao, Zuoqiang Shi
Time: 2022/01/21 08:30-10:00 am
Venue: Tencent Meeting

Upcoming Talks

Title: From ODE Solvers to Accelerated Optimization Methods

Time: 2022/01/21 08:30-10:00 am.

Tencent Meeting ID: 904-824-431 Passcode: 220121

Speaker: Prof. Chen Long (Department of Mathematics, University of California, Irvine)

Join the meeting through the link: https://meeting.tencent.com/dm/ANYcbo1Rzl2d

Abstract: Convergence analysis of accelerated first-order methods for convex optimization problems is presented from the point of view of ordinary differential equation (ODE) solvers. We first take another look at the acceleration phenomenon via A-stability theory for ODE solvers and present an explanation by a transformation of the spectrum to the complex plane. After that, we present the Lyapunov framework for dynamical systems and introduce the strong Lyapunov condition. Many existing continuous convex optimization models, such as the gradient flow, the heavy ball system, Nesterov's accelerated gradient flow, and the dynamical inertial Newton system, are addressed and analyzed in this framework.

This is a joint work with Dr. Hao Luo at Peking University.
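The ODE viewpoint can be made concrete with a minimal sketch (not from the talk; the quadratic objective, step size, and momentum schedule below are illustrative assumptions): Nesterov's accelerated gradient method read as a discretization of the ODE x'' + (3/t) x' + grad f(x) = 0.

```python
import numpy as np

# Illustrative sketch: Nesterov's method as a discretization of the accelerated
# gradient flow x'' + (3/t) x' + grad f(x) = 0, on a convex quadratic
# f(x) = 0.5 x^T A x - b^T x (all problem choices here are hypothetical).

def grad_f(x, A, b):
    return A @ x - b

def nesterov(A, b, x0, h=0.05, iters=2000):
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        x_new = y - h * grad_f(y, A, b)              # explicit gradient step
        y = x_new + (k - 1) / (k + 2) * (x_new - x)  # momentum, mirrors the (3/t) damping
        x = x_new
    return x

A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
f = lambda v: 0.5 * v @ A @ v - b @ v
x = nesterov(A, b, np.zeros(2))
gap = f(x) - f(np.linalg.solve(A, b))
print(gap)   # optimality gap, decaying at the accelerated O(1/k^2) rate
```

The momentum coefficient (k-1)/(k+2) is exactly the discrete counterpart of the 3/t damping term in the ODE.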

Past Talks

Title: An efficient unconditionally stable method for Dirichlet partitions in arbitrary domains

Time: 2022/01/14 10:00-11:30 am


Tencent Meeting ID: 819-151-291 Passcode: 220114

You can also join the meeting through this link: https://meeting.tencent.com/dm/JdcIcbpSXKF3


Abstract: A Dirichlet k-partition of a domain is a collection of k pairwise disjoint open subsets such that the sum of their first Laplace-Dirichlet eigenvalues is minimal. In this talk, we propose a new relaxation of the problem by introducing auxiliary indicator functions of the domains and develop a simple and efficient diffusion-generated method to compute Dirichlet k-partitions for arbitrary domains. The method alternates just three steps: 1. convolution, 2. thresholding, and 3. projection. It is simple, easy to implement, insensitive to initial guesses, and can be applied effectively to arbitrary domains without any special discretization. At each iteration, the computational complexity is linear in the discretization of the computational domain. Moreover, we theoretically prove the energy-decaying property of the method. Experiments are performed to show the accuracy of approximation, efficiency, and unconditional stability of the algorithm. We apply the proposed algorithm on both 2- and 3-dimensional flat tori and on triangle, square, pentagon, hexagon, disk, three-fold star, five-fold star, cube, ball, and tetrahedron domains to compute Dirichlet k-partitions for different k, showing the effectiveness of the proposed method. Compared to previous work with reported computational times, the proposed method achieves a speedup of hundreds of times.
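The three alternating steps can be sketched schematically on a 1-D periodic grid (grid size, diffusion time, and iteration count below are illustrative assumptions, not the settings of the paper):

```python
import numpy as np

# Schematic sketch of the three alternating steps named in the abstract:
# convolution with a heat kernel, pointwise thresholding, and projection back
# to indicator functions. All parameters here are illustrative assumptions.

n, k = 256, 3
rng = np.random.default_rng(0)
u = rng.random((k, n))
u /= u.sum(axis=0)                               # rows: relaxed indicator functions

freq = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies on [0, 1)
heat = np.exp(-1e-4 * (2 * np.pi * freq) ** 2)   # heat-kernel Fourier multiplier

for _ in range(50):
    v = np.fft.ifft(heat * np.fft.fft(u, axis=1), axis=1).real  # 1. convolution
    labels = np.argmax(v, axis=0)                               # 2. thresholding
    u = np.eye(k)[labels].T                                     # 3. projection

# each grid point now belongs to exactly one of the k parts
print(u.sum(axis=0).min(), u.sum(axis=0).max())
```

Each iteration costs one FFT pass, which is consistent with the linear-in-grid-size complexity claimed in the abstract (up to the usual log factor of the FFT).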



Title: Massive Random Access for 5G and Beyond: An Optimization Perspective

Time: 2021/12/10 4:00-5:00 pm

Speaker: Yafeng Liu (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)



Abstract: Massive access, also known as massive connectivity or massive machine-type communication (mMTC), is one of the three main use cases of the fifth-generation (5G) and beyond-5G (B5G) wireless networks defined by the International Telecommunication Union. Different from conventional human-type communication, massive access aims at realizing efficient and reliable communications for a massive number of Internet of Things (IoT) devices. The main challenge of mMTC is for the base station (BS) to efficiently and reliably detect the active devices, based on the superposition of their unique signatures, from a large pool of uplink devices among which only a small fraction is active. In this talk, we shall present some recent results on massive access from an optimization perspective. In particular, we shall present optimization formulations and algorithms as well as some phase-transition analysis results.
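The detection problem described in the abstract can be phrased as sparse recovery, sketched below with plain ISTA on a LASSO relaxation (the dimensions, noise level, and the LASSO/ISTA approach itself are illustrative assumptions; the formulations and algorithms in the talk differ):

```python
import numpy as np

# Toy sketch: the base station observes a noisy superposition of the known
# signatures of the few active devices and recovers the sparse activity
# pattern. All problem sizes and algorithmic choices are illustrative.

rng = np.random.default_rng(0)
n_dev, sig_len, n_active = 100, 40, 5
S = rng.standard_normal((sig_len, n_dev)) / np.sqrt(sig_len)  # known signatures
x_true = np.zeros(n_dev)
active = rng.choice(n_dev, n_active, replace=False)
x_true[active] = 1.0                                  # 1 = device is active
y = S @ x_true + 0.01 * rng.standard_normal(sig_len)  # received superposition

lam, t = 0.02, 0.1                                    # LASSO weight, step size
x = np.zeros(n_dev)
for _ in range(500):
    x = x - t * S.T @ (S @ x - y)                     # gradient step on 0.5*||Sx - y||^2
    x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # soft-thresholding

detected = np.flatnonzero(x > 0.5)
print(sorted(detected.tolist()))                      # indices flagged as active
```

The "phase transition" mentioned in the abstract concerns how the recoverable number of active devices scales with the signature length in exactly this kind of sparse-recovery setup.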

Bio: Yafeng Liu graduated from the Department of Mathematics, School of Science, Xidian University in 2007, and received his Ph.D. from the Academy of Mathematics and Systems Science (AMSS), Chinese Academy of Sciences in 2012 (advisor: Prof. Yu-Hong Dai). During his Ph.D. studies, he visited the University of Minnesota for one year with the support of AMSS (host: Prof. Zhi-Quan Luo). Since graduation, he has worked at the Institute of Computational Mathematics of AMSS, where he was promoted to Associate Professor in 2018. His main research interests are optimization theory and algorithms and their applications in signal processing and wireless communications. His honors include the Best Paper Award at the 2011 International Conference on Communications, the 2018 AMSS "Chen Jingrun Future Star" award, the 2018 Youth Science and Technology Award of the Operations Research Society of China, and the 2020 IEEE Communications Society Asia-Pacific Outstanding Young Researcher Award. He currently serves on the editorial boards of IEEE Transactions on Wireless Communications, IEEE Signal Processing Letters, and the Journal of Global Optimization. He is a member of the Signal Processing for Communications and Networking (SPCOM) Technical Committee of the IEEE Signal Processing Society. His work has been supported by the Young Scientists Fund, the General Program, and the Excellent Young Scientists Fund of the National Natural Science Foundation of China.

Title: A2DR: Open-Source Python Solver for Prox-Affine Distributed Convex Optimization

Date: 2021/08/19 10:00-11:00 am (Beijing time)

Tencent Meeting ID: 839333395

Speaker: Junzi Zhang (Applied Scientist, Amazon)

Short Bio: Junzi Zhang is currently working at Amazon Advertising as an Applied Scientist. He received his Ph.D. in Computational Mathematics from Stanford University, advised by Prof. Stephen P. Boyd of the Stanford Department of Electrical Engineering. He has also worked closely with Prof. Xin Guo and Prof. Mykel J. Kochenderfer. Before coming to Stanford, he obtained a B.S. degree in applied mathematics from the School of Mathematical Sciences, Peking University, where he conducted undergraduate research under the supervision of Prof. Zaiwen Wen and Prof. Pingwen Zhang. His research has focused on the design and analysis of optimization algorithms and software, and extends broadly into machine learning, causal inference, and decision-making systems (especially reinforcement learning). He has also recently been extending his research to federated optimization, predictive modeling, and digital advertising. His research has been partly supported by a Stanford Graduate Fellowship. More information can be found on his personal website at https://web.stanford.edu/~junziz/index.html.

Abstract: We consider the problem of finite-sum non-smooth convex optimization with general linear constraints, where the objective function summands are only accessible through their proximal operators. To solve it, we propose an Anderson accelerated Douglas-Rachford splitting (A2DR) algorithm, which combines the scalability of Douglas-Rachford splitting and the fast convergence of Anderson acceleration. We show that A2DR either globally converges or provides a certificate of infeasibility/unboundedness under very mild conditions. We describe an open-source implementation (https://github.com/cvxgrp/a2dr) and demonstrate its outstanding performance on a wide range of examples. The talk is mainly based on the joint work [SIAM Journal on Scientific Computing, 42.6 (2020): A3560–A3583] with Anqi Fu and Stephen Boyd.
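For background, vanilla Douglas-Rachford splitting (without the Anderson acceleration that A2DR adds on top) can be sketched on a toy problem; the objective below is an illustrative choice, not an example from the paper:

```python
import numpy as np

# Plain Douglas-Rachford (DR) splitting sketch, not the A2DR code itself.
# Toy problem:  minimize 0.5*||x - a||^2  subject to  x >= 0,
# whose solution is max(a, 0) componentwise.

a = np.array([1.0, -2.0, 3.0])
t = 1.0                                       # prox step size

prox_f = lambda v: (v + t * a) / (1 + t)      # prox of f(x) = 0.5*||x - a||^2
prox_g = lambda v: np.maximum(v, 0.0)         # prox of g = indicator{x >= 0}

z = np.zeros_like(a)
for _ in range(200):
    x = prox_f(z)
    z = z + prox_g(2 * x - z) - x             # DR fixed-point iteration on z

print(prox_f(z))   # converges to [1., 0., 3.]
```

A2DR keeps exactly this prox-oracle access pattern but applies Anderson acceleration to the z-sequence, which is where its speedup over plain DR comes from.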

Title: Landscape analysis of non-convex optimizations in phase retrieval

Speaker: Prof. Jian-Feng Cai, Hong Kong University of Science and Technology

Time: 2020-07-17, 10:00-11:00 AM

Venue: Zoom

Meeting ID: 13320196942

Abstract: Non-convex optimization is a ubiquitous tool in scientific and engineering research. For many important problems, simple non-convex optimization algorithms often provide good solutions efficiently and effectively, despite possible local minima. One way to explain the success of these algorithms is through global landscape analysis. In this talk, we present some results along this direction for phase retrieval. The main results are that, for several non-convex optimization formulations of phase retrieval, any local minimum is also global, and every other critical point has a direction of negative curvature. These results not only explain why simple non-convex algorithms usually find a global minimizer for phase retrieval, but are also useful for developing new efficient algorithms with theoretical guarantees, by applying methods that are guaranteed to find a local minimum.
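A toy experiment illustrates the phenomenon (problem sizes, step size, and iteration count are illustrative assumptions; the talk's results cover more general formulations): plain gradient descent on the non-convex intensity loss typically reaches a global minimizer.

```python
import numpy as np

# Toy real-valued phase retrieval: minimize the non-convex intensity loss
# f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2 by plain gradient descent from a
# random unit-norm start. The signal is identifiable only up to sign.
# All sizes and step sizes here are illustrative assumptions.

rng = np.random.default_rng(1)
n, m = 5, 100
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2                      # phaseless (intensity-only) measurements

def grad(x):
    r = A @ x
    return (A.T @ ((r ** 2 - y) * r)) / m  # gradient of the intensity loss

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(3000):
    x -= 0.1 * grad(x)

corr = abs(x @ x_true) / np.linalg.norm(x)
print(corr)   # correlation with the true signal, modulo the sign ambiguity
```

The landscape results in the talk explain why such runs generically succeed: with no spurious local minima and strict negative curvature at saddles, gradient descent from a random start almost surely avoids the bad critical points.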

Title: A preliminary study of robust deep learning methods against training/test data bias

Speaker: Prof. Deyu Meng, Xi'an Jiaotong University

Time: 2020-7-10, 9:00-10:00 AM

Venue: Tencent Meeting

Meeting ID: 304 179 559

Abstract: In real-world complex environments, the labels of training data usually contain a large amount of noise (incorrect labels). Data reweighting is a general approach to this noisy-label problem; examples include self-paced learning, which emphasizes easily classified samples, and boosting, which emphasizes hard-to-classify samples. However, data reweighting still lacks a unified learning paradigm and generally involves hyperparameter tuning. This talk presents a new meta-learning method which, guided by a small amount of unbiased meta-data, effectively adjusts and controls the training on biased, noisily labeled data. It thereby largely avoids hyperparameter tuning and realizes adaptive weight assignment in a data-driven manner. Experiments on various datasets with abnormal labels provide preliminary verification of the effectiveness and robustness of the method.
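A minimal sketch in the spirit of this reweighting idea (the speaker's actual method differs; the model, data, and step sizes below are illustrative assumptions): per-sample weights for a 1-D linear regression are recomputed at each step from a meta-gradient on a small clean meta set, so samples with corrupted labels get down-weighted.

```python
import numpy as np

# Illustrative meta-gradient reweighting for 1-D linear regression with
# corrupted labels. Weights are set each step by how much up-weighting a
# sample's gradient would decrease the loss on a clean meta set.
# Everything here is a simplified assumption, not the speaker's algorithm.

rng = np.random.default_rng(0)
n = 40
x = rng.uniform(-1, 1, n)
y = 2.0 * x
noisy = np.arange(n) >= n // 2
y[noisy] = -2.0 * x[noisy]               # half of the labels are corrupted

xm = rng.uniform(-1, 1, 10)              # small unbiased meta set
ym = 2.0 * xm

theta, alpha = 0.0, 0.1
for _ in range(100):
    g = (x * theta - y) * x                         # per-sample loss gradients
    meta_g = np.mean((xm * theta - ym) * xm)        # meta-loss gradient at theta
    w = np.maximum(alpha * g * meta_g, 0.0)         # meta-gradient-based weights
    w /= w.sum() + 1e-12                            # normalize the weights
    theta -= alpha * np.sum(w * g)                  # weighted model update

print(theta, w[noisy].sum())   # theta approaches the clean slope 2; noisy weights ~ 0
```

The weights here need no manually tuned threshold: a sample is down-weighted exactly when following its gradient would hurt performance on the unbiased meta set, which is the data-driven mechanism the abstract describes.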

Bio: Deyu Meng is a professor and doctoral advisor at Xi'an Jiaotong University, where he heads the machine learning group of the National Engineering Laboratory for Big Data Algorithms and Analysis Technology. His main research interests are fundamental problems in machine learning, computer vision, and artificial intelligence. He has published more than 100 papers, including 36 long papers in IEEE Transactions and 37 papers in CCF Class-A conferences.

Title: The power of depth in deep Q-learning

Speaker: Prof. Shao-Bo Lin, Xi'an Jiaotong University

Time: 2020-7-10, 10:00-11:00 AM

Venue: Tencent Meeting

Meeting ID: 304 179 559

Abstract: With the help of massive data and rich computational resources, deep Q-learning has been widely used in operations research and management science and has achieved great success in numerous applications, including recommender systems, games, and robotic manipulation. Compared with the avid research activity in practice, solid theoretical verification and interpretability of the success of deep Q-learning are lacking, making it somewhat of a mystery. The aim of this talk is to discuss the power of depth in deep Q-learning. In the framework of statistical learning theory, we rigorously prove that deep Q-learning outperforms the traditional version by establishing a good generalization error bound. Our results show that the main reason for the success of deep Q-learning is the excellent performance of deep neural networks (deep nets) in capturing special properties of rewards, such as spatial sparseness and piecewise constancy, rather than their large capacity. In particular, we answer why and when deep Q-learning performs better than the traditional version, and characterize the generalization capability of deep Q-learning.
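For readers unfamiliar with the baseline, here is minimal tabular Q-learning on a toy chain MDP (all MDP details are illustrative assumptions); deep Q-learning replaces the table Q with a deep network, and the talk concerns when and why that replacement generalizes better.

```python
import numpy as np

# Minimal tabular ("traditional") Q-learning on a 5-state chain MDP where the
# only reward is for reaching the right end. Deep Q-learning swaps the Q table
# for a deep net. The MDP and hyperparameters here are illustrative.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
gamma, alpha, eps = 0.9, 0.1, 0.2    # discount, learning rate, exploration
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0   # reward at the right end only
    return s2, r

for _ in range(2000):                # episodes
    s = int(rng.integers(n_states))
    for _ in range(20):              # steps per episode
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2

print(np.argmax(Q, axis=1))   # greedy policy: move right in every state
```

Note the reward here is piecewise constant in the state, exactly the kind of reward structure the abstract says deep nets capture especially well.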

Bio: Shao-Bo Lin is a professor and doctoral advisor at Xi'an Jiaotong University. His research interests include distributed learning theory, deep learning theory, and reinforcement learning theory. He has led, or participated as a core member in, nine projects funded by the National Natural Science Foundation of China, and has published more than 60 papers in renowned journals such as JMLR, ACHA, IEEE-TSP, and SIAM-JNA.