## Computational and Applied Mathematics Seminar

## Abstracts

**2019-12-19**

**Speaker:** Yunan Yang (Courant Institute, NYU)

**Title:** Inverse Data Matching in Weak Norms

**Abstract:** For many problems in modern science and engineering that involve optimization, the residual, or so-called objective/loss/misfit function, is essential for characterizing the similarity and dissimilarity between two objects. In particular, the choice of a proper measure of data discrepancy affects the accuracy, convergence rate, and stability of computational solutions of inverse problems. We will take a closer look at three notable benefits of optimization driven by weak norms. First, they offer a more convex optimization landscape, mitigating the local-minima issues of the traditional least-squares norm. Second, they precondition the iterative process of updating the solution, which improves general stability. Third, when used as the likelihood function in Bayesian inversion, they account for a broader range of noise than purely additive Gaussian noise. The discussion will mainly focus on the quadratic Wasserstein distance and the negative Sobolev norm.
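As a quick illustration of the convexity point (not part of the talk itself), in 1-D the quadratic Wasserstein distance between two normalized signals can be computed by matching inverse CDFs. A minimal numpy sketch, where the Gaussian pulses, grid, and shifts are illustrative choices:

```python
import numpy as np

def w2_squared_1d(f, g, x):
    # Squared quadratic Wasserstein distance between two nonnegative 1-D
    # signals on a uniform grid x, after normalizing each to a probability
    # density; computed by matching inverse CDFs.
    dx = x[1] - x[0]
    f = f / (f.sum() * dx)
    g = g / (g.sum() * dx)
    F = np.cumsum(f) * dx                      # cumulative distributions
    G = np.cumsum(g) * dx
    t = np.linspace(1e-6, 1 - 1e-6, 2000)
    Finv = np.interp(t, F, x)                  # inverse CDFs by interpolation
    Ginv = np.interp(t, G, x)
    return np.sum((Finv - Ginv) ** 2) * (t[1] - t[0])

x = np.linspace(-10.0, 10.0, 2001)
ref = np.exp(-x ** 2)                          # reference pulse ("data")
shifts = np.linspace(0.0, 4.0, 5)
w2 = [w2_squared_1d(np.exp(-(x - s) ** 2), ref, x) for s in shifts]
l2 = [np.sum((np.exp(-(x - s) ** 2) - ref) ** 2) * (x[1] - x[0]) for s in shifts]
# w2 grows like s^2 in the shift s, while the least-squares misfit saturates
# once the pulses no longer overlap -- the flat landscape responsible for
# local minima (cycle skipping) in waveform inversion.
```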


**2019-12-2**

**Speaker:** Prof. Kim-Chuan Toh (National University of Singapore)

**Title:** Exploiting Second Order Sparsity in Big Data Optimization

**Abstract:**

In this talk, we shall demonstrate how second order sparsity (SOS) in important optimization problems, such as sparse optimization models in machine learning, semidefinite programming, and many others, can be exploited to design highly efficient algorithms. The SOS property appears naturally when one applies a semismooth Newton (SSN) method to solve the subproblems in an augmented Lagrangian method (ALM) designed for certain classes of structured convex optimization problems. With in-depth analysis of the underlying generalized Jacobians and sophisticated numerical implementation, one can solve the subproblems at surprisingly low costs. For lasso problems with sparse solutions, the cost of solving a single ALM subproblem by our second order method is comparable to, or even lower than, that of a single iteration of many first order methods. Consequently, with the fast convergence of the SSN-based ALM, we are able to solve many challenging large-scale convex optimization problems in big data applications efficiently and robustly. For the purpose of illustration, we present a highly efficient software package called SuiteLasso for solving various well-known lasso-type problems. This talk is based on joint work with Xudong Li (Fudan U.) and Defeng Sun (PolyU).
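For context on the per-iteration cost comparison, here is what a single first-order (proximal-gradient, ISTA) iteration for the lasso looks like; the talk's claim is that one SSN-ALM subproblem solve can be comparable to or cheaper than this. A minimal sketch, with illustrative problem sizes and data (not the talk's method or SuiteLasso):

```python
import numpy as np

def ista_lasso(A, b, lam, steps=500):
    # Proximal-gradient (ISTA) iterations for
    #   min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Each iteration costs one pair of mat-vecs plus a soft-threshold.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L          # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -3.0, 1.5]         # sparse ground truth
b = A @ x_true
x = ista_lasso(A, b, lam=2.0)
# The soft-threshold step produces exact zeros, so the iterate is sparse.
```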

**About speaker:**

Dr. Toh is the Leo Tan Professor in the Department of Mathematics at the National University of Singapore (NUS). He obtained his BSc degree in Mathematics from NUS and his PhD degree in Applied Mathematics from Cornell University. He works extensively on convex programming, particularly on large-scale matrix optimization problems such as semidefinite programming and on structured convex problems arising from machine learning and statistics.

He is currently an Area Editor for Mathematical Programming Computation and an Associate Editor for SIAM Journal on Optimization, Mathematical Programming, and ACM Transactions on Mathematical Software. He has given invited talks at numerous meetings, including the 2010 SIAM Annual Meeting and the 2006 International Symposium on Mathematical Programming. He received the 2017 Farkas Prize awarded by the INFORMS Optimization Society and the 2018 triennial Beale-Orchard-Hays Prize awarded by the Mathematical Optimization Society. He is also a Fellow of the Society for Industrial and Applied Mathematics.


**2019-11-27**

**Speaker:** Haohuan Fu (Tsinghua University)

**Title:** Lessons on Integrating and Utilizing 10 Million Cores: Experience of Sunway TaihuLight

**Abstract:**

The Sunway TaihuLight supercomputer is the world's first system with a peak performance greater than 100 PFlops and a parallel scale of over 10 million cores. Unlike other existing heterogeneous supercomputers, the system adopts unique design strategies both in the architecture of its 260-core Shenwei CPU and in its way of integrating 40,960 such CPUs into 40 powerful cabinets. This talk will first introduce and discuss the design philosophy behind integrating these 10 million cores, at both the processor and the system level. Based on this system design, we will then discuss the efforts and challenges involved in utilizing the 10 million cores to push forward the frontiers of science in domains such as climate, seismology, materials science, bioinformatics, and big data analytics.

**About speaker:**

Haohuan Fu is a professor in the Ministry of Education Key Laboratory for Earth System Modeling and the Department of Earth System Science at Tsinghua University, where he leads the research group on High Performance Geo-Computing (HPGC). He is also the deputy director of the National Supercomputing Center in Wuxi, leading its research and development division. Fu received his PhD in computing from Imperial College London. His research focuses on providing both the most efficient simulation platforms and the most intelligent data management and analysis platforms for geoscience applications, work that has won two consecutive ACM Gordon Bell Prizes (for a nonhydrostatic atmospheric dynamics solver in 2016 and a nonlinear earthquake simulation in 2017).


**2019-11-20**

**Speaker:** Chang Liu (Researcher, Microsoft Research Asia (MSRA))

**Title:** Sampling Methods on Manifolds and Their View from Probability Manifolds

**Abstract:** Drawing samples from a given distribution is a fundamental task required in many real-world problems, such as Bayesian inference. The talk will cover two aspects where sampling methods meet manifolds: advances in sampling methods for distributions supported on manifolds, and the perspective on sampling methods from the Wasserstein space, a manifold of probability distributions. Sampling on manifolds is the setting of many applications (e.g., topic models, Bayesian matrix factorization), and incorporating a proper metric (e.g., from information geometry) also helps improve the efficiency of sampling on Euclidean spaces. Advances in methods for this task include MCMC methods that are computationally efficient and particle-based variational inference methods (ParVIs) that are particle-efficient. For the manifold view of general sampling methods, ParVIs have a natural optimization interpretation, and this is made concrete on the Wasserstein space as a gradient flow, which provides a deeper understanding of ParVIs and inspiration for improving them. With augmented structures on the Wasserstein space, general dynamics-based MCMC methods can also be treated as certain flows on the Wasserstein space. The structure of the flow reveals the behavior of MCMCs (convergence, robustness to stochastic gradients) and draws a general connection to ParVIs.
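As one concrete ParVI instance (illustrative, not necessarily the method of the talk), Stein variational gradient descent (SVGD) moves a set of particles along a kernelized velocity field that combines attraction toward high-density regions with pairwise repulsion. A minimal 1-D sketch with an RBF kernel and the median bandwidth heuristic; the target, step size, and particle count are arbitrary choices:

```python
import numpy as np

def svgd_step(x, grad_logp, eps=0.05):
    # One SVGD update for particles x of shape (n, d).
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]       # diff[i, j] = x_i - x_j
    sq = (diff ** 2).sum(-1)                   # squared pairwise distances
    h = np.median(sq) / np.log(n + 1) + 1e-12  # median bandwidth heuristic
    k = np.exp(-sq / h)                        # RBF kernel matrix
    attract = k @ grad_logp(x)                 # smoothed drive toward mass
    repulse = (2.0 / h) * (diff * k[..., None]).sum(1)  # keeps particles spread
    return x + eps * (attract + repulse) / n

# Target: standard normal, so grad log p(x) = -x.
rng = np.random.default_rng(1)
x = rng.normal(5.0, 1.0, size=(50, 1))         # particles start far from target
for _ in range(1000):
    x = svgd_step(x, lambda x: -x)
# Particles migrate toward 0, and the kernel repulsion keeps them from
# collapsing onto the mode, so they spread to approximate the target.
```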

**2019-10-23**

**Speaker:** 于海军 Haijun Yu (Chinese Academy of Sciences)

**Title:** Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units


**2019-8-22**

**Speaker:** Xinghui Zhong (Zhejiang University)

**Title:** Compact WENO Limiters for Discontinuous Galerkin Methods

**Abstract:** The discontinuous Galerkin (DG) method is a class of finite element methods that has gained popularity in recent years due to its flexibility on arbitrarily unstructured meshes, its compact stencil, and its ability to easily accommodate arbitrary h-p adaptivity. However, some challenges still remain in specific application problems. In this talk, we design compact limiters using the weighted essentially non-oscillatory (WENO) methodology for DG methods solving hyperbolic conservation laws, with the goal of obtaining a robust and high order limiting procedure that simultaneously achieves uniform high order accuracy and sharp, non-oscillatory shock transitions. The main advantage of these compact limiters is their simplicity of implementation, especially on multi-dimensional unstructured meshes.
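The core WENO ingredient, nonlinear weights that bias the reconstruction toward smooth substencils, can be illustrated with the classical fifth-order finite-volume reconstruction of Jiang and Shu (a sketch of the methodology, not the compact DG limiter of the talk):

```python
import numpy as np

def weno5_reconstruct(v):
    # Fifth-order WENO reconstruction of u at the right interface of cell i
    # from five cell averages v = (v[i-2], ..., v[i+2]).
    eps = 1e-6
    # third-order candidate reconstructions on the three substencils
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6.0
    p1 = (   -v[1] + 5 * v[2] +  2 * v[3]) / 6.0
    p2 = (2 * v[2] + 5 * v[3] -      v[4]) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12 * (v[0] - 2*v[1] + v[2])**2 + 0.25 * (v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12 * (v[1] - 2*v[2] + v[3])**2 + 0.25 * (v[1] - v[3])**2
    b2 = 13/12 * (v[2] - 2*v[3] + v[4])**2 + 0.25 * (3*v[2] - 4*v[3] + v[4])**2
    # nonlinear weights: substencils crossing a discontinuity get (almost)
    # zero weight, which suppresses Gibbs oscillations at shocks
    a = np.array([0.1 / (eps + b0)**2, 0.6 / (eps + b1)**2, 0.3 / (eps + b2)**2])
    w = a / a.sum()
    return w[0] * p0 + w[1] * p1 + w[2] * p2

smooth = weno5_reconstruct(np.array([0.0, 1.0, 2.0, 3.0, 4.0]))  # linear data
step   = weno5_reconstruct(np.array([0.0, 0.0, 0.0, 1.0, 1.0]))  # shock data
```

For smooth (here linear) data the weights revert to the optimal linear values and the interface value 2.5 is reproduced; for the step the reconstruction stays near 0 instead of oscillating.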


**2019-5-29**

**Speaker:** 李仁仓 Ren-Cang Li (University of Texas at Arlington)

**Title:** Convergence of Lanczos Methods for Trust-Region Subproblem

**2019-5-21**

**Speaker:** 邓又军 Prof. Youjun Deng (Central South University)

**Title:** On spectral properties and asymptotic behavior of Neumann-Poincaré operators on spheres

**Abstract:** In this talk, I will present our recent results on the spectral properties of the Neumann-Poincaré operator for the 3D elastostatic system in spherical geometry, as well as the spectra of Neumann-Poincaré operators for Helmholtz and elastic systems at finite frequency. Some applications will also be presented.