Speaker: Kate Wenqi Zhu (Ph.D. Student, University of Oxford)
Schedule: Thursday, 16:00-17:00, Aug 28, 2025
Venue: Shuangqing B627; Tencent Meeting (VooV): 232-746-893
Date: 2025-08-28
Host: Jin-Peng Liu 刘锦鹏
Abstract: Traditionally, first-order gradient-based techniques, such as stochastic gradient descent (SGD), and second-order methods, such as Newton's method, have dominated the field of optimization. In recent years, high-order tensor methods with regularization for nonconvex optimization have garnered significant research interest. These methods offer superior local convergence rates, improved worst-case evaluation complexity, enhanced insights into data geometry through higher-order information, and better parallelization compared to SGD.
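For concreteness, a minimal sketch of the regularized $p$th-order model underlying such methods (the standard adaptive-regularization form; the talk's exact formulation may differ):

```latex
% At iterate x_k, the step s is chosen to (approximately) minimize the
% p-th degree Taylor expansion of f plus a (p+1)-th order regularizer:
m_k(s) = f(x_k) + \sum_{j=1}^{p} \frac{1}{j!} \nabla^j f(x_k)[s]^j
       + \frac{\sigma_k}{p+1} \|s\|^{p+1},
% where \sigma_k > 0 is an adaptively updated regularization weight.
```

For $p = 2$ this reduces to the well-known cubic regularization of Newton's method; the talk concerns the substantially harder cases $p \geq 3$.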
The most critical challenge in implementing the $p$th-order method ($p \geq 3$) lies in efficiently minimizing the $p$th-order subproblem, which typically consists of a $p$th-degree multivariate Taylor polynomial combined with a $(p+1)$th-order regularization term. In this talk, we address these research gaps by characterizing the local and global optimality of the subproblem and investigating its potential NP-hardness. We then introduce and discuss a series of provably convergent and efficient algorithms for minimizing the regularized subproblem both locally and globally, including the Quadratic Quartic Regularization Method (QQR), the Cubic Quartic Regularization Method (CQR), and the Sums-of-Squares Convex Taylor Method (SoS-C). More interestingly, our research adopts an AI-integrated approach, using the mathematical reasoning capabilities of large language models (LLMs) to verify the nonnegativity of multivariate polynomials. This problem is closely related to Hilbert's Seventeenth Problem and the challenge of globally minimizing subproblems.
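To illustrate the kind of nonnegativity certificate involved, here is a toy sketch (not the talk's LLM pipeline): a sum-of-squares (SOS) decomposition, here chosen by hand, proves a polynomial is nonnegative everywhere, and the identity can be checked symbolically with SymPy.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Candidate quartic polynomial whose global nonnegativity we want to certify
p = x**4 - 2*x**2*y**2 + y**4

# Hand-picked SOS certificate: p = (x^2 - y^2)^2
sos = (x**2 - y**2)**2

# Expanding the difference to zero verifies the identity exactly,
# which proves p(x, y) >= 0 for all real x, y
assert sp.expand(p - sos) == 0
print("SOS certificate verified: polynomial is nonnegative")
```

Note that not every nonnegative polynomial is a sum of squares of polynomials (Motzkin's example $x^4y^2 + x^2y^4 - 3x^2y^2 + 1$ is a classical counterexample); Hilbert's Seventeenth Problem, answered affirmatively by Artin, asks whether every nonnegative polynomial is a sum of squares of rational functions.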
Bio: Kate Wenqi Zhu is a fourth-year Ph.D. student in Applied Mathematics at the University of Oxford, under the supervision of Professor Coralia Cartis, and is fully funded by the CIMDA–Oxford Studentship. Her research focuses on leveraging higher-order information for efficient nonconvex optimization, with interests spanning computational complexity analysis, tensor approximation, sum-of-squares techniques, implementable high-order subproblem solvers, and adaptive regularization methods. She completed both her undergraduate and first master's degrees in Mathematics at Oxford, followed by an M.Sc. in Mathematical Modelling and Scientific Computing, supervised by Professor Yuji Nakatsukasa.
