Long Chen: A Gradient Descent Akin Method for Constrained Optimization
Motivated by applications in large-scale shape optimization and inspired by singular value decomposition, we present a "gradient descent akin method" (GDAM) for solving constrained optimization problems. At each iteration, we compute a search direction as a linear combination, weighted by a parameter ζ, of the negated and normalized objective and constraint gradients. While the underlying idea of GDAM is similar to that of gradient descent, we show its connection to the classical logarithmic-barrier interior-point method and argue that it can be considered a first-order interior-point method. The convergence behavior of the method is studied using a dynamical-systems approach. In particular, we show that the continuous-time optimization trajectory finds local solutions by asymptotically converging to the central path(s) of the barrier interior-point method. Furthermore, we show that the convergence rate of the method is bounded in terms of ζ. Numerical examples are reported, including both standard test problems and real-world applications in shape optimization. Finally, we present recent progress on the practical implementation of GDAM, which incorporates Nesterov's accelerated gradient method.
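For readers curious about the core idea before the talk, the following minimal Python sketch illustrates one plausible reading of the search-direction computation described in the abstract. The function name, the sign conventions, the placement of ζ, and the toy objective and constraint are all illustrative assumptions; the exact formula is given in the talk and the accompanying paper.

```python
import numpy as np

def gdam_direction(grad_f, grad_g, zeta):
    """Hypothetical GDAM-style search direction: a linear combination,
    weighted by zeta, of the negated and normalized objective gradient
    (grad_f) and constraint gradient (grad_g). This is an illustrative
    reading of the abstract, not the authors' exact formula."""
    n_f = -grad_f / np.linalg.norm(grad_f)  # normalized descent direction of the objective
    n_g = -grad_g / np.linalg.norm(grad_g)  # normalized descent direction of the constraint
    return zeta * n_f + n_g                 # assumed combination; the talk gives the exact form

# Example evaluation at a strictly feasible point of a toy problem:
# f(x) = x0 + x1, with the inequality constraint g(x) = ||x||^2 - 1 <= 0.
x = np.array([0.5, 0.2])
d = gdam_direction(np.array([1.0, 1.0]), 2.0 * x, zeta=0.9)
print(d)
```

Note that for a strictly feasible point (g(x) < 0), the −∇g/‖∇g‖ term points away from the constraint boundary, which is consistent with the interior-point interpretation mentioned in the abstract.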
How to join online
The talk is held in hybrid format; you can join online via Zoom with the following link:
https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09
Speaker: Dr. Long Chen, Chair for Scientific Computing (SciComp), TU Kaiserslautern
Time: 12:00 (noon)
Location: Hybrid (Room 32-349 and via Zoom)