Andrea Walther, On a semismooth conjugate gradient method

In machine learning and other large-scale applications, deterministic and stochastic variants of the steepest descent method are nowadays widely used to minimize objectives that are only piecewise smooth. As an alternative, this talk presents a deterministic descent method based on a generalization of the rescaled conjugate gradient method proposed by Phil Wolfe in 1975 for convex objectives. Dropping the convexity assumption, the new method exploits semismoothness to obtain conjugate pairs of generalized gradients, so that it can converge only to Clarke stationary points. In addition to the theoretical analysis, we present preliminary numerical results.
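For readers unfamiliar with the smooth template that the talk generalizes, the following is a minimal sketch of a classic nonlinear conjugate gradient method (Fletcher–Reeves with a backtracking line search) on a smooth convex quadratic. It is purely illustrative background: the semismooth variant presented in the talk, with its conjugate pairs of generalized gradients, is not reproduced here, and all function and variable names are our own.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Classic nonlinear CG (Fletcher-Reeves) with Armijo backtracking.

    Illustrates the smooth conjugate gradient template only; the talk's
    semismooth generalization is not implemented here.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g  # initial search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # restart if d is not a descent direction
        # Backtracking (Armijo) line search along d
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # Fletcher-Reeves update of the conjugate direction
        beta = g_new.dot(g_new) / g.dot(g)
        d = -g_new + beta * d
        g = g_new
    return x

# Example: minimize the convex quadratic 0.5 x^T A x - b^T x,
# whose unique minimizer solves A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = fletcher_reeves_cg(f, grad, np.zeros(2))
```

For a piecewise smooth objective the gradient above would be replaced by an element of the Clarke generalized gradient, which is where the semismoothness assumption of the talk enters.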

How to join online

The talk is held online via Zoom. You can join with the following link:
https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

Speaker: Prof. Andrea Walther, Department of Mathematics, Humboldt University Berlin

Time: 12:00 (noon)

Location: Hybrid (Room 32-349 and via Zoom)