Friday, 11:10 - 12:30 - Room 146
Stream: Convex and non-smooth optimization
Chair:
The theory of subdifferentials provides adequate methods and tools to put descent methods for nonsmooth optimization problems into practice. However, exact information about the whole subdifferential is often unavailable for locally Lipschitz continuous functions, e.g. for marginal functions in parametric mathematical programming.
Based on the continuous outer subdifferentials we developed, this talk presents a new strategy for optimization problems with locally Lipschitzian cost functions. A descent method will be developed and its convergence will be proven.
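The new method itself is not given in the abstract; as background only, a minimal sketch of the classical subgradient descent that such approaches refine (the example function and diminishing step size are illustrative assumptions, not the talk's scheme):

```python
# Classical subgradient descent for a nonsmooth convex function
# (background illustration; NOT the talk's method based on
# continuous outer subdifferentials).

def subgradient_descent(f, subgrad, x0, steps=500):
    """Minimize f using one subgradient per iterate and a
    diminishing step size t_k = 1/(k+1); return best value seen."""
    x = list(x0)
    best = f(x)
    for k in range(steps):
        g = subgrad(x)                  # any element of the subdifferential
        t = 1.0 / (k + 1)               # diminishing, non-summable steps
        x = [xi - t * gi for xi, gi in zip(x, g)]
        best = min(best, f(x))
    return best

# Example: f(x) = |x1| + 2|x2|, minimized at the origin with value 0.
f = lambda x: abs(x[0]) + 2 * abs(x[1])
sg = lambda x: [(1 if x[0] >= 0 else -1), 2 * (1 if x[1] >= 0 else -1)]
print(subgradient_descent(f, sg, [3.0, -2.0]))
```

Note that this classical scheme needs only one subgradient per point; the difficulty the abstract addresses is that even this single element can be unavailable exactly, e.g. for marginal functions.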
Chance constrained optimization (CCOPT) problems are in general nonconvex and nondifferentiable. Nonsmooth analysis therefore becomes the key issue in any approach to CCOPT problems. It is well known that CCOPT problems are numerically intractable, so the existing methods of finite-dimensional sampling or nonsmooth optimization are not directly applicable. This work therefore proposes a smoothing method for the approximate solution of nonsmooth and nonconvex CCOPT problems on Banach spaces.
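The abstract does not state which smoothing is used; as a generic illustration of the idea, the CHKS function phi_tau(t) = (t + sqrt(t^2 + 4 tau^2))/2 is a standard smooth approximation of the nonsmooth plus function max(0, t), with worst-case error exactly tau:

```python
import math

# Generic smoothing of the nonsmooth plus function t -> max(0, t).
# The CHKS (Chen-Harker-Kanzow-Smale) function
#     phi_tau(t) = (t + sqrt(t**2 + 4*tau**2)) / 2
# is smooth for tau > 0 and converges uniformly to max(0, t)
# as tau -> 0; the maximal error, attained at t = 0, equals tau.

def plus(t):
    return max(0.0, t)

def chks(t, tau):
    return 0.5 * (t + math.sqrt(t * t + 4.0 * tau * tau))

for tau in (1.0, 0.1, 0.01):
    err = max(abs(chks(t / 10.0, tau) - plus(t / 10.0))
              for t in range(-30, 31))
    print(f"tau={tau}: max error {err:.4f}")
```

Driving tau to zero along the iterations turns a nonsmooth problem into a sequence of smooth ones; this is the general pattern, not the specific Banach-space construction of the talk.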
Finding zeros of the sum of two monotone operators is of interest in image processing, optimal transportation and statistics.
The main primal methods are: the forward-backward method, the Douglas-Rachford method and the forward-backward-forward method.
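As a concrete instance of the first of these, a minimal forward-backward (proximal gradient) sketch on an assumed toy model min_x f(x) + g(x) with f(x) = 0.5(x-3)^2 smooth and g(x) = |x| prox-friendly:

```python
# Forward-backward splitting for min_x f(x) + g(x):
# a gradient step on the smooth part f ("forward") followed by
# a proximal step on the nonsmooth part g ("backward").
# Toy model: f(x) = 0.5*(x - 3)**2, g(x) = |x|; minimizer x = 2.

def soft_threshold(v, t):
    # prox of t*|.| evaluated at v
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def forward_backward(x0, gamma=0.5, iters=200):
    x = x0
    for _ in range(iters):
        grad = x - 3.0                                # forward: grad of f
        x = soft_threshold(x - gamma * grad, gamma)   # backward: prox of g
    return x

print(forward_backward(10.0))
```

The Douglas-Rachford and forward-backward-forward methods replace this update with different combinations of the two resolvent/gradient evaluations, but follow the same fixed-point pattern.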
Primal-dual methods solve the problem by generating a sequence of points converging to a point from the Kuhn-Tucker set.
In this presentation we propose a primal-dual method with inertial effect, i.e. the update formula for the next approximate takes into account the two previous approximates.
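The talk's primal-dual scheme is not reproduced in the abstract; the inertial pattern itself, i.e. extrapolating through the two previous approximates y_k = x_k + alpha (x_k - x_{k-1}) before the usual update, can be sketched on an assumed toy model (an inertial forward-backward step on min 0.5(x-3)^2 + |x|):

```python
# Inertial iteration pattern: the new approximate depends on the
# two previous ones, x_k and x_{k-1}, through the extrapolated point
#     y_k = x_k + alpha * (x_k - x_{k-1}).
# Sketch on the toy model min 0.5*(x - 3)**2 + |x| (minimizer x = 2);
# alpha and gamma are illustrative choices.

def soft_threshold(v, t):
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def inertial_forward_backward(x0, alpha=0.3, gamma=0.5, iters=200):
    x_prev, x = x0, x0
    for _ in range(iters):
        y = x + alpha * (x - x_prev)      # inertial extrapolation
        x_prev, x = x, soft_threshold(y - gamma * (y - 3.0), gamma)
    return x

print(inertial_forward_backward(10.0))
```

The inertial term typically accelerates convergence in practice at no extra cost per iteration, since x_{k-1} is already available.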
Among the integer linear programming formulations of the symmetric traveling salesman problem (STSP), the multistage-insertion formulation (MI-formulation) and the subtour elimination formulation attain the same value for their linear relaxations. Computational studies comparing different formulations indicate the superior performance of the MI-formulation. The Leontief structure of the sub-problem of the MI problem provides additional algorithmic approaches to solving the MI-relaxation problem using Lagrangean relaxation. We also show that the problem is a minimum cost flow problem in a hypergraph, which yields new algorithms.