14th EUROPT Workshop on Advances in Continuous Optimization

Warsaw (Poland), July 1-2, 2016


Friday, 14:00 - 15:00 - Room 161


Stream: Large scale optimization

Chair: Massimo Roma

  1. Preconditioning Techniques for Nonlinear Conjugate Gradient Methods Based on Damped Quasi Newton Updates

    Massimo Roma, Mehiddin Al-Baali, Andrea Caliciotti, Giovanni Fasano

    In this paper we deal with preconditioning techniques for nonlinear conjugate gradient (NCG) methods in large-scale unconstrained optimization. We combine information collected by the NCG scheme with appealing properties of quasi-Newton updates, in particular the recent damped quasi-Newton techniques. Our aim is to approximate, in some sense, the inverse of the Hessian matrix, while still preserving the information provided by the satisfaction of the secant equation or a modification of it. The preliminary numerical experience described confirms the effectiveness of the proposed approach.
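
    The flavour of such a scheme can be sketched as follows. This is not the authors' actual method, only a minimal illustration of the ingredients named in the abstract: a memoryless BFGS inverse update, with Powell-style damping of the pair (s, y) (assuming an identity base matrix), used as a preconditioner inside a Polak-Ribiere NCG iteration with Armijo backtracking. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def damped_memoryless_bfgs_precond(s, y, v):
        """Apply the memoryless BFGS inverse update (base H0 = I) to v,
        with Powell-style damping of y so that s^T y stays positive."""
        ss, sy = s @ s, s @ y
        if ss == 0.0:
            return v.copy()
        if sy < 0.2 * ss:                        # damping (B0 = I assumed)
            theta = 0.8 * ss / (ss - sy)
            y = theta * y + (1.0 - theta) * s    # now s^T y = 0.2 s^T s > 0
            sy = s @ y
        rho = 1.0 / sy
        w = v - rho * y * (s @ v)                # w = (I - rho y s^T) v
        return w - rho * s * (y @ w) + rho * s * (s @ v)

    def precond_ncg(f, grad, x0, iters=500, tol=1e-8):
        """Preconditioned PR+ nonlinear CG with Armijo backtracking."""
        x = np.asarray(x0, dtype=float).copy()
        g = grad(x)
        pg = g.copy()                            # preconditioned gradient
        d = -pg
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            gd = g @ d
            if gd >= 0.0:                        # safeguard: restart
                d = -pg
                gd = g @ d
            t, fx = 1.0, f(x)
            while f(x + t * d) > fx + 1e-4 * t * gd and t > 1e-12:
                t *= 0.5                         # Armijo backtracking
            x_new = x + t * d
            g_new = grad(x_new)
            pg_new = damped_memoryless_bfgs_precond(x_new - x, g_new - g, g_new)
            beta = max(0.0, g_new @ (pg_new - pg) / (g @ pg))  # precond. PR+
            d = -pg_new + beta * d
            x, g, pg = x_new, g_new, pg_new
        return x
    ```

    The damping step guarantees that the curvature condition holds even when the collected pair would violate it, so the preconditioner remains positive definite.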

  2. A controller for asymptotically full damping of singularly perturbed linear autonomous systems with delay

    Tsekhan Olga

    For a linear autonomous singularly perturbed system with a small parameter multiplying the highest derivative of part of the variables, and with delay in the slow state variables, we construct a dynamic composite controller, independent of the small parameter, according to the feedback principle, such that for all sufficiently small values of the parameter the closed-loop system is asymptotically stable in the sense of Lyapunov, and the state of the system and the control law are close to zero (of the order of smallness of the parameter) from some point in time onward.
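
    A commonly used form of this class of systems (the notation below is an assumption, not taken from the talk) is

    ```latex
    \begin{aligned}
    \dot{x}(t)       &= A_{1}\,x(t) + A_{2}\,x(t-h) + A_{3}\,y(t) + B_{1}\,u(t),\\
    \mu\,\dot{y}(t)  &= A_{4}\,x(t) + A_{5}\,x(t-h) + A_{6}\,y(t) + B_{2}\,u(t),
    \end{aligned}
    ```

    where $x$ is the slow state (subject to the delay $h$), $y$ is the fast state, $u$ is the control, and $0 < \mu \ll 1$ is the small parameter multiplying the highest derivative of the fast variables; the composite feedback controller is then designed independently of $\mu$.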

  3. A new price coordination algorithm for decomposed convex optimization

    Andrzej Karbowski

    A separable, additive, convex optimization problem is considered. Under quite moderate assumptions, Lagrangian decomposition, based on the relaxation of the coupling constraints, can be applied, which leads to a hierarchical, two-level scheme with the dual problem on the upper level and the decomposed primal problem on the lower level [1]. A new gradient algorithm for the upper level, which uses more information about the algorithm applied on the lower level, will be proposed.

    Ref. [1] Karbowski A., Comm. on "Optimization Flow Control...", IEEE/ACM Trans. on Networking, 11(2): 338-339, 2003.
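
    The classical two-level price-coordination scheme that the talk builds on can be sketched on a toy instance (this is the standard dual gradient method, not the new algorithm proposed in the talk; all data are illustrative): two quadratic subproblems coupled by a single resource constraint, with the upper level adjusting the price by a gradient step on the dual function.

    ```python
    # Toy separable problem:  min (x1 - a1)^2 + (x2 - a2)^2  s.t.  x1 + x2 = c.
    # Relaxing the coupling constraint with price lam decomposes it into
    # independent lower-level subproblems  min_i (x_i - a_i)^2 + lam * x_i,
    # each solvable in closed form:  x_i = a_i - lam / 2.
    a, c = (3.0, 5.0), 4.0

    def lower_level(lam):
        """Decomposed primal subproblems (solved analytically here)."""
        return [ai - lam / 2.0 for ai in a]

    def price_coordination(alpha=0.5, iters=100):
        """Upper level: gradient ascent on the dual function; its gradient
        is the coupling-constraint residual at the lower-level solutions."""
        lam = 0.0
        for _ in range(iters):
            x = lower_level(lam)
            lam += alpha * (sum(x) - c)   # price update
        return lam, lower_level(lam)
    ```

    For this instance the iteration contracts to the optimal price lam = 4 with coordinated solution (x1, x2) = (1, 3), which satisfies the coupling constraint exactly. The algorithm proposed in the talk refines the upper-level step by exploiting knowledge of the lower-level solver.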