WORHP
*Large-scale Sparse Nonlinear Optimisation*

Let the optimisation variables be x = (x1, x2, x3, x4)^T. The objective function of interest is

    F(x) = x1^2 + 2*x2^2 - x3,

which is minimised with respect to the nonlinear constraint

    G1(x) = x1^2 + x3^2 + x1*x3

and the linear constraints

    G2(x) = x3 - x4    and    G3(x) = x2 + x4.

Furthermore, the optimisation variables must satisfy the box constraints xL <= x and x <= xU.
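The problem functions can be written down directly from the derivative vectors listed below. As a minimal, self-contained Python sketch (for reference only, not WORHP API code; `F` and `G` are our own names, and `x` uses Python's 0-based indexing, so `x[0]` is x1):

```python
def F(x):
    # Objective: F(x) = x1^2 + 2*x2^2 - x3
    return x[0]**2 + 2.0*x[1]**2 - x[2]

def G(x):
    # Constraint vector: one nonlinear and two linear constraints
    g1 = x[0]**2 + x[2]**2 + x[0]*x[2]   # nonlinear
    g2 = x[2] - x[3]                     # linear
    g3 = x[1] + x[3]                     # linear
    return [g1, g2, g3]
```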

The WORHP-specific formulation requires constraints or variables without an upper bound to be marked with infinity, and those without a lower bound with minus infinity.
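As an illustration of this convention, a small Python sketch (not WORHP API code; `to_worhp_bounds` is our own hypothetical helper) that normalises missing bounds to plus or minus infinity before they are handed to a solver:

```python
import math

def to_worhp_bounds(lower, upper):
    """Replace missing bounds (None) by -inf / +inf, as WORHP expects."""
    xl = [(-math.inf if l is None else l) for l in lower]
    xu = [(math.inf if u is None else u) for u in upper]
    return xl, xu
```

For example, `to_worhp_bounds([0.0, None], [None, 1.0])` yields lower bounds `[0.0, -inf]` and upper bounds `[inf, 1.0]`.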

WORHP uses derivative-based optimisation to minimise the given problem, so analytic derivatives should be supplied to improve solver performance. In this section we derive the derivatives for our example; the implementations in the different programming languages can be found in the next section. Remember to take the indexing convention of your programming language of choice into account (FORTRAN is 1-based, C/C++ 0-based). We will use FORTRAN 1-based vectors within this section. Please have a look at the section "Sparse Matrices" within our manual for further information on the sorting requirements for the matrix entries within WORHP. Additionally, WORHP uses a scaling factor for the objective function (`wsp%ScaleObj`), which must be applied when computing the objective function value, the gradient of the objective, and the Hessian of the Lagrangian.

The gradient of the objective function for our example is given by

    DF(x) = (2*x1, 4*x2, -1, 0)^T.

The implementation uses sparse matrices for the derivatives. Therefore, this gradient leads to the row vector
`DFrow = [1, 2, 3]` and the value vector `DFval = [wsp%ScaleObj * 2.0 * x(1), wsp%ScaleObj * 4.0 * x(2), wsp%ScaleObj * (-1.0)]`. If you do not provide
an implementation of the objective gradient, you must set the parameter `UserDF = False`.
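The sparse gradient entries can be cross-checked against central finite differences. A minimal Python sketch (our own code, not part of WORHP; `scale_obj` stands in for `wsp%ScaleObj`, and `gradient_check` returns the largest deviation between the analytic entries and the finite differences):

```python
def F(x):
    # Objective (0-based indexing here): F(x) = x1^2 + 2*x2^2 - x3
    return x[0]**2 + 2.0*x[1]**2 - x[2]

def DF(x, scale_obj=1.0):
    # Sparse gradient in WORHP-style form; rows are 1-based as in the text
    DFrow = [1, 2, 3]
    DFval = [scale_obj * 2.0 * x[0], scale_obj * 4.0 * x[1], scale_obj * (-1.0)]
    return DFrow, DFval

def gradient_check(x, h=1e-6):
    # Compare each sparse entry against a central finite difference of F
    rows, vals = DF(x)
    err = 0.0
    for r, v in zip(rows, vals):
        i = r - 1                      # 1-based row index -> 0-based
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        err = max(err, abs((F(xp) - F(xm)) / (2.0 * h) - v))
    return err
```

Such a check is a cheap safeguard against sign and indexing mistakes before handing the callbacks to the solver.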

The Jacobian of the constraint function is given by

    DG(x) = [ 2*x1 + x3    0    2*x3 + x1     0 ]
            [     0        0        1        -1 ]
            [     0        1        0         1 ]

The row vector is `DGrow = [1, 3, 1, 2, 2, 3]`, the column vector is `DGcol = [1, 2, 3, 3, 4, 4]`
and the value vector is `DGval = [2.0*x(1) + x(3), 1.0, 2.0*x(3) + x(1), 1.0, -1.0, 1.0]`. If you do not provide an
implementation of the Jacobian of the constraints, you must set the parameter
`UserDG = False`.
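Note that the triplets are sorted by column, as required by the sorting rules in the "Sparse Matrices" section. As with the gradient, the entries can be validated numerically; a self-contained Python sketch (our own helper, not WORHP API code):

```python
def G(x):
    # Constraint functions matching the Jacobian entries (0-based x)
    return [x[0]**2 + x[2]**2 + x[0]*x[2],  # G1: nonlinear
            x[2] - x[3],                    # G2: linear
            x[1] + x[3]]                    # G3: linear

def DG(x):
    # Sparse Jacobian triplets exactly as listed in the text (1-based indices)
    DGrow = [1, 3, 1, 2, 2, 3]
    DGcol = [1, 2, 3, 3, 4, 4]
    DGval = [2.0*x[0] + x[2], 1.0, 2.0*x[2] + x[0], 1.0, -1.0, 1.0]
    return DGrow, DGcol, DGval

def jacobian_check(x, h=1e-6):
    # Compare each triplet against a central finite difference of G;
    # returns the largest absolute deviation
    rows, cols, vals = DG(x)
    err = 0.0
    for r, c, v in zip(rows, cols, vals):
        i, j = r - 1, c - 1
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        err = max(err, abs((G(xp)[i] - G(xm)[i]) / (2.0 * h) - v))
    return err
```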

The implemented algorithms require the use of the Lagrangian function

    L(x, Mu) = wsp%ScaleObj * F(x) + Mu^T G(x),

with the scaling factor (`wsp%ScaleObj`) applied to the objective part of the Lagrangian. One advantage of the Sequential Quadratic Programming and Interior-Point approaches used here is that local quadratic convergence is theoretically achievable. To reach this order of convergence,
second-order derivatives of the Lagrangian must be used. Due to symmetry, only the lower triangular part of the matrix needs to be stored.
Additionally, WORHP requires a special sorting of the sparse entries: all entries on the diagonal must be given,
even structural zeros, and the non-diagonal entries must come first. For deeper insight, please have a look
at the WORHP manual.
The row vector is `HMrow = [3, 1, 2, 3, 4]`, the column vector is `HMcol = [1, 1, 2, 3, 4]`
and the value vector is `HMval = [Mu(1), wsp%ScaleObj * 2.0 + 2.0*Mu(1), wsp%ScaleObj * 4.0, 2.0*Mu(1), 0.0]`. If you do not provide an
implementation of the Hessian of the Lagrangian, you must set the parameter `UserHM = False`.
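The Hessian entries follow from differentiating the Lagrangian gradient, with the scaling factor appearing only in the objective terms. A hedged Python sketch that cross-checks them by differencing the gradient of L (our own code and names, assuming L(x, Mu) = wsp%ScaleObj * F(x) + Mu^T G(x); `s` stands in for `wsp%ScaleObj`):

```python
def lagrangian_grad(x, mu, s):
    # Gradient of L(x, mu) = s*F(x) + mu^T G(x); 0-based indexing
    dF  = [2.0*x[0], 4.0*x[1], -1.0, 0.0]
    dG1 = [2.0*x[0] + x[2], 0.0, 2.0*x[2] + x[0], 0.0]
    dG2 = [0.0, 0.0, 1.0, -1.0]
    dG3 = [0.0, 1.0, 0.0, 1.0]
    return [s*dF[k] + mu[0]*dG1[k] + mu[1]*dG2[k] + mu[2]*dG3[k]
            for k in range(4)]

def HM(x, mu, s):
    # Lower-triangular Hessian entries: off-diagonal first, then the full
    # diagonal including structural zeros (1-based indices, as in the text)
    HMrow = [3, 1, 2, 3, 4]
    HMcol = [1, 1, 2, 3, 4]
    HMval = [mu[0],               # (3,1): cross term x1*x3 of G1
             s*2.0 + 2.0*mu[0],   # (1,1): objective part scaled by s
             s*4.0,               # (2,2)
             2.0*mu[0],           # (3,3)
             0.0]                 # (4,4): structural zero
    return HMrow, HMcol, HMval

def hessian_check(x, mu, s=1.0, h=1e-6):
    # Difference the Lagrangian gradient to verify each sparse entry;
    # returns the largest absolute deviation
    rows, cols, vals = HM(x, mu, s)
    err = 0.0
    for r, c, v in zip(rows, cols, vals):
        i, j = r - 1, c - 1
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        g_p = lagrangian_grad(xp, mu, s)
        g_m = lagrangian_grad(xm, mu, s)
        err = max(err, abs((g_p[i] - g_m[i]) / (2.0 * h) - v))
    return err
```

Because the linear constraints G2 and G3 have zero Hessians, only the objective and the nonlinear constraint G1 contribute entries.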

Please check out the source code in the language of your choice for further insight:

```
WorhpFromXML: Used parameter file worhp.xml
 * Read 265/265 parameters.

-------------------------------------------------------
 This is WORHP 1.9-1803714, the European NLP-solver.
 Use of WORHP is subject to terms and conditions.
 Visit http://www.worhp.de for more information.
-------------------------------------------------------

Total number of variables ........................ 4
    fixed variables                                0
    variables with lower bound only                2
    variables with lower and upper bound           2
    variables with upper bound only                0
Total number of box constraints .................. 6
Total number of other constraints ................ 4
    equality constraints                               1
    inequality constraints with lower bound only       0
    inequality constraints with lower and upper bound  1
    inequality constraints with upper bound only       1

Gradient (user)  3/4  = 75.000%
Jacobian (user)  6/12 = 50.000%
Hessian  (user)  5/10 = 50.000%

Algorithm           Sequential Quadratic Programming
NLP MaxIter         10000
Line Search Method  Filter
QP MaxIter          500
LA solver           MA97 (tol 1.0E-09, ref 10, ord METIS/AMD, scl none)
Tolerances:
Optimality (sKKT)  1.00E-06 (1.0E-03)   IP ComTol  2.00E-07
Feasibility        1.00E-06 (1.0E-03)   IP ResTol  4.00E-08
Complementarity    1.00E-06             Timeout    1800.000 seconds

ITER     OBJ             CON            OPTI/COMPL     FLAGS  ALPHA    |DX|     REL PEN  REG TIME
[ 0|  6] +1.10000000E+01 6.00000000E+00 3.73381687E+00 sc Uin 0.00E+00 3.17E+00  -   -    -  0.00E+00
[ 1|  4] -1.51500000E+00 4.56000000E+00 1.90035937E+00 sc Uin 1.00E+00 3.17E+00 0.5  -  -2.9 0.00E+00
[ 2| 17] -3.69786640E-01 1.43771018E+00 7.01177962E-01 sc Uin 1.00E+00 8.62E-01 0.2  -  -3.2 0.00E+00
[ 3|  9] -4.56045101E-01 2.52414375E-01 9.41480289E-02 sc Uin 1.00E+00 4.36E-01  -   -  -3.4 0.00E+00
[ 4|  2] -4.99062234E-01 3.15607386E-02 1.50160622E-02 so Uin 1.00E+00 1.78E-01  -   -  -3.7 0.00E+00
[ 5|  2] -4.99999222E-01 8.82624972E-04 2.25791466E-04 so Uaa 1.00E+00 2.97E-02  -   -  -4.2 0.00E+00
[ 6|  1] -5.00000000E-01 7.77196114E-07 1.42460414E-08 so Ufo 1.00E+00 8.81E-04 0.0  -  -4.6 4.00E-03

Final values after iteration 6:
Final objective value ............. -4.9999999969E-01
Final constraint violation ........  7.7719611413E-07
Final complementarity .............  1.3955978467E-11 (1.3955978467E-11)
Final KKT conditions ..............  1.4246041360E-08 (1.6401932597E-05)

Successful termination: Optimal Solution Found.
```

A detailed description of the output can be found in the Users' Guide.