This section provides some details on the primal-dual interior point algorithm used by MIPS and described in [8, 43].

For a scalar function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ of a real vector $X$, we use the following notation for the first derivatives (transpose of the gradient):

$$
f_X \equiv \frac{\partial f}{\partial X} = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \cdots & \dfrac{\partial f}{\partial x_n} \end{bmatrix} \tag{A.14}
$$

The matrix of second partial derivatives, the Hessian of $f$, is:

$$
f_{XX} \equiv \frac{\partial^2 f}{\partial X^2} = \frac{\partial}{\partial X}\left( \frac{\partial f}{\partial X} \right)^{\mathsf T} = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix} \tag{A.15}
$$

For a vector function $F: \mathbb{R}^n \rightarrow \mathbb{R}^m$ of a vector $X$, where

$$
F(X) = \begin{bmatrix} f_1(X) \\ \vdots \\ f_m(X) \end{bmatrix} \tag{A.16}
$$

the first derivatives form the Jacobian matrix, where row $i$ is the transpose of the gradient of $f_i$:

$$
F_X = \frac{\partial F}{\partial X} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix} \tag{A.17}
$$

In these derivations, the full 3-dimensional set of second partial derivatives of $F$ will not be computed. Instead, a matrix of partial derivatives will be formed by computing the Jacobian of the vector function obtained by multiplying the transpose of the Jacobian of $F$ by a vector $\lambda$, using the following notation:

$$
F_{XX}(\lambda) \equiv \frac{\partial}{\partial X}\left( F_X^{\mathsf T} \lambda \right) \tag{A.18}
$$

Please note also that $[A]$ is used to denote a diagonal matrix with vector $A$ on the diagonal and $e$ is a vector of all ones.
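The directional second-derivative notation of (A.18) can be checked numerically. The sketch below uses a made-up two-component function $F$ (not from MIPS) and confirms that $F_{XX}(\lambda)$, the $\lambda$-weighted sum of the component Hessians, equals the Jacobian of $F_X^{\mathsf T}\lambda$ computed by finite differences:

```python
import numpy as np

# Hypothetical vector function F(X) = [x1^2 * x2, x1 + x2^3], chosen only
# to illustrate the notation F_XX(lam) = d/dX (F_X^T lam).

def F_x(x):
    """Jacobian F_X: row i is the transposed gradient of f_i."""
    x1, x2 = x
    return np.array([[2 * x1 * x2, x1**2],
                     [1.0, 3 * x2**2]])

def F_xx(x, lam):
    """F_XX(lam) = sum_i lam_i * Hessian(f_i)."""
    x1, x2 = x
    H1 = np.array([[2 * x2, 2 * x1], [2 * x1, 0.0]])  # Hessian of f1
    H2 = np.array([[0.0, 0.0], [0.0, 6 * x2]])        # Hessian of f2
    return lam[0] * H1 + lam[1] * H2

x0 = np.array([1.5, -0.5])
lam = np.array([2.0, -1.0])

# Central finite-difference Jacobian of g(X) = F_X(X)^T lam
eps = 1e-6
g = lambda x: F_x(x).T @ lam
fd = np.column_stack([
    (g(x0 + eps * np.eye(2)[:, j]) - g(x0 - eps * np.eye(2)[:, j])) / (2 * eps)
    for j in range(2)
])
assert np.allclose(fd, F_xx(x0, lam), atol=1e-6)

# [A] denotes diag(A) and e a vector of ones, so [x0] e recovers x0:
assert np.allclose(np.diag(x0) @ np.ones(2), x0)
```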

The primal-dual interior point method used by MIPS solves a problem of the form:

$$
\min_X \; f(X) \tag{A.19}
$$

subject to

$$
G(X) = 0 \tag{A.20}
$$

$$
H(X) \le 0 \tag{A.21}
$$

where the linear constraints and variable bounds from (A.4) and (A.5) have been incorporated into $G(X)$ and $H(X)$. The approach taken involves converting the $n_i$ inequality constraints into equality constraints using a barrier function and a vector of positive slack variables $Z$:
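As an illustration of how simple variable bounds can be folded into the general inequality form $H(X) \le 0$, here is a minimal sketch (not MIPS code; the names `xmin`, `xmax` are illustrative assumptions):

```python
import numpy as np

def make_H(xmin, xmax):
    """Return H such that H(x) <= 0  <=>  xmin <= x <= xmax."""
    def H(x):
        return np.concatenate([x - xmax,    # enforces x <= xmax
                               xmin - x])   # enforces x >= xmin
    return H

H = make_H(np.array([0.0, -1.0]), np.array([2.0, 1.0]))
assert np.all(H(np.array([1.0, 0.0])) <= 0)   # feasible point
assert np.any(H(np.array([3.0, 0.0])) > 0)    # violates x1 <= 2
```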

$$
\min_{X, Z} \; f(X) - \gamma \sum_{m=1}^{n_i} \ln(Z_m) \tag{A.22}
$$

subject to

$$
G(X) = 0 \tag{A.23}
$$

$$
H(X) + Z = 0 \tag{A.24}
$$

$$
Z > 0 \tag{A.25}
$$

As the perturbation parameter $\gamma$ approaches zero, the solution to this problem approaches that of the original problem.
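This limiting behavior is easy to see on a one-dimensional toy problem (my own illustration, not from the text): minimize $x$ subject to $x \ge 0$. The barrier subproblem is $\min_{x>0}\; x - \gamma \ln(x)$, whose minimizer is exactly $x^\star = \gamma$, so it approaches the true optimum $x^\star = 0$ as $\gamma \rightarrow 0$:

```python
import numpy as np

# For each gamma, locate the minimizer of phi(x) = x - gamma*ln(x) on a
# fine grid and confirm it sits at x = gamma (the analytic minimizer,
# from phi'(x) = 1 - gamma/x = 0).
for gamma in [1.0, 0.1, 0.01]:
    xs = np.linspace(1e-6, 2.0, 200_001)
    phi = xs - gamma * np.log(xs)
    x_star = xs[np.argmin(phi)]
    assert abs(x_star - gamma) < 1e-3
```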

For a given value of $\gamma$, the Lagrangian for this equality constrained problem is

$$
\mathcal{L}^{\gamma}(X, Z, \lambda, \mu) = f(X) + \lambda^{\mathsf T} G(X) + \mu^{\mathsf T} \left( H(X) + Z \right) - \gamma \sum_{m=1}^{n_i} \ln(Z_m) \tag{A.26}
$$

Taking the partial derivatives with respect to each of the variables yields:

$$
\mathcal{L}^{\gamma}_{X}(X, Z, \lambda, \mu) = f_X + \lambda^{\mathsf T} G_X + \mu^{\mathsf T} H_X \tag{A.27}
$$

$$
\mathcal{L}^{\gamma}_{Z}(X, Z, \lambda, \mu) = \mu^{\mathsf T} - \gamma e^{\mathsf T} [Z]^{-1} \tag{A.28}
$$

$$
\mathcal{L}^{\gamma}_{\lambda}(X, Z, \lambda, \mu) = G(X)^{\mathsf T} \tag{A.29}
$$

$$
\mathcal{L}^{\gamma}_{\mu}(X, Z, \lambda, \mu) = \left( H(X) + Z \right)^{\mathsf T} \tag{A.30}
$$

and the Hessian of the Lagrangian with respect to $X$ is

$$
\mathcal{L}^{\gamma}_{XX}(X, \lambda, \mu) = f_{XX} + G_{XX}(\lambda) + H_{XX}(\mu) \tag{A.31}
$$

The first order optimality (Karush-Kuhn-Tucker) conditions for this problem are satisfied when the partial derivatives of the Lagrangian above are all set to zero:

$$
\mathcal{L}^{\gamma\,\mathsf T}_{X} = 0 \tag{A.32}
$$

$$
[\mu] Z = \gamma e \tag{A.33}
$$

$$
G(X) = 0 \tag{A.34}
$$

$$
H(X) + Z = 0 \tag{A.35}
$$

where (A.33) is the condition $\mathcal{L}^{\gamma}_{Z} = 0$ from (A.28), multiplied through by $[Z]$.

The first order optimality conditions are solved using Newton's method. Collecting the conditions (A.32)–(A.35) into a single nonlinear function,

$$
F(X, Z, \lambda, \mu) \equiv \begin{bmatrix} \mathcal{L}^{\gamma\,\mathsf T}_{X} \\ [\mu] Z - \gamma e \\ G(X) \\ H(X) + Z \end{bmatrix} = 0 \tag{A.36}
$$

the Newton update step can be written as follows:

$$
\begin{bmatrix}
\mathcal{L}^{\gamma}_{XX} & 0 & G_X^{\mathsf T} & H_X^{\mathsf T} \\
0 & [\mu] & 0 & [Z] \\
G_X & 0 & 0 & 0 \\
H_X & I & 0 & 0
\end{bmatrix}
\begin{bmatrix} \Delta X \\ \Delta Z \\ \Delta \lambda \\ \Delta \mu \end{bmatrix}
= -
\begin{bmatrix}
\mathcal{L}^{\gamma\,\mathsf T}_{X} \\ [\mu] Z - \gamma e \\ G(X) \\ H(X) + Z
\end{bmatrix}
\tag{A.37}
$$

This set of equations can be simplified and reduced to a smaller set of equations by solving explicitly for $\Delta\mu$ in terms of $\Delta Z$ and for $\Delta Z$ in terms of $\Delta X$. Taking the 2nd row of (A.37) and solving for $\Delta\mu$ we get

$$
\Delta\mu = -\mu + [Z]^{-1} \left( \gamma e - [\mu] \Delta Z \right) \tag{A.38}
$$

Solving the 4th row of (A.37) for $\Delta Z$ yields

$$
\Delta Z = -H(X) - Z - H_X \Delta X \tag{A.39}
$$

Then, substituting (A.38) and (A.39) into the 1st row of (A.37) results in

$$
M \Delta X + G_X^{\mathsf T} \Delta\lambda = -N \tag{A.40}
$$

where

$$
M \equiv \mathcal{L}^{\gamma}_{XX} + H_X^{\mathsf T} [Z]^{-1} [\mu] H_X \tag{A.41}
$$

$$
\phantom{M} = f_{XX} + G_{XX}(\lambda) + H_{XX}(\mu) + H_X^{\mathsf T} [Z]^{-1} [\mu] H_X \tag{A.42}
$$

and

$$
N \equiv \mathcal{L}^{\gamma\,\mathsf T}_{X} + H_X^{\mathsf T} [Z]^{-1} \left( \gamma e + [\mu] H(X) \right) \tag{A.43}
$$

$$
\phantom{N} = f_X^{\mathsf T} + G_X^{\mathsf T} \lambda + H_X^{\mathsf T} \mu + H_X^{\mathsf T} [Z]^{-1} \left( \gamma e + [\mu] H(X) \right) \tag{A.44}
$$

Combining (A.40) and the 3rd row of (A.37) results in a system of equations of reduced size:

$$
\begin{bmatrix} M & G_X^{\mathsf T} \\ G_X & 0 \end{bmatrix}
\begin{bmatrix} \Delta X \\ \Delta\lambda \end{bmatrix}
=
\begin{bmatrix} -N \\ -G(X) \end{bmatrix}
\tag{A.45}
$$

The Newton update can then be computed in the following 3 steps:

1. Compute $\Delta X$ and $\Delta\lambda$ from (A.45).
2. Compute $\Delta Z$ from (A.39).
3. Compute $\Delta\mu$ from (A.38).
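The 3-step recovery above can be checked numerically. The sketch below (my own illustration, not the MIPS implementation) builds a small made-up QP with $f = \tfrac{1}{2}X^{\mathsf T}QX + c^{\mathsf T}X$, $G(X) = AX - b$, and simple upper bounds $H(X) = X - u$, solves the reduced system (A.45), recovers $\Delta Z$ and $\Delta\mu$ from (A.39) and (A.38), and verifies that the result satisfies every row of the full system (A.37):

```python
import numpy as np

rng = np.random.default_rng(0)
n, ne = 4, 2
Q = np.eye(n) * 2.0                 # made-up problem data
c = rng.standard_normal(n)
A = rng.standard_normal((ne, n))
b = rng.standard_normal(ne)
u = np.ones(n)

# Current strictly feasible iterate (Z > 0, mu > 0)
x = np.zeros(n)
Z = np.full(n, 0.5)
lam = np.zeros(ne)
mu = np.full(n, 0.5)
gamma = 0.1

Gx, Hx = A, np.eye(n)               # constraint Jacobians
Lxx = Q                             # G_XX and H_XX vanish here
Lx = Q @ x + c + Gx.T @ lam + Hx.T @ mu   # Lagrangian gradient (transposed)
H = x - u

# Step 1: solve the reduced system (A.45) for dx, dlam
M = Lxx + Hx.T @ np.diag(mu / Z) @ Hx          # M from (A.41)
N = Lx + Hx.T @ ((gamma + mu * H) / Z)         # N from (A.43)
KKT = np.block([[M, Gx.T], [Gx, np.zeros((ne, ne))]])
sol = np.linalg.solve(KKT, np.concatenate([-N, -(A @ x - b)]))
dx, dlam = sol[:n], sol[n:]

# Step 2: dZ from (A.39); Step 3: dmu from (A.38)
dZ = -H - Z - Hx @ dx
dmu = -mu + (gamma - mu * dZ) / Z

# The recovered step must satisfy all four rows of (A.37):
assert np.allclose(Lxx @ dx + Gx.T @ dlam + Hx.T @ dmu, -Lx)
assert np.allclose(mu * dZ + Z * dmu, -(mu * Z - gamma))
assert np.allclose(Gx @ dx, -(A @ x - b))
assert np.allclose(Hx @ dx + dZ, -(H + Z))
```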

In order to maintain strict feasibility of the trial solution, the algorithm truncates the Newton step by scaling the primal and dual variables by $\alpha_p$ and $\alpha_d$, respectively, where these scale factors are computed as follows:

$$
\alpha_p = \min\left( \xi \min_{\Delta Z_m < 0} \left( -\frac{Z_m}{\Delta Z_m} \right),\; 1 \right) \tag{A.46}
$$

$$
\alpha_d = \min\left( \xi \min_{\Delta\mu_m < 0} \left( -\frac{\mu_m}{\Delta\mu_m} \right),\; 1 \right) \tag{A.47}
$$

resulting in the variable updates below:

$$
X \leftarrow X + \alpha_p \Delta X \tag{A.48}
$$

$$
Z \leftarrow Z + \alpha_p \Delta Z \tag{A.49}
$$

$$
\lambda \leftarrow \lambda + \alpha_d \Delta\lambda \tag{A.50}
$$

$$
\mu \leftarrow \mu + \alpha_d \Delta\mu \tag{A.51}
$$

The parameter $\xi$ is a constant scalar with a value slightly less than one. In MIPS, $\xi$ is set to 0.99995.
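A minimal sketch of this truncation rule, with made-up trial values (not MIPS code): the step lengths from (A.46)–(A.47) guarantee that the updated $Z$ and $\mu$ remain strictly positive.

```python
import numpy as np

def step_lengths(Z, dZ, mu, dmu, xi=0.99995):
    """Compute alpha_p and alpha_d per (A.46)-(A.47)."""
    neg_p, neg_d = dZ < 0, dmu < 0
    alpha_p = min(xi * np.min(-Z[neg_p] / dZ[neg_p]) if neg_p.any() else 1.0, 1.0)
    alpha_d = min(xi * np.min(-mu[neg_d] / dmu[neg_d]) if neg_d.any() else 1.0, 1.0)
    return alpha_p, alpha_d

Z   = np.array([1.0, 0.2, 3.0])
dZ  = np.array([-2.0, 0.5, -1.0])   # full step would drive Z[0] negative
mu  = np.array([0.5, 0.5, 0.5])
dmu = np.array([0.1, -1.0, 0.0])

alpha_p, alpha_d = step_lengths(Z, dZ, mu, dmu)
assert np.all(Z + alpha_p * dZ > 0) and np.all(mu + alpha_d * dmu > 0)
assert alpha_p < 1.0 and alpha_d < 1.0   # both steps were truncated
```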

In this method, during the Newton-like iterations, the perturbation parameter $\gamma$ must converge to zero in order to satisfy the first order optimality conditions of the original problem. MIPS uses the following rule to update $\gamma$ at each iteration, after updating $Z$ and $\mu$:

$$
\gamma \leftarrow \sigma \frac{Z^{\mathsf T} \mu}{n_i} \tag{A.52}
$$

where $\sigma$ is a scalar constant between 0 and 1. In MIPS, $\sigma$ is set to 0.1.
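The update (A.52) resets $\gamma$ to a fraction $\sigma$ of the current average complementarity gap, as in this one-line sketch with made-up values for $Z$ and $\mu$:

```python
import numpy as np

# gamma <- sigma * Z'mu / n_i: a fraction of the average complementarity
# gap, so gamma -> 0 as the products Z_m * mu_m are driven to zero.
sigma = 0.1
Z = np.array([0.4, 0.2, 0.1])
mu = np.array([0.5, 1.0, 2.0])
n_i = len(Z)

gamma = sigma * (Z @ mu) / n_i
assert np.isclose(gamma, 0.02)   # 0.1 * 0.6 / 3
```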