The new version of the numeric.xxxeq() functions (e.g. numeric.addeq()) can now also operate on typed Arrays, although I did not see significant performance benefits when I tested this. Note that if you call numeric.addeq() on several different typed Array types, the JIT will notice that the call site is polymorphic and will deoptimize numeric.addeq(). For best performance, stick to a single Array type throughout your application.
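To illustrate the monomorphism point, here is a small self-contained sketch; `addeq_` is an invented stand-in for an in-place add like numeric.addeq(), not the library function itself:

```javascript
// In-place elementwise add, like numeric.addeq(): x[i] += y[i].
// The JIT can specialize this well as long as it always sees one array type.
function addeq_(x, y) {
  for (var i = 0; i < x.length; i++) x[i] += y[i];
  return x;
}

var a = new Float64Array([1, 2, 3]);
var b = new Float64Array([10, 20, 30]);
addeq_(a, b); // a is now Float64Array [11, 22, 33]

// Mixing types at the same call site, e.g. also calling addeq_([1,2,3], b),
// makes the call site polymorphic and can cause the JIT to deoptimize addeq_.
```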

I also added a whole bunch of issues to github (see here) to document what needs to be done or fixed in the future (this was prompted by an email from Davy Wybiral). I’ve also ranked them by difficulty, as follows:

- Easy: can be done without even hacking numeric.js; for example, by providing new examples, tests or documentation.
- Medium: requires some small modification to an existing numeric.js function, or requires programming a “straightforward” new algorithm or tool that’s not too intimately connected with the rest of numeric.js.
- Hard: requires detailed understanding of the existing implementation; algorithm is difficult to implement; algorithm is difficult to test; some difficult decisions must be made on the right approach.

Apart from numeric.js, I’ve also reworked the workshop. Users won’t notice much of a difference, but the workshop used to be a complicated mess of divs using table layout, which is poorly supported in IE. I’ve found a simpler and more compatible way of implementing the workshop.

$$\min_x c^Tx \text{ such that } Ax \leq b,$$

where \(c,A,b\) are given and \(x\) is the unknown. The algorithm is a logarithmic barrier method.

The barrier function is:

$$f_\alpha(x) = c^T x + \alpha\left({1 \over 2}x^Tx - \sum_i \log (b-Ax)_i\right),$$

where \(\alpha>0\) is a relaxation parameter. The solution to the linear programming problem is found by letting \(\alpha \to 0^+\). The minimum of \(f_{\alpha}(x)\) is approximated by searching in the Newton search direction. The parameter \(\alpha>0\) is reset at every Newton iteration step.
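For reference, the gradient and Hessian that determine the Newton search direction follow directly from the definition of the barrier function; writing \(s_i = 1/(b-Ax)_i\),

$$\nabla f_\alpha(x) = c + \alpha\left(x + A^T s\right), \qquad \nabla^2 f_\alpha(x) = \alpha\left(I + A^T \operatorname{diag}(s)^2 A\right).$$

The Newton direction \(d\) then solves \(\nabla^2 f_\alpha(x)\, d = -\nabla f_\alpha(x)\).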

This new algorithm appears to run roughly 100 times faster than the previous one.

Such matrices occur naturally when solving partial differential equations, or as the adjacency matrices of large graphs. The \(n \times n\) matrix

$$A = \begin{bmatrix}
2 & -1 \\
-1 & 2 & -1 \\
& -1 & 2 & -1 \\
& & \ddots & \ddots & \ddots \\
& & & -1 & 2 \end{bmatrix}$$

is an example of a sparse matrix. Click here to see this example in the Workshop.

When stored as a dense matrix, it occupies \(n^2\) memory locations:

IN> A = numeric.diag([2,2,2,2,2,2]);
    for(i=0;i<5;++i) A[i+1][i] = A[i][i+1] = -1;
    A
OUT> [[  2, -1,  0,  0,  0,  0],
      [ -1,  2, -1,  0,  0,  0],
      [  0, -1,  2, -1,  0,  0],
      [  0,  0, -1,  2, -1,  0],
      [  0,  0,  0, -1,  2, -1],
      [  0,  0,  0,  0, -1,  2]]

Instead of explicitly storing all the entries of the matrix (including the zero entries), we can use the Column Compressed Storage format:

IN> ccsA = numeric.ccsSparse(A);
OUT> [[ 0, 2, 5, 8, 11, 14, 16],
      [ 0, 1, 0, 1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5],
      [ 2, -1, -1, 2, -1, -1, 2, -1, -1, 2, -1, -1, 2, -1, -1, 2]]

This is a bit hard to read, but `ccsA[2]` contains the nonzero entries of `A`, starting with column 0 (i.e. entries 2,-1), followed by column 1, etc. Meanwhile, the array `ccsA[1]` contains the corresponding row numbers. The array `ccsA[0]` points to the beginning of each column within `ccsA[1]` and `ccsA[2]`. The function `numeric.ccsFull(ccsA)` can be used to put the entries of `ccsA` into a dense matrix form.
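To make the format concrete, here is a small self-contained sketch (plain arrays, no numeric.js; `ccsGet` is an invented helper, not part of the library) that looks up entry \((i,j)\) in the `[colPtr, rowIdx, values]` triple shown above:

```javascript
// Look up A[i][j] in a Column Compressed Storage triple.
function ccsGet(ccs, i, j) {
  var colPtr = ccs[0], rowIdx = ccs[1], values = ccs[2];
  // Column j occupies positions colPtr[j] .. colPtr[j+1]-1 of rowIdx/values.
  for (var k = colPtr[j]; k < colPtr[j + 1]; k++) {
    if (rowIdx[k] === i) return values[k];
  }
  return 0; // entry not stored, hence zero
}

var ccsA = [[0, 2, 5, 8, 11, 14, 16],
            [0, 1, 0, 1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5],
            [2, -1, -1, 2, -1, -1, 2, -1, -1, 2, -1, -1, 2, -1, -1, 2]];

ccsGet(ccsA, 0, 0); // → 2
ccsGet(ccsA, 1, 0); // → -1
ccsGet(ccsA, 3, 0); // → 0 (not stored)
```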

The main advantage of column compressed storage is that the matrix can be stored using much less than \(n^2\) storage. As a result, sparse matrices are not usually converted into dense matrices.

Given a vector \(b\) and a sparse matrix \(A\), one can solve the linear problem \(Ax=b\) for the unknown \(x\) by using the functions `numeric.ccsLUPSolve()` and `numeric.ccsLUP()`, as follows:

IN> numeric.ccsLUPSolve(numeric.ccsLUP(ccsA),[1,1,1,1,1,1])
OUT> [ 3, 5, 6, 6, 5, 3]

The algorithm used is a sparse LUP decomposition \(PA=LU\), which may be better than computing the matrix inverse \(A^{-1}\) since the inverse of a sparse matrix is usually dense.

If you want to solve large sparse linear problems then you should use a sparse matrix and never convert it to a dense or full matrix.

IN> x = numeric.solveLP([1,1],                  /* minimize [1,1]*x */
                        [[-1,0],[0,-1],[-1,-2]], /* matrix of inequalities */
                        [0,0,-3]                 /* right-hand-side of inequalities */
                        );
    numeric.trunc(x.solution,1e-12);
OUT> [0,1.5]

Click here to see this example in the Workshop.

The function `numeric.solveLP(c,A,b)` uses a logarithmic barrier method. In order to solve the linear problem

$$\min c^Tx \text{ such that } Ax \leq b,$$

we define the “barrier function”

$$F_a(x) = c^Tx - a\sum_i \log(b-Ax)_i,$$

where \(a>0\) is a parameter. For a given value of \(a\), we minimize \(F_a(x)\) using `numeric.uncmin()`, obtaining the solution \(x_a\); this value of \(x = x_a\) is an approximate solution to the linear program. The function `numeric.solveLP()` minimizes \(F_a(x)\) for smaller and smaller values of \(a \to 0^+\) until convergence is attained.
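The overall idea can be sketched in a few dozen lines. This is a hypothetical, self-contained illustration (plain arrays, no numeric.js): `solveLPBarrier` and its helpers are invented names, not the actual `numeric.solveLP()` implementation, and the schedule for shrinking \(a\), the damped Newton inner loop, and the starting point are all assumptions.

```javascript
// Solve the small dense linear system M d = rhs by Gaussian elimination.
function linsolve(M, rhs) {
  var n = rhs.length, A = M.map(function (row) { return row.slice(); }),
      b = rhs.slice(), i, j, k;
  for (k = 0; k < n; k++) {
    var p = k; // partial pivoting
    for (i = k + 1; i < n; i++) if (Math.abs(A[i][k]) > Math.abs(A[p][k])) p = i;
    var tmp = A[p]; A[p] = A[k]; A[k] = tmp;
    tmp = b[p]; b[p] = b[k]; b[k] = tmp;
    for (i = k + 1; i < n; i++) {
      var f = A[i][k] / A[k][k];
      for (j = k; j < n; j++) A[i][j] -= f * A[k][j];
      b[i] -= f * b[k];
    }
  }
  var x = new Array(n);
  for (i = n - 1; i >= 0; i--) {
    var s = b[i];
    for (j = i + 1; j < n; j++) s -= A[i][j] * x[j];
    x[i] = s / A[i][i];
  }
  return x;
}

// Residuals r_i = (b - A x)_i; all must stay strictly positive.
function residuals(A, b, x) {
  return A.map(function (row, i) {
    var s = b[i];
    for (var j = 0; j < x.length; j++) s -= row[j] * x[j];
    return s;
  });
}

// F_a(x) = c.x - a*sum_i log(b - A x)_i, Infinity outside the feasible set.
function barrier(c, A, b, a, x) {
  var F = 0, j, r = residuals(A, b, x);
  for (j = 0; j < x.length; j++) F += c[j] * x[j];
  for (var i = 0; i < r.length; i++) {
    if (r[i] <= 0) return Infinity;
    F -= a * Math.log(r[i]);
  }
  return F;
}

// Minimize F_a by damped Newton steps, for a decreasing sequence of a.
function solveLPBarrier(c, A, b, x0) {
  var x = x0.slice(), n = x.length, m = A.length;
  for (var a = 1; a > 1e-9; a *= 0.1) {
    for (var iter = 0; iter < 30; iter++) {
      var r = residuals(A, b, x), g = c.slice(), H = [], i, j, k;
      for (j = 0; j < n; j++) { H.push([]); for (k = 0; k < n; k++) H[j].push(0); }
      // Gradient g = c + a*A^T s, Hessian H = a*A^T diag(s)^2 A, s_i = 1/r_i.
      for (i = 0; i < m; i++) {
        var s = 1 / r[i];
        for (j = 0; j < n; j++) {
          g[j] += a * A[i][j] * s;
          for (k = 0; k < n; k++) H[j][k] += a * A[i][j] * A[i][k] * s * s;
        }
      }
      var d = linsolve(H, g.map(function (v) { return -v; }));
      // Backtracking line search: stay strictly feasible and decrease F_a.
      var F0 = barrier(c, A, b, a, x), t = 1, accepted = false;
      while (t > 1e-14) {
        var cand = x.map(function (v, jj) { return v + t * d[jj]; });
        if (barrier(c, A, b, a, cand) < F0) { x = cand; accepted = true; break; }
        t *= 0.5;
      }
      if (!accepted) break; // no further decrease: F_a minimized to precision
    }
  }
  return x;
}

var x = solveLPBarrier([1, 1], [[-1, 0], [0, -1], [-1, -2]], [0, 0, -3], [1, 2]);
// x should be close to the true solution [0, 1.5]
```

The starting point `[1, 2]` is chosen to be strictly feasible, which barrier methods require; a production implementation would also need a "phase 1" step to find such a point.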

The function `numeric.solveLP(c,A,b,Aeq,beq)` is able to solve a linear program with equality and inequality constraints:

$$\min c^Tx \text{ such that } Ax \leq b \text{ and } A_{eq} x = b_{eq}.$$