Levenberg-Marquardt algorithm

The Levenberg-Marquardt algorithm (LM, LMA, LevMar) is a widely used method for solving nonlinear least squares problems. The ALGLIB package implements the Levenberg-Marquardt algorithm in several programming languages, including our dual licensed (open source and commercial) flagship products:

• ALGLIB for C++, a high performance C++ library with great portability across hardware and software platforms
• ALGLIB for C#, a highly optimized C# library with two alternative backends: a pure C# implementation (100% managed code) and a high-performance native implementation (Windows, Linux) with the same C# interface

Our implementation of LMA has been tested on many synthetic and real-life tasks and has the following distinctive features:

• support for box and linear constraints on function parameters.
• built-in numerical differentiation using a precise 4-point formula.
• bulletproof code: integrity checks for user-supplied data (tried to calculate ln(-∞) or 0/0 in your target function? we will catch it!) and optional verification of the user-provided gradient.
• algorithmic and low-level optimizations, including an optimized implementation of the underlying linear algebra.

Getting started with Levenberg-Marquardt

Choosing optimization scheme

LMA is used to solve problems of the form min [ f₁² + f₂² + ... + fₘ² ], with fᵢ=fᵢ(x₁,...,xₙ). The following algorithm versions may be used, depending on the information that is available:

• V (function vector). F(x) is represented as a sum of squares. We have the function vector f only. The Jacobian is calculated using a combination of numerical differentiation and secant updates.
• VJ (vector+Jacobian). F(x) is represented as a sum of squares. We have the function vector f and the Jacobian J.

The version name (written in bold) is a suffix appended to `minlmcreate`, the name of the function used to create the optimizer. I.e., the following functions can be called by ALGLIB users: minlmcreatev and minlmcreatevj.

Which scheme should you choose? For a quick start, we recommend using the V scheme first (the minlmcreatev function), because it is very simple to use. Only the function vector f is needed, and that's all: no Jacobian, no other higher-order information. You calculate the function vector, and ALGLIB handles all differentiation-related issues.

After you've become familiar with ALGLIB, you can add Jacobian calculation and switch to the VJ scheme. Jacobian-free optimization is easy to set up, but its performance is too low for time-critical applications. Furthermore, numerical differentiation does not allow us to find the minimum with accuracy significantly better than the numerical differentiation step. So if you need high performance or high accuracy, you should implement an analytic Jacobian.
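For intuition about what the optimizer does with the residuals and the Jacobian, here is a minimal sketch (not ALGLIB code; all names are hypothetical) of the damped step at the heart of LMA, dx = -Jᵀf / (JᵀJ + λ), for a single-parameter fit with an analytic Jacobian:

```python
import math

def lm_fit_scalar(residuals, jacobian, x0, lam=1e-3, iters=200):
    # Toy Levenberg-Marquardt loop for one parameter x.
    # residuals(x) -> [f_1(x), ..., f_m(x)]; jacobian(x) -> [df_i/dx].
    x = x0
    f = residuals(x)
    cost = sum(fi * fi for fi in f)
    for _ in range(iters):
        J = jacobian(x)
        jtj = sum(ji * ji for ji in J)
        jtf = sum(ji * fi for ji, fi in zip(J, f))
        dx = -jtf / (jtj + lam)          # damped Gauss-Newton step
        f_new = residuals(x + dx)
        cost_new = sum(fi * fi for fi in f_new)
        if cost_new < cost:              # step accepted: relax damping
            x, f, cost = x + dx, f_new, cost_new
            lam *= 0.3
        else:                            # step rejected: increase damping
            lam *= 10.0
        if abs(dx) < 1e-12:
            break
    return x

# Fit y = exp(c*t) through noiseless points generated with c = 0.5.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]
res = lambda c: [math.exp(c * t) - y for t, y in zip(ts, ys)]
jac = lambda c: [t * math.exp(c * t) for t in ts]
c_hat = lm_fit_scalar(res, jac, x0=0.0)
```

Large λ makes the step behave like small gradient descent; small λ makes it behave like Gauss-Newton, which is the usual interpolation performed by LM-type methods.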

Choosing stopping conditions

The ALGLIB package offers several kinds of stopping conditions:

• stepsize-based - stop when the step norm becomes small enough
• iteration-based - stop after a specified number of iterations

You can set one or several conditions by calling the minlmsetcond function. After the algorithm terminates, you can examine the completion code to determine which condition was satisfied at the final point.

Scale of the variables

Before you start to use the optimizer, we recommend setting the scale of the variables with the minlmsetscale function. Scaling is essential for the correct operation of the stopping criteria (and sometimes for the convergence of the optimizer). You can use the optimizer without scaling if your problem is well scaled. However, if some variables differ in magnitude by up to 100 times, we recommend telling the solver about their scale, and we strongly recommend setting the scale in case of larger differences in magnitude.

We also recommend reading the separate article on variable scaling unless you are solving a simple toy problem.
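To see why scaling matters for stopping criteria, consider a scale-aware step test that compares each step component against the typical magnitude of its variable. A sketch (hypothetical helper, not the ALGLIB implementation):

```python
def step_converged(dx, scales, epsx):
    # Scale-aware stopping test: a step component dx[i] counts as "small"
    # relative to the typical magnitude scales[i] of variable i,
    # not in absolute units.
    return all(abs(d) <= epsx * s for d, s in zip(dx, scales))
```

With variables of wildly different magnitudes (say, one near 1 and one near 10⁶), an unscaled test ||dx|| < eps would either stop too early for the large variable or never stop for the small one; the per-variable test treats both fairly.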

Box and general linear constraints

The Levenberg-Marquardt algorithm supports optimization with box constraints, i.e. constraints of the form l≤x≤u, and general linear equality/inequality constraints.

Box constraints can be set with the minlmsetbc function. These constraints are handled very efficiently - the computational overhead of having N constraints is just O(N) additional operations per function evaluation. Furthermore, these constraints are always exactly satisfied: we won't calculate the function at points outside the interval [l,u], and optimization will stop inside [l,u] or exactly at its boundary.
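The O(N) overhead is easy to see: handling a box amounts to clamping each coordinate of a trial point independently. A sketch of such a projection (illustrative only; ALGLIB's internal handling is more involved):

```python
def clip_to_box(x, lower, upper):
    # Project a trial point onto the box [lower, upper] componentwise,
    # so the target function is never evaluated outside the box.
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, lower, upper)]
```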

General linear equality/inequality constraints can be set with the minlmsetlc function. These constraints have higher computational overhead than box ones, and - due to rounding errors and limitations of the underlying QP solver - they are only approximately satisfied. Although the algorithm won't step deep into the infeasible area, it may try to evaluate the target function about 10⁻⁶ units away from the boundary given by a linear constraint.

Running algorithm

After creating and tuning the optimizer object, you can begin optimization using the minlmoptimize function. It accepts as parameters the optimizer object and callbacks which calculate the function/gradient. The optimization result can be obtained with the minlmresults function.

Imagine a situation: you solved your NLS problem with LMA using numerical differentiation and it worked, albeit slowly; then you implemented an analytic gradient in order to accelerate everything, and now the optimizer stops far away from the solution. What's wrong? The most probable answer is that you have an error in your analytic gradient. Luckily, ALGLIB can check the user-provided gradient against results obtained with numerical differentiation. This option can be activated with the minlmsetgradientcheck function.
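The idea behind such verification can be sketched in a few lines of Python (a toy one-dimensional check with hypothetical names, not the actual ALGLIB machinery):

```python
def gradient_matches(f, dfdx, x, h=1e-4, tol=1e-4):
    # Compare the user-supplied derivative dfdx against a 4-point
    # central-difference estimate; a large mismatch signals a bug in dfdx.
    numeric = (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)
    return abs(numeric - dfdx(x)) <= tol * max(1.0, abs(numeric))
```

A real checker does this per component of the Jacobian and reports which entry disagrees, but the principle is the same.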

Another interesting option is the ability to submit a termination request to the running optimizer. Sometimes you want to smoothly terminate the optimizer and stop at the last accepted point. This can be done with the minlmrequesttermination function, which can be called from your callback function - or from some external code which has decided that it is time to stop.

Using `lsfit` convenience wrapper

The most frequent use of the Levenberg-Marquardt algorithm is for the solution of nonlinear regression problems. In such a setting the individual functions fᵢ(x₁, ..., xₙ) are just different versions of the same parametric function g(X|C), evaluated at different points Xᵢ with a shared parameter vector C: fᵢ(C)=g(Xᵢ|C)-yᵢ. Say, if you fit a polynomial to one-dimensional data, then you have g(X|C) = c₀ + c₁·X + c₂·X² + c₃·X³ + ..., with C=(c₀,c₁,...) being the polynomial coefficients and Xᵢ being the points of your dataset.
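The mapping from a curve-fitting problem to a residual vector is mechanical. A sketch for the polynomial case (hypothetical helper name, illustrative only):

```python
def poly_residuals(coeffs, xs, ys):
    # f_i(C) = g(X_i | C) - y_i,  with  g(X|C) = c0 + c1*X + c2*X^2 + ...
    def g(x):
        return sum(c * x**k for k, c in enumerate(coeffs))
    return [g(x) - y for x, y in zip(xs, ys)]
```

Feeding such a residual vector to a least squares solver is exactly the "V scheme" usage described earlier: the solver minimizes the sum of squared residuals over the coefficients.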

Well, nothing prevents you from using the minlm subpackage (which is described here) as a nonlinear least squares solver. However, the ALGLIB package offers a specialized interface for such problems, included in the lsfit subpackage. Working with the specialized interface is more convenient than using the underlying optimization algorithm directly.

However, the interface implemented by the lsfit subpackage is somewhat slower than the original algorithm described here because of the additional level of abstraction it provides. This overhead is especially noticeable for small-scale problems (1-5 parameters to fit) with very cheap functions/gradients.

What to choose - performance or convenience? If you are new to ALGLIB and want to quickly get some working code, we recommend using the nonlinear least squares solver built on top of the Levenberg-Marquardt optimizer. You will save development time and be able to quickly build a working prototype.

However, if you need high performance on small-scale problems, we recommend working directly with the underlying optimizer (which is described here). Alternatively, you can start with the lsfit subpackage (the specialized interface) and, after getting familiar with ALGLIB, switch to the minlm subpackage (the underlying optimizer).

Examples

ALGLIB Reference Manual includes several examples on Levenberg-Marquardt algorithm:

• minlm_d_v example, which shows how to do Jacobian-free optimization
• minlm_d_vj example, which shows how to do optimization using analytic Jacobian
• minlm_d_fgh example, which shows how to do optimization using FGH mode
• minlm_d_restarts example, which shows how to do efficient restarts
• minlm_d_vb example, which shows how to do optimization with boundary constraints

Improving performance

Efficient restart

When you solve a sequence of optimization problems with the same N/M, you could allocate a new optimizer object every time you want to solve a new problem. However, creation of a new optimizer leads to extensive memory allocation, which slows down the optimization process. A more efficient solution is to use the minlmrestartfrom function, which restarts the algorithm from a new point (and with a new function) without additional dynamic allocations.

Acceleration

The ALGLIB implementation of the Levenberg-Marquardt algorithm includes an acceleration strategy which can be used to improve optimizer performance.

The original Levenberg-Marquardt algorithm builds a quadratic model of the function, makes one step, and then builds a 100% new quadratic model. This algorithm is used by default in the VJ and FGH versions of ALGLIB LM. However, building the quadratic model from scratch can be a time-consuming process, which leads us to the next idea.

The idea is that we update the Jacobian using a secant update (also known as Broyden's rank-one update) instead of recalculating it using analytic or numerical differentiation. The secant update is constructed from the function vector at the new point. Such an updated Jacobian is less accurate, so our next step will be somewhat worse than a step using the exact Jacobian, but it will still decrease the function value. As a result, we make more steps (and solve more linear systems on each step), but the Jacobian is reused several times before recalculation. Obviously, we can benefit from such a strategy only when the cost of the Jacobian evaluation is higher than the linear solver cost.
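Broyden's rank-one update has a simple closed form: J⁺ = J + (Δf − J·Δx)·Δxᵀ / (Δxᵀ·Δx), which makes the updated Jacobian satisfy the secant condition J⁺·Δx = Δf exactly. A sketch in plain Python (Jacobian as a list of rows; illustrative only, not ALGLIB's internal code):

```python
def broyden_update(J, dx, df):
    # Secant (Broyden rank-one) update of an m x n Jacobian J so that
    # the updated Jacobian maps the step dx to the observed change df.
    n = len(dx)
    dxdx = sum(d * d for d in dx)
    # residual of the secant condition for each row of J
    r = [dfi - sum(J[i][j] * dx[j] for j in range(n))
         for i, dfi in enumerate(df)]
    return [[J[i][j] + r[i] * dx[j] / dxdx for j in range(n)]
            for i in range(len(J))]
```

One function evaluation at the new point is all the update needs, which is why it pays off whenever a full Jacobian (analytic or numerical) is expensive.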

This strategy is turned on by calling `minlmsetacctype(state,1)` and can be used with any optimization scheme which relies on the function vector (V, VJ, VGJ). It is turned on by default when the V scheme is used: in this case the Jacobian is calculated using numerical differentiation, which is a very slow process, so the acceleration is always beneficial. In other cases the acceleration should be explicitly turned on by a minlmsetacctype call.
