1 Introduction
  1.1 What is ALGLIB
  1.2 ALGLIB license
  1.3 Documentation license
  1.4 Reference Manual and User Guide
  1.5 Acknowledgements
2 ALGLIB structure
  2.1 Packages
  2.2 Subpackages
  2.3 Open Source and Commercial versions
3 Compatibility
  3.1 CPU
  3.2 OS
  3.3 Compiler
  3.4 Optimization settings
4 Compiling ALGLIB
  4.1 Adding to your project
  4.2 Configuring for your compiler
  4.3 Improving performance (CPU-specific and OS-specific optimizations)
  4.4 Examples (free and commercial editions)
    4.4.1 Introduction
    4.4.2 Compiling under Windows
    4.4.3 Compiling under Linux
5 Using ALGLIB
  5.1 Thread-safety
  5.2 Global definitions
  5.3 Datatypes
  5.4 Constants
  5.5 Functions
  5.6 Working with vectors and matrices
  5.7 Using functions: 'expert' and 'friendly' interfaces
  5.8 Handling errors
  5.9 Working with Level 1 BLAS functions
  5.10 Reading data from CSV files
6 Working with commercial version
  6.1 Benefits of commercial version
  6.2 Working with SIMD support (Intel/AMD users)
  6.3 Using multithreading
    6.3.1 SMT (CMT/hyper-threading) issues
  6.4 Linking with Intel MKL
    6.4.1 Using lightweight Intel MKL supplied by ALGLIB Project
    6.4.2 Using your own installation of Intel MKL
7 Advanced topics
  7.1 Exception-free mode
  7.2 Partial compilation
  7.3 Testing ALGLIB
8 ALGLIB packages and subpackages
  8.1 AlglibMisc package
  8.2 DataAnalysis package
  8.3 DiffEquations package
  8.4 FastTransforms package
  8.5 Integration package
  8.6 Interpolation package
  8.7 LinAlg package
  8.8 Optimization package
  8.9 Solvers package
  8.10 SpecialFunctions package
  8.11 Statistics package
ALGLIB is a cross-platform numerical analysis and data mining library. It supports several programming languages (C++, C#, Delphi, VB.NET, Python) and several operating systems (Windows, *nix family).
ALGLIB features include:
ALGLIB Project (the company behind ALGLIB) delivers to you several editions of ALGLIB:
Free Edition is a serial version without multithreading support or extensive low-level optimizations (generic C or C# code). Commercial Edition is a heavily optimized version of ALGLIB: it supports multithreading, it was extensively optimized, and on Intel platforms commercial users may enjoy a precompiled version of ALGLIB which internally calls Intel MKL to accelerate low-level tasks. We obtained a license from Intel Corp. which allows us to integrate Intel MKL into ALGLIB, so you don't have to buy a separate license from Intel.
ALGLIB Free Edition is distributed under a license which favors non-commercial usage, but is not well suited for commercial applications:
ALGLIB Commercial Edition is distributed under a license which is friendly to commercial users. A copy of the commercial license can be found at http://www.alglib.net/commercial.php.
This reference manual is licensed under BSD-like documentation license:
Copyright 1994-2017 Sergey Bochkanov, ALGLIB Project. All rights reserved.
Redistribution and use of this document (ALGLIB Reference Manual) with or without modification, are permitted provided that such redistributions will retain the above copyright notice, this condition and the following disclaimer as the first (or last) lines of this file.
THIS DOCUMENTATION IS PROVIDED BY THE ALGLIB PROJECT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE ALGLIB PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ALGLIB Project provides two sources of information: ALGLIB Reference Manual (this document) and ALGLIB User Guide.
The ALGLIB Reference Manual contains a full description of all publicly accessible ALGLIB units, accompanied by examples. The Reference Manual is focused on the source code: it documents units, functions, structures and so on. If you want to know what unit YYY can do or what subroutines unit ZZZ contains, the Reference Manual is the place to go.
Free software needs free documentation - that's why the ALGLIB Reference Manual is licensed under a BSD-like documentation license.
In addition to the Reference Manual we provide the User Guide. The User Guide is focused on more general questions: how fast is ALGLIB? How reliable is it? What are the strong and weak sides of the algorithms used? We aim to make the ALGLIB User Guide an important source of information both about ALGLIB and about numerical analysis algorithms in general. We want it to be a book about algorithms, not just software documentation. And we want it to be unique - that's why the ALGLIB User Guide is distributed under a less permissive personal-use-only license.
ALGLIB would not be possible without contributions from the following open source projects:
We also want to thank the developers at Intel's local development center (Nizhny Novgorod branch) for their help during MKL integration.
ALGLIB is a C++ interface to a computational core written in C. Both the C library and the C++ wrapper are automatically generated by code generation tools developed within the ALGLIB project. Pre-3.0 versions of ALGLIB included more than 100 units, but it was difficult to work with such a large number of files. Since ALGLIB 3.0 all units have been merged into 11 packages and two support units:
One package may rely on others, but we have tried to reduce the number of dependencies. Every package relies on ap.cpp, and many packages rely on alglibinternal.cpp. Many packages require only these two to work, and most packages need significantly fewer than 13 packages. For example, statistics.cpp requires the two packages mentioned above and only one additional package - specialfunctions.cpp.
There is one more concept to learn - subpackages. Every package was created from several source files. For example (as of ALGLIB 3.0.0), linalg.cpp was created by merging together 14 .cpp files (C++ interface) and 14 .c files (computational core). These files provide different functionality: one of them calculates triangular factorizations, another generates random matrices, and so on. We've merged the source code, but what to do with the documentation? Of course, we could merge the documentation (as we merged the units) into one big list of functions and data structures, but such a list would be hard to read. Instead, we have decided to merge the source code but leave the documentation separate.
If you look at the list of ALGLIB packages, you will see that each package includes several subpackages. For example, linalg.cpp includes the trfac, svd, evd and other subpackages. These subpackages do not exist as separate files, namespaces or other entities. They are just subsets of one large unit which provide significantly different functionality. They have separate documentation sections, but if you want to use the svd subpackage, you have to include linalg.h, not svd.h.
ALGLIB comes in two versions - an open source (GPL-licensed) one and a commercial (closed source) one. Both versions have the same functionality, i.e. they solve the same set of problems. However, the commercial version differs from the open source one in the following aspects:
This documentation applies to both versions of ALGLIB. A detailed description of the commercial version can be found below.
ALGLIB is compatible with any CPU which:
Most mainstream CPUs (in particular, x86, x86_64, ARM and SPARC) satisfy these requirements.
As for the Intel architecture, ALGLIB works with both FPU-based and SIMD-based implementations of floating point math. The 80-bit internal representation used by the Intel FPU is not a problem for ALGLIB.
ALGLIB for C++ (both open source and commercial versions) can be compiled in OS-agnostic mode (no OS-specific preprocessor definitions), in which case it is compatible with any OS that supports the C++98 standard library. In particular, it will work under any POSIX-compatible OS and under Windows.
If you want to use the multithreading capabilities of the commercial version of ALGLIB, you should compile it in OS-specific mode by #defining either AE_OS=AE_WINDOWS or AE_OS=AE_POSIX at compile time, depending on the OS being used. The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter means almost any POSIX-compatible OS. This applies only to the commercial version of ALGLIB; the open source version is always OS-agnostic, even in the presence of OS-specific definitions.
ALGLIB is compatible with any C++ compiler which:
Strict compliance with IEEE 754 is not required - for example, we do not rely on the fact that x/0 will return INF. But at the very least we must be able to compare a double precision value with infinity or NAN without raising an exception. All modern compilers satisfy these requirements.
However, some very old compilers (a decade-old version of Borland C++ Builder, for example) may emit code which does not work correctly with IEEE special values. If you use one of these old compilers, we recommend you run the ALGLIB test suite to ensure that the library works.
ALGLIB is compatible with any kind of optimizing compiler as long as:
Generally, all kinds of optimization that were marked by the compiler vendor as "safe" are possible. For example, ALGLIB can be compiled:
On the other hand, the following "unsafe" optimizations will break ALGLIB:
Adding ALGLIB to your project is easy - just pick the packages you need and... add them to your project! Under the most widely used compilers (GCC, MSVC) it will work without any additional settings. In other cases you will need to define several preprocessor definitions (this topic is detailed below), but everything will still be simple.
By "adding to your project" we mean that you should a) compile .cpp files with the rest of your project, and b) include .h files you need. Do not include .cpp files - these files must be compiled separately, not as part of some larger source file. The only files you should include are .h files, stored in the /src folder of the ALGLIB distribution.
As you see, ALGLIB has no project files or makefiles. Why? There are several reasons:
In any case, compiling ALGLIB is so simple that even without project file you can do it in several minutes.
If you use modern versions of MSVC or GCC, you don't need to configure ALGLIB at all. But if you use outdated versions of these compilers (or something else), then you may need to tune definitions of several data types:
ALGLIB tries to autodetect your compiler and to define these types in a compiler-specific manner:
- ae_int32_t is defined as int, because this type is 32 bits wide in all modern compilers
- ae_int64_t is defined as __int64 (MSVC) or as signed long long (GCC, Sun Studio)
- ae_int_t is defined as ptrdiff_t

In most cases this is enough. But if anything goes wrong, you have several options:
- if your compiler supports stdint.h, you can define the AE_HAVE_STDINT conditional symbol
- you can define the AE_INT32_T and/or AE_INT64_T and/or AE_INT_T symbols; just assign a datatype name to them and ALGLIB will automatically use your definition. You may define all of them or just one or two types (those which are not detected automatically).

You can improve the performance of ALGLIB in several ways:
ALGLIB has a two-layered structure: a set of basic performance-critical primitives is implemented using optimized code, and the rest of the library is built on top of these primitives. By default, ALGLIB uses generic C code to implement these primitives (matrix multiplication, decompositions, etc.). This code works everywhere from Intel to SPARC. However, you can tell ALGLIB that it will work under a particular architecture by defining the appropriate macro at the global level:
- AE_CPU=AE_INTEL - to tell ALGLIB that it will work on x86/x64 (Intel-compatible) CPUs

When the AE_CPU macro is defined and equals AE_INTEL, it enables SIMD support. ALGLIB will use the cpuid instruction to determine SIMD availability at run-time and use SIMD-capable code where present. ALGLIB uses SIMD intrinsics which are portable across different compilers and efficient enough for most practical purposes.
If you want to use the multithreading capabilities of the commercial version of ALGLIB, you should compile it in OS-specific mode by #defining either AE_OS=AE_WINDOWS or AE_OS=AE_POSIX at compile time, depending on the OS being used. The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter means almost any POSIX-compatible OS. This applies only to the commercial version of ALGLIB; the open source version is always OS-agnostic, even in the presence of OS-specific definitions.
In this section we'll consider different compilation scenarios for free and commercial versions of ALGLIB - from simple platform-agnostic compilation to compiling/linking with MKL extensions.
We assume that you have unpacked the ALGLIB distribution in the current directory and saved there the demo.cpp file whose code is given below. Thus, in the current directory you should have exactly one file (demo.cpp) and exactly one subdirectory (the cpp folder with the ALGLIB distribution).
The file listing below contains a very basic program which uses ALGLIB to perform matrix-matrix multiplication. After that, the program evaluates the performance of GEMM (the function being called) and prints the result to the console. We'll show how the performance of this program increases as we add more and more sophisticated compiler options.
demo.cpp (WINDOWS EXAMPLE)
#include <stdio.h>
#include <math.h>
#include <windows.h>
#include "linalg.h"

double counter()
{
    return 0.001*GetTickCount();
}

int main()
{
    alglib::real_2d_array a, b, c;
    int n = 2000;
    int i, j;
    double timeneeded, flops;

    // Initialize arrays
    a.setlength(n, n);
    b.setlength(n, n);
    c.setlength(n, n);
    for(i=0; i<n; i++)
        for(j=0; j<n; j++)
        {
            a[i][j] = alglib::randomreal()-0.5;
            b[i][j] = alglib::randomreal()-0.5;
            c[i][j] = 0.0;
        }

    // Set number of worker threads: "4" means "use 4 cores".
    // This line is ignored if AE_OS is UNDEFINED.
    alglib::setnworkers(4);

    // Perform matrix-matrix product.
    // We call function with "smp_" prefix, which means that ALGLIB
    // will try to execute it in parallel manner whenever it is possible.
    flops = 2*pow((double)n, (double)3);
    timeneeded = counter();
    alglib::smp_rmatrixgemm(
        n, n, n,
        1.0, a, 0, 0, 0,
        b, 0, 0, 1,
        0.0, c, 0, 0);
    timeneeded = counter()-timeneeded;

    // Evaluate performance
    printf("Performance is %.1f GFLOPS\n", (double)(1.0E-9*flops/timeneeded));
    return 0;
}
Examples below cover Windows compilation from command line with MSVC.
It is straightforward to adapt them to compilation from the MSVC IDE - or to other compilers. We assume that you have already called the %VCINSTALLDIR%\bin\amd64\vcvars64.bat batch file which loads the 64-bit build environment (or its 32-bit counterpart). We also assume that the current directory is clean before each example is executed (i.e. it has ONLY the demo.cpp file and the cpp folder).
We used a 3.2 GHz 4-core CPU for this test.
The first example covers platform-agnostic compilation without optimization settings - the simplest way to compile ALGLIB. This step is the same in both open source and commercial editions. However, in platform-agnostic mode ALGLIB is unable to use all of the performance-related features present in the commercial edition.
We start by copying all cpp and h files to the current directory, then compile them along with demo.cpp. In this and the following examples we omit compiler output for the sake of simplicity.
OS-agnostic mode, no compiler optimizations
> copy cpp\src\*.* .
> cl /I. /EHsc /Fedemo.exe *.cpp
> demo.exe
Performance is 0.7 GFLOPS
Well, 0.7 GFLOPS is not very impressive for a 3.2 GHz CPU... Let's add /Ox to the compiler parameters.
OS-agnostic mode, /Ox optimization
> cl /I. /EHsc /Fedemo.exe /Ox *.cpp
> demo.exe
Performance is 0.9 GFLOPS
Still not impressive. Let's turn on optimizations for the x86 architecture: define AE_CPU=AE_INTEL. This option provides some speed-up in both free and commercial editions of ALGLIB.
OS-agnostic mode, ALGLIB knows it is x86/x64
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL *.cpp
> demo.exe
Performance is 4.5 GFLOPS
That's good, but we have 4 cores - and only one of them was used. Defining AE_OS=AE_WINDOWS allows ALGLIB to use Windows threads to parallelize the execution of some functions. Starting from this moment, our example applies only to the Commercial Edition.
ALGLIB knows it is Windows on x86/x64 CPU (COMMERCIAL EDITION)
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL /DAE_OS=AE_WINDOWS *.cpp
> demo.exe
Performance is 16.0 GFLOPS
Not bad. And now we are ready for the final test - linking with MKL extensions.
Linking with MKL extensions differs a bit from the standard way of linking with ALGLIB. ALGLIB itself is compiled with one more preprocessor definition: the AE_MKL symbol. We also link ALGLIB with the appropriate (32-bit or 64-bit) alglib???_??mkl.lib static library, which is an import library for the special lightweight MKL distribution shipped with ALGLIB. We should also copy to the current directory the appropriate alglib???_??mkl.dll binary file which contains Intel MKL.
Linking with MKL extensions (COMMERCIAL EDITION)
> copy cpp\addons-mkl\alglib*64mkl.lib .
> copy cpp\addons-mkl\alglib*64mkl.dll .
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL /DAE_OS=AE_WINDOWS /DAE_MKL *.cpp alglib*64mkl.lib
> demo.exe
Performance is 33.1 GFLOPS
From 0.7 GFLOPS to 33.1 GFLOPS - as you can see, the commercial version of ALGLIB is really worth it!
The file listing below contains a very basic program which uses ALGLIB to perform matrix-matrix multiplication. After that, the program evaluates the performance of GEMM (the function being called) and prints the result to the console. We'll show how the performance of this program increases as we add more and more sophisticated compiler options.
demo.cpp (LINUX EXAMPLE)
#include <stdio.h>
#include <math.h>
#include <sys/time.h>
#include "linalg.h"

double counter()
{
    struct timeval now;
    alglib_impl::ae_int64_t r, v;
    gettimeofday(&now, NULL);
    v = now.tv_sec;
    r = v*1000;
    v = now.tv_usec/1000;
    r = r+v;
    return 0.001*r;
}

int main()
{
    alglib::real_2d_array a, b, c;
    int n = 2000;
    int i, j;
    double timeneeded, flops;

    // Initialize arrays
    a.setlength(n, n);
    b.setlength(n, n);
    c.setlength(n, n);
    for(i=0; i<n; i++)
        for(j=0; j<n; j++)
        {
            a[i][j] = alglib::randomreal()-0.5;
            b[i][j] = alglib::randomreal()-0.5;
            c[i][j] = 0.0;
        }

    // Set number of worker threads: "4" means "use 4 cores".
    // This line is ignored if AE_OS is UNDEFINED.
    alglib::setnworkers(4);

    // Perform matrix-matrix product.
    // We call function with "smp_" prefix, which means that ALGLIB
    // will try to execute it in parallel manner whenever it is possible.
    flops = 2*pow((double)n, (double)3);
    timeneeded = counter();
    alglib::smp_rmatrixgemm(
        n, n, n,
        1.0, a, 0, 0, 0,
        b, 0, 0, 1,
        0.0, c, 0, 0);
    timeneeded = counter()-timeneeded;

    // Evaluate performance
    printf("Performance is %.1f GFLOPS\n", (double)(1.0E-9*flops/timeneeded));
    return 0;
}
Examples below cover x64 Linux compilation from command line with GCC.
We assume that the current directory is clean before each example is executed (i.e. it has ONLY the demo.cpp file and the cpp folder). We used a 2.3 GHz 2-core Skylake CPU with 2x Hyperthreading enabled for this test.
The first example covers platform-agnostic compilation without optimization settings - the simplest way to compile ALGLIB. This step is the same in both open source and commercial editions. However, in platform-agnostic mode ALGLIB is unable to use all of the performance-related features present in the commercial edition.
We start by copying all cpp and h files to the current directory, then compile them along with demo.cpp. In this and the following examples we omit compiler output for the sake of simplicity.
OS-agnostic mode, no compiler optimizations
> cp cpp/src/* .
> g++ -I. -o demo.out *.cpp
> ./demo.out
Performance is 0.9 GFLOPS
Let's add -O3 to the compiler parameters.
OS-agnostic mode, -O3 optimization
> g++ -I. -o demo.out -O3 *.cpp
> ./demo.out
Performance is 2.8 GFLOPS
Better, but still not impressive. Let's turn on optimizations for the x86 architecture: define AE_CPU=AE_INTEL. This option provides some speed-up in both free and commercial editions of ALGLIB.
OS-agnostic mode, ALGLIB knows it is x86/x64
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL *.cpp
> ./demo.out
Performance is 5.0 GFLOPS
That's good, but we have 4 logical cores (in fact, 2 physical cores - it is a 2-way hyperthreaded system) and only one of them was used. Defining AE_OS=AE_POSIX allows ALGLIB to use POSIX threads to parallelize the execution of some functions. You should also specify the -pthread flag to link with the pthreads standard library. Starting from this moment, our example applies only to the Commercial Edition.
ALGLIB knows it is POSIX OS on x86/x64 CPU (COMMERCIAL EDITION)
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL -DAE_OS=AE_POSIX -pthread *.cpp
> ./demo.out
Performance is 9.0 GFLOPS
Not bad. You may notice that the performance growth was ~2x, not 4x. The reason is that we tested ALGLIB on a hyperthreaded system: although we have 4 logical cores, they share the computational resources of just 2 physical cores. And now we are ready for the final test - linking with MKL extensions.
Linking with MKL extensions differs a bit from the standard way of linking with ALGLIB. ALGLIB itself is compiled with one more preprocessor definition: the AE_MKL symbol. We also link ALGLIB with the appropriate alglib???_??mkl.so shared library, which contains the special lightweight MKL distribution shipped with ALGLIB.
We should note that on a typical Linux system shared libraries are not loaded from the current directory by default. Either you install them into one of the system directories, or you use some way to tell the linker/loader that you want to load the shared library from a specific directory. For our example we chose to update the LD_LIBRARY_PATH environment variable.
Linking with MKL extensions (COMMERCIAL EDITION, relevant for ALGLIB 3.13)
> cp cpp/addons-mkl/libalglib*64mkl.so .
> ls *.so
libalglib313_64mkl.so
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL -DAE_OS=AE_POSIX -pthread -DAE_MKL -L. *.cpp -lalglib313_64mkl
> LD_LIBRARY_PATH=.
> export LD_LIBRARY_PATH
> ./demo.out
Performance is 33.8 GFLOPS
Final result: from 0.9 GFLOPS to 33.8 GFLOPS!
Both open source and commercial versions of ALGLIB are 100% thread-safe as long as different user threads work with different instances of objects/arrays. Thread-safety is guaranteed by having no global shared variables.
However, any kind of sharing ALGLIB objects/arrays between different threads is potentially hazardous. Even when this object is seemingly used in read-only mode!
Say, you use an ALGLIB neural network NET to process two input vectors X0 and X1, and get two output vectors Y0 and Y1. You may decide that the neural network is used in read-only mode which does not change the state of NET, because the output is written to the distinct arrays Y0 and Y1. Thus, you may want to process these vectors from parallel threads. But it is not a read-only operation, even if it looks like one! The neural network object NET allocates internal temporary buffers which are modified by the neural processing functions. Thus, sharing one instance of a neural network between two threads is thread-unsafe!
ALGLIB defines several conditional symbols (all starting with "AE_", which means "ALGLIB environment") and two namespaces: alglib_impl (which contains the computational core) and alglib (which contains the C++ interface). Although this manual mentions both the alglib_impl and alglib namespaces, only the alglib namespace should be used by you. It contains a user-friendly C++ interface with automatic memory management, exception handling and all other nice features. alglib_impl is less user-friendly and less documented, and it is too easy to crash your system or cause a memory leak if you use it directly.
ALGLIB (the ap.h header) defines several "basic" datatypes (types which are used by all packages) and many package-specific datatypes. The "basic" datatypes are:
- alglib::ae_int_t - signed integer type used by the library
- alglib::complex - double precision complex datatype, a safer replacement for std::complex
- alglib::ap_error - exception which is thrown by the library
- boolean_1d_array - 1-dimensional boolean array
- integer_1d_array - 1-dimensional integer array
- real_1d_array - 1-dimensional real (double precision) array
- complex_1d_array - 1-dimensional complex array
- boolean_2d_array - 2-dimensional boolean array
- integer_2d_array - 2-dimensional integer array
- real_2d_array - 2-dimensional real (double precision) array
- complex_2d_array - 2-dimensional complex array

Package-specific datatypes are classes which can be divided into two distinct groups:
The most important constants (defined in the ap.h header) from the ALGLIB namespace are:
- alglib::machineepsilon - small number which is close to the double precision machine epsilon, but slightly larger
- alglib::maxrealnumber - very large number which is close to the maximum real number, but slightly smaller
- alglib::minrealnumber - very small number which is close to the minimum nonzero real number, but slightly larger
- alglib::fp_nan - NAN (non-signalling under most platforms except for PA-RISC, where it is signalling; but when a PA-RISC CPU is in its default state, it is silently converted to the quiet NAN)
- alglib::fp_posinf - positive infinity
- alglib::fp_neginf - negative infinity
The most important "basic" functions from the ALGLIB namespace (ap.h header) are:
- alglib::randomreal() - returns a random real number from [0,1)
- alglib::randominteger(mx) - returns a random integer number from [0,mx); mx must be less than RAND_MAX
- alglib::fp_eq(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers. If the numbers are represented with greater precision than specified by IEEE 754 (as with the Intel 80-bit FPU), this function converts them to 64 bits before comparing
- alglib::fp_neq(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers
- alglib::fp_less(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers
- alglib::fp_less_eq(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers
- alglib::fp_greater(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers
- alglib::fp_greater_eq(v1,v2) - makes an IEEE-compliant comparison of two double precision numbers
- alglib::fp_isnan - checks whether a number is NAN
- alglib::fp_isposinf - checks whether a number is +INF
- alglib::fp_isneginf - checks whether a number is -INF
- alglib::fp_isinf - checks whether a number is +INF or -INF
- alglib::fp_isfinite - checks whether a number is a finite value (possibly subnormal)
ALGLIB (the ap.h header) supports matrices and vectors (one-dimensional and two-dimensional arrays) of variable size, with indexing starting from zero.
Everything starts with array creation. You should distinguish between the creation of the array class instance and the memory allocation for the array. When creating the class instance, you can use the constructor without any parameters, which creates an empty array without any elements. An attempt to address its elements may cause program failure.
You can use the copy and assignment constructors that copy one array into another. If, during the copy operation, the source array has no memory allocated for its elements, the destination array will contain no elements either. If the source array has memory allocated for its elements, the destination array will allocate the same amount of memory and copy the elements there. That is, the copy operation yields two independent arrays with identical contents.
You can also create an array from a formatted string like "[]", "[true,FALSE,tRUe]", "[[]]" or "[[1,2],[3.2,4],[5.2]]" (note: '.' is used as the decimal point independently of locale settings).
alglib::boolean_1d_array b1;
b1 = "[true]";
alglib::real_2d_array r2("[[2,3],[3,4]]");
alglib::real_2d_array r2_1("[[]]");
alglib::real_2d_array r2_2(r2);
r2_1 = r2;
alglib::complex_1d_array c2;
c2 = "[]";
c2 = "[0]";
c2 = "[1,2i]";
c2 = "[+1-2i,-1+5i]";
c2 = "[ 4i-2, 8i+2]";
c2 = "[+4i-2, +8i+2]";
c2 = "[-4i-2, -8i+2]";
After an empty array has been created, you can allocate memory for its elements using the setlength() method. The content of the newly created array elements is not defined. If the setlength method is called for an array with already allocated memory, then, after changing its size, the newly allocated elements are also undefined and the old content is destroyed.
alglib::boolean_1d_array b1;
b1.setlength(2);
alglib::integer_2d_array r2;
r2.setlength(4,3);
Another way to initialize an array is to call the setcontent() method. This method accepts a pointer to data which is copied into the newly allocated array. Vectors are stored contiguously; matrices are stored row by row.
alglib::real_1d_array r1;
double _r1[] = {2, 3};
r1.setcontent(2,_r1);
alglib::real_2d_array r2;
double _r2[] = {11, 12, 13, 21, 22, 23};
r2.setcontent(2,3,_r2);
You can also attach a real vector/matrix object to an already allocated double precision array (attaching to boolean/integer/complex arrays is not supported). In this case, no actual data is copied, and the attached vector/matrix object becomes a read/write proxy for the external array.
alglib::real_1d_array r1;
double a1[] = {2, 3};
r1.attach_to_ptr(2,a1);
alglib::real_2d_array r2;
double a2[] = {11, 12, 13, 21, 22, 23};
r2.attach_to_ptr(2,3,a2);
To access the array elements, an overloaded operator() or operator[] can be used. That is, code addressing the element of array a with indexes [i,j] can look like a(i,j) or a[i][j].
alglib::integer_1d_array a("[1,2,3]");
alglib::integer_1d_array b("[3,9,27]");
a[0] = b(0);
alglib::integer_2d_array c("[[1,2,3],[9,9,9]]");
alglib::integer_2d_array d("[[3,9,27],[8,8,8]]");
d[1][1] = c(0,0);
You can access the contents of a 1-dimensional array by calling the getcontent() method, which returns a pointer to the array memory. For historical reasons 2-dimensional arrays do not provide a getcontent() method, but you can create a reference to any element of the array. 2-dimensional arrays store data in row-major order with aligned rows (i.e. in general the distance between rows is not equal to the number of columns). You can get the stride (the distance between corresponding elements in consecutive rows) with the getstride() call.
alglib::integer_1d_array a("[1,2]");
alglib::real_2d_array b("[[0,1],[10,11]]");
alglib::ae_int_t *a_row = a.getcontent();

// all three pointers point to the same location
double *b_row0   = &b[0][0];
double *b_row0_2 = &b(0,0);
double *b_row0_3 = b[0];

// advancing to the next row of 2-dimensional array
double *b_row1 = b_row0 + b.getstride();
Finally, you can get the array size with the length(), rows() or cols() methods:
alglib::integer_1d_array a("[1,2]");
alglib::real_2d_array b("[[0,1],[10,11]]");
printf("%ld\n", (long)a.length());
printf("%ld\n", (long)b.rows());
printf("%ld\n", (long)b.cols());
Most ALGLIB functions provide two interfaces: 'expert' and 'friendly'. What is the difference between the two? When you use the 'friendly' interface, ALGLIB:
When you use the 'expert' interface, ALGLIB:
Here are several examples of 'friendly' and 'expert' interfaces:
#include "interpolation.h"
...
alglib::real_1d_array x("[0,1,2,3]");
alglib::real_1d_array y("[1,5,3,9]");
alglib::real_1d_array y2("[1,5,3,9,0]");
alglib::spline1dinterpolant s;

// 'expert' interface is used
alglib::spline1dbuildlinear(x, y, 4, s);

// 'friendly' interface - input size is automatically determined
alglib::spline1dbuildlinear(x, y, s);

// y2.length() is 5, but it will work
alglib::spline1dbuildlinear(x, y2, 4, s);

// it won't work because sizes of x and y2 are inconsistent
alglib::spline1dbuildlinear(x, y2, s);
'Friendly' interface - matrix semantics:
#include "linalg.h"
...
alglib::real_2d_array a;
alglib::matinvreport rep;
alglib::ae_int_t info;

//
// 'Friendly' interface: spdmatrixinverse() accepts and returns symmetric matrix
//

// symmetric positive definite matrix
a = "[[2,1],[1,2]]";

// after this line A will contain [[0.66,-0.33],[-0.33,0.66]]
// which is symmetric too
alglib::spdmatrixinverse(a, info, rep);

// you may try to pass nonsymmetric matrix
a = "[[2,1],[0,2]]";

// but exception will be thrown in such case
alglib::spdmatrixinverse(a, info, rep);
The same function, but with the 'expert' interface:
#include "linalg.h"
...
alglib::real_2d_array a;
alglib::matinvreport rep;
alglib::ae_int_t info;

//
// 'Expert' interface, spdmatrixinverse()
//

// only the upper triangle is used; a[1][0] is initialized by NAN,
// but it can be an arbitrary number
a = "[[2,1],[NAN,2]]";

// after this line A will contain [[0.66,-0.33],[NAN,0.66]];
// only the upper triangle is modified
alglib::spdmatrixinverse(a, 2 /* N */, true /* upper triangle is used */, info, rep);
ALGLIB uses two error handling strategies: completion codes returned by functions, and the alglib::ap_error exception. Which strategy is used depends on the function being called and on the error being reported. The exception object has a msg field which contains a short description of the error. To make things clear, we consider several examples of error handling.
Example 1. The mincgcreate function creates a nonlinear CG optimizer. It accepts the problem size N and the initial point X.
Several things can go wrong: you may pass an array which is too short, filled with NANs, or otherwise incorrect.
However, this function returns no error code, so it throws an exception in case something goes wrong.
There is no other way to tell the caller that something went wrong.
Example 2. The rmatrixinverse function calculates the inverse matrix.
It returns an error code, which is set to +1 when the problem is solved and to -3 when a singular matrix was passed to the function.
However, there is no error code for a matrix which is non-square or contains infinities.
Well, we could have created corresponding error codes, but we didn't.
So if you pass a singular matrix to rmatrixinverse, you will get completion code -3.
But if you pass a matrix which contains INF in one of its elements, alglib::ap_error will be thrown.
The first error handling strategy (error codes) is used to report "frequent" errors, which can occur during normal execution of a user program. The second strategy (exceptions) is used to report "rare" errors which are the result of serious flaws in your program (or in ALGLIB): infinities/NANs in the inputs, inconsistent inputs, and so on.
ALGLIB (the ap.h header) includes the following Level 1 BLAS functions:
* the alglib::vdotproduct() family, which calculates the dot product of two real or complex vectors
* the alglib::vmove() family, which copies a real/complex vector to another location, with optional multiplication by a real/complex value
* the alglib::vmoveneg() family, which copies a real/complex vector to another location with multiplication by -1
* the alglib::vadd() and alglib::vsub() families, which add or subtract two real/complex vectors, with optional multiplication by a real/complex value
* the alglib::vmul() family, which implements in-place multiplication of a real/complex vector by a real/complex value
Each Level 1 BLAS function accepts an input stride and an output stride, which are expected to be positive.
Input and output vectors must not overlap.
Functions operating on complex vectors accept an additional parameter conj_src, which specifies whether the input vector is conjugated or not.
For each real/complex function there exists a "simple" companion which accepts no stride or conjugation modifier. The "simple" function assumes that the input/output stride is +1 and that no input conjugation is required.
alglib::real_1d_array rvec("[0,1,2,3]");
alglib::real_2d_array rmat("[[1,2],[3,4]]");
alglib::complex_1d_array cvec("[0+1i,1+2i,2-1i,3-2i]");
alglib::complex_2d_array cmat("[[3i,1],[9,2i]]");

alglib::vmove(&rvec[0], 1, &rmat[0][0], rmat.getstride(), 2);
// now rvec is [1,3,2,3]

alglib::vmove(&cvec[0], 1, &cmat[0][0], cmat.getstride(), "No conj", 2);
// now cvec is [3i, 9, 2-1i, 3-2i]

alglib::vmove(&cvec[2], 1, &cmat[0][0], 1, "Conj", 2);
// now cvec is [3i, 9, -3i, 1]
Here is the full list of Level 1 BLAS functions implemented in ALGLIB:
double vdotproduct(const double *v0, ae_int_t stride0,
                   const double *v1, ae_int_t stride1, ae_int_t n);
double vdotproduct(const double *v1, const double *v2, ae_int_t N);
alglib::complex vdotproduct(const alglib::complex *v0, ae_int_t stride0, const char *conj0,
                            const alglib::complex *v1, ae_int_t stride1, const char *conj1,
                            ae_int_t n);
alglib::complex vdotproduct(const alglib::complex *v1, const alglib::complex *v2, ae_int_t N);

void vmove(double *vdst, ae_int_t stride_dst,
           const double *vsrc, ae_int_t stride_src, ae_int_t n);
void vmove(double *vdst, const double *vsrc, ae_int_t N);
void vmove(alglib::complex *vdst, ae_int_t stride_dst,
           const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
           ae_int_t n);
void vmove(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N);

void vmoveneg(double *vdst, ae_int_t stride_dst,
              const double *vsrc, ae_int_t stride_src, ae_int_t n);
void vmoveneg(double *vdst, const double *vsrc, ae_int_t N);
void vmoveneg(alglib::complex *vdst, ae_int_t stride_dst,
              const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
              ae_int_t n);
void vmoveneg(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N);

void vmove(double *vdst, ae_int_t stride_dst,
           const double *vsrc, ae_int_t stride_src, ae_int_t n, double alpha);
void vmove(double *vdst, const double *vsrc, ae_int_t N, double alpha);
void vmove(alglib::complex *vdst, ae_int_t stride_dst,
           const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
           ae_int_t n, double alpha);
void vmove(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N, double alpha);
void vmove(alglib::complex *vdst, ae_int_t stride_dst,
           const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
           ae_int_t n, alglib::complex alpha);
void vmove(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N,
           alglib::complex alpha);

void vadd(double *vdst, ae_int_t stride_dst,
          const double *vsrc, ae_int_t stride_src, ae_int_t n);
void vadd(double *vdst, const double *vsrc, ae_int_t N);
void vadd(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n);
void vadd(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N);
void vadd(double *vdst, ae_int_t stride_dst,
          const double *vsrc, ae_int_t stride_src, ae_int_t n, double alpha);
void vadd(double *vdst, const double *vsrc, ae_int_t N, double alpha);
void vadd(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n, double alpha);
void vadd(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N, double alpha);
void vadd(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n, alglib::complex alpha);
void vadd(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N,
          alglib::complex alpha);

void vsub(double *vdst, ae_int_t stride_dst,
          const double *vsrc, ae_int_t stride_src, ae_int_t n);
void vsub(double *vdst, const double *vsrc, ae_int_t N);
void vsub(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n);
void vsub(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N);
void vsub(double *vdst, ae_int_t stride_dst,
          const double *vsrc, ae_int_t stride_src, ae_int_t n, double alpha);
void vsub(double *vdst, const double *vsrc, ae_int_t N, double alpha);
void vsub(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n, double alpha);
void vsub(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N, double alpha);
void vsub(alglib::complex *vdst, ae_int_t stride_dst,
          const alglib::complex *vsrc, ae_int_t stride_src, const char *conj_src,
          ae_int_t n, alglib::complex alpha);
void vsub(alglib::complex *vdst, const alglib::complex *vsrc, ae_int_t N,
          alglib::complex alpha);

void vmul(double *vdst, ae_int_t stride_dst, ae_int_t n, double alpha);
void vmul(double *vdst, ae_int_t N, double alpha);
void vmul(alglib::complex *vdst, ae_int_t stride_dst, ae_int_t n, double alpha);
void vmul(alglib::complex *vdst, ae_int_t N, double alpha);
void vmul(alglib::complex *vdst, ae_int_t stride_dst, ae_int_t n, alglib::complex alpha);
void vmul(alglib::complex *vdst, ae_int_t N, alglib::complex alpha);
ALGLIB (the ap.h header) provides the alglib::read_csv() function, which reads data from a CSV file.
The entire file is loaded into memory as a double-precision 2D array (an alglib::real_2d_array object).
This function provides the following features:
See the comments on the alglib::read_csv() function for more information about its functionality.
The commercial version of ALGLIB for C++ features four important improvements over the open source one:
ALGLIB for C++ can utilize SIMD instructions supported by Intel and AMD processors. This feature is optional and must be explicitly turned on at compile time. If you do not activate it, ALGLIB will use generic C code, without any processor-specific assembly or intrinsics.
Thus, if you turn this feature on, your code will run faster on x86_32 and x86_64 processors, but it will be unportable to non-x86 platforms (including the Intel MIC platform, which is not exactly x86!). On the other hand, if you do not activate this feature, your code will be portable to almost any modern CPU (SPARC, ARM, ...).
In order to turn on x86-specific optimizations, you should define the AE_CPU=AE_INTEL preprocessor symbol at the global level. It tells ALGLIB to use SIMD intrinsics supported by the GCC, MSVC and Intel compilers. Additionally, you should tell the compiler to generate SIMD-capable code. This can be done in the project settings of your IDE or on the command line:
GCC example:
> g++ -msse2 -I. -DAE_CPU=AE_INTEL *.cpp -lm

MSVC example:
> cl /I. /EHsc /DAE_CPU=AE_INTEL *.cpp
The commercial version of ALGLIB includes out-of-the-box support for multithreading. Many (but not all) computationally intensive problems can be solved in multithreaded mode. You should read the comments on specific ALGLIB functions to determine what can be multithreaded and what cannot.
ALGLIB does not depend on vendor/compiler support for technologies like OpenMP or MPI. Under Windows, ALGLIB uses OS threads and a custom synchronization framework. Under POSIX-compatible OSes (Solaris, Linux, FreeBSD, NetBSD, OpenBSD, ...), ALGLIB uses POSIX Threads (a standard *nix library shipped with any POSIX system) for its threading and synchronization primitives. This gives ALGLIB unprecedented portability across operating systems and compilers: it does not depend on the presence of any third-party multithreading library or on compiler support for any multithreading technology.
If you want to use the multithreaded capabilities of ALGLIB, you should:
Let us explain this in more detail.
1. You should compile ALGLIB in OS-specific mode by #defining either AE_OS=AE_WINDOWS or AE_OS=AE_POSIX at compile time, depending on the OS being used.
The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter covers almost any POSIX-compatible OS.
When compiling on POSIX, do not forget to link ALGLIB with the libpthread library.
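A POSIX build command might then look like this (a sketch which assumes all ALGLIB .cpp sources sit in the current directory):

```shell
# Compile ALGLIB in POSIX mode and link against the pthreads library
g++ -I. -DAE_OS=AE_POSIX *.cpp -lpthread -lm
```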
2. ALGLIB automatically determines the number of cores on application startup.
On Windows this is done with a GetSystemInfo() call.
On POSIX systems ALGLIB performs a sysconf(_SC_NPROCESSORS_ONLN) call.
This call is supported by all modern POSIX-compatible systems: Solaris, Linux, FreeBSD, NetBSD, OpenBSD.
By default, ALGLIB uses all available cores except for one.
Say, on a 4-core system it will use three cores, unless told to use more or less.
This keeps your system responsive during lengthy computations.
The behavior can be changed with the setnworkers() call:
* alglib::setnworkers(0) - use all cores
* alglib::setnworkers(-1) - leave one core unused
* alglib::setnworkers(-2) - leave two cores unused
* alglib::setnworkers(+2) - use 2 cores (even if you have more)
You may want to specify the maximum number of worker threads at compile time by means of the preprocessor definition AE_NWORKERS=N.
You can add this definition to the compiler command line or change the corresponding project settings in your IDE.
Here N can be any positive number.
ALGLIB will use exactly N worker threads, unless told to use fewer by a setnworkers() call.
Some old POSIX-compatible operating systems do not support the sysconf(_SC_NPROCESSORS_ONLN) call, which is required to automatically determine the number of active cores.
On these systems you should specify the number of cores manually at compile time; without it, ALGLIB will run in single-threaded mode.
3. When you use the commercial edition of ALGLIB, you may choose between serial and multithreaded versions of SMP-capable functions:
* the serial version performs all processing in the context of the calling thread;
* the multithreaded version (with the smp_ prefix in its name) creates (or wakes up) worker threads, inserts the task into the worker queue, and waits for its completion. All processing is done in the context of the worker thread(s).
You should carefully decide which version of a function to use. Starting/stopping a worker thread costs tens of thousands of CPU cycles, so you won't get any multithreading speedup on small computational problems.
Simultaneous multithreading (SMT), also known as Hyper-Threading (Intel) and Clustered Multithreading (AMD), is a CPU design where several (usually two) logical cores share the resources of one physical core. Say, on a dual-core system with a 2x HT scale factor you will see 4 logical cores. Each pair of these 4 cores, however, shares the same hardware resources. Thus, you may get only a marginal speedup when running highly optimized software which fully utilizes the CPU.
Say, if one thread occupies the floating-point unit, another thread on the same physical core may work with integers at the same time without any performance penalty. In this case you may get some speedup from the additional logical cores. But if both threads keep the FPU 100% busy, they won't get any multithreaded speedup.
So, if 2 math-intensive threads are dispatched by the OS scheduler to different physical cores, you will get a 2x speedup from multithreading. But if these threads are dispatched to different logical cores on the same physical core, you won't get any speedup at all! One physical core will be 100% busy, and the other one will be 100% idle. On the other hand, if you start four threads instead of two, your system will be 100% utilized regardless of thread scheduling details.
Let us stress this one more time: multithreading speedup on SMT systems is highly dependent on the number of threads you are running and on decisions made by the OS scheduler. It is not 100% deterministic! With "true SMP", when you run 2 threads you get a 2x speedup (or 1.95x, or 1.80x - it depends on the algorithm, but the factor is always the same). With SMT, when you run 2 threads you may get your 2x speedup - or no speedup at all. Modern OS schedulers do a good job on single-socket hardware, but even in this "simple" case they give no guarantees of fair distribution of hardware resources. And things become trickier when you work with multi-socket hardware. On SMT systems the only guaranteed way to 100% utilize your CPU is to create as many worker threads as there are logical cores. In this case the OS scheduler has no chance to schedule them poorly.
The commercial edition of ALGLIB includes MKL extensions - a special lightweight distribution of Intel MKL, the highly optimized numerical library from Intel - and precompiled ALGLIB-MKL interface libraries. Linking your programs with MKL extensions allows you to run ALGLIB at maximum performance. MKL binaries are delivered for x86/x64 Windows and x64 Linux platforms.
Unlike the rest of the library, MKL extensions are distributed in binary-only form. ALGLIB itself is still distributed in source code form, but Intel MKL and the ALGLIB-MKL interface are distributed as precompiled dynamic/static libraries. We cannot distribute them in source form because of license restrictions associated with Intel MKL. Also, due to license restrictions, we cannot give you direct access to MKL functionality. You may use MKL to accelerate ALGLIB - without paying for an MKL license - but you may not call its functions directly. It is technically possible, but strictly prohibited by both MKL's EULA and the ALGLIB License Agreement. If you want to work with MKL directly, you should obtain a separate license from Intel (as of 2018, free licenses are available).
MKL extensions are located in the /cpp/addons-mkl subdirectory of the ALGLIB distribution.
This directory includes the following files:
Here ??? stands for the specific ALGLIB version: 313 for ALGLIB 3.13, and so on. The files above are just the MKL extensions - ALGLIB itself is not included in these binaries, and you still have to compile the primary ALGLIB distribution.
In order to activate MKL extensions you should:
* define AE_MKL to activate the ALGLIB-MKL connection
* define AE_OS=AE_WINDOWS or AE_OS=AE_POSIX to activate multithreading capabilities
* define AE_CPU=AE_INTEL to use SIMD instructions provided by x86/x64 CPUs in the rest of ALGLIB
* link the appropriate alglib???_32/64mkl.lib with your application
Several examples of ALGLIB+MKL usage are given in the 'compiling ALGLIB: examples' section.
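Under 64-bit Windows with MSVC, the steps above might translate into a command line like the following (a sketch; replace ??? with your ALGLIB version number as described above):

```shell
# Build with MKL extensions: all three symbols defined, MKL interface library linked
cl /I. /EHsc /DAE_OS=AE_WINDOWS /DAE_CPU=AE_INTEL /DAE_MKL *.cpp alglib???_64mkl.lib
```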
If you bought a separate license for Intel MKL and want to use your own installation of MKL, not our lightweight distribution, then you should compile ALGLIB as described in the previous section, with all the necessary preprocessor definitions (AE_OS=AE_WINDOWS or AE_OS=AE_POSIX, AE_CPU=AE_INTEL and AE_MKL defined).
But instead of linking with the MKL extensions binary, you should add the alglib2mkl.c file from the addons-mkl directory to your project and compile it (as a C file) along with the rest of ALGLIB.
This C file implements the interface between MKL and ALGLIB.
Having this file in your project and defining the AE_MKL preprocessor symbol results in ALGLIB using MKL functions.
However, this C file is just an interface! It is your responsibility to make sure that the C/C++ compiler can find the MKL headers, and that the appropriate MKL static/dynamic libraries are linked to your application.
ALGLIB for C++ can be compiled in exception-free mode, with exceptions (throw/try/catch constructs) disabled at the compiler level.
This feature is sometimes used by developers of embedded software.
ALGLIB uses a two-level model of errors: "expected" errors (like degeneracy of a linear system or inconsistency of linear constraints) are reported with dedicated completion codes, while "critical" errors (like malloc failures, unexpected NANs/INFs in the input data, and so on) are reported with exceptions.
The idea is that it is hard to put (and handle) completion codes in every ALGLIB function, so we use exceptions to signal errors which should never happen under normal circumstances.
Internally, ALGLIB for C++ is implemented as a C++ wrapper around a computational core written in pure C.
Thus, the internals of the ALGLIB core use C-specific methods of error handling: completion codes and the setjmp/longjmp functions.
These error handling strategies are combined with sophisticated C memory management machinery which makes sure that not even a byte of dynamic memory is lost when we longjmp to the error handler.
So, the only point where C++ exceptions are actually used is the boundary between the C core and the C++ interface.
If you choose to use exceptions (the default mode), ALGLIB will throw an exception with a short textual description of the situation. If you choose to work without exceptions, ALGLIB will set a global error flag and silently return from the current function/constructor instead of throwing an exception. Due to portability issues this error flag is a non-TLS variable, i.e. it is shared between threads. So, you can use exception-free error handling only in single-threaded programs: although multithreaded programs won't break, there is no way to determine which thread caused an "exception without exceptions".
The exception-free method of reporting critical errors can be activated by #defining two preprocessor symbols at the global level:
* AE_NO_EXCEPTIONS - to switch from exception-based to exception-free code
* AE_THREADING=AE_SERIAL_UNSAFE - to confirm that you are aware of the limitations associated with exception-free mode (it does not support multithreading)
We must also note that exception-free mode is incompatible with OS-aware compilation: you cannot have AE_OS=??? defined together with AE_NO_EXCEPTIONS.
After you #define all the necessary preprocessor symbols, two functions will appear in the alglib namespace:
* bool alglib::get_error_flag(const char **p_msg = NULL), which returns the current error status (true is returned on error), with an optional char** parameter used to get a human-readable error message
* void alglib::clear_error_flag(), which clears the error flag (ALGLIB functions set the flag on failure, but do not clear it on successful calls)
You must check the error flag after EVERY operation with ALGLIB objects and functions. In addition to calls to computational ALGLIB functions, the following kinds of operations may result in an "exception":
* creation of ALGLIB objects and arrays (involves a malloc() call, which may fail)
* copying of objects and arrays (also involves malloc)
* resizing arrays with setlength()/setcontents() and attaching to external memory with attach_to_ptr()
Due to ALGLIB's modular structure it is possible to selectively enable/disable some of its subpackages along with their dependencies. Deactivation of ALGLIB source code is performed at the preprocessor level - the compiler does not even see the disabled code. Partial compilation can be used for two purposes:
You can activate partial compilation by #defining the following symbols at the global level:
* AE_PARTIAL_BUILD - to disable everything not enabled by default
* AE_COMPILE_SUBPACKAGE - to selectively enable subpackage SUBPACKAGE, with its name given in upper case (case sensitive). You may combine several definitions like this in order to enable several subpackages.
Subpackage names can be found in the list of ALGLIB packages and subpackages.
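For example (our own illustration), enabling only the mincg subpackage and its dependencies might look like this; your_app.cpp is a placeholder for your own code:

```shell
# Enable partial compilation, then selectively enable the MINCG subpackage
g++ -I. -DAE_PARTIAL_BUILD -DAE_COMPILE_MINCG *.cpp your_app.cpp -lm
```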
There are three test suites in ALGLIB: computational tests, interface tests, and extended tests.
Computational tests are located in /tests/test_c.cpp.
They focus on the numerical properties of algorithms, stress testing and "deep" tests (large automatically generated problems).
They require a significant amount of time to finish (tens of minutes).
Interface tests are located in /tests/test_i.cpp.
These tests focus on the ability to correctly pass data between the computational core and the caller, the ability to detect simple problems in the inputs, and the ability to at least compile ALGLIB with your compiler.
They are very fast (about a minute to finish, including compilation time).
Extended tests are located in /tests/test_x.cpp.
These tests focus on special properties (say, testing that cloning an object indeed creates a 100% independent copy) and on the performance of several chosen algorithms.
Running a test suite is easy - just compile the corresponding source file (test_c.cpp, test_i.cpp or test_x.cpp) along with the rest of the library and run it.
If you want to be sure that ALGLIB will work with some sophisticated optimization settings, set the corresponding flags at compile time.
If your compiler/system is not in the list of supported ones, we recommend running both test suites. If you are running out of time, run at least test_i.cpp.
8.1 AlglibMisc package
hqrnd | High quality random number generator
nearestneighbor | Nearest neighbor search: approximate and exact
xdebug | Debug functions to test the ALGLIB interface generator
8.2 DataAnalysis package
bdss | Basic dataset functions
clustering | Clustering functions (hierarchical, k-means, k-means++)
datacomp | Backward compatibility functions
dforest | Decision forest classifier (regression model)
filters | Different filters used in data analysis
lda | Linear discriminant analysis
linreg | Linear models
logit | Logit models
mcpd | Markov chains for population/proportional data
mlpbase | Basic functions for neural networks
mlpe | Basic functions for neural ensemble models
mlptrain | Neural network training
pca | Principal component analysis
ssa | Singular spectrum analysis
8.3 DiffEquations package
odesolver | Ordinary differential equation solver
8.4 FastTransforms package
conv | Fast real/complex convolution
corr | Fast real/complex cross-correlation
fft | Real/complex FFT
fht | Real fast Hartley transform
8.5 Integration package
autogk | Adaptive 1-dimensional integration
gkq | Gauss-Kronrod quadrature generator
gq | Gaussian quadrature generator
8.6 Interpolation package
idwint | Inverse distance weighting: interpolation/fitting
lsfit | Fitting with least-squares target function (linear and nonlinear least squares)
nsfit | Fitting with non-least-squares target functions (ones involving min/max operations, etc.)
parametric | Parametric curves
polint | Polynomial interpolation/fitting
ratint | Rational interpolation/fitting
rbf | Scattered N-dimensional interpolation with RBF models
spline1d | 1D spline interpolation/fitting
spline2d | 2D spline interpolation
spline3d | 3D spline interpolation
8.7 LinAlg package
ablas | Level 2 and Level 3 BLAS operations
bdsvd | Bidiagonal SVD
evd | Direct and iterative eigensolvers
inverseupdate | Sherman-Morrison update of the inverse matrix
matdet | Determinant calculation
matgen | Random matrix generation
matinv | Matrix inverse
normestimator | Estimates the norm of a sparse matrix (from below)
ortfac | Real/complex QR/LQ, bi(tri)diagonal, Hessenberg decompositions
rcond | Condition number estimation
schur | Schur decomposition
sparse | Sparse matrices
spdgevd | Generalized symmetric eigensolver
svd | Singular value decomposition
trfac | LU and Cholesky decompositions (dense and sparse)
8.8 Optimization package
minbc | Box constrained optimizer with fast activation of multiple constraints per step
minbleic | Bound constrained optimizer with additional linear equality/inequality constraints
mincg | Conjugate gradient optimizer
mincomp | Backward compatibility functions
minlbfgs | Limited memory BFGS optimizer
minlm | Improved Levenberg-Marquardt optimizer
minnlc | Nonlinearly constrained optimizer
minns | Nonsmooth constrained optimizer
minqp | Quadratic programming with bound and linear equality/inequality constraints
8.9 Solvers package
directdensesolvers | Direct dense linear solvers
directsparsesolvers | Direct sparse linear solvers
lincg | Sparse linear CG solver
linlsqr | Sparse linear LSQR solver
nleq | Solvers for nonlinear equations
polynomialsolver | Polynomial solver
8.10 SpecialFunctions package
airyf | Airy functions
bessel | Bessel functions
betaf | Beta function
binomialdistr | Binomial distribution
chebyshev | Chebyshev polynomials
chisquaredistr | Chi-square distribution
dawson | Dawson integral
elliptic | Elliptic integrals
expintegrals | Exponential integrals
fdistr | F-distribution
fresnel | Fresnel integrals
gammafunc | Gamma function
hermite | Hermite polynomials
ibetaf | Incomplete beta function
igammaf | Incomplete gamma function
jacobianelliptic | Jacobian elliptic functions
laguerre | Laguerre polynomials
legendre | Legendre polynomials
normaldistr | Normal distribution
poissondistr | Poisson distribution
psif | Psi function
studenttdistr | Student's t-distribution
trigintegrals | Trigonometric integrals
8.11 Statistics package
basestat | Mean, variance, covariance, correlation, etc.
correlationtests | Hypothesis testing: correlation tests
jarquebera | Hypothesis testing: Jarque-Bera test
mannwhitneyu | Hypothesis testing: Mann-Whitney U test
stest | Hypothesis testing: sign test
studentttests | Hypothesis testing: Student's t-test
variancetests | Hypothesis testing: F-test and one-sample variance test
wsr | Hypothesis testing: Wilcoxon signed rank test
ablas subpackage examples:
ablas_d_gemm | Matrix multiplication (single-threaded)
ablas_d_syrk | Symmetric rank-K update (single-threaded)
ablas_smp_gemm | Matrix multiplication (multithreaded)
ablas_smp_syrk | Symmetric rank-K update (multithreaded)
cmatrixcopy function

/*************************************************************************
Copy

Input parameters:
    M   -   number of rows
    N   -   number of columns
    A   -   source matrix, MxN submatrix is copied and transposed
    IA  -   submatrix offset (row index)
    JA  -   submatrix offset (column index)
    B   -   destination matrix, must be large enough to store result
    IB  -   submatrix offset (row index)
    JB  -   submatrix offset (column index)
*************************************************************************/
void alglib::cmatrixcopy(
    ae_int_t m,
    ae_int_t n,
    complex_2d_array a,
    ae_int_t ia,
    ae_int_t ja,
    complex_2d_array& b,
    ae_int_t ib,
    ae_int_t jb);
cmatrixgemm function

/*************************************************************************
This subroutine calculates C = alpha*op1(A)*op2(B) + beta*C where:
* C is MxN general matrix
* op1(A) is MxK matrix
* op2(B) is KxN matrix
* "op" may be identity transformation, transposition, conjugate transposition

Additional info:
* cache-oblivious algorithm is used.
* multiplication result replaces C. If Beta=0, C elements are not used in
  calculations (not multiplied by zero - just not referenced)
* if Alpha=0, A is not used (not multiplied by zero - just not referenced)
* if both Beta and Alpha are zero, C is filled by zeros.

COMMERCIAL EDITION OF ALGLIB:

! Commercial version of ALGLIB includes two important improvements of
! this function, which can be used from C++ and C#:
! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB)
! * multicore support
!
! Intel MKL gives approximately constant (with respect to number of
! worker threads) acceleration factor which depends on CPU being used,
! problem size and "baseline" ALGLIB edition which is used for
! comparison.
!
! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be:
! * about 2-3x faster than ALGLIB for C++ without MKL
! * about 7-10x faster than "pure C#" edition of ALGLIB
! Difference in performance will be more striking on newer CPU's with
! support for newer SIMD instructions.
!
! Commercial edition of ALGLIB also supports multithreaded acceleration
! of this function. Because starting/stopping worker thread always
! involves some overhead, parallelism starts to be profitable for N's
! larger than 128.
!
! In order to use multicore features you have to:
! * use commercial version of ALGLIB
! * call this function with "smp_" prefix, which indicates that
!   multicore code will be used (for multicore support)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

IMPORTANT: This function does NOT preallocate output matrix C, it MUST
           be preallocated by caller prior to calling this function.
           In case C does not have enough space to store result,
           exception will be generated.

INPUT PARAMETERS
    M       -   matrix size, M>0
    N       -   matrix size, N>0
    K       -   matrix size, K>0
    Alpha   -   coefficient
    A       -   matrix
    IA      -   submatrix offset
    JA      -   submatrix offset
    OpTypeA -   transformation type:
                * 0 - no transformation
                * 1 - transposition
                * 2 - conjugate transposition
    B       -   matrix
    IB      -   submatrix offset
    JB      -   submatrix offset
    OpTypeB -   transformation type:
                * 0 - no transformation
                * 1 - transposition
                * 2 - conjugate transposition
    Beta    -   coefficient
    C       -   matrix (PREALLOCATED, large enough to store result)
    IC      -   submatrix offset
    JC      -   submatrix offset

  -- ALGLIB routine --
     16.12.2009
     Bochkanov Sergey
*************************************************************************/
void alglib::cmatrixgemm(
    ae_int_t m,
    ae_int_t n,
    ae_int_t k,
    alglib::complex alpha,
    complex_2d_array a,
    ae_int_t ia,
    ae_int_t ja,
    ae_int_t optypea,
    complex_2d_array b,
    ae_int_t ib,
    ae_int_t jb,
    ae_int_t optypeb,
    alglib::complex beta,
    complex_2d_array& c,
    ae_int_t ic,
    ae_int_t jc);
void alglib::smp_cmatrixgemm(
    ae_int_t m,
    ae_int_t n,
    ae_int_t k,
    alglib::complex alpha,
    complex_2d_array a,
    ae_int_t ia,
    ae_int_t ja,
    ae_int_t optypea,
    complex_2d_array b,
    ae_int_t ib,
    ae_int_t jb,
    ae_int_t optypeb,
    alglib::complex beta,
    complex_2d_array& c,
    ae_int_t ic,
    ae_int_t jc);
cmatrixherk
function/************************************************************************* This subroutine calculates C=alpha*A*A^H+beta*C or C=alpha*A^H*A+beta*C where: * C is NxN Hermitian matrix given by its upper/lower triangle * A is NxK matrix when A*A^H is calculated, KxN matrix otherwise Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 K - matrix size, K>=0 Alpha - coefficient A - matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpTypeA - multiplication type: * 0 - A*A^H is calculated * 2 - A^H*A is calculated Beta - coefficient C - preallocated input/output matrix IC - submatrix offset (row index) JC - submatrix offset (column index) IsUpper - whether upper or lower triangle of C is updated; this function updates only one half of C, leaving other half unchanged (not referenced at all). -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixherk( ae_int_t n, ae_int_t k, double alpha, complex_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, complex_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper); void alglib::smp_cmatrixherk( ae_int_t n, ae_int_t k, double alpha, complex_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, complex_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper);
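The "update only one triangle" behavior described above can be sketched in plain C++ (an illustration of the documented semantics for the OpTypeA=0 case, not ALGLIB code; `ref_herk` is a made-up name):

```cpp
#include <complex>
#include <vector>

using cplx = std::complex<double>;
using cmat = std::vector<std::vector<cplx>>;

// Naive sketch of the Hermitian rank-K update C := alpha*A*A^H + beta*C
// (OpTypeA = 0 case), touching only one triangle of C, as cmatrixherk does.
void ref_herk(int n, int k, double alpha, const cmat& a,
              double beta, cmat& c, bool isupper) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            // Skip the triangle that is left unchanged (not referenced at all).
            if (isupper ? (j < i) : (j > i)) continue;
            cplx s = 0.0;
            for (int t = 0; t < k; t++)
                s += a[i][t] * std::conj(a[j][t]);   // (A*A^H)[i][j]
            c[i][j] = (beta == 0.0 ? cplx(0.0) : beta * c[i][j]) + alpha * s;
        }
}
```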
cmatrixlefttrsm
function/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixlefttrsm( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, complex_2d_array& x, ae_int_t i2, ae_int_t j2); void alglib::smp_cmatrixlefttrsm( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, complex_2d_array& x, ae_int_t i2, ae_int_t j2);
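The op(A^-1)*X operation is, in effect, a triangular solve with N right-hand sides (the columns of X). A minimal plain-C++ sketch of the lower triangular, OpType=0 case (illustrative code, not the ALGLIB implementation; `ref_lefttrsm_lower` is a made-up name):

```cpp
#include <complex>
#include <vector>

using cplx = std::complex<double>;
using cmat = std::vector<std::vector<cplx>>;

// Sketch of X := A^-1 * X for an MxM lower triangular A (OpType = 0 only):
// each column of the MxN matrix X is solved by forward substitution, in place.
void ref_lefttrsm_lower(int m, int n, const cmat& a, bool isunit, cmat& x) {
    for (int j = 0; j < n; j++)            // each column of X independently
        for (int i = 0; i < m; i++) {
            cplx s = x[i][j];
            for (int t = 0; t < i; t++)
                s -= a[i][t] * x[t][j];    // subtract already-solved rows
            x[i][j] = isunit ? s : s / a[i][i];  // unit diagonal => no division
        }
}
```

The independence of the columns is also what makes the SMP version of this routine easy to parallelize.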
cmatrixmv
function/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) M>=0 N - number of columns of op(A) N>=0 A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T * OpA=2 => op(A) = A^H X - input vector IX - subvector offset IY - subvector offset Y - preallocated vector, must be large enough to store result OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/void alglib::cmatrixmv( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t opa, complex_1d_array x, ae_int_t ix, complex_1d_array& y, ae_int_t iy);
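A plain-C++ sketch of the y := op(A)*x semantics described above (for illustration only, not ALGLIB code; `ref_mv` is a made-up name):

```cpp
#include <complex>
#include <vector>

using cplx = std::complex<double>;
using cmat = std::vector<std::vector<cplx>>;

// Sketch of y := op(A)*x, where op(A) is MxN.
// opa: 0 => A, 1 => A^T, 2 => A^H. y must already have M elements.
void ref_mv(int m, int n, const cmat& a, int opa,
            const std::vector<cplx>& x, std::vector<cplx>& y) {
    for (int i = 0; i < m; i++) {
        cplx s = 0.0;
        for (int j = 0; j < n; j++) {
            // For opa=1/2, A itself is stored NxM, hence a[j][i].
            cplx aij = (opa == 0) ? a[i][j]
                     : (opa == 1) ? a[j][i]
                                  : std::conj(a[j][i]);
            s += aij * x[j];
        }
        y[i] = s;
    }
}
```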
cmatrixrank1
function/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/void alglib::cmatrixrank1( ae_int_t m, ae_int_t n, complex_2d_array& a, ae_int_t ia, ae_int_t ja, complex_1d_array& u, ae_int_t iu, complex_1d_array& v, ae_int_t iv);
cmatrixrighttrsm
function/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixrighttrsm( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, complex_2d_array& x, ae_int_t i2, ae_int_t j2); void alglib::smp_cmatrixrighttrsm( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, complex_2d_array& x, ae_int_t i2, ae_int_t j2);
cmatrixsyrk
function/************************************************************************* This subroutine is an older version of CMatrixHERK(), with an incorrect name (it performs a HErmitian update, not a SYmmetric one). It is left here for backward compatibility. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixsyrk( ae_int_t n, ae_int_t k, double alpha, complex_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, complex_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper); void alglib::smp_cmatrixsyrk( ae_int_t n, ae_int_t k, double alpha, complex_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, complex_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper);
cmatrixtranspose
function/************************************************************************* Cache-oblivious complex "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void alglib::cmatrixtranspose( ae_int_t m, ae_int_t n, complex_2d_array a, ae_int_t ia, ae_int_t ja, complex_2d_array& b, ae_int_t ib, ae_int_t jb);
rmatrixcopy
function/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void alglib::rmatrixcopy( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t ia, ae_int_t ja, real_2d_array& b, ae_int_t ib, ae_int_t jb);
rmatrixenforcesymmetricity
function/************************************************************************* This code enforces symmetry of the matrix by copying the upper triangle to the lower one (or vice versa). INPUT PARAMETERS: A - matrix N - number of rows/columns IsUpper - whether we want to copy the upper triangle to the lower one (True) or vice versa (False). *************************************************************************/void alglib::rmatrixenforcesymmetricity( real_2d_array& a, ae_int_t n, bool isupper);
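The triangle-copying behavior described above amounts to a simple double loop, sketched here in plain C++ (illustration only, not ALGLIB code; `ref_enforce_symmetry` is a made-up name):

```cpp
#include <vector>

using rmat = std::vector<std::vector<double>>;

// Sketch of symmetry enforcement: copy the upper triangle of an NxN matrix
// into the lower one (isupper = true), or the lower one into the upper
// (isupper = false). The diagonal is left as-is.
void ref_enforce_symmetry(rmat& a, int n, bool isupper) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            if (isupper) a[j][i] = a[i][j];  // upper -> lower
            else         a[i][j] = a[j][i];  // lower -> upper
        }
}
```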
rmatrixgemm
function/************************************************************************* This subroutine calculates C = alpha*op1(A)*op2(B) +beta*C where: * C is MxN general matrix * op1(A) is MxK matrix * op2(B) is KxN matrix * "op" may be identity transformation, transposition Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
IMPORTANT: This function does NOT preallocate output matrix C, it MUST be preallocated by the caller prior to calling this function. If C does not have enough space to store the result, an exception will be generated. INPUT PARAMETERS M - matrix size, M>0 N - matrix size, N>0 K - matrix size, K>0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - transformation type: * 0 - no transformation * 1 - transposition B - matrix IB - submatrix offset JB - submatrix offset OpTypeB - transformation type: * 0 - no transformation * 1 - transposition Beta - coefficient C - PREALLOCATED output matrix, large enough to store result IC - submatrix offset JC - submatrix offset -- ALGLIB routine -- 2009-2013 Bochkanov Sergey *************************************************************************/void alglib::rmatrixgemm( ae_int_t m, ae_int_t n, ae_int_t k, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, real_2d_array b, ae_int_t ib, ae_int_t jb, ae_int_t optypeb, double beta, real_2d_array& c, ae_int_t ic, ae_int_t jc); void alglib::smp_rmatrixgemm( ae_int_t m, ae_int_t n, ae_int_t k, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, real_2d_array b, ae_int_t ib, ae_int_t jb, ae_int_t optypeb, double beta, real_2d_array& c, ae_int_t ic, ae_int_t jc);
rmatrixgemv
function/************************************************************************* Matrix-vector product: y := alpha*op(A)*x + beta*y INPUT PARAMETERS: M - number of rows of op(A) N - number of columns of op(A) Alpha - coefficient A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T X - input vector IX - subvector offset Beta - coefficient Y - preallocated output vector IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result *************************************************************************/void alglib::rmatrixgemv( ae_int_t m, ae_int_t n, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t opa, real_1d_array x, ae_int_t ix, double beta, real_1d_array& y, ae_int_t iy);
rmatrixger
function/************************************************************************* Rank-1 correction: A := A + alpha*u*v' NOTE: this function expects A to be large enough to store result. No automatic preallocation happens for smaller arrays. No integrity checks are performed for sizes of A, u, v. INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) Alpha- coefficient U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset -- ALGLIB routine -- 16.10.2017 Bochkanov Sergey *************************************************************************/void alglib::rmatrixger( ae_int_t m, ae_int_t n, real_2d_array& a, ae_int_t ia, ae_int_t ja, double alpha, real_1d_array u, ae_int_t iu, real_1d_array v, ae_int_t iv);
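The rank-1 update A := A + alpha*u*v' described above can be sketched in a few lines of plain C++ (illustration of the semantics only, not ALGLIB code; `ref_ger` is a made-up name):

```cpp
#include <vector>

using rmat = std::vector<std::vector<double>>;

// Sketch of the rank-1 update A := A + alpha*u*v^T on an MxN matrix:
// every element a[i][j] is incremented by alpha*u[i]*v[j].
void ref_ger(int m, int n, rmat& a, double alpha,
             const std::vector<double>& u, const std::vector<double>& v) {
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            a[i][j] += alpha * u[i] * v[j];
}
```

Like the documented routine, this sketch assumes A is already large enough and performs no size checks.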
rmatrixlefttrsm
function/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition Multiplication result replaces X. Cache-oblivious algorithm is used. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixlefttrsm( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, real_2d_array& x, ae_int_t i2, ae_int_t j2); void alglib::smp_rmatrixlefttrsm( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, real_2d_array& x, ae_int_t i2, ae_int_t j2);
rmatrixmv
function/************************************************************************* IMPORTANT: this function is deprecated since ALGLIB 3.13. Use RMatrixGEMV(), which is a more generic version of this function. Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) N - number of columns of op(A) A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T X - input vector IX - subvector offset IY - subvector offset Y - preallocated vector, must be large enough to store result OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/void alglib::rmatrixmv( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t opa, real_1d_array x, ae_int_t ix, real_1d_array& y, ae_int_t iy);
rmatrixrank1
function/************************************************************************* IMPORTANT: this function is deprecated since ALGLIB 3.13. Use RMatrixGER(), which is a more generic version of this function. Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/void alglib::rmatrixrank1( ae_int_t m, ae_int_t n, real_2d_array& a, ae_int_t ia, ae_int_t ja, real_1d_array& u, ae_int_t iu, real_1d_array& v, ae_int_t iv);
rmatrixrighttrsm
function/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition Multiplication result replaces X. Cache-oblivious algorithm is used. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixrighttrsm( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, real_2d_array& x, ae_int_t i2, ae_int_t j2); void alglib::smp_rmatrixrighttrsm( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t i1, ae_int_t j1, bool isupper, bool isunit, ae_int_t optype, real_2d_array& x, ae_int_t i2, ae_int_t j2);
rmatrixsymv
function/************************************************************************* Symmetric matrix-vector product: y := alpha*A*x + beta*y, where A is an NxN symmetric matrix given by its upper or lower triangle. INPUT PARAMETERS: N - matrix size Alpha - coefficient A - matrix IA - submatrix offset (row index) JA - submatrix offset (column index) IsUpper - whether upper or lower triangle of A is used X - input vector IX - subvector offset Beta - coefficient Y - preallocated output vector IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result *************************************************************************/void alglib::rmatrixsymv( ae_int_t n, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, bool isupper, real_1d_array x, ae_int_t ix, double beta, real_1d_array& y, ae_int_t iy);
rmatrixsyrk
function/************************************************************************* This subroutine calculates C=alpha*A*A^T+beta*C or C=alpha*A^T*A+beta*C where: * C is NxN symmetric matrix given by its upper/lower triangle * A is NxK matrix when A*A^T is calculated, KxN matrix otherwise Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Because starting/stopping worker thread always ! involves some overhead, parallelism starts to be profitable for N's ! larger than 128. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS N - matrix size, N>=0 K - matrix size, K>=0 Alpha - coefficient A - matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpTypeA - multiplication type: * 0 - A*A^T is calculated * 2 - A^T*A is calculated Beta - coefficient C - preallocated input/output matrix IC - submatrix offset (row index) JC - submatrix offset (column index) IsUpper - whether upper or lower triangle of C is updated; this function updates only one half of C, leaving other half unchanged (not referenced at all). -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixsyrk( ae_int_t n, ae_int_t k, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, real_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper); void alglib::smp_rmatrixsyrk( ae_int_t n, ae_int_t k, double alpha, real_2d_array a, ae_int_t ia, ae_int_t ja, ae_int_t optypea, double beta, real_2d_array& c, ae_int_t ic, ae_int_t jc, bool isupper);
rmatrixsyvmv
function/************************************************************************* Computes the bilinear form x'*A*x for an NxN symmetric matrix A given by its upper or lower triangle. INPUT PARAMETERS: N - matrix size A - matrix IA - submatrix offset (row index) JA - submatrix offset (column index) IsUpper - whether upper or lower triangle of A is used X - input vector IX - subvector offset Tmp - preallocated temporary buffer, at least N elements RESULT: x'*A*x *************************************************************************/double alglib::rmatrixsyvmv( ae_int_t n, real_2d_array a, ae_int_t ia, ae_int_t ja, bool isupper, real_1d_array x, ae_int_t ix, real_1d_array& tmp);
rmatrixtranspose
function/************************************************************************* Cache-oblivious real "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/void alglib::rmatrixtranspose( ae_int_t m, ae_int_t n, real_2d_array a, ae_int_t ia, ae_int_t ja, real_2d_array& b, ae_int_t ib, ae_int_t jb);
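The offset parameters (IA/JA for the source, IB/JB for the destination) select which submatrices participate in the copy-and-transpose. A plain-C++ sketch of the semantics (illustration only, not the cache-oblivious ALGLIB implementation; `ref_transpose` is a made-up name):

```cpp
#include <vector>

using rmat = std::vector<std::vector<double>>;

// Sketch of copy-and-transpose: the MxN submatrix of A starting at (ia, ja)
// is written transposed (as NxM) into B starting at (ib, jb).
// B must already be large enough to hold the result.
void ref_transpose(int m, int n, const rmat& a, int ia, int ja,
                   rmat& b, int ib, int jb) {
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            b[ib + j][jb + i] = a[ia + i][ja + j];
}
```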
rmatrixtrsv
function/************************************************************************* This subroutine solves linear system op(A)*x=b where: * A is NxN upper/lower triangular/unitriangular matrix * X and B are Nx1 vectors * "op" may be identity transformation or transposition Solution replaces X. IMPORTANT: * no overflow/underflow/degeneracy tests are performed. * no integrity checks for operand sizes, out-of-bounds accesses and so on are performed INPUT PARAMETERS N - matrix size, N>=0 A - matrix, actual matrix is stored in A[IA:IA+N-1,JA:JA+N-1] IA - submatrix offset JA - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition X - right part, actual vector is stored in X[IX:IX+N-1] IX - offset OUTPUT PARAMETERS X - solution replaces elements X[IX:IX+N-1] -- ALGLIB routine / remastering of LAPACK's DTRSV -- (c) 2017 Bochkanov Sergey - converted to ALGLIB (c) 2016 Reference BLAS level1 routine (LAPACK version 3.7.0) Reference BLAS is a software package provided by Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd. *************************************************************************/void alglib::rmatrixtrsv( ae_int_t n, real_2d_array a, ae_int_t ia, ae_int_t ja, bool isupper, bool isunit, ae_int_t optype, real_1d_array& x, ae_int_t ix);
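For the upper triangular, OpType=0 case, the solve described above is plain back substitution. A minimal plain-C++ sketch of that case (illustration only, not the ALGLIB/DTRSV implementation; `ref_trsv_upper` is a made-up name):

```cpp
#include <vector>

using rmat = std::vector<std::vector<double>>;

// Sketch of the triangular solve A*x = b for upper triangular A (OpType = 0):
// plain back substitution; on entry x holds b, on exit the solution, as in
// the documented routine. Like it, this performs no degeneracy checks.
void ref_trsv_upper(int n, const rmat& a, bool isunit, std::vector<double>& x) {
    for (int i = n - 1; i >= 0; i--) {
        double s = x[i];
        for (int j = i + 1; j < n; j++)
            s -= a[i][j] * x[j];           // subtract already-solved unknowns
        x[i] = isunit ? s : s / a[i][i];   // unit diagonal => no division
    }
}
```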
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { real_2d_array a = "[[2,1],[1,3]]"; real_2d_array b = "[[2,1],[0,1]]"; real_2d_array c = "[[0,0],[0,0]]"; // // rmatrixgemm() function allows us to calculate matrix product C:=A*B or // to perform more general operation, C:=alpha*op1(A)*op2(B)+beta*C, // where A, B, C are rectangular matrices, op(X) can be X or X^T, // alpha and beta are scalars. // // This function: // * can apply transposition and/or multiplication by scalar to operands // * can use arbitrary part of matrices A/B (given by submatrix offset) // * can store result into arbitrary part of C // * for performance reasons requires C to be preallocated // // Parameters of this function are: // * M, N, K - sizes of op1(A) (which is MxK), op2(B) (which // is KxN) and C (which is MxN) // * Alpha - coefficient before A*B // * A, IA, JA - matrix A and offset of the submatrix // * OpTypeA - transformation type: // 0 - no transformation // 1 - transposition // * B, IB, JB - matrix B and offset of the submatrix // * OpTypeB - transformation type: // 0 - no transformation // 1 - transposition // * Beta - coefficient before C // * C, IC, JC - preallocated matrix C and offset of the submatrix // // Below we perform simple product C:=A*B (alpha=1, beta=0) // // IMPORTANT: this function works with preallocated C, which must be large // enough to store multiplication result. 
// ae_int_t m = 2; ae_int_t n = 2; ae_int_t k = 2; double alpha = 1.0; ae_int_t ia = 0; ae_int_t ja = 0; ae_int_t optypea = 0; ae_int_t ib = 0; ae_int_t jb = 0; ae_int_t optypeb = 0; double beta = 0.0; ae_int_t ic = 0; ae_int_t jc = 0; rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc); printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[4,3],[2,4]] // // Now we try to apply some simple transformation to operands: C:=A*B^T // optypeb = 1; rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc); printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[5,1],[5,3]] return 0; }
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // rmatrixsyrk() function allows us to calculate symmetric rank-K update
    // C := beta*C + alpha*A'*A, where C is square N*N matrix, A is rectangular
    // K*N matrix, alpha and beta are scalars. It is also possible to update by
    // adding A*A' instead of A'*A.
    //
    // Parameters of this function are:
    // * N, K           -   matrix size
    // * Alpha          -   coefficient before A
    // * A, IA, JA      -   matrix and submatrix offsets
    // * OpTypeA        -   multiplication type:
    //                      * 0 - A*A^T is calculated
    //                      * 2 - A^T*A is calculated
    // * Beta           -   coefficient before C
    // * C, IC, JC      -   preallocated input/output matrix and submatrix offsets
    // * IsUpper        -   whether upper or lower triangle of C is updated;
    //                      this function updates only one half of C, leaving
    //                      other half unchanged (not referenced at all).
    //
    // Below we will show how to calculate simple product C:=A'*A
    //
    // NOTE: beta=0 and we do not use previous value of C, but still it
    //       MUST be preallocated.
    //
    ae_int_t n = 2;
    ae_int_t k = 1;
    double alpha = 1.0;
    ae_int_t ia = 0;
    ae_int_t ja = 0;
    ae_int_t optypea = 2;
    double beta = 0.0;
    ae_int_t ic = 0;
    ae_int_t jc = 0;
    bool isupper = true;
    real_2d_array a = "[[1,2]]";

    // preallocate space to store result
    real_2d_array c = "[[0,0],[0,0]]";

    // calculate product, store result into upper part of c
    rmatrixsyrk(n, k, alpha, a, ia, ja, optypea, beta, c, ic, jc, isupper);

    // output result.
    // IMPORTANT: lower triangle of C was NOT updated!
    printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[1,2],[0,4]]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // In this example we assume that you already know how to work with
    // rmatrixgemm() function. Below we concentrate on its multithreading
    // capabilities.
    //
    // SMP edition of ALGLIB includes smp_rmatrixgemm() - multithreaded
    // version of rmatrixgemm() function. In the basic edition of ALGLIB
    // (GPL edition or commercial version without SMP support) this function
    // just calls single-threaded stub. So, you may call this function from
    // ANY edition of ALGLIB, but only in SMP edition it will work in really
    // multithreaded mode.
    //
    // In order to use multithreading, you have to:
    // 1) Install SMP edition of ALGLIB.
    // 2) This step is specific for C++ users: you should activate OS-specific
    //    capabilities of ALGLIB by defining AE_OS=AE_POSIX (for *nix systems)
    //    or AE_OS=AE_WINDOWS (for Windows systems).
    //    C# users do not have to perform this step because C# programs are
    //    portable across different systems without OS-specific tuning.
    // 3) Allow ALGLIB to know about number of worker threads to use:
    //    a) autodetection (C++, C#):
    //       ALGLIB will automatically determine number of CPU cores and
    //       (by default) will use all cores except for one. Say, on 4-core
    //       system it will use three cores - unless you manually told it
    //       to use more or less. It will keep your system responsive during
    //       lengthy computations.
    //       Such behavior may be changed with setnworkers() call:
    //       * alglib::setnworkers(0)  = use all cores
    //       * alglib::setnworkers(-1) = leave one core unused
    //       * alglib::setnworkers(-2) = leave two cores unused
    //       * alglib::setnworkers(+2) = use 2 cores (even if you have more)
    //    b) manual specification (C++, C#):
    //       You may want to specify maximum number of worker threads during
    //       compile time by means of preprocessor definition AE_NWORKERS.
    //       For C++ it will be "AE_NWORKERS=X" where X can be any positive
    //       number. For C# it is "AE_NWORKERSX", where X should be replaced
    //       by number of workers (AE_NWORKERS2, AE_NWORKERS3, AE_NWORKERS4, ...).
    //       You can add this definition to compiler command line or change
    //       corresponding project settings in your IDE.
    //
    // After you installed and configured SMP edition of ALGLIB, you may choose
    // between serial and multithreaded versions of SMP-capable functions:
    // * serial version works as usual, in the context of the calling thread
    // * multithreaded version (with "smp_" prefix) creates (or wakes up) worker
    //   threads, inserts task in the worker queue, and waits for completion of
    //   the task. All processing is done in context of worker thread(s).
    //
    // NOTE: because starting/stopping worker threads costs thousands of CPU
    //       cycles, you should not use multithreading for lightweight
    //       computational problems.
    //
    // NOTE: some old POSIX-compatible operating systems do not support
    //       sysconf(_SC_NPROCESSORS_ONLN) system call which is required in
    //       order to automatically determine number of active cores. On these
    //       systems you should specify number of cores manually at compile
    //       time. Without it ALGLIB will run in single-threaded mode.
    //
    // Now, back to our example. In this example we will show you:
    // * how to call SMP version of rmatrixgemm(). Because we work with tiny 2x2
    //   matrices, we won't expect to see ANY speedup from using multithreading.
    //   The only purpose of this demo is to show how to call SMP functions.
    // * how to modify number of worker threads used by ALGLIB
    //
    real_2d_array a = "[[2,1],[1,3]]";
    real_2d_array b = "[[2,1],[0,1]]";
    real_2d_array c = "[[0,0],[0,0]]";
    ae_int_t m = 2;
    ae_int_t n = 2;
    ae_int_t k = 2;
    double alpha = 1.0;
    ae_int_t ia = 0;
    ae_int_t ja = 0;
    ae_int_t optypea = 0;
    ae_int_t ib = 0;
    ae_int_t jb = 0;
    ae_int_t optypeb = 0;
    double beta = 0.0;
    ae_int_t ic = 0;
    ae_int_t jc = 0;

    // serial code
    c = "[[0,0],[0,0]]";
    rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc);

    // SMP code with default number of worker threads
    c = "[[0,0],[0,0]]";
    smp_rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc);
    printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[4,3],[2,4]]

    // override number of worker threads - use two cores
    alglib::setnworkers(+2);
    c = "[[0,0],[0,0]]";
    smp_rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc);
    printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[4,3],[2,4]]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // In this example we assume that you already know how to work with
    // rmatrixsyrk() function. Below we concentrate on its multithreading
    // capabilities.
    //
    // SMP edition of ALGLIB includes smp_rmatrixsyrk() - multithreaded
    // version of rmatrixsyrk() function. In the basic edition of ALGLIB
    // (GPL edition or commercial version without SMP support) this function
    // just calls single-threaded stub. So, you may call this function from
    // ANY edition of ALGLIB, but only in SMP edition it will work in really
    // multithreaded mode.
    //
    // In order to use multithreading, you have to:
    // 1) Install SMP edition of ALGLIB.
    // 2) This step is specific for C++ users: you should activate OS-specific
    //    capabilities of ALGLIB by defining AE_OS=AE_POSIX (for *nix systems)
    //    or AE_OS=AE_WINDOWS (for Windows systems).
    //    C# users do not have to perform this step because C# programs are
    //    portable across different systems without OS-specific tuning.
    // 3) Allow ALGLIB to know about number of worker threads to use:
    //    a) autodetection (C++, C#):
    //       ALGLIB will automatically determine number of CPU cores and
    //       (by default) will use all cores except for one. Say, on 4-core
    //       system it will use three cores - unless you manually told it
    //       to use more or less. It will keep your system responsive during
    //       lengthy computations.
    //       Such behavior may be changed with setnworkers() call:
    //       * alglib::setnworkers(0)  = use all cores
    //       * alglib::setnworkers(-1) = leave one core unused
    //       * alglib::setnworkers(-2) = leave two cores unused
    //       * alglib::setnworkers(+2) = use 2 cores (even if you have more)
    //    b) manual specification (C++, C#):
    //       You may want to specify maximum number of worker threads during
    //       compile time by means of preprocessor definition AE_NWORKERS.
    //       For C++ it will be "AE_NWORKERS=X" where X can be any positive
    //       number. For C# it is "AE_NWORKERSX", where X should be replaced
    //       by number of workers (AE_NWORKERS2, AE_NWORKERS3, AE_NWORKERS4, ...).
    //       You can add this definition to compiler command line or change
    //       corresponding project settings in your IDE.
    //
    // After you installed and configured SMP edition of ALGLIB, you may choose
    // between serial and multithreaded versions of SMP-capable functions:
    // * serial version works as usual, in the context of the calling thread
    // * multithreaded version (with "smp_" prefix) creates (or wakes up) worker
    //   threads, inserts task in the worker queue, and waits for completion of
    //   the task. All processing is done in context of worker thread(s).
    //
    // NOTE: because starting/stopping worker threads costs thousands of CPU
    //       cycles, you should not use multithreading for lightweight
    //       computational problems.
    //
    // NOTE: some old POSIX-compatible operating systems do not support
    //       sysconf(_SC_NPROCESSORS_ONLN) system call which is required in
    //       order to automatically determine number of active cores. On these
    //       systems you should specify number of cores manually at compile
    //       time. Without it ALGLIB will run in single-threaded mode.
    //
    // Now, back to our example. In this example we will show you:
    // * how to call SMP version of rmatrixsyrk(). Because we work with tiny 2x2
    //   matrices, we won't expect to see ANY speedup from using multithreading.
    //   The only purpose of this demo is to show how to call SMP functions.
    // * how to modify number of worker threads used by ALGLIB
    //
    ae_int_t n = 2;
    ae_int_t k = 1;
    double alpha = 1.0;
    ae_int_t ia = 0;
    ae_int_t ja = 0;
    ae_int_t optypea = 2;
    double beta = 0.0;
    ae_int_t ic = 0;
    ae_int_t jc = 0;
    bool isupper = true;
    real_2d_array a = "[[1,2]]";
    real_2d_array c = "[[]]";

    //
    // Default number of worker threads.
    // Preallocate space to store result, call multithreaded version, test.
    //
    // NOTE: this function updates only one triangular part of C. In our
    //       example we choose to update upper triangle.
    //
    c = "[[0,0],[0,0]]";
    smp_rmatrixsyrk(n, k, alpha, a, ia, ja, optypea, beta, c, ic, jc, isupper);
    printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[1,2],[0,4]]

    //
    // Override default number of worker threads (set to 2).
    // Preallocate space to store result, call multithreaded version, test.
    //
    // NOTE: this function updates only one triangular part of C. In our
    //       example we choose to update upper triangle.
    //
    alglib::setnworkers(+2);
    c = "[[0,0],[0,0]]";
    smp_rmatrixsyrk(n, k, alpha, a, ia, ja, optypea, beta, c, ic, jc, isupper);
    printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[1,2],[0,4]]
    return 0;
}
airyf
subpackage

airy
function
/*************************************************************************
Airy function

Solution of the differential equation

    y"(x) = xy.

The function returns the two independent solutions Ai, Bi
and their first derivatives Ai'(x), Bi'(x).

Evaluation is by power series summation for small x,
by rational minimax approximations for large x.

ACCURACY:
Error criterion is absolute when function <= 1, relative
when function > 1, except * denotes relative error criterion.
For large negative x, the absolute error increases as x^1.5.
For large positive x, the relative error increases as x^1.5.

Arithmetic  domain    function  # trials      peak         rms
IEEE        -10, 0    Ai        10000         1.6e-15      2.7e-16
IEEE          0, 10   Ai        10000         2.3e-14*     1.8e-15*
IEEE        -10, 0    Ai'       10000         4.6e-15      7.6e-16
IEEE          0, 10   Ai'       10000         1.8e-14*     1.5e-15*
IEEE        -10, 10   Bi        30000         4.2e-15      5.3e-16
IEEE        -10, 10   Bi'       30000         4.9e-15      7.3e-16

Cephes Math Library Release 2.8: June, 2000
Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier
*************************************************************************/
void alglib::airy(
    double x,
    double& ai,
    double& aip,
    double& bi,
    double& bip);
autogk
subpackage

autogk_d1 | Integrating f=exp(x) by adaptive integrator
autogkreport
class
/*************************************************************************
Integration report:
* TerminationType = completion code:
    * -5    non-convergence of Gauss-Kronrod nodes calculation subroutine
    * -1    incorrect parameters were specified
    *  1    OK
* Rep.NFEV contains number of function calculations
* Rep.NIntervals contains number of intervals [a,b] was partitioned into
*************************************************************************/
class autogkreport
{
    ae_int_t terminationtype;
    ae_int_t nfev;
    ae_int_t nintervals;
};
autogkstate
class
/*************************************************************************
This structure stores state of the integration algorithm.

Although this class has public fields, they are not intended for external
use. You should use ALGLIB functions to work with this class:
* autogksmooth()/autogksmoothw()/... to create objects
* autogkintegrate() to begin integration
* autogkresults() to get results
*************************************************************************/
class autogkstate
{
};
autogkintegrate
function
/*************************************************************************
This function is used to launch iterations of the adaptive integrator.

It accepts following parameters:
    func    -   callback which calculates f(x) for given x
    ptr     -   optional pointer which is passed to func; can be NULL

-- ALGLIB --
     Copyright 07.05.2009 by Bochkanov Sergey
*************************************************************************/
void autogkintegrate(autogkstate &state,
    void (*func)(double x, double xminusa, double bminusx, double &y, void *ptr),
    void *ptr = NULL);
Examples: [1]
autogkresults
function
/*************************************************************************
Adaptive integration results

Called after AutoGKIteration returned False.

Input parameters:
    State   -   algorithm state (used by AutoGKIteration).

Output parameters:
    V       -   integral(f(x)dx,a,b)
    Rep     -   integration report (see AutoGKReport description)

-- ALGLIB --
     Copyright 14.11.2007 by Bochkanov Sergey
*************************************************************************/
void alglib::autogkresults(
    autogkstate state,
    double& v,
    autogkreport& rep);
Examples: [1]
autogksingular
function
/*************************************************************************
Integration on a finite interval [A,B].

The integrand may have integrable singularities at A/B.

F(X) must diverge as "(x-A)^alpha" at A, as "(B-x)^beta" at B, with known
alpha/beta (alpha>-1, beta>-1). If alpha/beta are not known, estimates
from below can be used (but these estimates should be greater than -1 too).

One of alpha/beta variables (or even both of them) may be equal to 0,
which means that function F(x) is non-singular at A/B. Anyway (singular
at bounds or not), function F(x) is supposed to be continuous on (A,B).

Fast-convergent algorithm based on a Gauss-Kronrod formula is used.
Result is calculated with accuracy close to the machine precision.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)
    Alpha   -   power-law coefficient of the F(x) at A, Alpha>-1
    Beta    -   power-law coefficient of the F(x) at B, Beta>-1

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmooth, AutoGKSmoothW, AutoGKResults.

-- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::autogksingular(
    double a,
    double b,
    double alpha,
    double beta,
    autogkstate& state);
autogksmooth
function
/*************************************************************************
Integration of a smooth function F(x) on a finite interval [a,b].

Fast-convergent algorithm based on a Gauss-Kronrod formula is used.
Result is calculated with accuracy close to the machine precision.

Algorithm works well only with smooth integrands. It may be used with
continuous non-smooth integrands, but with less performance.

It should never be used with integrands which have integrable
singularities at lower or upper limits - algorithm may crash. Use
AutoGKSingular in such cases.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmoothW, AutoGKSingular, AutoGKResults.

-- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::autogksmooth(double a, double b, autogkstate& state);
Examples: [1]
autogksmoothw
function
/*************************************************************************
Integration of a smooth function F(x) on a finite interval [a,b].

This subroutine is same as AutoGKSmooth(), but it guarantees that
interval [a,b] is partitioned into subintervals which have width at
most XWidth.

Subroutine can be used when integrating nearly-constant function with
narrow "bumps" (about XWidth wide). If "bumps" are too narrow, the
AutoGKSmooth subroutine can overlook them.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)
    XWidth  -   maximum subinterval width

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmooth, AutoGKSingular, AutoGKResults.

-- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::autogksmoothw(
    double a,
    double b,
    double xwidth,
    autogkstate& state);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "integration.h"

using namespace alglib;

void int_function_1_func(double x, double xminusa, double bminusx, double &y, void *ptr)
{
    // this callback calculates f(x)=exp(x)
    y = exp(x);
}

int main(int argc, char **argv)
{
    //
    // This example demonstrates integration of f=exp(x) on [0,1]:
    // * first, autogkstate is initialized
    // * then we call integration function
    // * and finally we obtain results with autogkresults() call
    //
    double a = 0;
    double b = 1;
    autogkstate s;
    double v;
    autogkreport rep;

    autogksmooth(a, b, s);
    alglib::autogkintegrate(s, int_function_1_func);
    autogkresults(s, v, rep);

    printf("%.2f\n", double(v)); // EXPECTED: 1.7182
    return 0;
}
basestat
subpackage

basestat_d_base | Basic functionality (moments, adev, median, percentile)
basestat_d_c2   | Correlation (covariance) between two random variables
basestat_d_cm   | Correlation (covariance) between components of random vector
basestat_d_cm2  | Correlation (covariance) between two random vectors
cov2
function
/*************************************************************************
2-sample covariance

Input parameters:
    X   -   sample 1 (array indexes: [0..N-1])
    Y   -   sample 2 (array indexes: [0..N-1])
    N   -   N>=0, sample size:
            * if given, only N leading elements of X/Y are processed
            * if not given, automatically determined from input sizes

Result:
    covariance (zero for N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
double alglib::cov2(real_1d_array x, real_1d_array y);
double alglib::cov2(real_1d_array x, real_1d_array y, ae_int_t n);
Examples: [1]
covm
function
/*************************************************************************
Covariance matrix

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! with covariance matrices smaller than 128*128.

INPUT PARAMETERS:
    X   -   array[N,M], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X are used
            * if not given, automatically determined from input size
    M   -   M>0, number of variables:
            * if given, only leading M columns of X are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M,M], covariance matrix (zero if N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::covm(real_2d_array x, real_2d_array& c);
void alglib::covm(
    real_2d_array x,
    ae_int_t n,
    ae_int_t m,
    real_2d_array& c);
void alglib::smp_covm(real_2d_array x, real_2d_array& c);
void alglib::smp_covm(
    real_2d_array x,
    ae_int_t n,
    ae_int_t m,
    real_2d_array& c);
Examples: [1]
covm2
function
/*************************************************************************
Cross-covariance matrix

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! with covariance matrices smaller than 128*128.

INPUT PARAMETERS:
    X   -   array[N,M1], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    Y   -   array[N,M2], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X/Y are used
            * if not given, automatically determined from input sizes
    M1  -   M1>0, number of variables in X:
            * if given, only leading M1 columns of X are used
            * if not given, automatically determined from input size
    M2  -   M2>0, number of variables in Y:
            * if given, only leading M2 columns of Y are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M1,M2], cross-covariance matrix (zero if N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::covm2(real_2d_array x, real_2d_array y, real_2d_array& c);
void alglib::covm2(
    real_2d_array x,
    real_2d_array y,
    ae_int_t n,
    ae_int_t m1,
    ae_int_t m2,
    real_2d_array& c);
void alglib::smp_covm2(real_2d_array x, real_2d_array y, real_2d_array& c);
void alglib::smp_covm2(
    real_2d_array x,
    real_2d_array y,
    ae_int_t n,
    ae_int_t m1,
    ae_int_t m2,
    real_2d_array& c);
Examples: [1]
pearsoncorr2
function
/*************************************************************************
Pearson product-moment correlation coefficient

Input parameters:
    X   -   sample 1 (array indexes: [0..N-1])
    Y   -   sample 2 (array indexes: [0..N-1])
    N   -   N>=0, sample size:
            * if given, only N leading elements of X/Y are processed
            * if not given, automatically determined from input sizes

Result:
    Pearson product-moment correlation coefficient
    (zero for N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
double alglib::pearsoncorr2(real_1d_array x, real_1d_array y);
double alglib::pearsoncorr2(real_1d_array x, real_1d_array y, ae_int_t n);
Examples: [1]
pearsoncorrelation
function
/*************************************************************************
Obsolete function, we recommend using PearsonCorr2() instead.

-- ALGLIB --
    Copyright 09.04.2007 by Bochkanov Sergey
*************************************************************************/
double alglib::pearsoncorrelation(
    real_1d_array x,
    real_1d_array y,
    ae_int_t n);
pearsoncorrm
function
/*************************************************************************
Pearson product-moment correlation matrix

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! with correlation matrices smaller than 128*128.

INPUT PARAMETERS:
    X   -   array[N,M], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X are used
            * if not given, automatically determined from input size
    M   -   M>0, number of variables:
            * if given, only leading M columns of X are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M,M], correlation matrix (zero if N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::pearsoncorrm(real_2d_array x, real_2d_array& c);
void alglib::pearsoncorrm(
    real_2d_array x,
    ae_int_t n,
    ae_int_t m,
    real_2d_array& c);
void alglib::smp_pearsoncorrm(real_2d_array x, real_2d_array& c);
void alglib::smp_pearsoncorrm(
    real_2d_array x,
    ae_int_t n,
    ae_int_t m,
    real_2d_array& c);
Examples: [1]
pearsoncorrm2
function
/*************************************************************************
Pearson product-moment cross-correlation matrix

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! with correlation matrices smaller than 128*128.

INPUT PARAMETERS:
    X   -   array[N,M1], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    Y   -   array[N,M2], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X/Y are used
            * if not given, automatically determined from input sizes
    M1  -   M1>0, number of variables in X:
            * if given, only leading M1 columns of X are used
            * if not given, automatically determined from input size
    M2  -   M2>0, number of variables in Y:
            * if given, only leading M2 columns of Y are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M1,M2], cross-correlation matrix (zero if N=0 or N=1)

-- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::pearsoncorrm2(
    real_2d_array x,
    real_2d_array y,
    real_2d_array& c);
void alglib::pearsoncorrm2(
    real_2d_array x,
    real_2d_array y,
    ae_int_t n,
    ae_int_t m1,
    ae_int_t m2,
    real_2d_array& c);
void alglib::smp_pearsoncorrm2(
    real_2d_array x,
    real_2d_array y,
    real_2d_array& c);
void alglib::smp_pearsoncorrm2(
    real_2d_array x,
    real_2d_array y,
    ae_int_t n,
    ae_int_t m1,
    ae_int_t m2,
    real_2d_array& c);
Examples: [1]
rankdata
function
/*************************************************************************
This function replaces data in XY by their ranks:
* XY is processed row-by-row
* rows are processed separately
* tied data are correctly handled (tied ranks are calculated)
* ranking starts from 0, ends at NFeatures-1
* sum of within-row values is equal to (NFeatures-1)*NFeatures/2

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! ones where expected operations count is less than 100,000

INPUT PARAMETERS:
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points
    NFeatures-  number of features

OUTPUT PARAMETERS:
    XY      -   data are replaced by their within-row ranks;
                ranking starts from 0, ends at NFeatures-1

-- ALGLIB --
     Copyright 18.04.2013 by Bochkanov Sergey
*************************************************************************/
void alglib::rankdata(real_2d_array& xy);
void alglib::rankdata(
    real_2d_array& xy,
    ae_int_t npoints,
    ae_int_t nfeatures);
void alglib::smp_rankdata(real_2d_array& xy);
void alglib::smp_rankdata(
    real_2d_array& xy,
    ae_int_t npoints,
    ae_int_t nfeatures);
rankdatacentered
function
/*************************************************************************
This function replaces data in XY by their CENTERED ranks:
* XY is processed row-by-row
* rows are processed separately
* tied data are correctly handled (tied ranks are calculated)
* centered ranks are just usual ranks, but centered in such way that
  sum of within-row values is equal to 0.0
* centering is performed by subtracting mean from each row, i.e. it
  changes mean value, but does NOT change higher moments

SMP EDITION OF ALGLIB:

  ! This function can utilize multicore capabilities of your system. In
  ! order to do this you have to call version with "smp_" prefix, which
  ! indicates that multicore code will be used.
  !
  ! This note is given for users of SMP edition; if you use GPL edition,
  ! or commercial edition of ALGLIB without SMP support, you still will
  ! be able to call smp-version of this function, but all computations
  ! will be done serially.
  !
  ! We recommend you to carefully read ALGLIB Reference Manual, section
  ! called 'SMP support', before using parallel version of this function.
  !
  ! You should remember that starting/stopping worker threads always has
  ! non-zero cost. Although multicore version is pretty efficient on
  ! large problems, we do not recommend you to use it on small problems -
  ! ones where expected operations count is less than 100,000

INPUT PARAMETERS:
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points
    NFeatures-  number of features

OUTPUT PARAMETERS:
    XY      -   data are replaced by their within-row centered ranks;
                sum of within-row values is equal to 0.0

-- ALGLIB --
     Copyright 18.04.2013 by Bochkanov Sergey
*************************************************************************/
void alglib::rankdatacentered(real_2d_array& xy);
void alglib::rankdatacentered(
    real_2d_array& xy,
    ae_int_t npoints,
    ae_int_t nfeatures);
void alglib::smp_rankdatacentered(real_2d_array& xy);
void alglib::smp_rankdatacentered(
    real_2d_array& xy,
    ae_int_t npoints,
    ae_int_t nfeatures);
sampleadev
function/************************************************************************* Calculation of the average absolute deviation (ADev). Input parameters: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X Output parameters: ADev - average absolute deviation -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void alglib::sampleadev(real_1d_array x, double& adev); void alglib::sampleadev(real_1d_array x, ae_int_t n, double& adev);
Examples: [1]
samplekurtosis
function/************************************************************************* Calculation of the kurtosis. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in its 'Kurtosis' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/double alglib::samplekurtosis(real_1d_array x); double alglib::samplekurtosis(real_1d_array x, ae_int_t n);
samplemean
function/************************************************************************* Calculation of the mean. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in its 'Mean' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/double alglib::samplemean(real_1d_array x); double alglib::samplemean(real_1d_array x, ae_int_t n);
samplemedian
function/************************************************************************* Median calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X Output parameters: Median - sample median -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void alglib::samplemedian(real_1d_array x, double& median); void alglib::samplemedian(real_1d_array x, ae_int_t n, double& median);
Examples: [1]
samplemoments
function/************************************************************************* Calculation of the distribution moments: mean, variance, skewness, kurtosis. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X OUTPUT PARAMETERS: Mean - mean. Variance- variance. Skewness- skewness (if variance<>0; zero otherwise). Kurtosis- kurtosis (if variance<>0; zero otherwise). NOTE: variance is calculated by dividing the sum of squares by N-1, not N. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/void alglib::samplemoments( real_1d_array x, double& mean, double& variance, double& skewness, double& kurtosis); void alglib::samplemoments( real_1d_array x, ae_int_t n, double& mean, double& variance, double& skewness, double& kurtosis);
Examples: [1]
samplepercentile
function/************************************************************************* Percentile calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X P - percentile (0<=P<=1) Output parameters: V - percentile -- ALGLIB -- Copyright 01.03.2008 by Bochkanov Sergey *************************************************************************/void alglib::samplepercentile(real_1d_array x, double p, double& v); void alglib::samplepercentile( real_1d_array x, ae_int_t n, double p, double& v);
Examples: [1]
sampleskewness
function/************************************************************************* Calculation of the skewness. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in its 'Skewness' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/double alglib::sampleskewness(real_1d_array x); double alglib::sampleskewness(real_1d_array x, ae_int_t n);
samplevariance
function/************************************************************************* Calculation of the variance. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in its 'Variance' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/double alglib::samplevariance(real_1d_array x); double alglib::samplevariance(real_1d_array x, ae_int_t n);
spearmancorr2
function/************************************************************************* Spearman's rank correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only N leading elements of X/Y are processed * if not given, automatically determined from input sizes Result: Spearman's rank correlation coefficient (zero for N=0 or N=1) -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/double alglib::spearmancorr2(real_1d_array x, real_1d_array y); double alglib::spearmancorr2( real_1d_array x, real_1d_array y, ae_int_t n);
Examples: [1]
spearmancorrm
function/************************************************************************* Spearman's rank correlation matrix SMP EDITION OF ALGLIB: ! This function can utilize the multicore capabilities of your system. In ! order to do this you have to call the version with the "smp_" prefix, ! which indicates that multicore code will be used. ! ! This note is given for users of the SMP edition; if you use the GPL ! edition, or the commercial edition of ALGLIB without SMP support, you ! will still be able to call the smp-version of this function, but all ! computations will be done serially. ! ! We recommend you to carefully read the ALGLIB Reference Manual section ! called 'SMP support' before using the parallel version of this function. ! ! You should remember that starting/stopping worker threads always has a ! non-zero cost. Although the multicore version is quite efficient on ! large problems, we do not recommend using it on small problems, i.e. ! with correlation matrices smaller than 128*128. INPUT PARAMETERS: X - array[N,M], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X are used * if not given, automatically determined from input size M - M>0, number of variables: * if given, only leading M columns of X are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M,M], correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/void alglib::spearmancorrm(real_2d_array x, real_2d_array& c); void alglib::spearmancorrm( real_2d_array x, ae_int_t n, ae_int_t m, real_2d_array& c); void alglib::smp_spearmancorrm(real_2d_array x, real_2d_array& c); void alglib::smp_spearmancorrm( real_2d_array x, ae_int_t n, ae_int_t m, real_2d_array& c);
Examples: [1]
spearmancorrm2
function/************************************************************************* Spearman's rank cross-correlation matrix SMP EDITION OF ALGLIB: ! This function can utilize the multicore capabilities of your system. In ! order to do this you have to call the version with the "smp_" prefix, ! which indicates that multicore code will be used. ! ! This note is given for users of the SMP edition; if you use the GPL ! edition, or the commercial edition of ALGLIB without SMP support, you ! will still be able to call the smp-version of this function, but all ! computations will be done serially. ! ! We recommend you to carefully read the ALGLIB Reference Manual section ! called 'SMP support' before using the parallel version of this function. ! ! You should remember that starting/stopping worker threads always has a ! non-zero cost. Although the multicore version is quite efficient on ! large problems, we do not recommend using it on small problems, i.e. ! with correlation matrices smaller than 128*128. INPUT PARAMETERS: X - array[N,M1], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation Y - array[N,M2], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X/Y are used * if not given, automatically determined from input sizes M1 - M1>0, number of variables in X: * if given, only leading M1 columns of X are used * if not given, automatically determined from input size M2 - M2>0, number of variables in Y: * if given, only leading M2 columns of Y are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M1,M2], cross-correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/void alglib::spearmancorrm2( real_2d_array x, real_2d_array y, real_2d_array& c); void alglib::spearmancorrm2( real_2d_array x, real_2d_array y, ae_int_t n, ae_int_t m1, ae_int_t m2, real_2d_array& c); void alglib::smp_spearmancorrm2( real_2d_array x, real_2d_array y, real_2d_array& c); void alglib::smp_spearmancorrm2( real_2d_array x, real_2d_array y, ae_int_t n, ae_int_t m1, ae_int_t m2, real_2d_array& c);
Examples: [1]
spearmanrankcorrelation
function/************************************************************************* Obsolete function; we recommend using SpearmanCorr2() instead. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/double alglib::spearmanrankcorrelation( real_1d_array x, real_1d_array y, ae_int_t n);
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "statistics.h" using namespace alglib; int main(int argc, char **argv) { real_1d_array x = "[0,1,4,9,16,25,36,49,64,81]"; double mean; double variance; double skewness; double kurtosis; double adev; double p; double v; // // Here we demonstrate calculation of sample moments // (mean, variance, skewness, kurtosis) // samplemoments(x, mean, variance, skewness, kurtosis); printf("%.1f\n", double(mean)); // EXPECTED: 28.5 printf("%.1f\n", double(variance)); // EXPECTED: 801.1667 printf("%.1f\n", double(skewness)); // EXPECTED: 0.5751 printf("%.1f\n", double(kurtosis)); // EXPECTED: -1.2666 // // Average deviation // sampleadev(x, adev); printf("%.1f\n", double(adev)); // EXPECTED: 23.2 // // Median and percentile // samplemedian(x, v); printf("%.1f\n", double(v)); // EXPECTED: 20.5 p = 0.5; samplepercentile(x, p, v); printf("%.1f\n", double(v)); // EXPECTED: 20.5 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "statistics.h" using namespace alglib; int main(int argc, char **argv) { // // We have two samples - x and y, and want to measure dependency between them // real_1d_array x = "[0,1,4,9,16,25,36,49,64,81]"; real_1d_array y = "[0,1,2,3,4,5,6,7,8,9]"; double v; // // Three dependency measures are calculated: // * covariation // * Pearson correlation // * Spearman rank correlation // v = cov2(x, y); printf("%.2f\n", double(v)); // EXPECTED: 82.5 v = pearsoncorr2(x, y); printf("%.2f\n", double(v)); // EXPECTED: 0.9627 v = spearmancorr2(x, y); printf("%.2f\n", double(v)); // EXPECTED: 1.000 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "statistics.h" using namespace alglib; int main(int argc, char **argv) { // // X is a sample matrix: // * I-th row corresponds to I-th observation // * J-th column corresponds to J-th variable // real_2d_array x = "[[1,0,1],[1,1,0],[-1,1,0],[-2,-1,1],[-1,0,9]]"; real_2d_array c; // // Three dependency measures are calculated: // * covariation // * Pearson correlation // * Spearman rank correlation // // Result is stored into C, with C[i,j] equal to correlation // (covariance) between I-th and J-th variables of X. // covm(x, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.80,0.60,-1.40],[0.60,0.70,-0.80],[-1.40,-0.80,14.70]] pearsoncorrm(x, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.000,0.535,-0.272],[0.535,1.000,-0.249],[-0.272,-0.249,1.000]] spearmancorrm(x, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.000,0.556,-0.306],[0.556,1.000,-0.750],[-0.306,-0.750,1.000]] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "statistics.h" using namespace alglib; int main(int argc, char **argv) { // // X and Y are sample matrices: // * I-th row corresponds to I-th observation // * J-th column corresponds to J-th variable // real_2d_array x = "[[1,0,1],[1,1,0],[-1,1,0],[-2,-1,1],[-1,0,9]]"; real_2d_array y = "[[2,3],[2,1],[-1,6],[-9,9],[7,1]]"; real_2d_array c; // // Three dependency measures are calculated: // * covariation // * Pearson correlation // * Spearman rank correlation // // Result is stored into C, with C[i,j] equal to correlation // (covariance) between I-th variable of X and J-th variable of Y. // covm2(x, y, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[4.100,-3.250],[2.450,-1.500],[13.450,-5.750]] pearsoncorrm2(x, y, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[0.519,-0.699],[0.497,-0.518],[0.596,-0.433]] spearmancorrm2(x, y, c); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[0.541,-0.649],[0.216,-0.433],[0.433,-0.135]] return 0; }
bdss
subpackagedsoptimalsplit2
function/************************************************************************* Optimal binary classification The algorithm finds the optimal (i.e. minimal cross-entropy) binary partition. Internal subroutine. INPUT PARAMETERS: A - array[0..N-1], variable C - array[0..N-1], class numbers (0 or 1). N - array size OUTPUT PARAMETERS: Info - completion code: * -3, all values of A[] are the same (partition is impossible) * -2, one of C[] is incorrect (<0, >1) * -1, incorrect parameters were passed (N<=0). * 1, OK Threshold- partition boundary. Left part contains values which are strictly less than Threshold. Right part contains values which are greater than or equal to Threshold. PAL, PBL- probabilities P(0|v<Threshold) and P(1|v<Threshold) PAR, PBR- probabilities P(0|v>=Threshold) and P(1|v>=Threshold) CVE - cross-validation estimate of cross-entropy -- ALGLIB -- Copyright 22.05.2008 by Bochkanov Sergey *************************************************************************/void alglib::dsoptimalsplit2( real_1d_array a, integer_1d_array c, ae_int_t n, ae_int_t& info, double& threshold, double& pal, double& pbl, double& par, double& pbr, double& cve);
dsoptimalsplit2fast
function/************************************************************************* Optimal partition, internal subroutine. Fast version. Accepts: A array[0..N-1] array of attributes C array[0..N-1] array of class labels TiesBuf array[0..N] temporaries (ties) CntBuf array[0..2*NC-1] temporaries (counts) Alpha centering factor (0<=alpha<=1, recommended value - 0.05) BufR array[0..N-1] temporaries BufI array[0..N-1] temporaries Output: Info error code (">0"=OK, "<0"=bad) RMS training set RMS error CVRMS leave-one-out RMS error Note: the contents of all arrays are changed by the subroutine; it doesn't allocate temporaries. -- ALGLIB -- Copyright 11.12.2008 by Bochkanov Sergey *************************************************************************/void alglib::dsoptimalsplit2fast( real_1d_array& a, integer_1d_array& c, integer_1d_array& tiesbuf, integer_1d_array& cntbuf, real_1d_array& bufr, integer_1d_array& bufi, ae_int_t n, ae_int_t nc, double alpha, ae_int_t& info, double& threshold, double& rms, double& cvrms);
bdsvd
subpackagermatrixbdsvd
function/************************************************************************* Singular value decomposition of a bidiagonal matrix (extended algorithm) COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives an approximately constant (with respect to the number ! of worker threads) acceleration factor which depends on the CPU being ! used, the problem size and the "baseline" ALGLIB edition used for ! comparison. ! ! Generally, commercial ALGLIB is several times faster than the open- ! source generic C edition, and many times faster than the open-source ! C# edition. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend reading the 'Working with commercial version' section of ! the ALGLIB Reference Manual in order to find out how to use the ! performance-related features provided by the commercial edition of ! ALGLIB. The algorithm performs the singular value decomposition of a bidiagonal matrix B (upper or lower), representing it as B = Q*S*P^T, where Q and P are orthogonal matrices and S is a diagonal matrix with non-negative elements on the main diagonal, in descending order. The algorithm finds singular values. In addition, the algorithm can calculate matrices Q and P (more precisely, not the matrices themselves, but their products with given matrices U and VT - U*Q and (P^T)*VT). Of course, matrices U and VT can be of any type, including identity. Furthermore, the algorithm can calculate Q'*C (this product is calculated more efficiently than U*Q, because this calculation operates with rows instead of matrix columns). A feature of the algorithm is its ability to find all singular values, including those which are arbitrarily close to 0, with relative accuracy close to machine precision.
If the parameter IsFractionalAccuracyRequired is set to True, all singular values will have high relative accuracy close to machine precision. If the parameter is set to False, only the biggest singular value will have relative accuracy close to machine precision. The absolute error of other singular values is equal to the absolute error of the biggest singular value. Input parameters: D - main diagonal of matrix B. Array whose index ranges within [0..N-1]. E - superdiagonal (or subdiagonal) of matrix B. Array whose index ranges within [0..N-2]. N - size of matrix B. IsUpper - True, if the matrix is upper bidiagonal. IsFractionalAccuracyRequired - THIS PARAMETER IS IGNORED SINCE ALGLIB 3.5.0 SINGULAR VALUES ARE ALWAYS SEARCHED WITH HIGH ACCURACY. U - matrix to be multiplied by Q. Array whose indexes range within [0..NRU-1, 0..N-1]. The matrix can be bigger, in that case only the submatrix [0..NRU-1, 0..N-1] will be multiplied by Q. NRU - number of rows in matrix U. C - matrix to be multiplied by Q'. Array whose indexes range within [0..N-1, 0..NCC-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCC-1] will be multiplied by Q'. NCC - number of columns in matrix C. VT - matrix to be multiplied by P^T. Array whose indexes range within [0..N-1, 0..NCVT-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCVT-1] will be multiplied by P^T. NCVT - number of columns in matrix VT. Output parameters: D - singular values of matrix B in descending order. U - if NRU>0, contains matrix U*Q. VT - if NCVT>0, contains matrix (P^T)*VT. C - if NCC>0, contains matrix Q'*C. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). NOTE: multiplication U*Q is performed by means of transposition to an internal buffer, multiplication and backward transposition. This helps to avoid costly columnwise operations and speeds up the algorithm.
Additional information: The type of convergence is controlled by the internal parameter TOL. If the parameter is greater than 0, the singular values will have relative accuracy TOL. If TOL<0, the singular values will have absolute accuracy ABS(TOL)*norm(B). By default, |TOL| falls within the range of 10*Epsilon and 100*Epsilon, where Epsilon is the machine precision. It is not recommended to use TOL less than 10*Epsilon, since this will considerably slow down the algorithm and may not decrease the error. History: * 31 March, 2007: changed MAXITR from 6 to 12. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1999. *************************************************************************/bool alglib::rmatrixbdsvd( real_1d_array& d, real_1d_array e, ae_int_t n, bool isupper, bool isfractionalaccuracyrequired, real_2d_array& u, ae_int_t nru, real_2d_array& c, ae_int_t ncc, real_2d_array& vt, ae_int_t ncvt);
bessel
subpackagebesseli0
function/************************************************************************* Modified Bessel function of order zero Returns modified Bessel function of order zero of the argument. The function is defined as i0(x) = j0( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 5.8e-16 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besseli0(double x);
besseli1
function/************************************************************************* Modified Bessel function of order one Returns modified Bessel function of order one of the argument. The function is defined as i1(x) = -i j1( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.9e-15 2.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besseli1(double x);
besselj0
function/************************************************************************* Bessel function of order zero Returns Bessel function of order zero of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval the following rational approximation is used: (w - r1^2) * (w - r2^2) * P3(w) / Q8(w), where w = x^2 and the two r's are zeros of the function. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 60000 4.2e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselj0(double x);
besselj1
function/************************************************************************* Bessel function of order one Returns Bessel function of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 24 term Chebyshev expansion is used. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 2.6e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselj1(double x);
besseljn
function/************************************************************************* Bessel function of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The ratio of jn(x) to j0(x) is computed by backward recurrence. First the ratio jn/jn-1 is found by a continued fraction expansion. Then the recurrence relating successive orders is applied until j0 or j1 is reached. If n = 0 or 1 the routine for j0 or j1 is called directly. ACCURACY: Absolute error: arithmetic range # trials peak rms IEEE 0, 30 5000 4.4e-16 7.9e-17 Not suitable for large n or x. Use jv() (fractional order) instead. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besseljn(ae_int_t n, double x);
besselk0
function/************************************************************************* Modified Bessel function, second kind, order zero Returns modified Bessel function of the second kind of order zero of the argument. The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Tested at 2000 random points between 0 and 8. Peak absolute error (relative when K0 > 1) was 1.46e-14; rms, 4.26e-15. Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselk0(double x);
besselk1
function/************************************************************************* Modified Bessel function, second kind, order one Computes the modified Bessel function of the second kind of order one of the argument. The range is partitioned into the two intervals [0,2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselk1(double x);
besselkn
function/************************************************************************* Modified Bessel function, second kind, integer order Returns modified Bessel function of the second kind of order n of the argument. The range is partitioned into the two intervals [0,9.55] and (9.55, infinity). An ascending power series is used in the low range, and an asymptotic expansion in the high range. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 90000 1.8e-8 3.0e-10 Error is high only near the crossover point x = 9.55 between the two expansions used. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselkn(ae_int_t nn, double x);
bessely0
function/************************************************************************* Bessel function of the second kind, order zero Returns Bessel function of the second kind, of order zero, of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval a rational approximation R(x) is employed to compute y0(x) = R(x) + 2 * log(x) * j0(x) / PI. Thus a call to j0() is required. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error, when y0(x) < 1; else relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.3e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::bessely0(double x);
bessely1
function/************************************************************************* Bessel function of second kind of order one Returns Bessel function of the second kind of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 25 term Chebyshev expansion is used, and a call to j1() is required. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.0e-15 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::bessely1(double x);
besselyn
function/************************************************************************* Bessel function of second kind of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The function is evaluated by forward recurrence on n, starting with values computed by the routines y0() and y1(). If n = 0 or 1 the routine for y0 or y1 is called directly. ACCURACY: Absolute error, except relative when y > 1: arithmetic domain # trials peak rms IEEE 0, 30 30000 3.4e-15 4.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::besselyn(ae_int_t n, double x);
betaf
subpackagebeta
function/************************************************************************* Beta function beta(a, b) = Gamma(a) * Gamma(b) / Gamma(a+b). For large arguments the logarithm of the function is evaluated using lgam(), then exponentiated. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 8.1e-14 1.1e-14 Cephes Math Library Release 2.0: April, 1987 Copyright 1984, 1987 by Stephen L. Moshier *************************************************************************/double alglib::beta(double a, double b);
binomialdistr
subpackagebinomialcdistribution
function/************************************************************************* Complemented binomial distribution Returns the sum of the terms k+1 through n of the Binomial probability density: sum for j = k+1..n of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtrc( k, n, p ) = incbet( k+1, n-k, p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p). Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 6.7e-15 8.2e-16 For p between 0 and .001: IEEE 0,100 100000 1.5e-13 2.7e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::binomialcdistribution(ae_int_t k, ae_int_t n, double p);
binomialdistribution
function/************************************************************************* Binomial distribution Returns the sum of the terms 0 through k of the Binomial probability density: sum for j = 0..k of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtr( k, n, p ) = incbet( n-k, k+1, 1-p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p), with p between 0 and 1. Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 4.3e-15 2.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::binomialdistribution(ae_int_t k, ae_int_t n, double p);
invbinomialdistribution
function/************************************************************************* Inverse binomial distribution Finds the event probability p such that the sum of the terms 0 through k of the Binomial probability density is equal to the given cumulative probability y. This is accomplished using the inverse beta integral function and the relation 1 - p = incbi( n-k, k+1, y ). ACCURACY: Tested at random points (a,b,p). a,b Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 2.3e-14 6.4e-16 IEEE 0,10000 100000 6.6e-12 1.2e-13 For p between 10^-6 and 0.001: IEEE 0,100 100000 2.0e-12 1.3e-14 IEEE 0,10000 100000 1.5e-12 3.2e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::invbinomialdistribution(ae_int_t k, ae_int_t n, double y);
chebyshev
subpackagechebyshevcalculate
function/************************************************************************* Calculation of the value of the Chebyshev polynomials of the first and second kinds. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument, -1 <= x <= 1 Result: the value of the Chebyshev polynomial at x *************************************************************************/double alglib::chebyshevcalculate(ae_int_t r, ae_int_t n, double x);
chebyshevcoefficients
function/************************************************************************* Representation of Tn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void alglib::chebyshevcoefficients(ae_int_t n, real_1d_array& c);
chebyshevsum
function/************************************************************************* Summation of Chebyshev polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*T0(x) + c[1]*T1(x) + ... + c[N]*TN(x) or c[0]*U0(x) + c[1]*U1(x) + ... + c[N]*UN(x) depending on the R. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument Result: the value of the Chebyshev polynomial at x *************************************************************************/double alglib::chebyshevsum( real_1d_array c, ae_int_t r, ae_int_t n, double x);
fromchebyshev
function/************************************************************************* Conversion of a series of Chebyshev polynomials to a power series. Represents A[0]*T0(x) + A[1]*T1(x) + ... + A[N]*Tn(x) as B[0] + B[1]*X + ... + B[N]*X^N. Input parameters: A - Chebyshev series coefficients N - degree, N>=0 Output parameters B - power series coefficients *************************************************************************/void alglib::fromchebyshev(real_1d_array a, ae_int_t n, real_1d_array& b);
chisquaredistr
subpackagechisquarecdistribution
function/************************************************************************* Complemented Chi-square distribution Returns the area under the right hand tail (from x to infinity) of the Chi square probability density function with v degrees of freedom: P( x | v ) = 1/(2^(v/2) * Gamma(v/2)) * Integral[x..inf] t^(v/2-1) * e^(-t/2) dt where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igamc( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::chisquarecdistribution(double v, double x);
chisquaredistribution
function/************************************************************************* Chi-square distribution Returns the area under the left hand tail (from 0 to x) of the Chi square probability density function with v degrees of freedom: P( x | v ) = 1/(2^(v/2) * Gamma(v/2)) * Integral[0..x] t^(v/2-1) * e^(-t/2) dt where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igam( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::chisquaredistribution(double v, double x);
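The relation chdtr(v,x) = igam(v/2, x/2) quoted above can be demonstrated with the standard power series for the regularized lower incomplete gamma function P(a,x); the series converges quickly when x < a+1 and is still usable for the moderate x shown here. The names igam_series and chisquare_cdf are illustrative, not ALGLIB symbols:

```cpp
#include <cmath>

// Regularized lower incomplete gamma P(a,x) by the classic power series:
// P(a,x) = x^a e^{-x} / Gamma(a) * sum_{n>=0} x^n / (a(a+1)...(a+n)).
// A sketch only: production code (Cephes/ALGLIB) switches to a continued
// fraction for large x.
static double igam_series(double a, double x)
{
    double term = 1.0 / a, sum = term;
    for (int n = 1; n < 1000; n++)
    {
        term *= x / (a + n);
        sum += term;
        if (term < 1e-16 * sum) break;
    }
    return sum * std::exp(-x + a * std::log(x) - std::lgamma(a));
}

// Chi-square CDF via the documented relation chdtr(v,x) = igam(v/2, x/2).
double chisquare_cdf(double v, double x)
{
    return igam_series(0.5 * v, 0.5 * x);
}
```

A convenient check: for v=2 degrees of freedom the chi-square CDF reduces to 1 - e^(-x/2), so chisquare_cdf(2, 2) should be 1 - e^(-1).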
invchisquaredistribution
function/************************************************************************* Inverse of complemented Chi-square distribution Finds the Chi-square argument x such that the integral from x to infinity of the Chi-square density is equal to the given cumulative probability y. This is accomplished using the inverse gamma integral function and the relation x/2 = igami( df/2, y ); ACCURACY: See inverse incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::invchisquaredistribution(double v, double y);
clustering
subpackageclst_ahc | Simple hierarchical clusterization with Euclidean distance function | |
clst_distance | Clusterization with different metric types | |
clst_kclusters | Obtaining K top clusters from clusterization tree | |
clst_kmeans | Simple k-means clusterization | |
clst_linkage | Clusterization with different linkage types |
ahcreport
class/************************************************************************* This structure is used to store results of the agglomerative hierarchical clustering (AHC). Following information is returned: * TerminationType - completion code: * 1 for successful completion of algorithm * -5 inappropriate combination of clustering algorithm and distance function was used. As for now, it is possible only when Ward's method is called for dataset with non-Euclidean distance function. In case negative completion code is returned, other fields of report structure are invalid and should not be used. * NPoints contains number of points in the original dataset * Z contains information about merges performed (see below). Z contains indexes from the original (unsorted) dataset and it can be used when you need to know what points were merged. However, it is not convenient when you want to build a dendrogram (see below). * if you want to build a dendrogram, you can use Z, but it is not a good option, because Z contains indexes from the unsorted dataset. A dendrogram built from such a dataset is likely to have intersections. So, you have to reorder your points before building the dendrogram. The permutation which reorders points is returned in P. Another representation of merges, which is more convenient for dendrogram construction, is returned in PM. * more information on format of Z, P and PM can be found below and in the examples from ALGLIB Reference Manual. FORMAL DESCRIPTION OF FIELDS: NPoints number of points Z array[NPoints-1,2], contains indexes of clusters linked in pairs to form clustering tree. 
I-th row corresponds to I-th merge: * Z[I,0] - index of the first cluster to merge * Z[I,1] - index of the second cluster to merge * Z[I,0]<Z[I,1] * clusters are numbered from 0 to 2*NPoints-2, with indexes from 0 to NPoints-1 corresponding to points of the original dataset, and indexes from NPoints to 2*NPoints-2 corresponding to clusters generated by subsequent merges (I-th row of Z creates cluster with index NPoints+I). IMPORTANT: indexes in Z[] are indexes in the ORIGINAL, unsorted dataset. In addition to Z, the algorithm outputs a permutation which rearranges points in such a way that subsequent merges are performed on adjacent points (such order is needed if you want to build a dendrogram). However, indexes in Z are related to the original, unrearranged sequence of points. P array[NPoints], permutation which reorders points for dendrogram construction. P[i] contains index of the position where we should move I-th point of the original dataset in order to apply merges PZ/PM. PZ same as Z, but for the permutation of points given by P. The only things which changed are the indexes of the original points; indexes of clusters remain the same. MergeDist array[NPoints-1], contains distances between clusters being merged (MergeDist[i] corresponds to the merge stored in Z[i,...]): * CLINK, SLINK and average linkage algorithms report "raw", unmodified distance metric. * Ward's method reports weighted intra-cluster variance, which is equal to ||Ca-Cb||^2 * Sa*Sb/(Sa+Sb). Here A and B are clusters being merged, Ca is a center of A, Cb is a center of B, Sa is a size of A, Sb is a size of B. PM array[NPoints-1,6], another representation of merges, which is suited for dendrogram construction. It deals with rearranged points (permutation P is applied) and represents merges in a form which is different from the one used by Z. For each I from 0 to NPoints-2, I-th row of PM represents merge performed on two clusters C0 and C1. 
Here: * C0 contains points with indexes PM[I,0]...PM[I,1] * C1 contains points with indexes PM[I,2]...PM[I,3] * indexes stored in PM are given for dataset sorted according to permutation P * PM[I,1]=PM[I,2]-1 (only adjacent clusters are merged) * PM[I,0]<=PM[I,1], PM[I,2]<=PM[I,3], i.e. both clusters contain at least one point * heights of "subdendrograms" corresponding to C0/C1 are stored in PM[I,4] and PM[I,5]. Subdendrograms corresponding to single-point clusters have height=0. The dendrogram of the merge result has height H=max(H0,H1)+1, where H0 and H1 are the heights stored in PM[I,4] and PM[I,5]. NOTE: there is one-to-one correspondence between merges described by Z and PM. I-th row of Z describes the same merge of clusters as I-th row of PM, with the "left" cluster from Z corresponding to the "left" one from PM. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/class ahcreport { ae_int_t terminationtype; ae_int_t npoints; integer_1d_array p; integer_2d_array z; integer_2d_array pz; integer_2d_array pm; real_1d_array mergedist; };
clusterizerstate
class/************************************************************************* This structure is a clusterization engine. You should not try to access its fields directly. Use ALGLIB functions in order to work with this object. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/class clusterizerstate { };
kmeansreport
class/************************************************************************* This structure is used to store results of the k-means clustering algorithm. Following information is always returned: * NPoints contains number of points in the original dataset * TerminationType contains completion code, negative on failure, positive on success * K contains number of clusters For positive TerminationType we return: * NFeatures contains number of variables in the original dataset * C, which contains centers found by algorithm * CIdx, which maps points of the original dataset to clusters FORMAL DESCRIPTION OF FIELDS: NPoints number of points, >=0 NFeatures number of variables, >=1 TerminationType completion code: * -5 if distance type is anything different from Euclidean metric * -3 for degenerate dataset: a) less than K distinct points, b) K=0 for non-empty dataset. * +1 for successful completion K number of clusters C array[K,NFeatures], rows of the array store centers CIdx array[NPoints], which contains cluster indexes IterationsCount actual number of iterations performed by clusterizer. If algorithm performed more than one random restart, total number of iterations is returned. Energy merit function, "energy", sum of squared deviations from cluster centers -- ALGLIB -- Copyright 27.11.2012 by Bochkanov Sergey *************************************************************************/class kmeansreport { ae_int_t npoints; ae_int_t nfeatures; ae_int_t terminationtype; ae_int_t iterationscount; double energy; ae_int_t k; real_2d_array c; integer_1d_array cidx; };
clusterizercreate
function/************************************************************************* This function initializes clusterizer object. Newly initialized object is empty, i.e. it does not contain dataset. You should use it as follows: 1. creation 2. dataset is added with ClusterizerSetPoints() 3. additional parameters are set 4. clusterization is performed with one of the clustering functions -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizercreate(clusterizerstate& s);
clusterizergetdistances
function/************************************************************************* This function returns distance matrix for dataset COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Agglomerative hierarchical clustering algorithm has two phases: ! distance matrix calculation and clustering itself. Only first phase ! (distance matrix calculation) is accelerated by Intel MKL and multi- ! threading. Thus, acceleration is significant only for medium or high- ! dimensional problems. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: XY - array[NPoints,NFeatures], dataset NPoints - number of points, >=0 NFeatures- number of features, >=1 DistType- distance function: * 0 Chebyshev distance (L-inf norm) * 1 city block distance (L1 norm) * 2 Euclidean distance (L2 norm, non-squared) * 10 Pearson correlation: dist(a,b) = 1-corr(a,b) * 11 Absolute Pearson correlation: dist(a,b) = 1-|corr(a,b)| * 12 Uncentered Pearson correlation (cosine of the angle): dist(a,b) = a'*b/(|a|*|b|) * 13 Absolute uncentered Pearson correlation dist(a,b) = |a'*b|/(|a|*|b|) * 20 Spearman rank correlation: dist(a,b) = 1-rankcorr(a,b) * 21 Absolute Spearman rank correlation dist(a,b) = 1-|rankcorr(a,b)| OUTPUT PARAMETERS: D - array[NPoints,NPoints], distance matrix (full matrix is returned, with lower and upper triangles) NOTE: different distance functions have different performance penalty: * Euclidean or Pearson correlation distances are the fastest ones * Spearman correlation distance function is a bit slower * city block and Chebyshev distances are an order of magnitude slower The reason behind the difference in performance is that 
correlation-based distance functions are computed using optimized linear algebra kernels, while Chebyshev and city block distance functions are computed using simple nested loops with two branches at each iteration. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizergetdistances( real_2d_array xy, ae_int_t npoints, ae_int_t nfeatures, ae_int_t disttype, real_2d_array& d); void alglib::smp_clusterizergetdistances( real_2d_array xy, ae_int_t npoints, ae_int_t nfeatures, ae_int_t disttype, real_2d_array& d);
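The three simplest distance types (0 = Chebyshev, 1 = city block, 2 = Euclidean) can be illustrated with the kind of naive nested loop the note above contrasts with ALGLIB's optimized kernels. The function name distance_matrix is illustrative, and plain std::vector stands in for real_2d_array:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Naive full distance matrix for DistType 0 (L-inf), 1 (L1), 2 (L2).
// Illustrates the definitions only; ALGLIB computes correlation-based
// types with optimized linear algebra kernels instead of loops like this.
std::vector<std::vector<double>> distance_matrix(
    const std::vector<std::vector<double>>& xy, int disttype)
{
    size_t n = xy.size();
    std::vector<std::vector<double>> d(n, std::vector<double>(n, 0.0));
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
        {
            double acc = 0.0;
            for (size_t f = 0; f < xy[i].size(); f++)
            {
                double delta = std::fabs(xy[i][f] - xy[j][f]);
                if (disttype == 0)      acc = std::max(acc, delta); // L-inf
                else if (disttype == 1) acc += delta;               // L1
                else                    acc += delta * delta;       // L2^2
            }
            d[i][j] = d[j][i] = (disttype == 2) ? std::sqrt(acc) : acc;
        }
    return d;   // symmetric, zero diagonal
}
```

For points (0,0) and (3,4) the three metrics give 4 (Chebyshev), 7 (city block) and 5 (Euclidean).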
clusterizergetkclusters
function/************************************************************************* This function takes as input clusterization report Rep, desired clusters count K, and builds top K clusters from hierarchical clusterization tree. It returns assignment of points to clusters (array of cluster indexes). INPUT PARAMETERS: Rep - report from ClusterizerRunAHC() performed on XY K - desired number of clusters, 1<=K<=NPoints. K can be zero only when NPoints=0. OUTPUT PARAMETERS: CIdx - array[NPoints], I-th element contains cluster index (from 0 to K-1) for I-th point of the dataset. CZ - array[K]. This array allows you to convert cluster indexes returned by this function to indexes used by Rep.Z. J-th cluster returned by this function corresponds to CZ[J]-th cluster stored in Rep.Z/PZ/PM. It is guaranteed that CZ[I]<CZ[I+1]. NOTE: K clusters built by this subroutine are assumed to have no hierarchy. Although they were obtained by manipulation with top K nodes of dendrogram (i.e. hierarchical decomposition of dataset), this function does not return information about hierarchy. Each of the clusters stands on its own. NOTE: Cluster indexes returned by this function do not correspond to indexes returned in Rep.Z/PZ/PM. Either you work with hierarchical representation of the dataset (dendrogram), or you work with "flat" representation returned by this function. Each of the representations has its own cluster indexing system (the former uses [0..2*NPoints-2], while the latter uses [0..K-1]), although it is possible to perform conversion from one system to another by means of CZ array, returned by this function, which allows you to convert indexes stored in CIdx to the numeration system used by Rep.Z. NOTE: this subroutine is optimized for moderate values of K. Say, for K=5 it will perform many times faster than for K=100. Its worst-case performance is O(N*K), although in the average case it performs better (up to O(N*log(K))). 
-- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizergetkclusters( ahcreport rep, ae_int_t k, integer_1d_array& cidx, integer_1d_array& cz);
clusterizerrunahc
function/************************************************************************* This function performs agglomerative hierarchical clustering COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Agglomerative hierarchical clustering algorithm has two phases: ! distance matrix calculation and clustering itself. Only first phase ! (distance matrix calculation) is accelerated by Intel MKL and multi- ! threading. Thus, acceleration is significant only for medium or high- ! dimensional problems. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() OUTPUT PARAMETERS: Rep - clustering results; see description of AHCReport structure for more information. NOTE 1: hierarchical clustering algorithms require large amounts of memory. In particular, this implementation needs sizeof(double)*NPoints^2 bytes, which are used to store distance matrix. In case we work with user-supplied matrix, this amount is multiplied by 2 (we have to store original matrix and to work with its copy). For example, problem with 10000 points would require 800M of RAM, even when working in a 1-dimensional space. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizerrunahc(clusterizerstate s, ahcreport& rep); void alglib::smp_clusterizerrunahc(clusterizerstate s, ahcreport& rep);
clusterizerrunkmeans
function/************************************************************************* This function performs clustering by k-means++ algorithm. You may change algorithm properties by calling: * ClusterizerSetKMeansLimits() to change number of restarts or iterations * ClusterizerSetKMeansInit() to change initialization algorithm By default, one restart and unlimited number of iterations are used. Initialization algorithm is chosen automatically. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function: ! * multicore support (can be used from C# and C++) ! * access to high-performance C++ core (actual for C# users) ! ! K-means clustering algorithm has two phases: selection of initial ! centers and clustering itself. ALGLIB parallelizes both phases. ! Parallel version is optimized for the following scenario: medium or ! high-dimensional problem (20 or more dimensions) with large number of ! points and clusters. However, some speed-up can be obtained even when ! assumptions above are violated. ! ! As for native-vs-managed comparison, working with native core brings ! 30-40% improvement in speed over pure C# version of ALGLIB. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() K - number of clusters, K>=0. K can be zero only when algorithm is called for empty dataset, in this case completion code is set to success (+1). If K=0 and dataset size is non-zero, we can not meaningfully assign points to some center (there are no centers because K=0) and return -3 as completion code (failure). OUTPUT PARAMETERS: Rep - clustering results; see description of KMeansReport structure for more information. NOTE 1: k-means clustering can be performed only for datasets with Euclidean distance function. 
Algorithm will return negative completion code in Rep.TerminationType in case dataset was added to clusterizer with DistType other than Euclidean (or dataset was specified by distance matrix instead of explicitly given points). NOTE 2: by default, k-means uses non-deterministic seed to initialize RNG which is used to select initial centers. As a result, each run of the algorithm may return different values. If you need deterministic behavior, use ClusterizerSetSeed() function. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizerrunkmeans( clusterizerstate s, ae_int_t k, kmeansreport& rep); void alglib::smp_clusterizerrunkmeans( clusterizerstate s, ae_int_t k, kmeansreport& rep);
clusterizerseparatedbycorr
function/************************************************************************* This function accepts AHC report Rep, desired maximum intercluster correlation and returns top clusters from hierarchical clusterization tree which are separated by correlation R or LOWER. It returns assignment of points to clusters (array of cluster indexes). There is one more function with similar name - ClusterizerSeparatedByDist, which returns clusters with intercluster distance equal to R or HIGHER (note: higher for distance, lower for correlation). INPUT PARAMETERS: Rep - report from ClusterizerRunAHC() performed on XY R - desired maximum intercluster correlation, -1<=R<=+1 OUTPUT PARAMETERS: K - number of clusters, 1<=K<=NPoints CIdx - array[NPoints], I-th element contains cluster index (from 0 to K-1) for I-th point of the dataset. CZ - array[K]. This array allows you to convert cluster indexes returned by this function to indexes used by Rep.Z. J-th cluster returned by this function corresponds to CZ[J]-th cluster stored in Rep.Z/PZ/PM. It is guaranteed that CZ[I]<CZ[I+1]. NOTE: K clusters built by this subroutine are assumed to have no hierarchy. Although they were obtained by manipulation with top K nodes of dendrogram (i.e. hierarchical decomposition of dataset), this function does not return information about hierarchy. Each of the clusters stands on its own. NOTE: Cluster indexes returned by this function do not correspond to indexes returned in Rep.Z/PZ/PM. Either you work with hierarchical representation of the dataset (dendrogram), or you work with "flat" representation returned by this function. Each of the representations has its own cluster indexing system (the former uses [0..2*NPoints-2], while the latter uses [0..K-1]), although it is possible to perform conversion from one system to another by means of CZ array, returned by this function, which allows you to convert indexes stored in CIdx to the numeration system used by Rep.Z. 
NOTE: this subroutine is optimized for moderate values of K. Say, for K=5 it will perform many times faster than for K=100. Its worst-case performance is O(N*K), although in the average case it performs better (up to O(N*log(K))). -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizerseparatedbycorr( ahcreport rep, double r, ae_int_t& k, integer_1d_array& cidx, integer_1d_array& cz);
clusterizerseparatedbydist
function/************************************************************************* This function accepts AHC report Rep, desired minimum intercluster distance and returns top clusters from hierarchical clusterization tree which are separated by distance R or HIGHER. It returns assignment of points to clusters (array of cluster indexes). There is one more function with similar name - ClusterizerSeparatedByCorr, which returns clusters with intercluster correlation equal to R or LOWER (note: higher for distance, lower for correlation). INPUT PARAMETERS: Rep - report from ClusterizerRunAHC() performed on XY R - desired minimum intercluster distance, R>=0 OUTPUT PARAMETERS: K - number of clusters, 1<=K<=NPoints CIdx - array[NPoints], I-th element contains cluster index (from 0 to K-1) for I-th point of the dataset. CZ - array[K]. This array allows you to convert cluster indexes returned by this function to indexes used by Rep.Z. J-th cluster returned by this function corresponds to CZ[J]-th cluster stored in Rep.Z/PZ/PM. It is guaranteed that CZ[I]<CZ[I+1]. NOTE: K clusters built by this subroutine are assumed to have no hierarchy. Although they were obtained by manipulation with top K nodes of dendrogram (i.e. hierarchical decomposition of dataset), this function does not return information about hierarchy. Each of the clusters stands on its own. NOTE: Cluster indexes returned by this function do not correspond to indexes returned in Rep.Z/PZ/PM. Either you work with hierarchical representation of the dataset (dendrogram), or you work with "flat" representation returned by this function. Each of the representations has its own cluster indexing system (the former uses [0..2*NPoints-2], while the latter uses [0..K-1]), although it is possible to perform conversion from one system to another by means of CZ array, returned by this function, which allows you to convert indexes stored in CIdx to the numeration system used by Rep.Z. 
NOTE: this subroutine is optimized for moderate values of K. Say, for K=5 it will perform many times faster than for K=100. Its worst-case performance is O(N*K), although in the average case it performs better (up to O(N*log(K))). -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizerseparatedbydist( ahcreport rep, double r, ae_int_t& k, integer_1d_array& cidx, integer_1d_array& cz);
clusterizersetahcalgo
function/************************************************************************* This function sets agglomerative hierarchical clustering algorithm INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() Algo - algorithm type: * 0 complete linkage (default algorithm) * 1 single linkage * 2 unweighted average linkage * 3 weighted average linkage * 4 Ward's method NOTE: Ward's method works correctly only with Euclidean distance, that's why algorithm will return negative termination code (failure) for any other distance type. It is possible, however, to use this method with user-supplied distance matrix. It is your responsibility to pass one which was calculated with Euclidean distance function. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetahcalgo(clusterizerstate s, ae_int_t algo);
clusterizersetdistances
function/************************************************************************* This function adds dataset given by distance matrix to the clusterizer structure. It is important that dataset is not given explicitly - only distance matrix is given. This function overrides all previous calls of ClusterizerSetPoints() or ClusterizerSetDistances(). INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() D - array[NPoints,NPoints], distance matrix given by its upper or lower triangle (main diagonal is ignored because its entries are expected to be zero). NPoints - number of points IsUpper - whether upper or lower triangle of D is given. NOTE 1: different clustering algorithms have different limitations: * agglomerative hierarchical clustering algorithms may be used with any kind of distance metric, including one which is given by distance matrix * k-means++ clustering algorithm may be used only with Euclidean distance function and explicitly given points - it can not be used with dataset given by distance matrix Thus, if you call this function, you will be unable to use k-means clustering algorithm to process your problem. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetdistances( clusterizerstate s, real_2d_array d, bool isupper); void alglib::clusterizersetdistances( clusterizerstate s, real_2d_array d, ae_int_t npoints, bool isupper);
Examples: [1]
clusterizersetkmeansinit
function/************************************************************************* This function sets k-means initialization algorithm. Several different algorithms can be chosen, including k-means++. INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() InitAlgo- initialization algorithm: * 0 automatic selection ( different versions of ALGLIB may select different algorithms) * 1 random initialization * 2 k-means++ initialization (best quality of initial centers, but long non-parallelizable initialization phase with bad cache locality) * 3 "fast-greedy" algorithm with efficient, easy to parallelize initialization. Quality of initial centers is somewhat worse than that of k-means++. This algorithm is a default one in the current version of ALGLIB. *-1 "debug" algorithm which always selects first K rows of dataset; this algorithm is used for debug purposes only. Do not use it in the industrial code! -- ALGLIB -- Copyright 21.01.2015 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetkmeansinit( clusterizerstate s, ae_int_t initalgo);
clusterizersetkmeanslimits
function/************************************************************************* This function sets k-means properties: number of restarts and maximum number of iterations per one run. INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() Restarts- restarts count, >=1. k-means++ algorithm performs several restarts and chooses best set of centers (one with minimum squared distance). MaxIts - maximum number of k-means iterations performed during one run. >=0, zero value means that algorithm performs unlimited number of iterations. -- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetkmeanslimits( clusterizerstate s, ae_int_t restarts, ae_int_t maxits);
clusterizersetpoints
function/************************************************************************* This function adds dataset to the clusterizer structure. This function overrides all previous calls of ClusterizerSetPoints() or ClusterizerSetDistances(). INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() XY - array[NPoints,NFeatures], dataset NPoints - number of points, >=0 NFeatures- number of features, >=1 DistType- distance function: * 0 Chebyshev distance (L-inf norm) * 1 city block distance (L1 norm) * 2 Euclidean distance (L2 norm), non-squared * 10 Pearson correlation: dist(a,b) = 1-corr(a,b) * 11 Absolute Pearson correlation: dist(a,b) = 1-|corr(a,b)| * 12 Uncentered Pearson correlation (cosine of the angle): dist(a,b) = a'*b/(|a|*|b|) * 13 Absolute uncentered Pearson correlation dist(a,b) = |a'*b|/(|a|*|b|) * 20 Spearman rank correlation: dist(a,b) = 1-rankcorr(a,b) * 21 Absolute Spearman rank correlation dist(a,b) = 1-|rankcorr(a,b)| NOTE 1: different distance functions have different performance penalty: * Euclidean or Pearson correlation distances are the fastest ones * Spearman correlation distance function is a bit slower * city block and Chebyshev distances are an order of magnitude slower The reason behind the difference in performance is that correlation-based distance functions are computed using optimized linear algebra kernels, while Chebyshev and city block distance functions are computed using simple nested loops with two branches at each iteration. NOTE 2: different clustering algorithms have different limitations: * agglomerative hierarchical clustering algorithms may be used with any kind of distance metric * k-means++ clustering algorithm may be used only with Euclidean distance function Thus, the list of clustering algorithms you may use depends on the distance function you specify when you set your dataset. 
-- ALGLIB -- Copyright 10.07.2012 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetpoints( clusterizerstate s, real_2d_array xy, ae_int_t disttype); void alglib::clusterizersetpoints( clusterizerstate s, real_2d_array xy, ae_int_t npoints, ae_int_t nfeatures, ae_int_t disttype);
clusterizersetseed
function/************************************************************************* This function sets the seed which is used to initialize the internal RNG. By default, a deterministic seed is used - the same for each run of the clusterizer. If you specify a non-deterministic seed value, then some algorithms which depend on random initialization (in the current version: k-means) may return slightly different results after each run. INPUT PARAMETERS: S - clusterizer state, initialized by ClusterizerCreate() Seed - seed: * positive values = use deterministic seed for each run of algorithms which depend on random initialization * zero or negative values = use non-deterministic seed -- ALGLIB -- Copyright 08.06.2017 by Bochkanov Sergey *************************************************************************/void alglib::clusterizersetseed(clusterizerstate s, ae_int_t seed);
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // The very simple clusterization example // // We have a set of points in 2D space: // (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5)) // // | // | P3 // | // | P1 // | P4 // | P0 P2 // |------------------------- // // We want to perform Agglomerative Hierarchic Clusterization (AHC), // using complete linkage (default algorithm) and Euclidean distance // (default metric). // // In order to do that, we: // * create clusterizer with clusterizercreate() // * set points XY and metric (2=Euclidean) with clusterizersetpoints() // * run AHC algorithm with clusterizerrunahc // // You may see that clusterization itself is a minor part of the example, // most of which is dominated by comments :) // clusterizerstate s; ahcreport rep; real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]"; clusterizercreate(s); clusterizersetpoints(s, xy, 2); clusterizerrunahc(s, rep); // // Now we've built our clusterization tree. Rep.z contains information which // is required to build dendrogram. I-th row of rep.z represents one merge // operation, with first cluster to merge having index rep.z[I,0] and second // one having index rep.z[I,1]. Merge result has index NPoints+I. // // Clusters with indexes less than NPoints are single-point initial clusters, // while ones with indexes from NPoints to 2*NPoints-2 are multi-point // clusters created during merges. 
// // In our example, Z=[[2,4], [0,1], [3,6], [5,7]] // // It means that: // * first, we merge C2=(P2) and C4=(P4), and create C5=(P2,P4) // * then, we merge C0=(P0) and C1=(P1), and create C6=(P0,P1) // * then, we merge C3=(P3) and C6=(P0,P1), and create C7=(P0,P1,P3) // * finally, we merge C5 and C7 and create C8=(P0,P1,P2,P3,P4) // // Thus, we have the following dendrogram: // // ------8----- // | | // | ----7---- // | | | // ---5--- | ---6--- // | | | | | // P2 P4 P3 P0 P1 // printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[2,4],[0,1],[3,6],[5,7]] // // We've built dendrogram above by reordering our dataset. // // Without such reordering it would be impossible to build dendrogram without // intersections. Luckily, ahcreport structure contains two additional fields // which help to build dendrogram from your data: // * rep.p, which contains permutation applied to dataset // * rep.pm, which contains another representation of merges // // In our example we have: // * P=[3,4,0,2,1] // * PZ=[[0,0,1,1,0,0],[3,3,4,4,0,0],[2,2,3,4,0,1],[0,1,2,4,1,2]] // // Permutation array P tells us that P0 should be moved to position 3, // P1 moved to position 4, P2 moved to position 0 and so on: // // (P0 P1 P2 P3 P4) => (P2 P4 P3 P0 P1) // // Merges array PZ tells us how to perform merges on the sorted dataset. // One row of PZ corresponds to one merge operation, with first pair of // elements denoting first of the clusters to merge (start index, end // index) and next pair of elements denoting second of the clusters to // merge. Clusters being merged are always adjacent, with first one on // the left and second one on the right. // // For example, first row of PZ tells us that clusters [0,0] and [1,1] are // merged (single-point clusters, with first one containing P2 and second // one containing P4). Third row of PZ tells us that we merge one single- // point cluster [2,2] with one two-point cluster [3,4]. // // There are two more elements in each row of PZ. 
These are the helper // elements, which denote HEIGHT (not size) of left and right subdendrograms. // For example, according to PZ, first two merges are performed on clusterization // trees of height 0, while next two merges are performed on 0-1 and 1-2 // pairs of trees correspondingly. // printf("%s\n", rep.p.tostring().c_str()); // EXPECTED: [3,4,0,2,1] printf("%s\n", rep.pm.tostring().c_str()); // EXPECTED: [[0,0,1,1,0,0],[3,3,4,4,0,0],[2,2,3,4,0,1],[0,1,2,4,1,2]] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // We have three points in 4D space: // (P0,P1,P2) = ((1, 2, 1, 2), (6, 7, 6, 7), (7, 6, 7, 6)) // // We want to try clustering them with different distance functions. // Distance function is chosen when we add dataset to the clusterizer. // We can choose several distance types - Euclidean, city block, Chebyshev, // several correlation measures or user-supplied distance matrix. // // Here we'll try three distances: Euclidean, Pearson correlation, // user-supplied distance matrix. Different distance functions lead // to different choices being made by algorithm during clustering. // clusterizerstate s; ahcreport rep; ae_int_t disttype; real_2d_array xy = "[[1, 2, 1, 2], [6, 7, 6, 7], [7, 6, 7, 6]]"; clusterizercreate(s); // With Euclidean distance function (disttype=2) two closest points // are P1 and P2, thus: // * first, we merge P1 and P2 to form C3=[P1,P2] // * second, we merge P0 and C3 to form C4=[P0,P1,P2] disttype = 2; clusterizersetpoints(s, xy, disttype); clusterizerrunahc(s, rep); printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[1,2],[0,3]] // With Pearson correlation distance function (disttype=10) situation // is different - distance between P0 and P1 is zero, thus: // * first, we merge P0 and P1 to form C3=[P0,P1] // * second, we merge P2 and C3 to form C4=[P0,P1,P2] disttype = 10; clusterizersetpoints(s, xy, disttype); clusterizerrunahc(s, rep); printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[0,1],[2,3]] // Finally, we try clustering with user-supplied distance matrix: // [ 0 3 1 ] // P = [ 3 0 3 ], where P[i,j] = dist(Pi,Pj) // [ 1 3 0 ] // // * first, we merge P0 and P2 to form C3=[P0,P2] // * second, we merge P1 and C3 to form C4=[P0,P1,P2] real_2d_array d = "[[0,3,1],[3,0,3],[1,3,0]]"; clusterizersetdistances(s, d, true); clusterizerrunahc(s, rep); printf("%s\n", 
rep.z.tostring().c_str()); // EXPECTED: [[0,2],[1,3]] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // We have a set of points in 2D space: // (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5)) // // | // | P3 // | // | P1 // | P4 // | P0 P2 // |------------------------- // // We perform Agglomerative Hierarchic Clusterization (AHC) and we want // to get top K clusters from clusterization tree for different K. // clusterizerstate s; ahcreport rep; real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]"; integer_1d_array cidx; integer_1d_array cz; clusterizercreate(s); clusterizersetpoints(s, xy, 2); clusterizerrunahc(s, rep); // with K=5, every point is assigned to its own cluster: // C0=P0, C1=P1 and so on... clusterizergetkclusters(rep, 5, cidx, cz); printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,1,2,3,4] // with K=1 we have one large cluster C0=[P0,P1,P2,P3,P4] clusterizergetkclusters(rep, 1, cidx, cz); printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,0,0,0,0] // with K=3 we have three clusters C0=[P3], C1=[P2,P4], C2=[P0,P1] clusterizergetkclusters(rep, 3, cidx, cz); printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [2,2,1,0,1] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // The very simple clusterization example // // We have a set of points in 2D space: // (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5)) // // | // | P3 // | // | P1 // | P4 // | P0 P2 // |------------------------- // // We want to perform k-means++ clustering with K=2. // // In order to do that, we: // * create clusterizer with clusterizercreate() // * set points XY and metric (must be Euclidean, disttype=2) with clusterizersetpoints() // * (optional) set number of restarts from random positions to 5 // * run k-means algorithm with clusterizerrunkmeans() // // You may see that clusterization itself is a minor part of the example, // most of which is dominated by comments :) // clusterizerstate s; kmeansreport rep; real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]"; clusterizercreate(s); clusterizersetpoints(s, xy, 2); clusterizersetkmeanslimits(s, 5, 0); clusterizerrunkmeans(s, 2, rep); // // We've performed clusterization, and it succeeded (completion code is +1). // // Now first center is stored in the first row of rep.c, second one is stored // in the second row. rep.cidx can be used to determine which center is // closest to some specific point of the dataset. // printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1 // We called clusterizersetpoints() with disttype=2 because k-means++ // algorithm does NOT support metrics other than Euclidean. But what if we // try to use some other metric? // // We change metric type by calling clusterizersetpoints() one more time, // and try to run k-means algo again. It fails. // clusterizersetpoints(s, xy, 0); clusterizerrunkmeans(s, 2, rep); printf("%d\n", int(rep.terminationtype)); // EXPECTED: -5 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // We have a set of points in 1D space: // (P0,P1,P2,P3,P4) = (1, 3, 10, 16, 20) // // We want to perform Agglomerative Hierarchic Clusterization (AHC), // using either complete or single linkage and Euclidean distance // (default metric). // // First two steps merge P0/P1 and P3/P4 independently of the linkage type. // However, third step depends on linkage type being used: // * in case of complete linkage P2=10 is merged with [P0,P1] // * in case of single linkage P2=10 is merged with [P3,P4] // clusterizerstate s; ahcreport rep; real_2d_array xy = "[[1],[3],[10],[16],[20]]"; integer_1d_array cidx; integer_1d_array cz; clusterizercreate(s); clusterizersetpoints(s, xy, 2); // use complete linkage, reduce set down to 2 clusters. // print clusterization with clusterizergetkclusters(2). // P2 must belong to [P0,P1] clusterizersetahcalgo(s, 0); clusterizerrunahc(s, rep); clusterizergetkclusters(rep, 2, cidx, cz); printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [1,1,1,0,0] // use single linkage, reduce set down to 2 clusters. // print clusterization with clusterizergetkclusters(2). // P2 must belong to [P3,P4] clusterizersetahcalgo(s, 1); clusterizerrunahc(s, rep); clusterizergetkclusters(rep, 2, cidx, cz); printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,0,1,1,1] return 0; }
conv
subpackageconvc1d
function/************************************************************************* 1-dimensional complex convolution. For given A/B returns conv(A,B) (non-circular). The subroutine automatically chooses between three implementations: straightforward O(M*N) formula for very small N (or M), overlap-add algorithm for cases where max(M,N) is significantly larger than min(M,N) but the O(M*N) algorithm is too slow, and a general FFT-based formula for cases where the two previous algorithms are too slow. Algorithm has O(max(M,N)*log(max(M,N))) complexity for any M/N. INPUT PARAMETERS A - array[0..M-1] - complex function to be transformed M - problem size B - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convc1d( complex_1d_array a, ae_int_t m, complex_1d_array b, ae_int_t n, complex_1d_array& r);
convc1dcircular
function/************************************************************************* 1-dimensional circular complex convolution. For given S/R returns conv(S,R) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: normal convolution is commutative, i.e. it is symmetric - conv(A,B)=conv(B,A). Cyclic convolution IS NOT. One function - S - is a signal, a periodic function, and another - R - is a response, a non-periodic function with limited length. INPUT PARAMETERS S - array[0..M-1] - complex periodic signal M - problem size R - array[0..N-1] - complex non-periodic response N - problem size OUTPUT PARAMETERS C - convolution: conv(S,R). array[0..M-1]. NOTE: It is assumed that R is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convc1dcircular( complex_1d_array s, ae_int_t m, complex_1d_array r, ae_int_t n, complex_1d_array& c);
convc1dcircularinv
function/************************************************************************* 1-dimensional circular complex deconvolution (inverse of ConvC1DCircular()). Algorithm has O(M*log(M)) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved periodic signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - non-periodic response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has a zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convc1dcircularinv( complex_1d_array a, ae_int_t m, complex_1d_array b, ae_int_t n, complex_1d_array& r);
convc1dinv
function/************************************************************************* 1-dimensional complex non-circular deconvolution (inverse of ConvC1D()). Algorithm has O(M*log(M)) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has a zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convc1dinv( complex_1d_array a, ae_int_t m, complex_1d_array b, ae_int_t n, complex_1d_array& r);
convr1d
function/************************************************************************* 1-dimensional real convolution. Analogous to ConvC1D(), see ConvC1D() comments for more details. INPUT PARAMETERS A - array[0..M-1] - real function to be transformed M - problem size B - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convr1d( real_1d_array a, ae_int_t m, real_1d_array b, ae_int_t n, real_1d_array& r);
convr1dcircular
function/************************************************************************* 1-dimensional circular real convolution. Analogous to ConvC1DCircular(), see ConvC1DCircular() comments for more details. INPUT PARAMETERS S - array[0..M-1] - real signal M - problem size R - array[0..N-1] - real response N - problem size OUTPUT PARAMETERS C - convolution: conv(S,R). array[0..M-1]. NOTE: It is assumed that R is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convr1dcircular( real_1d_array s, ae_int_t m, real_1d_array r, ae_int_t n, real_1d_array& c);
convr1dcircularinv
function/************************************************************************* 1-dimensional circular real deconvolution (inverse of ConvR1DCircular()). Algorithm has O(M*log(M)) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved periodic signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - non-periodic response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has a zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convr1dcircularinv( real_1d_array a, ae_int_t m, real_1d_array b, ae_int_t n, real_1d_array& r);
convr1dinv
function/************************************************************************* 1-dimensional real non-circular deconvolution (inverse of ConvR1D()). Algorithm has O(M*log(M)) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has a zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::convr1dinv( real_1d_array a, ae_int_t m, real_1d_array b, ae_int_t n, real_1d_array& r);
corr
subpackagecorrc1d
function/************************************************************************* 1-dimensional complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons the subroutine accepts its parameters in reversed order: CorrC1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(conj(pattern[j])*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(conj(pattern[j])*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::corrc1d( complex_1d_array signal, ae_int_t n, complex_1d_array pattern, ae_int_t m, complex_1d_array& r);
corrc1dcircular
function/************************************************************************* 1-dimensional circular complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons the subroutine accepts its parameters in reversed order: CorrC1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..M-1] - complex function to be transformed, periodic signal containing pattern M - problem size Pattern - array[0..N-1] - complex function to be transformed, non-periodic pattern to search within signal N - problem size OUTPUT PARAMETERS C - circular cross-correlation. array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::corrc1dcircular( complex_1d_array signal, ae_int_t m, complex_1d_array pattern, ae_int_t n, complex_1d_array& c);
corrr1d
function/************************************************************************* 1-dimensional real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons the subroutine accepts its parameters in reversed order: CorrR1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(pattern[j]*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(pattern[j]*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::corrr1d( real_1d_array signal, ae_int_t n, real_1d_array pattern, ae_int_t m, real_1d_array& r);
corrr1dcircular
function/************************************************************************* 1-dimensional circular real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons the subroutine accepts its parameters in reversed order: CorrR1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..M-1] - real function to be transformed, periodic signal containing pattern M - problem size Pattern - array[0..N-1] - real function to be transformed, non-periodic pattern to search within signal N - problem size OUTPUT PARAMETERS C - circular cross-correlation. array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/void alglib::corrr1dcircular( real_1d_array signal, ae_int_t m, real_1d_array pattern, ae_int_t n, real_1d_array& c);
correlationtests
subpackagepearsoncorrelationsignificance
function/************************************************************************* Pearson's correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5 * normality of distributions of X and Y. Input parameters: R - Pearson's correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void alglib::pearsoncorrelationsignificance( double r, ae_int_t n, double& bothtails, double& lefttail, double& righttail);
spearmanrankcorrelationsignificance
function/************************************************************************* Spearman's rank correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5. The test is non-parametric and doesn't require distributions X and Y to be normal. Input parameters: R - Spearman's rank correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void alglib::spearmanrankcorrelationsignificance( double r, ae_int_t n, double& bothtails, double& lefttail, double& righttail);
datacomp
subpackagekmeansgenerate
function/************************************************************************* k-means++ clusterization. Backward compatibility function, we recommend using the CLUSTERING subpackage as a better replacement. -- ALGLIB -- Copyright 21.03.2009 by Bochkanov Sergey *************************************************************************/void alglib::kmeansgenerate( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t k, ae_int_t restarts, ae_int_t& info, real_2d_array& c, integer_1d_array& xyc);
dawson
subpackagedawsonintegral
function/************************************************************************* Dawson's Integral Approximates the integral dawsn(x) = exp(-x^2) * Integral(exp(t^2) dt, t=0..x) Three different rational approximations are employed, for the intervals 0 to 3.25; 3.25 to 6.25; and 6.25 up. ACCURACY: Relative error (IEEE arithmetic, domain [0,10], 10000 trials): peak 6.9e-16, rms 1.0e-16. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::dawsonintegral(double x);
dforest
subpackagedecisionforest
class/************************************************************************* *************************************************************************/class decisionforest { };
dfreport
class/************************************************************************* *************************************************************************/class dfreport { double relclserror; double avgce; double rmserror; double avgerror; double avgrelerror; double oobrelclserror; double oobavgce; double oobrmserror; double oobavgerror; double oobavgrelerror; };
dfavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double alglib::dfavgce( decisionforest df, real_2d_array xy, ae_int_t npoints);
dfavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double alglib::dfavgerror( decisionforest df, real_2d_array xy, ae_int_t npoints);
dfavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double alglib::dfavgrelerror( decisionforest df, real_2d_array xy, ae_int_t npoints);
dfbuildrandomdecisionforest
function/************************************************************************* This subroutine builds a random decision forest. INPUT PARAMETERS: XY - training set NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - task type: * NClasses=1 - regression task with one dependent variable * NClasses>1 - classification task with NClasses classes. NTrees - number of trees in a forest, NTrees>=1. recommended values: 50-100. R - fraction of the training set used to build each individual tree, 0<R<=1. recommended values: 0.1 <= R <= 0.66. OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<1, NVars<1, NClasses<1, NTrees<1, R<=0 or R>1). * 1, if the task has been solved DF - model built Rep - training report, contains error on a training set and out-of-bag estimates of generalization error. -- ALGLIB -- Copyright 19.02.2009 by Bochkanov Sergey *************************************************************************/void alglib::dfbuildrandomdecisionforest( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t ntrees, double r, ae_int_t& info, decisionforest& df, dfreport& rep);
dfbuildrandomdecisionforestx1
function/************************************************************************* This subroutine builds random decision forest. This function gives ability to tune number of variables used when choosing best split. INPUT PARAMETERS: XY - training set NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - task type: * NClasses=1 - regression task with one dependent variable * NClasses>1 - classification task with NClasses classes. NTrees - number of trees in a forest, NTrees>=1. recommended values: 50-100. NRndVars - number of variables used when choosing best split R - percent of a training set used to build individual trees. 0<R<=1. recommended values: 0.1 <= R <= 0.66. OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<1, NVars<1, NClasses<1, NTrees<1, R<=0 or R>1). * 1, if task has been solved DF - model built Rep - training report, contains error on a training set and out-of-bag estimates of generalization error. -- ALGLIB -- Copyright 19.02.2009 by Bochkanov Sergey *************************************************************************/void alglib::dfbuildrandomdecisionforestx1( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t ntrees, ae_int_t nrndvars, double r, ae_int_t& info, decisionforest& df, dfreport& rep);
dfprocess
function/************************************************************************* Processing INPUT PARAMETERS: DF - decision forest model X - input vector, array[0..NVars-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. See also DFProcessI. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/void alglib::dfprocess( decisionforest df, real_1d_array x, real_1d_array& y);
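For classification tasks the output Y has NClasses elements. As a rough illustration of how a forest turns per-tree outputs into such a posterior vector (a plain-C++ sketch of vote averaging, not ALGLIB's internal code; the function name is hypothetical, and whether ALGLIB averages one-hot votes or per-leaf distributions is an internal detail):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: average the per-tree class distributions
// (or one-hot votes) into a single posterior vector of length NClasses.
std::vector<double> average_tree_outputs(
        const std::vector<std::vector<double>>& per_tree) {
    std::vector<double> y(per_tree[0].size(), 0.0);
    for (const auto& tree_out : per_tree)
        for (std::size_t j = 0; j < y.size(); ++j)
            y[j] += tree_out[j];
    for (double& v : y)
        v /= per_tree.size();
    return y;
}
```

For a regression task the same averaging applies to a single output, which is why Y then has exactly one element.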
dfprocessi
function/************************************************************************* 'interactive' variant of DFProcess for languages like Python which support constructs like "Y = DFProcessI(DF,X)" and interactive mode of interpreter This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/void alglib::dfprocessi( decisionforest df, real_1d_array x, real_1d_array& y);
dfrelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double alglib::dfrelclserror( decisionforest df, real_2d_array xy, ae_int_t npoints);
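The metric above can be sketched in a few lines of plain C++ (not ALGLIB source; names are hypothetical): the predicted class is the argmax of the posterior vector produced by DFProcess, and the error is the share of test points where that argmax disagrees with the true class.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the relative classification error:
// fraction of test points whose argmax-predicted class is wrong.
std::size_t argmax_class(const std::vector<double>& posteriors) {
    std::size_t best = 0;
    for (std::size_t j = 1; j < posteriors.size(); ++j)
        if (posteriors[j] > posteriors[best])
            best = j;
    return best;
}

double rel_cls_error(const std::vector<std::vector<double>>& posteriors,
                     const std::vector<std::size_t>& true_classes) {
    std::size_t errors = 0;
    for (std::size_t i = 0; i < posteriors.size(); ++i)
        if (argmax_class(posteriors[i]) != true_classes[i])
            ++errors;
    return static_cast<double>(errors) / posteriors.size();
}
```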
dfrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/double alglib::dfrmserror( decisionforest df, real_2d_array xy, ae_int_t npoints);
dfserialize
function/************************************************************************* This function serializes data structure to string. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize in C# one, and vice versa. *************************************************************************/void dfserialize(decisionforest &obj, std::string &s_out); void dfserialize(decisionforest &obj, std::ostream &s_out);
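The reformatting freedom described above (any mix of spaces, tabs, and newlines between "words") means that only the words themselves carry information. A plain-C++ sketch (not ALGLIB source; the helper is hypothetical) of reducing a serialized string to those words:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: extract the whitespace-separated "words" of a
// serialized string. Stream extraction treats spaces, tabs, CR and LF
// uniformly, matching the separator rule described above, so two
// serialized strings are equivalent iff their word sequences match.
std::vector<std::string> serializer_words(const std::string& s) {
    std::istringstream in(s);
    std::vector<std::string> words;
    std::string word;
    while (in >> word)
        words.push_back(word);
    return words;
}
```

Under this rule, reflowing a serialized string (e.g. replacing CR+LF with single spaces before embedding it in XML) is safe precisely because the word sequence is unchanged.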
dfunserialize
function/************************************************************************* This function unserializes data structure from string. *************************************************************************/void dfunserialize(const std::string &s_in, decisionforest &obj); void dfunserialize(const std::istream &s_in, decisionforest &obj);
directdensesolvers
subpackagedensesolverlsreport
class/************************************************************************* *************************************************************************/class densesolverlsreport { double r2; real_2d_array cx; ae_int_t n; ae_int_t k; };
densesolverreport
class/************************************************************************* *************************************************************************/class densesolverreport { double r1; double rinf; };
cmatrixlusolve
function/************************************************************************* Complex dense linear solver for A*x=b with complex N*N A given by its LU decomposition and N*1 vectors x and b. This is "slow-but-robust" version of the complex linear solver with additional features which add significant performance overhead. Faster version is CMatrixLUSolveFast() function. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in 10-15x performance penalty when compared ! with "fast" version which just calls triangular solver. ! ! This performance penalty is insignificant when compared with ! cost of large LU decomposition. However, if you call this ! function many times for the same left side, this overhead ! BECOMES significant. It also becomes significant for small- ! scale problems. ! ! In such cases we strongly recommend you to use faster solver, ! CMatrixLUSolveFast() function. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). 
Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixlusolve( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x);
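The O(N^2) cost quoted above comes from the two triangular substitution passes over the stored factors. A plain-C++, real-valued sketch (not ALGLIB source; ALGLIB's version works on complex data and adds the condition number estimation) of solving from a precomputed LU factorization with LAPACK-style sequentially applied row pivots:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch: x = A^{-1}*b given P*A = L*U, where 'lu' stores
// L strictly below the diagonal (unit diagonal implied) and U on/above
// it, and piv[i] is the row swapped with row i during elimination.
std::vector<double> lu_solve(const std::vector<std::vector<double>>& lu,
                             const std::vector<std::size_t>& piv,
                             std::vector<double> b) {
    const std::size_t n = b.size();
    for (std::size_t i = 0; i < n; ++i)          // apply permutation P
        std::swap(b[i], b[piv[i]]);
    for (std::size_t i = 0; i < n; ++i)          // forward pass: L*y = P*b
        for (std::size_t j = 0; j < i; ++j)
            b[i] -= lu[i][j] * b[j];
    for (std::size_t i = n; i-- > 0; ) {         // backward pass: U*x = y
        for (std::size_t j = i + 1; j < n; ++j)
            b[i] -= lu[i][j] * b[j];
        b[i] /= lu[i][i];
    }
    return b;
}
```

Each pass touches every stored factor entry once, hence O(N^2); the "fast" variant stops here, while CMatrixLUSolve() additionally estimates condition numbers, which is where the 10-15x overhead mentioned above comes from.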
cmatrixlusolvefast
function/************************************************************************* Complex dense linear solver for A*x=b with N*N complex A given by its LU decomposition and N*1 vectors x and b. This is fast lightweight version of solver, which is significantly faster than CMatrixLUSolve(), but does not provide additional information (like condition numbers). Algorithm features: * O(N^2) complexity * no additional time-consuming features, just triangular solver INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 matrix is exactly singular (ill conditioned matrices are not recognized). * -1 N<=0 was passed * 1 task is solved B - array[N]: * info>0 => overwritten by solution * info=-3 => filled by zeros NOTE: unlike CMatrixLUSolve(), this function does NOT check for near-degeneracy of input matrix. It checks for EXACT degeneracy, because this check is easy to do. However, very badly conditioned matrices may go unnoticed. -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixlusolvefast( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_1d_array& b, ae_int_t& info);
cmatrixlusolvem
function/************************************************************************* Dense solver for A*X=B with N*N complex A given by its LU decomposition, and N*M matrices X and B (multiple right sides). "Slow-but-feature-rich" version of the solver. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in significant performance penalty when ! compared with "fast" version which just calls triangular ! solver. ! ! This performance penalty is especially apparent when you use ! ALGLIB parallel capabilities (condition number estimation is ! inherently sequential). It also becomes significant for ! small-scale problems. ! ! In such cases we strongly recommend you to use faster solver, ! CMatrixLUSolveMFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! 
Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Triangular solver is relatively easy to parallelize. ! However, parallelization will be efficient only for large number of ! right parts M. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixlusolvem( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, complex_2d_array& x); void alglib::smp_cmatrixlusolvem( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, complex_2d_array& x);
cmatrixlusolvemfast
function/************************************************************************* Dense solver for A*X=B with N*N complex A given by its LU decomposition, and N*M matrices X and B (multiple right sides). "Fast-but-lightweight" version of the solver. Algorithm features: * O(M*N^2) complexity * no additional time-consuming features COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Triangular solver is relatively easy to parallelize. ! However, parallelization will be efficient only for large number of ! right parts M. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 matrix is exactly singular (ill conditioned matrices are not recognized). * -1 N<=0 was passed * 1 task is solved B - array[N,M]: * info>0 => overwritten by solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixlusolvemfast( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_2d_array& b, ae_int_t m, ae_int_t& info); void alglib::smp_cmatrixlusolvemfast( complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_2d_array& b, ae_int_t m, ae_int_t& info);
cmatrixmixedsolve
function/************************************************************************* Dense solver. Same as RMatrixMixedSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixmixedsolve( complex_2d_array a, complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x);
cmatrixmixedsolvem
function/************************************************************************* Dense solver. Same as RMatrixMixedSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixmixedsolvem( complex_2d_array a, complex_2d_array lua, integer_1d_array p, ae_int_t n, complex_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, complex_2d_array& x);
cmatrixsolve
function/************************************************************************* Complex dense solver for A*x=B with N*N complex matrix A and N*1 complex vectors x and b. "Slow-but-feature-rich" version of the solver. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3) complexity IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system ! and performs iterative refinement, which results in ! significant performance penalty when compared with "fast" ! version which just performs LU decomposition and calls ! triangular solver. ! ! This performance penalty is especially visible in the ! multithreaded mode, because both condition number estimation ! and iterative refinement are inherently sequential ! calculations. ! ! Thus, if you need high performance and if you are pretty sure ! that your system is well conditioned, we strongly recommend ! you to use faster solver, CMatrixSolveFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! 
Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that LU decomposition is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixsolve( complex_2d_array a, ae_int_t n, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x); void alglib::smp_cmatrixsolve( complex_2d_array a, ae_int_t n, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x);
cmatrixsolvefast
function/************************************************************************* Complex dense solver for A*x=B with N*N complex matrix A and N*1 complex vectors x and b. "Fast-but-lightweight" version of the solver. Algorithm features: * O(N^3) complexity * no additional time consuming features, just triangular solver COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that LU decomposition is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! 
related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS: Info - return code: * -3 matrix is exactly singular (ill conditioned matrices are not recognized). * -1 N<=0 was passed * 1 task is solved B - array[N]: * info>0 => overwritten by solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixsolvefast( complex_2d_array a, ae_int_t n, complex_1d_array& b, ae_int_t& info); void alglib::smp_cmatrixsolvefast( complex_2d_array a, ae_int_t n, complex_1d_array& b, ae_int_t& info);
cmatrixsolvem
function/************************************************************************* Complex dense solver for A*X=B with N*N complex matrix A, N*M complex matrices X and B. "Slow-but-feature-rich" version which provides additional functions, at the cost of slower performance. Faster version may be invoked with CMatrixSolveMFast() function. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3+M*N^2) complexity IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system ! and performs iterative refinement, which results in ! significant performance penalty when compared with "fast" ! version which just performs LU decomposition and calls ! triangular solver. ! ! This performance penalty is especially visible in the ! multithreaded mode, because both condition number estimation ! and iterative refinement are inherently sequential ! calculations. ! ! Thus, if you need high performance and if you are pretty sure ! that your system is well conditioned, we strongly recommend ! you to use faster solver, CMatrixSolveMFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! 
problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that LU decomposition is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size RFS - iterative refinement switch: * True - refinement is used. Less performance, more precision. * False - refinement is not used. More performance, less precision. OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. X is filled by zeros in such cases. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). 
Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixsolvem( complex_2d_array a, ae_int_t n, complex_2d_array b, ae_int_t m, bool rfs, ae_int_t& info, densesolverreport& rep, complex_2d_array& x); void alglib::smp_cmatrixsolvem( complex_2d_array a, ae_int_t n, complex_2d_array b, ae_int_t m, bool rfs, ae_int_t& info, densesolverreport& rep, complex_2d_array& x);
cmatrixsolvemfast
function/************************************************************************* Complex dense solver for A*X=B with N*N complex matrix A, N*M complex matrices X and B. "Fast-but-lightweight" version which provides just triangular solver - and no additional functions like iterative refinement or condition number estimation. Algorithm features: * O(N^3+M*N^2) complexity * no additional time consuming functions COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that LU decomposition is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! 
ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS: Info - return code: * -3 matrix is exactly singular (ill conditioned matrices are not recognized). * -1 N<=0 was passed * 1 task is solved B - array[N,M]: * info>0 => overwritten by solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 16.03.2015 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixsolvemfast( complex_2d_array a, ae_int_t n, complex_2d_array& b, ae_int_t m, ae_int_t& info); void alglib::smp_cmatrixsolvemfast( complex_2d_array a, ae_int_t n, complex_2d_array& b, ae_int_t m, ae_int_t& info);
hpdmatrixcholeskysolve
function/************************************************************************* Dense solver for A*x=b with N*N Hermitian positive definite matrix A given by its Cholesky decomposition, and N*1 complex vectors x and b. This is "slow-but-feature-rich" version of the solver which estimates condition number of the system. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in 10-15x performance penalty when compared ! with "fast" version which just calls triangular solver. ! ! This performance penalty is insignificant when compared with ! cost of large Cholesky decomposition. However, if you call this ! function many times for the same left side, this overhead ! BECOMES significant. It also becomes significant for small- ! scale problems (N<50). ! ! In such cases we strongly recommend you to use faster solver, ! HPDMatrixCholeskySolveFast() function. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, HPDMatrixCholesky result N - size of A IsUpper - what half of CHA is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 A is exactly singular or ill conditioned X is filled by zeros in such cases. 
                * -1    N<=0 was passed
                *  1    task is solved
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N]:
                * for info>0  - solution
                * for info=-3 - filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixcholeskysolve(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_1d_array b,
    ae_int_t& info,
    densesolverreport& rep,
    complex_1d_array& x);
hpdmatrixcholeskysolvefast
function/*************************************************************************
Dense solver for A*x=b with N*N Hermitian positive definite matrix A given
by its Cholesky decomposition, and N*1 complex vectors x and b. This is
"fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features

INPUT PARAMETERS
    CHA     -   array[0..N-1,0..N-1], Cholesky decomposition,
                HPDMatrixCholesky result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or ill conditioned.
                        B is filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[N]:
                * for info>0  - overwritten by solution
                * for info=-3 - filled by zeros

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixcholeskysolvefast(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_1d_array& b,
    ae_int_t& info);
hpdmatrixcholeskysolvem
function/************************************************************************* Dense solver for A*X=B with N*N Hermitian positive definite matrix A given by its Cholesky decomposition and N*M complex matrices X and B. This is "slow-but-feature-rich" version of the solver which, in addition to the solution, estimates condition number of the system. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in significant performance penalty when ! compared with "fast" version which just calls triangular ! solver. Amount of overhead introduced depends on M (the ! larger - the more efficient). ! ! This performance penalty is insignificant when compared with ! cost of large Cholesky decomposition. However, if you call ! this function many times for the same left side, this ! overhead BECOMES significant. It also becomes significant ! for small-scale problems (N<50). ! ! In such cases we strongly recommend you to use faster solver, ! HPDMatrixCholeskySolveMFast() function. INPUT PARAMETERS CHA - array[N,N], Cholesky decomposition, HPDMatrixCholesky result N - size of CHA IsUpper - what half of CHA is provided B - array[N,M], right part M - right part size OUTPUT PARAMETERS: Info - return code: * -3 A is singular, or VERY close to singular. X is filled by zeros in such cases. 
                * -1    N<=0 was passed
                *  1    task was solved
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M]:
                * for info>0 contains solution
                * for info=-3 filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixcholeskysolvem(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_2d_array b,
    ae_int_t m,
    ae_int_t& info,
    densesolverreport& rep,
    complex_2d_array& x);
void alglib::smp_hpdmatrixcholeskysolvem(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_2d_array b,
    ae_int_t m,
    ae_int_t& info,
    densesolverreport& rep,
    complex_2d_array& x);
hpdmatrixcholeskysolvemfast
function/*************************************************************************
Dense solver for A*X=B with N*N Hermitian positive definite matrix A given
by its Cholesky decomposition and N*M complex matrices X and B. This is
"fast-but-lightweight" version of the solver.

Algorithm features:
* O(M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, HPDMatrixCholesky result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[N,M], right part
    M       -   right part size

OUTPUT PARAMETERS:
    Info    -   return code:
                * -3    A is singular, or VERY close to singular.
                        B is filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task was solved
    B       -   array[N,M]:
                * for info>0 overwritten by solution
                * for info=-3 filled by zeros

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixcholeskysolvemfast(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_2d_array& b,
    ae_int_t m,
    ae_int_t& info);
void alglib::smp_hpdmatrixcholeskysolvemfast(
    complex_2d_array cha,
    ae_int_t n,
    bool isupper,
    complex_2d_array& b,
    ae_int_t m,
    ae_int_t& info);
hpdmatrixsolve
function/************************************************************************* Dense solver for A*x=b, with N*N Hermitian positive definite matrix A, and N*1 complex vectors x and b. "Slow-but-feature-rich" version of the solver. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in significant performance penalty when ! compared with "fast" version which just performs Cholesky ! decomposition and calls triangular solver. ! ! This performance penalty is especially visible in the ! multithreaded mode, because both condition number estimation ! and iterative refinement are inherently sequential ! calculations. ! ! Thus, if you need high performance and if you are pretty sure ! that your system is well conditioned, we strongly recommend ! you to use faster solver, HPDMatrixSolveFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! 
Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that Cholesky decomposition is harder ! to parallelize than, say, matrix-matrix product - this algorithm has ! several synchronization points which can not be avoided. However, ! parallelism starts to be profitable starting from N=500. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::hpdmatrixsolve( complex_2d_array a, ae_int_t n, bool isupper, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x); void alglib::smp_hpdmatrixsolve( complex_2d_array a, ae_int_t n, bool isupper, complex_1d_array b, ae_int_t& info, densesolverreport& rep, complex_1d_array& x);
hpdmatrixsolvefast
function/************************************************************************* Dense solver for A*x=b, with N*N Hermitian positive definite matrix A, and N*1 complex vectors x and b. "Fast-but-lightweight" version of the solver without additional functions. Algorithm features: * O(N^3) complexity * matrix is represented by its upper or lower triangle * no additional time consuming functions COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that Cholesky decomposition is harder ! to parallelize than, say, matrix-matrix product - this algorithm has ! several synchronization points which can not be avoided. However, ! parallelism starts to be profitable starting from N=500. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! 
related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or not positive definite.
                        B is filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task was solved
    B       -   array[0..N-1]:
                * overwritten by solution
                * zeros, if A is exactly singular or not positive definite

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixsolvefast(
    complex_2d_array a,
    ae_int_t n,
    bool isupper,
    complex_1d_array& b,
    ae_int_t& info);
void alglib::smp_hpdmatrixsolvefast(
    complex_2d_array a,
    ae_int_t n,
    bool isupper,
    complex_1d_array& b,
    ae_int_t& info);
hpdmatrixsolvem
function/************************************************************************* Dense solver for A*X=B, with N*N Hermitian positive definite matrix A and N*M complex matrices X and B. "Slow-but-feature-rich" version of the solver. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in significant performance penalty when ! compared with "fast" version which just calls triangular ! solver. ! ! This performance penalty is especially apparent when you use ! ALGLIB parallel capabilities (condition number estimation is ! inherently sequential). It also becomes significant for ! small-scale problems (N<100). ! ! In such cases we strongly recommend you to use faster solver, ! HPDMatrixSolveMFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. 
Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that Cholesky decomposition is harder ! to parallelize than, say, matrix-matrix product - this algorithm has ! several synchronization points which can not be avoided. However, ! parallelism starts to be profitable starting from N=500. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve. Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::hpdmatrixsolvem( complex_2d_array a, ae_int_t n, bool isupper, complex_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, complex_2d_array& x); void alglib::smp_hpdmatrixsolvem( complex_2d_array a, ae_int_t n, bool isupper, complex_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, complex_2d_array& x);
hpdmatrixsolvemfast
function/************************************************************************* Dense solver for A*X=B, with N*N Hermitian positive definite matrix A and N*M complex matrices X and B. "Fast-but-lightweight" version of the solver. Algorithm features: * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle * no additional time consuming features like condition number estimation COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that Cholesky decomposition is harder ! to parallelize than, say, matrix-matrix product - this algorithm has ! several synchronization points which can not be avoided. However, ! parallelism starts to be profitable starting from N=500. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! 
related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or is not positive definite.
                        B is filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[0..N-1,0..M-1]:
                * overwritten by solution
                * zeros, if problem was not solved

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::hpdmatrixsolvemfast(
    complex_2d_array a,
    ae_int_t n,
    bool isupper,
    complex_2d_array& b,
    ae_int_t m,
    ae_int_t& info);
void alglib::smp_hpdmatrixsolvemfast(
    complex_2d_array a,
    ae_int_t n,
    bool isupper,
    complex_2d_array& b,
    ae_int_t m,
    ae_int_t& info);
rmatrixlusolve
function/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where A is NxN
non-degenerate real matrix given by its LU decomposition, x and b are real
vectors. This is "slow-but-robust" version of the linear LU-based solver.
Faster version is RMatrixLUSolveFast() function.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation

No iterative refinement is provided because exact form of original matrix
is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you
need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver provided
! by ALGLIB. It estimates condition number of linear system, which results
! in 10-15x performance penalty when compared with "fast" version which
! just calls triangular solver.
!
! This performance penalty is insignificant when compared with cost of
! large LU decomposition. However, if you call this function many times
! for the same left side, this overhead BECOMES significant. It also
! becomes significant for small-scale problems.
!
! In such cases we strongly recommend you to use faster solver,
! RMatrixLUSolveFast() function.

INPUT PARAMETERS
    LUA     -   array[N,N], LU decomposition, RMatrixLU result
    P       -   array[N], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[N], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is very badly conditioned or exactly
                        singular.
                * -1    N<=0 was passed
                *  1    task is solved (but matrix A may be
                        ill-conditioned, check R1/RInf parameters for
                        condition numbers).
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * info>0    =>  solution
                * info=-3   =>  filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixlusolve(
    real_2d_array lua,
    integer_1d_array p,
    ae_int_t n,
    real_1d_array b,
    ae_int_t& info,
    densesolverreport& rep,
    real_1d_array& x);
rmatrixlusolvefast
function/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where A is NxN
non-degenerate real matrix given by its LU decomposition, x and b are real
vectors. This is "fast-without-any-checks" version of the linear LU-based
solver. Slower but more robust version is RMatrixLUSolve() function.

Algorithm features:
* O(N^2) complexity
* fast algorithm without ANY additional checks, just triangular solver

INPUT PARAMETERS
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is exactly singular (ill conditioned
                        matrices are not recognized).
                        B is filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[N]:
                * info>0    =>  overwritten by solution
                * info=-3   =>  filled by zeros

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixlusolvefast(
    real_2d_array lua,
    integer_1d_array p,
    ae_int_t n,
    real_1d_array& b,
    ae_int_t& info);
rmatrixlusolvem
function/************************************************************************* Dense solver. Similar to RMatrixLUSolve() but solves task with multiple right parts (where b and x are NxM matrices). This is "robust-but-slow" version of LU-based solver which performs additional checks for non-degeneracy of inputs (condition number estimation). If you need best performance, use "fast-without-any-checks" version, RMatrixLUSolveMFast(). Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. IMPORTANT: ! this function is NOT the most efficient linear solver provided ! by ALGLIB. It estimates condition number of linear system, ! which results in significant performance penalty when ! compared with "fast" version which just calls triangular ! solver. ! ! This performance penalty is especially apparent when you use ! ALGLIB parallel capabilities (condition number estimation is ! inherently sequential). It also becomes significant for ! small-scale problems. ! ! In such cases we strongly recommend you to use faster solver, ! RMatrixLUSolveMFast() function. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! 
Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Triangular solver is relatively easy to parallelize. ! However, parallelization will be efficient only for large number of ! right parts M. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS LUA - array[N,N], LU decomposition, RMatrixLU result P - array[N], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. X is filled by zeros in such cases. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). 
Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixlusolvem( real_2d_array lua, integer_1d_array p, ae_int_t n, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x); void alglib::smp_rmatrixlusolvem( real_2d_array lua, integer_1d_array p, ae_int_t n, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
rmatrixlusolvemfast
function/************************************************************************* Dense solver. Similar to RMatrixLUSolve() but solves task with multiple right parts, where b and x are NxM matrices. This is "fast-without-any-checks" version of LU-based solver. It does not estimate condition number of a system, so it is extremely fast. If you need better detection of near-degenerate cases, use RMatrixLUSolveM() function. Algorithm features: * O(M*N^2) complexity * fast algorithm without ANY additional checks, just triangular solver COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. Triangular solver is relatively easy to parallelize. ! However, parallelization will be efficient only for large number of ! right parts M. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! 
related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS: Info - return code: * -3 matrix is exactly singular (ill conditioned matrices are not recognized). * -1 N<=0 was passed * 1 task is solved B - array[N,M]: * info>0 => overwritten by solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 18.03.2015 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixlusolvemfast( real_2d_array lua, integer_1d_array p, ae_int_t n, real_2d_array& b, ae_int_t m, ae_int_t& info); void alglib::smp_rmatrixlusolvemfast( real_2d_array lua, integer_1d_array p, ae_int_t n, real_2d_array& b, ae_int_t m, ae_int_t& info);
rmatrixmixedsolve
function/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where BOTH ORIGINAL A
AND ITS LU DECOMPOSITION ARE KNOWN. You can use it if for some reason you
have both A and its LU decomposition.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^2) complexity

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is very badly conditioned or exactly
                        singular.
                * -1    N<=0 was passed
                *  1    task is solved (but matrix A may be
                        ill-conditioned, check R1/RInf parameters for
                        condition numbers).
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * info>0    =>  solution
                * info=-3   =>  filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixmixedsolve(
    real_2d_array a,
    real_2d_array lua,
    integer_1d_array p,
    ae_int_t n,
    real_1d_array b,
    ae_int_t& info,
    densesolverreport& rep,
    real_1d_array& x);
rmatrixmixedsolvem
function/************************************************************************* Dense solver. Similar to RMatrixMixedSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixmixedsolvem( real_2d_array a, real_2d_array lua, integer_1d_array p, ae_int_t n, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
rmatrixsolve
function/*************************************************************************
Dense solver for A*x=b with N*N real matrix A and N*1 real vectors x and b.
This is "slow-but-feature-rich" version of the linear solver. Faster
version is RMatrixSolveFast() function.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^3) complexity

IMPORTANT: ! this function is NOT the most efficient linear solver provided
! by ALGLIB. It estimates condition number of linear system and performs
! iterative refinement, which results in significant performance penalty
! when compared with "fast" version which just performs LU decomposition
! and calls triangular solver.
!
! This performance penalty is especially visible in the multithreaded
! mode, because both condition number estimation and iterative refinement
! are inherently sequential calculations. It is also very significant on
! small matrices.
!
! Thus, if you need high performance and if you are pretty sure that your
! system is well conditioned, we strongly recommend you to use faster
! solver, RMatrixSolveFast() function.

COMMERCIAL EDITION OF ALGLIB:

! Commercial version of ALGLIB includes two important improvements of
! this function, which can be used from C++ and C#:
! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB)
! * multicore support
!
! Intel MKL gives approximately constant (with respect to number of
! worker threads) acceleration factor which depends on CPU being used,
! problem size and "baseline" ALGLIB edition which is used for comparison.
!
! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be:
! * about 2-3x faster than ALGLIB for C++ without MKL
! * about 7-10x faster than "pure C#" edition of ALGLIB
! Difference in performance will be more striking on newer CPU's with
! support for newer SIMD instructions. Generally, MKL accelerates any
! problem whose size is at least 128, with best efficiency achieved for
N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that LU decomposition is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 matrix is very badly conditioned or exactly singular. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - additional report, following fields are set: * rep.r1 condition number in 1-norm * rep.rinf condition number in inf-norm X - array[N], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixsolve( real_2d_array a, ae_int_t n, real_1d_array b, ae_int_t& info, densesolverreport& rep, real_1d_array& x); void alglib::smp_rmatrixsolve( real_2d_array a, ae_int_t n, real_1d_array b, ae_int_t& info, densesolverreport& rep, real_1d_array& x);
rmatrixsolvefast
function

/*************************************************************************
Dense solver.

This subroutine solves a system A*x=b, where A is an NxN non-degenerate
real matrix and x and b are vectors. This is a "fast" version of the
linear solver which does NOT provide any additional functions like
condition number estimation or iterative refinement.

Algorithm features:
* efficient algorithm, O(N^3) complexity
* no performance overhead from additional functionality

If you need condition number estimation or iterative refinement, use the
more feature-rich version - RMatrixSolve().

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that LU decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has many
internal synchronization points which cannot be avoided. However,
parallelism starts to be profitable from N=1024, achieving near-linear
speedup for N=4096 or higher.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is exactly singular (ill-conditioned
                        matrices are not recognized)
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[N]:
                * info>0  => overwritten by solution
                * info=-3 => filled by zeros

  -- ALGLIB --
     Copyright 16.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixsolvefast(real_2d_array a, ae_int_t n, real_1d_array& b, ae_int_t& info);
void alglib::smp_rmatrixsolvefast(real_2d_array a, ae_int_t n, real_1d_array& b, ae_int_t& info);
rmatrixsolvels
function

/*************************************************************************
Dense solver.

This subroutine finds a solution of the linear system A*X=B with a
non-square, possibly degenerate A. The system is solved in the least
squares sense, and the general least squares solution X = X0 + CX*y
which minimizes |A*X-B| is returned. If A is non-degenerate, the
solution in the usual sense is returned.

Algorithm features:
* automatic detection (and correct handling!) of degenerate cases
* iterative refinement
* O(N^3) complexity

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes one important improvement of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Generally, commercial ALGLIB is several times faster than the open-source
generic C edition, and many times faster than the open-source C# edition.

Multithreaded acceleration is only partially supported (some parts are
optimized, but most are not).

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A         - array[0..NRows-1,0..NCols-1], system matrix
    NRows     - vertical size of A
    NCols     - horizontal size of A
    B         - array[0..NRows-1], right part
    Threshold - a number in [0,1]. Singular values beyond Threshold are
                considered zero. Set it to 0.0 if you don't understand
                what it means, and the solver will choose a good value
                on its own.

OUTPUT PARAMETERS
    Info    -   return code:
                * -4    SVD subroutine failed
                * -1    NRows<=0, NCols<=0 or Threshold<0 was passed
                *  1    task is solved
    Rep     -   solver report, see below for more info
    X       -   array[0..NCols-1], it contains:
                * solution of A*X=B (even for singular A)
                * zeros, if the SVD subroutine failed

SOLVER REPORT

The subroutine sets the following fields of the Rep structure:
* R2        reciprocal of condition number: 1/cond(A), 2-norm
* N         = NCols
* K         dim(Null(A))
* CX        array[0..N-1,0..K-1], kernel of A. Columns of CX store such
            vectors that A*CX[i]=0.

  -- ALGLIB --
     Copyright 24.08.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixsolvels(real_2d_array a, ae_int_t nrows, ae_int_t ncols, real_1d_array b, double threshold, ae_int_t& info, densesolverlsreport& rep, real_1d_array& x);
void alglib::smp_rmatrixsolvels(real_2d_array a, ae_int_t nrows, ae_int_t ncols, real_1d_array b, double threshold, ae_int_t& info, densesolverlsreport& rep, real_1d_array& x);
rmatrixsolvem
function

/*************************************************************************
Dense solver.

Similar to RMatrixSolve() but solves a task with multiple right parts
(where b and x are NxM matrices). This is the "slow-but-robust" version
of the linear solver with additional functionality like condition number
estimation. There also exists a faster version - RMatrixSolveMFast().

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* optional iterative refinement
* O(N^3+M*N^2) complexity

IMPORTANT: this function is NOT the most efficient linear solver provided
by ALGLIB. It estimates the condition number of the linear system and
performs iterative refinement, which results in a significant performance
penalty compared with the "fast" version, which just performs LU
decomposition and calls the triangular solver.

This performance penalty is especially visible in multithreaded mode,
because both condition number estimation and iterative refinement are
inherently sequential calculations. It is also very significant on small
matrices.

Thus, if you need high performance and are pretty sure that your system
is well conditioned, we strongly recommend the faster solver, the
RMatrixSolveMFast() function.

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that LU decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has many
internal synchronization points which cannot be avoided. However,
parallelism starts to be profitable from N=1024, achieving near-linear
speedup for N=4096 or higher.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size
    RFS     -   iterative refinement switch:
                * True  - refinement is used. Less performance, more
                          precision.
                * False - refinement is not used. More performance, less
                          precision.

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is ill conditioned or singular. X is filled by
                        zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved (but matrix A may be
                        ill-conditioned, check R1/RInf fields of Rep for
                        condition numbers)
rmatrixsolvemfast
function

/*************************************************************************
Dense solver.

Similar to RMatrixSolve() but solves a task with multiple right parts
(where b and x are NxM matrices). This is the "fast" version of the
linear solver which does NOT offer additional functions like condition
number estimation or iterative refinement.

Algorithm features:
* O(N^3+M*N^2) complexity
* no additional functionality, highest performance

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that LU decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has many
internal synchronization points which cannot be avoided. However,
parallelism starts to be profitable from N=1024, achieving near-linear
speedup for N=4096 or higher.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is exactly singular (ill-conditioned
                        matrices are not recognized). B is filled by
                        zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[N,M]:
                * info>0  => overwritten by solution
                * info=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::rmatrixsolvemfast(real_2d_array a, ae_int_t n, real_2d_array& b, ae_int_t m, ae_int_t& info);
void alglib::smp_rmatrixsolvemfast(real_2d_array a, ae_int_t n, real_2d_array& b, ae_int_t m, ae_int_t& info);
spdmatrixcholeskysolve
function

/*************************************************************************
Dense solver for A*x=b with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*1 real vectors x and b. This
is the "slow-but-feature-rich" version of the solver which, in addition
to the solution, performs condition number estimation.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: this function is NOT the most efficient linear solver provided
by ALGLIB. It estimates the condition number of the linear system, which
results in a 10-15x performance penalty compared with the "fast" version,
which just calls the triangular solver.

This performance penalty is insignificant compared with the cost of a
large LU decomposition. However, if you call this function many times for
the same left side, this overhead BECOMES significant. It also becomes
significant for small-scale problems (N<50).

In such cases we strongly recommend the faster solver, the
SPDMatrixCholeskySolveFast() function.

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky
                result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[N], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or ill conditioned. X is
                        filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N]:
                * for info>0  - solution
                * for info=-3 - filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixcholeskysolve(real_2d_array cha, ae_int_t n, bool isupper, real_1d_array b, ae_int_t& info, densesolverreport& rep, real_1d_array& x);
spdmatrixcholeskysolvefast
function

/*************************************************************************
Dense solver for A*x=b with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*1 real vectors x and b. This
is the "fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional features

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky
                result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[N], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or ill conditioned. B is
                        filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task is solved
    B       -   array[N]:
                * for info>0  - overwritten by solution
                * for info=-3 - filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixcholeskysolvefast(real_2d_array cha, ae_int_t n, bool isupper, real_1d_array& b, ae_int_t& info);
spdmatrixcholeskysolvem
function

/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*M vectors X and B. It is the
"slow-but-feature-rich" version of the solver which estimates the
condition number of the system.

Algorithm features:
* automatic detection of degenerate cases
* O(M*N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: this function is NOT the most efficient linear solver provided
by ALGLIB. It estimates the condition number of the linear system, which
results in a significant performance penalty compared with the "fast"
version, which just calls the triangular solver. The amount of overhead
introduced depends on M (the larger - the more efficient).

This performance penalty is insignificant compared with the cost of a
large LU decomposition. However, if you call this function many times for
the same left side, this overhead BECOMES significant. It also becomes
significant for small-scale problems (N<50).

In such cases we strongly recommend the faster solver, the
SPDMatrixCholeskySolveMFast() function.

INPUT PARAMETERS
    CHA     -   array[0..N-1,0..N-1], Cholesky decomposition,
                SPDMatrixCholesky result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or badly conditioned. X is
                        filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task was solved
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M]:
                * for info>0  contains solution
                * for info=-3 filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixcholeskysolvem(real_2d_array cha, ae_int_t n, bool isupper, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
void alglib::smp_spdmatrixcholeskysolvem(real_2d_array cha, ae_int_t n, bool isupper, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
spdmatrixcholeskysolvemfast
function

/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*M vectors X and B. It is the
"fast-but-lightweight" version of the solver which just solves the linear
system, without any additional functions.

Algorithm features:
* O(M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional functionality

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky
                result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[N,M], right part
    M       -   right part size

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or badly conditioned. B is
                        filled by zeros in such cases.
                * -1    N<=0 was passed
                *  1    task was solved
    B       -   array[N,M]:
                * for info>0  overwritten by solution
                * for info=-3 filled by zeros

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixcholeskysolvemfast(real_2d_array cha, ae_int_t n, bool isupper, real_2d_array& b, ae_int_t m, ae_int_t& info);
void alglib::smp_spdmatrixcholeskysolvemfast(real_2d_array cha, ae_int_t n, bool isupper, real_2d_array& b, ae_int_t m, ae_int_t& info);
spdmatrixsolve
function

/*************************************************************************
Dense linear solver for A*x=b with an N*N real symmetric positive
definite matrix A, and N*1 vectors x and b. This is the
"slow-but-feature-rich" version of the solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: this function is NOT the most efficient linear solver provided
by ALGLIB. It estimates the condition number of the linear system, which
results in a significant performance penalty compared with the "fast"
version, which just performs Cholesky decomposition and calls the
triangular solver.

This performance penalty is especially visible in multithreaded mode,
because both condition number estimation and iterative refinement are
inherently sequential calculations.

Thus, if you need high performance and are pretty sure that your system
is well conditioned, we strongly recommend the faster solver, the
SPDMatrixSolveFast() function.

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that Cholesky decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has several
synchronization points which cannot be avoided. However, parallelism
starts to be profitable from N=500.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is very badly conditioned or non-SPD
                * -1    N<=0 was passed
                *  1    task is solved (but matrix A may be
                        ill-conditioned, check R1/RInf fields of Rep for
                        condition numbers)
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * info>0  => solution
                * info=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixsolve(real_2d_array a, ae_int_t n, bool isupper, real_1d_array b, ae_int_t& info, densesolverreport& rep, real_1d_array& x);
void alglib::smp_spdmatrixsolve(real_2d_array a, ae_int_t n, bool isupper, real_1d_array b, ae_int_t& info, densesolverreport& rep, real_1d_array& x);
spdmatrixsolvefast
function

/*************************************************************************
Dense linear solver for A*x=b with an N*N real symmetric positive
definite matrix A, and N*1 vectors x and b. This is the
"fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^3) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features like condition number estimation

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that Cholesky decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has several
synchronization points which cannot be avoided. However, parallelism
starts to be profitable from N=500.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    A is exactly singular or non-SPD
                * -1    N<=0 was passed
                *  1    task was solved
    B       -   array[N], it contains:
                * info>0  => solution
                * info=-3 => filled by zeros

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixsolvefast(real_2d_array a, ae_int_t n, bool isupper, real_1d_array& b, ae_int_t& info);
void alglib::smp_spdmatrixsolvefast(real_2d_array a, ae_int_t n, bool isupper, real_1d_array& b, ae_int_t& info);
spdmatrixsolvem
function

/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A,
and N*M vectors X and B. It is the "slow-but-feature-rich" version of
the solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3+M*N^2) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: this function is NOT the most efficient linear solver provided
by ALGLIB. It estimates the condition number of the linear system, which
results in a significant performance penalty compared with the "fast"
version, which just performs Cholesky decomposition and calls the
triangular solver.

This performance penalty is especially visible in multithreaded mode,
because both condition number estimation and iterative refinement are
inherently sequential calculations.

Thus, if you need high performance and are pretty sure that your system
is well conditioned, we strongly recommend the faster solver, the
SPDMatrixSolveMFast() function.

COMMERCIAL EDITION OF ALGLIB:

The commercial version of ALGLIB includes two important improvements of
this function, which can be used from C++ and C#:
* Intel MKL support (a lightweight Intel MKL is shipped with ALGLIB)
* multicore support

Intel MKL gives an approximately constant (with respect to the number of
worker threads) acceleration factor, which depends on the CPU being used,
the problem size, and the "baseline" ALGLIB edition used for comparison.

Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be:
* about 2-3x faster than ALGLIB for C++ without MKL
* about 7-10x faster than the "pure C#" edition of ALGLIB
The difference in performance is more striking on newer CPUs with support
for newer SIMD instructions. Generally, MKL accelerates any problem whose
size is at least 128, with best efficiency achieved for N larger than 512.

The commercial edition of ALGLIB also supports multithreaded acceleration
of this function. We should note that Cholesky decomposition is harder to
parallelize than, say, matrix-matrix product - this algorithm has several
synchronization points which cannot be avoided. However, parallelism
starts to be profitable from N=500.

In order to use multicore features you have to:
* use the commercial version of ALGLIB
* call this function with the "smp_" prefix, which indicates that the
  multicore code will be used

We recommend that you read the 'Working with commercial version' section
of the ALGLIB Reference Manual to find out how to use the
performance-related features provided by the commercial edition of ALGLIB.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Info    -   return code:
                * -3    matrix is very badly conditioned or non-SPD
                * -1    N<=0 was passed
                *  1    task is solved (but matrix A may be
                        ill-conditioned, check R1/RInf fields of Rep for
                        condition numbers)
    Rep     -   additional report, following fields are set:
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M], it contains:
                * info>0  => solution
                * info=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::spdmatrixsolvem(real_2d_array a, ae_int_t n, bool isupper, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
void alglib::smp_spdmatrixsolvem(real_2d_array a, ae_int_t n, bool isupper, real_2d_array b, ae_int_t m, ae_int_t& info, densesolverreport& rep, real_2d_array& x);
spdmatrixsolvemfast
function/************************************************************************* Dense solver for A*X=B with N*N symmetric positive definite matrix A, and N*M vectors X and B. It is a "fast-but-lightweight" version of the solver. Algorithm features: * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle * no additional time-consuming features COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! The difference in performance will be more striking on newer CPUs with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that Cholesky decomposition is harder ! to parallelize than, say, matrix-matrix product - this algorithm has ! several synchronization points which cannot be avoided. However, ! parallelism becomes profitable starting from N=500. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - which half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - return code: * -3 A is exactly singular * -1 N<=0 was passed * 1 task was solved B - array[N,M], it contains: * info>0 => solution * info=-3 => filled by zeros -- ALGLIB -- Copyright 17.03.2015 by Bochkanov Sergey *************************************************************************/void alglib::spdmatrixsolvemfast( real_2d_array a, ae_int_t n, bool isupper, real_2d_array& b, ae_int_t m, ae_int_t& info); void alglib::smp_spdmatrixsolvemfast( real_2d_array a, ae_int_t n, bool isupper, real_2d_array& b, ae_int_t m, ae_int_t& info);
directsparsesolvers
subpackagesolvesks_d_1 | Solving positive definite sparse system using Skyline (SKS) solver |
sparsesolverreport
class/************************************************************************* This structure is a sparse solver report. The following field can be accessed by users: * terminationtype - solver status (>0 for success, -3 on failure) *************************************************************************/class sparsesolverreport { ae_int_t terminationtype; };
sparsecholeskysolvesks
function/************************************************************************* Sparse linear solver for A*x=b with N*N real symmetric positive definite matrix A given by its Cholesky decomposition, and N*1 vectors x and b. IMPORTANT: this solver requires input matrix to be in the SKS (Skyline) sparse storage format. An exception will be generated if you pass matrix in some other format (HASH or CRS). INPUT PARAMETERS A - sparse NxN matrix stored in SKS format, must be NxN exactly N - size of A, N>0 IsUpper - which half of A is provided (another half is ignored) B - array[N], right part OUTPUT PARAMETERS Rep - solver report, following fields are set: * rep.terminationtype - solver status; >0 for success, set to -3 on failure (degenerate or non-SPD system). X - array[N], it contains: * rep.terminationtype>0 => solution * rep.terminationtype=-3 => filled by zeros -- ALGLIB -- Copyright 26.12.2017 by Bochkanov Sergey *************************************************************************/void alglib::sparsecholeskysolvesks( sparsematrix a, ae_int_t n, bool isupper, real_1d_array b, sparsesolverreport& rep, real_1d_array& x);
Examples: [1]
sparsesolvesks
function/************************************************************************* Sparse linear solver for A*x=b with N*N sparse real symmetric positive definite matrix A, N*1 vectors x and b. This solver converts input matrix to SKS format, performs Cholesky factorization using SKS Cholesky subroutine (works well for limited bandwidth matrices) and uses sparse triangular solvers to get solution of the original system. INPUT PARAMETERS A - sparse matrix, must be NxN exactly N - size of A, N>0 IsUpper - which half of A is provided (another half is ignored) B - array[0..N-1], right part OUTPUT PARAMETERS Rep - solver report, following fields are set: * rep.terminationtype - solver status; >0 for success, set to -3 on failure (degenerate or non-SPD system). X - array[N], it contains: * rep.terminationtype>0 => solution * rep.terminationtype=-3 => filled by zeros -- ALGLIB -- Copyright 26.12.2017 by Bochkanov Sergey *************************************************************************/void alglib::sparsesolvesks( sparsematrix a, ae_int_t n, bool isupper, real_1d_array b, sparsesolverreport& rep, real_1d_array& x);
Examples: [1]
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "solvers.h" using namespace alglib; int main(int argc, char **argv) { // // This example demonstrates creation/initialization of the sparse matrix // in the SKS (Skyline) storage format and solution using SKS-based direct // solver. // // First, we have to create matrix and initialize it. Matrix is created // in the SKS format, using fixed bandwidth initialization function. // Several points should be noted: // // 1. SKS sparse storage format also allows variable bandwidth matrices; // we just do not want to overcomplicate this example. // // 2. SKS format requires you to specify matrix geometry prior to // initialization of its elements with sparseset(). If you specified // bandwidth=1, you can not change your mind afterwards and call // sparseset() for non-existent elements. // // 3. Because SKS solver need just one triangle of SPD matrix, we can // omit initialization of the lower triangle of our matrix. // ae_int_t n = 4; ae_int_t bandwidth = 1; sparsematrix s; sparsecreatesksband(n, n, bandwidth, s); sparseset(s, 0, 0, 2.0); sparseset(s, 0, 1, 1.0); sparseset(s, 1, 1, 3.0); sparseset(s, 1, 2, 1.0); sparseset(s, 2, 2, 3.0); sparseset(s, 2, 3, 1.0); sparseset(s, 3, 3, 2.0); // // Now we have symmetric positive definite 4x4 system width bandwidth=1: // // [ 2 1 ] [ x0]] [ 4 ] // [ 1 3 1 ] [ x1 ] [ 10 ] // [ 1 3 1 ] * [ x2 ] = [ 15 ] // [ 1 2 ] [ x3 ] [ 11 ] // // After successful creation we can call SKS solver. // real_1d_array b = "[4,10,15,11]"; sparsesolverreport rep; real_1d_array x; bool isuppertriangle = true; sparsesolvesks(s, n, isuppertriangle, b, rep, x); printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000, 3.0000, 4.0000] return 0; }
elliptic
subpackageellipticintegrale
function/************************************************************************* Complete elliptic integral of the second kind Approximates the integral E(m) = integral from 0 to pi/2 of sqrt(1 - m*sin^2(t)) dt using the approximation P(x) - x*log(x)*Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 10000 2.1e-16 7.3e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/double alglib::ellipticintegrale(double m);
ellipticintegralk
function/************************************************************************* Complete elliptic integral of the first kind Approximates the integral K(m) = integral from 0 to pi/2 of dt / sqrt(1 - m*sin^2(t)) using the approximation P(x) - log(x)*Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::ellipticintegralk(double m);
ellipticintegralkhighprecision
function/************************************************************************* Complete elliptic integral of the first kind Approximates the integral K(m) = integral from 0 to pi/2 of dt / sqrt(1 - m*sin^2(t)) where m = 1 - m1, using the approximation P(x) - log(x)*Q(x). The argument m1 is used rather than m so that the logarithmic singularity at m = 1 will be shifted to the origin; this preserves maximum accuracy. K(0) = pi/2. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::ellipticintegralkhighprecision(double m1);
incompleteellipticintegrale
function/************************************************************************* Incomplete elliptic integral of the second kind Approximates the integral E(phi|m) = integral from 0 to phi of sqrt(1 - m*sin^2(t)) dt of amplitude phi and modulus m, using the arithmetic-geometric mean algorithm. ACCURACY: Tested at random arguments with phi in [-10, 10] and m in [0, 1]. Relative error: arithmetic domain # trials peak rms IEEE -10,10 150000 3.3e-15 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1993, 2000 by Stephen L. Moshier *************************************************************************/double alglib::incompleteellipticintegrale(double phi, double m);
incompleteellipticintegralk
function/************************************************************************* Incomplete elliptic integral of the first kind F(phi|m) Approximates the integral F(phi|m) = integral from 0 to phi of dt / sqrt(1 - m*sin^2(t)) of amplitude phi and modulus m, using the arithmetic-geometric mean algorithm. ACCURACY: Tested at random points with m in [0, 1] and phi as indicated. Relative error: arithmetic domain # trials peak rms IEEE -10,10 200000 7.4e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::incompleteellipticintegralk(double phi, double m);
evd
subpackageeigsubspacereport
class/************************************************************************* This object stores state of the subspace iteration algorithm. You should use ALGLIB functions to work with this object. *************************************************************************/class eigsubspacereport { ae_int_t iterationscount; };
eigsubspacestate
class/************************************************************************* This object stores state of the subspace iteration algorithm. You should use ALGLIB functions to work with this object. *************************************************************************/class eigsubspacestate { };
eigsubspacecreate
function/************************************************************************* This function initializes subspace iteration solver. This solver is used to solve symmetric real eigenproblems where just a few (top K) eigenvalues and corresponding eigenvectors are required. This solver can be significantly faster than complete EVD decomposition in the following cases: * when just a small fraction of top eigenpairs of a dense matrix is required. When K approaches N, this solver is slower than complete dense EVD * when problem matrix is sparse (and/or is not known explicitly, i.e. only matrix-matrix product can be performed) USAGE (explicit dense/sparse matrix): 1. User initializes algorithm state with eigsubspacecreate() call 2. [optional] User tunes solver parameters by calling eigsubspacesetcond() or other functions 3. User calls eigsubspacesolvedenses() or eigsubspacesolvesparses() methods, which take algorithm state and 2D array or alglib.sparsematrix object. USAGE (out-of-core mode): 1. User initializes algorithm state with eigsubspacecreate() call 2. [optional] User tunes solver parameters by calling eigsubspacesetcond() or other functions 3. User activates out-of-core mode of the solver and repeatedly calls communication functions in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: N - problem dimensionality, N>0 K - number of top eigenvectors to calculate, 0<K<=N. OUTPUT PARAMETERS: State - structure which stores algorithm state NOTE: if you solve many similar EVD problems you may find it useful to reuse previous subspace as warm-start point for new EVD problem. It can be done with eigsubspacesetwarmstart() function. 
-- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacecreate( ae_int_t n, ae_int_t k, eigsubspacestate& state);
eigsubspacecreatebuf
function/************************************************************************* Buffered version of constructor which aims to reuse previously allocated memory as much as possible. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacecreatebuf( ae_int_t n, ae_int_t k, eigsubspacestate state);
eigsubspaceooccontinue
function/************************************************************************* This function performs subspace iteration in the out-of-core mode. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/bool alglib::eigsubspaceooccontinue(eigsubspacestate state);
eigsubspaceoocgetrequestdata
function/************************************************************************* This function is used to retrieve information about out-of-core request sent by solver to user code: matrix X (array[N,RequestSize]) which has to be multiplied by out-of-core matrix A in a product A*X. This function returns just request data; in order to get size of the data prior to processing the request, use eigsubspaceoocgetrequestinfo(). It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver running in out-of-core mode X - possibly preallocated storage; reallocated if needed, left unchanged if large enough to store request data. OUTPUT PARAMETERS: X - array[N,RequestSize] or larger, leading rectangle is filled with dense matrix X. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspaceoocgetrequestdata( eigsubspacestate state, real_2d_array& x);
eigsubspaceoocgetrequestinfo
function/************************************************************************* This function is used to retrieve information about out-of-core request sent by solver to user code: request type (current version of the solver sends only requests for matrix-matrix products) and request size (size of the matrices being multiplied). This function returns just request metrics; in order to get contents of the matrices being multiplied, use eigsubspaceoocgetrequestdata(). It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver running in out-of-core mode OUTPUT PARAMETERS: RequestType - type of the request to process: * 0 - for matrix-matrix product A*X, with A being NxN matrix whose eigenvalues/vectors are needed, and X being the NxRequestSize matrix returned by eigsubspaceoocgetrequestdata(). RequestSize - size of the X matrix (number of columns), usually it is several times larger than the number of vectors K requested by the user. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspaceoocgetrequestinfo( eigsubspacestate state, ae_int_t& requesttype, ae_int_t& requestsize);
eigsubspaceoocsendresult
function/************************************************************************* This function is used to send user reply to out-of-core request sent by solver. Usually it is the product A*X for the matrix X returned by the solver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver running in out-of-core mode AX - array[N,RequestSize] or larger, leading rectangle is filled with product A*X. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspaceoocsendresult( eigsubspacestate state, real_2d_array ax);
eigsubspaceoocstart
function/************************************************************************* This function initiates out-of-core mode of subspace eigensolver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver object MType - matrix type: * 0 for real symmetric matrix (solver assumes that matrix being processed is symmetric; symmetric direct eigensolver is used for smaller subproblems arising during solution of larger "full" task) Future versions of ALGLIB may introduce support for other matrix types; for now, only symmetric eigenproblems are supported. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspaceoocstart(eigsubspacestate state, ae_int_t mtype);
eigsubspaceoocstop
function/************************************************************************* This function finalizes out-of-core mode of subspace eigensolver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver state OUTPUT PARAMETERS: W - array[K], depending on solver settings: * top K eigenvalues in descending order - if eigenvectors are returned in Z * zeros - if invariant subspace is returned in Z Z - array[N,K], depending on solver settings either: * matrix of eigenvectors found * orthogonal basis of K-dimensional invariant subspace Rep - report with additional parameters -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspaceoocstop( eigsubspacestate state, real_1d_array& w, real_2d_array& z, eigsubspacereport& rep);
eigsubspacesetcond
function/************************************************************************* This function sets stopping criteria for the solver: * error in eigenvector/value allowed by solver * maximum number of iterations to perform INPUT PARAMETERS: State - solver structure Eps - eps>=0, with non-zero value used to tell solver that it can stop after all eigenvalues converged with error roughly proportional to eps*MAX(LAMBDA_MAX), where LAMBDA_MAX is a maximum eigenvalue. Zero value means that no check for precision is performed. MaxIts - maxits>=0, with non-zero value used to tell solver that it can stop after maxits steps (no matter how precise current estimate is) NOTE: passing eps=0 and maxits=0 results in automatic selection of moderate eps as stopping criteria (1.0E-6 in current implementation, but it may change without notice). NOTE: very small values of eps are possible (say, 1.0E-12), although the larger the problem you solve (N and/or K), the harder it is to find precise eigenvectors because rounding errors tend to accumulate. NOTE: passing non-zero eps results in some performance penalty, roughly equal to 2N*(2K)^2 FLOPs per iteration. These additional computations are required in order to estimate current error in eigenvalues via Rayleigh-Ritz process. Most of this additional time is spent in construction of ~2Kx2K symmetric subproblem whose eigenvalues are checked with exact eigensolver. This additional time is negligible if you search for eigenvalues of the large dense matrix, but may become noticeable on highly sparse EVD problems, where cost of matrix-matrix product is low. If you set eps to exactly zero, Rayleigh-Ritz phase is completely turned off. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacesetcond( eigsubspacestate state, double eps, ae_int_t maxits);
eigsubspacesetwarmstart
function/************************************************************************* This function sets warm-start mode of the solver: next call to the solver will reuse previous subspace as warm-start point. It can significantly speed-up convergence when you solve many similar eigenproblems. INPUT PARAMETERS: State - solver structure UseWarmStart- either True or False -- ALGLIB -- Copyright 12.11.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacesetwarmstart( eigsubspacestate state, bool usewarmstart);
eigsubspacesolvedenses
function/************************************************************************* This function runs eigensolver for dense NxN symmetric matrix A, given by upper or lower triangle. This function can not process nonsymmetric matrices. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * multithreading support ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! For a situation when you need just a few eigenvectors (~1-10), ! multithreading typically gives sublinear (w.r.t. the number of cores) ! speedup. For larger problems it may give you nearly linear increase in ! performance. ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. Best results are achieved for high-dimensional problems ! (NVars is at least 256). ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: State - solver state A - array[N,N], symmetric NxN matrix given by one of its triangles IsUpper - whether upper or lower triangle of A is given (the other one is not referenced at all). OUTPUT PARAMETERS: W - array[K], top K eigenvalues ordered by descending absolute value Z - array[N,K], matrix of eigenvectors found Rep - report with additional parameters NOTE: internally this function allocates a copy of NxN dense A. You should take it into account when working with very large matrices occupying almost all RAM. 
-- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacesolvedenses( eigsubspacestate state, real_2d_array a, bool isupper, real_1d_array& w, real_2d_array& z, eigsubspacereport& rep); void alglib::smp_eigsubspacesolvedenses( eigsubspacestate state, real_2d_array a, bool isupper, real_1d_array& w, real_2d_array& z, eigsubspacereport& rep);
eigsubspacesolvesparses
function/************************************************************************* This function runs eigensolver for sparse NxN symmetric matrix A, given by upper or lower triangle. This function can not process nonsymmetric matrices. INPUT PARAMETERS: State - solver state A - sparse NxN symmetric matrix given by one of its triangles IsUpper - whether upper or lower triangle of A is given (the other one is not referenced at all). OUTPUT PARAMETERS: W - array[K], top K eigenvalues ordered by descending absolute value Z - array[N,K], matrix of eigenvectors found Rep - report with additional parameters -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/void alglib::eigsubspacesolvesparses( eigsubspacestate state, sparsematrix a, bool isupper, real_1d_array& w, real_2d_array& z, eigsubspacereport& rep);
hmatrixevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a Hermitian matrix The algorithm finds eigen pairs of a Hermitian matrix by reducing it to real tridiagonal form and using the QL/QR algorithm. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Note: eigenvectors of Hermitian matrix are defined up to multiplication by a complex number L, such that |L|=1. 
-- ALGLIB -- Copyright 2005, 23 March 2007 by Bochkanov Sergey *************************************************************************/bool alglib::hmatrixevd( complex_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, real_1d_array& d, complex_2d_array& z);
hmatrixevdi
function/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a Hermitian matrix with given indexes by using bisection and inverse iteration methods Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/bool alglib::hmatrixevdi( complex_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, ae_int_t i1, ae_int_t i2, real_1d_array& w, complex_2d_array& z);
hmatrixevdr
function/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a Hermitian matrix in a given half-interval (A, B] by using a bisection and inverse iteration Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half-interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval, M>=0 W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/bool alglib::hmatrixevdr( complex_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, double b1, double b2, ae_int_t& m, real_1d_array& w, complex_2d_array& z);
rmatrixevd
function/************************************************************************* Finding eigenvalues and eigenvectors of a general (unsymmetric) matrix COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. Speed-up provided by MKL for this particular problem (EVD) ! is really high, because MKL uses combination of (a) better low-level ! optimizations, and (b) better EVD algorithms. ! ! On one particular SSE-capable machine for N=1024, commercial MKL- ! -capable ALGLIB was: ! * 7-10 times faster than open source "generic C" version ! * 15-18 times faster than "pure C#" version ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. The algorithm finds eigenvalues and eigenvectors of a general matrix by using the QR algorithm with multiple shifts. The algorithm can find eigenvalues and both left and right eigenvectors. The right eigenvector is a vector x such that A*x = w*x, and the left eigenvector is a vector y such that y'*A = w*y' (here y' implies a complex conjugate transposition of vector y). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. VNeeded - flag controlling whether eigenvectors are needed or not. If VNeeded is equal to: * 0, eigenvectors are not returned; * 1, right eigenvectors are returned; * 2, left eigenvectors are returned; * 3, both left and right eigenvectors are returned. 
Output parameters: WR - real parts of eigenvalues. Array whose index ranges within [0..N-1]. WI - imaginary parts of eigenvalues. Array whose index ranges within [0..N-1]. VL, VR - arrays of left and right eigenvectors (if they are needed). If WI[i]=0, the respective eigenvalue is a real number, and it corresponds to the column number I of matrices VL/VR. If WI[i]>0, we have a pair of complex conjugate numbers with positive and negative imaginary parts: the first eigenvalue is WR[i] + sqrt(-1)*WI[i], the second eigenvalue is WR[i+1] + sqrt(-1)*WI[i+1], where WI[i]>0 and WI[i+1] = -WI[i] < 0. In that case, the eigenvector corresponding to the first eigenvalue is located in i and i+1 columns of matrices VL/VR (the column number i contains the real part, and the column number i+1 contains the imaginary part), and the vector corresponding to the second eigenvalue is a complex conjugate to the first vector. Arrays whose indexes range within [0..N-1, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm has not converged. Note 1: Some users may ask the following question: what if WI[N-1]>0? WI[N] must contain an eigenvalue which is complex conjugate to the N-th eigenvalue, but the array has only size N? The answer is as follows: such a situation cannot occur because the algorithm finds eigenvalues in conjugate pairs; therefore, if WI[i]>0, then i is strictly less than N-1. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms of linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. See also the InternalTREVC subroutine. The algorithm is based on the LAPACK 3.0 library. 
*************************************************************************/bool alglib::rmatrixevd( real_2d_array a, ae_int_t n, ae_int_t vneeded, real_1d_array& wr, real_1d_array& wi, real_2d_array& vl, real_2d_array& vr);
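The packed WR/WI/VL/VR convention above mirrors LAPACK's dgeev. As a quick illustration of the conjugate-pair structure (sketched with NumPy rather than ALGLIB; NumPy returns eigenvalues as complex numbers instead of packing real and imaginary parts into separate arrays):

```python
import numpy as np

# A 3x3 real matrix with one real eigenvalue (2) and one
# complex-conjugate pair (i, -i).
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

w, v = np.linalg.eig(A)           # complex eigenvalues / right eigenvectors
wr, wi = w.real, w.imag           # what rmatrixevd would return as WR/WI

# Complex eigenvalues of a real matrix always come in conjugate pairs:
# for every eigenvalue there is a matching complex conjugate in the set.
for x in w:
    assert np.any(np.isclose(w, np.conj(x)))

# Right eigenvectors satisfy A*x = w*x, column by column.
for j in range(3):
    assert np.allclose(A @ v[:, j], w[j] * v[:, j])
```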
smatrixevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a symmetric matrix The algorithm finds eigen pairs of a symmetric matrix by reducing it to tridiagonal form and using the QL/QR algorithm. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpper - storage format. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). 
-- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/bool alglib::smatrixevd( real_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, real_1d_array& d, real_2d_array& z);
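The contract above (eigenvalues in ascending order, eigenvectors stored in the columns of Z) is the same one NumPy's symmetric solver uses, so the behavior can be sketched without ALGLIB:

```python
import numpy as np

# Symmetric EVD sketch: like smatrixevd, numpy.linalg.eigh returns
# eigenvalues in ascending order and eigenvectors in the columns of z.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
d, z = np.linalg.eigh(A)

assert np.allclose(d, [1.0, 3.0])                 # ascending eigenvalues
assert np.allclose(A @ z[:, 0], d[0] * z[:, 0])   # A z = d z, column-wise
assert np.allclose(z @ np.diag(d) @ z.T, A)       # reconstruction
```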
smatrixevdi
function/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a symmetric matrix with given indexes by using bisection and inverse iteration methods. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/bool alglib::smatrixevdi( real_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, ae_int_t i1, ae_int_t i2, real_1d_array& w, real_2d_array& z);
smatrixevdr
function/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a symmetric matrix in a given half open interval (A, B] by using a bisection and inverse iteration Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half open interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval (M>=0). W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/bool alglib::smatrixevdr( real_2d_array a, ae_int_t n, ae_int_t zneeded, bool isupper, double b1, double b2, ae_int_t& m, real_1d_array& w, real_2d_array& z);
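The two subset modes above, by half-open value interval (smatrixevdr) and by index range (smatrixevdi), have direct analogues in SciPy's symmetric solver, which makes a convenient sketch (SciPy, not ALGLIB; SciPy's `subset_by_value` also searches the half-open interval (v_lo, v_hi], and `subset_by_index` is inclusive):

```python
import numpy as np
from scipy.linalg import eigh

# Diagonal test matrix with known eigenvalues 1, 2, 3, 4.
A = np.diag([1.0, 2.0, 3.0, 4.0])

# Analogue of smatrixevdr: eigenvalues in the half-open interval (1.5, 3.5].
w_range, _ = eigh(A, subset_by_value=(1.5, 3.5))
assert np.allclose(w_range, [2.0, 3.0])

# Analogue of smatrixevdi: eigenvalues with indexes 0..1 (ascending order).
w_index, _ = eigh(A, subset_by_index=(0, 1))
assert np.allclose(w_index, [1.0, 2.0])
```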
smatrixtdevd
function/************************************************************************* Finding the eigenvalues and eigenvectors of a tridiagonal symmetric matrix The algorithm finds the eigen pairs of a tridiagonal symmetric matrix by using a QL/QR algorithm with implicit shifts. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix; * 2, the eigenvectors of a tridiagonal matrix replace the square matrix Z; * 3, matrix Z contains the first row of the eigenvectors matrix. Z - if ZNeeded=1, Z contains the square matrix by which the eigenvectors are multiplied. Array whose indexes range within [0..N-1, 0..N-1]. 
Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the product of a given matrix (from the left) and the eigenvectors matrix (from the right); * 2, Z contains the eigenvectors. * 3, Z contains the first row of the eigenvectors matrix. If ZNeeded<3, Z is the array whose indexes range within [0..N-1, 0..N-1]. In that case, the eigenvectors are stored in the matrix columns. If ZNeeded=3, Z is the array whose indexes range within [0..0, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/bool alglib::smatrixtdevd( real_1d_array& d, real_1d_array e, ae_int_t n, ae_int_t zneeded, real_2d_array& z);
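The D/E input layout above (main diagonal of length N, secondary diagonal of length N-1) is the standard LAPACK tridiagonal format, which SciPy exposes directly. A sketch with SciPy (not ALGLIB):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Tridiagonal symmetric matrix with diagonal 2 and off-diagonal -1;
# its eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2).
d = np.array([2.0, 2.0, 2.0])    # main diagonal, length N
e = np.array([-1.0, -1.0])       # secondary diagonal, length N-1

w, z = eigh_tridiagonal(d, e)    # eigenvalues ascending, vectors in columns
assert np.allclose(w, [2.0 - np.sqrt(2.0), 2.0, 2.0 + np.sqrt(2.0)])

# Columns of z are eigenvectors of the full tridiagonal matrix T.
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.allclose(T @ z, z @ np.diag(w))
```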
smatrixtdevdi
function/************************************************************************* Subroutine for finding tridiagonal matrix eigenvalues/vectors with given indexes (in ascending order) by using the bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix. N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and Nx(I2-I1) matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..I2-I1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..I2-I1]. Result: True, if successful. In that case, D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. 
False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 25.12.2005 by Bochkanov Sergey *************************************************************************/bool alglib::smatrixtdevdi( real_1d_array& d, real_1d_array e, ae_int_t n, ae_int_t zneeded, ae_int_t i1, ae_int_t i2, real_2d_array& z);
smatrixtdevdr
function/************************************************************************* Subroutine for finding the tridiagonal matrix eigenvalues/vectors in a given half-interval (A, B] by using bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix, N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. A, B - half-interval (A, B] to search eigenvalues in. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..M-1]. M - number of eigenvalues found in the given half-interval (M>=0). Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and NxM matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..M-1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..M-1]. Result: True, if successful. In that case, M contains the number of eigenvalues in the given half-interval (could be equal to 0), D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. 
False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 31.03.2008 by Bochkanov Sergey *************************************************************************/bool alglib::smatrixtdevdr( real_1d_array& d, real_1d_array e, ae_int_t n, ae_int_t zneeded, double a, double b, ae_int_t& m, real_2d_array& z);
expintegrals
subpackageexponentialintegralei
function/************************************************************************* Exponential integral Ei(x) Ei(x) = (principal value) Integral( exp(t)/t, t = -inf..x ) Not defined for x <= 0. See also expn.c. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,100 50000 8.6e-16 1.3e-16 Cephes Math Library Release 2.8: May, 1999 Copyright 1999 by Stephen L. Moshier *************************************************************************/double alglib::exponentialintegralei(double x);
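A quick numerical sanity check of this function's defining properties, sketched with SciPy's `expi` rather than ALGLIB: the value at x=1 is the well-known constant Ei(1) ≈ 1.8951, and the derivative of Ei is exp(x)/x.

```python
import numpy as np
from scipy.special import expi

x = 1.0
val = expi(x)                     # Ei(1) ~ 1.8951178...

# d/dx Ei(x) = exp(x)/x; verify with a central difference.
h = 1e-6
deriv = (expi(x + h) - expi(x - h)) / (2 * h)
assert abs(deriv - np.exp(x) / x) < 1e-6
assert abs(val - 1.8951178163559368) < 1e-9
```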
exponentialintegralen
function/************************************************************************* Exponential integral En(x) Evaluates the exponential integral En(x) = Integral( exp(-x*t)/t^n, t = 1..inf ) Both n and x must be nonnegative. The routine employs either a power series, a continued fraction, or an asymptotic formula depending on the relative values of n and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 10000 1.7e-15 3.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 2000 by Stephen L. Moshier *************************************************************************/double alglib::exponentialintegralen(double x, ae_int_t n);
fdistr
subpackagefcdistribution
function/************************************************************************* Complemented F distribution Returns the area from x to infinity under the F density function (also known as Snedecor's density or the variance ratio density): 1-P(x) = 1/B(a,b) * Integral( t^(a-1) * (1-t)^(b-1), t = x..inf ) The incomplete beta integral is used, according to the formula 1-P(x) = incbet( df2/2, df1/2, df2/(df2 + df1*x) ). ACCURACY: Tested at random points (a,b,x) in the indicated intervals. x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 1,100 100000 3.7e-14 5.9e-16 IEEE 1,5 1,100 100000 8.0e-15 1.6e-15 IEEE 0,1 1,10000 100000 1.8e-11 3.5e-13 IEEE 1,5 1,10000 100000 2.0e-11 3.0e-12 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::fcdistribution(ae_int_t a, ae_int_t b, double x);
fdistribution
function/************************************************************************* F distribution Returns the area from zero to x under the F density function (also known as Snedecor's density or the variance ratio density). This is the density of x = (u1/df1)/(u2/df2), where u1 and u2 are random variables having Chi square distributions with df1 and df2 degrees of freedom, respectively. The incomplete beta integral is used, according to the formula P(x) = incbet( df1/2, df2/2, df1*x/(df2 + df1*x) ). The arguments a and b are greater than zero, and x is nonnegative. ACCURACY: Tested at random points (a,b,x). x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 0,100 100000 9.8e-15 1.7e-15 IEEE 1,5 0,100 100000 6.5e-15 3.5e-16 IEEE 0,1 1,10000 100000 2.2e-11 3.3e-12 IEEE 1,5 1,10000 100000 1.1e-11 1.7e-13 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::fdistribution(ae_int_t a, ae_int_t b, double x);
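The incomplete-beta identity quoted above can be verified numerically with SciPy (not ALGLIB): the regularized incomplete beta at the transformed argument equals the F cumulative distribution.

```python
from scipy.special import betainc
from scipy.stats import f

# P(x) = incbet( df1/2, df2/2, df1*x/(df2 + df1*x) )
df1, df2, x = 5.0, 10.0, 2.5
p = betainc(df1 / 2, df2 / 2, df1 * x / (df2 + df1 * x))

# Must agree with the F cdf to machine precision.
assert abs(p - f.cdf(x, df1, df2)) < 1e-12
```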
invfdistribution
function/************************************************************************* Inverse of complemented F distribution Finds the F density argument x such that the integral from x to infinity of the F density is equal to the given probability p. This is accomplished using the inverse beta integral function and the relations z = incbi( df2/2, df1/2, p ) x = df2 (1-z) / (df1 z). Note: the following relations hold for the inverse of the uncomplemented F distribution: z = incbi( df1/2, df2/2, p ) x = df2 z / (df1 (1-z)). ACCURACY: Tested at random points (a,b,p). a,b Relative error: arithmetic domain # trials peak rms For p between .001 and 1: IEEE 1,100 100000 8.3e-15 4.7e-16 IEEE 1,10000 100000 2.1e-11 1.4e-13 For p between 10^-6 and 10^-3: IEEE 1,100 50000 1.3e-12 8.4e-15 IEEE 1,10000 50000 3.0e-12 4.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::invfdistribution(ae_int_t a, ae_int_t b, double y);
fft
subpackagefft_complex_d1 | Complex FFT: simple example
fft_complex_d2 | Complex FFT: advanced example
fft_real_d1 | Real FFT: simple example
fft_real_d2 | Real FFT: advanced example
fftc1d
function/************************************************************************* 1-dimensional complex FFT. Array size N may be arbitrary number (composite or prime). Composite N's are handled with a cache-oblivious variation of the Cooley-Tukey algorithm. Small prime-factors are transformed using hard-coded codelets (similar to FFTW codelets, but without low-level optimization), large prime-factors are handled with Bluestein's algorithm. The fastest transforms are for smooth N's (prime factors 2, 3 and 5 only), with powers of 2 being fastest of all. When N has prime factors larger than these, but orders of magnitude smaller than N, computations will be about 4 times slower than for nearby highly composite N's. When N itself is prime, speed will be about 6 times lower. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS A - DFT of an input array, array[0..N-1] A_out[j] = SUM(A_in[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::fftc1d(complex_1d_array& a); void alglib::fftc1d(complex_1d_array& a, ae_int_t n);
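The DFT formula above uses the same sign and normalization convention as NumPy's forward FFT, so the definition can be checked directly against a naive O(N^2) evaluation (NumPy sketch, not ALGLIB):

```python
import numpy as np

# Same input as the fftc1d example: [1i,1i,1i,1i] -> [4i,0,0,0].
a = np.array([1j, 1j, 1j, 1j])
f = np.fft.fft(a)
assert np.allclose(f, [4j, 0, 0, 0])

# Naive evaluation of A_out[j] = SUM(A_in[k]*exp(-2*pi*i*j*k/N)).
n = len(a)
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dft = np.exp(-2j * np.pi * j * k / n) @ a
assert np.allclose(dft, f)
```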
fftc1dinv
function/************************************************************************* 1-dimensional complex inverse FFT. Array size N may be arbitrary number (composite or prime). Algorithm has O(N*logN) complexity for any N (composite or prime). See FFTC1D() description for more information about algorithm performance. INPUT PARAMETERS A - array[0..N-1] - complex array to be transformed N - problem size OUTPUT PARAMETERS A - inverse DFT of an input array, array[0..N-1] A_out[j] = SUM(A_in[k]/N*exp(+2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::fftc1dinv(complex_1d_array& a); void alglib::fftc1dinv(complex_1d_array& a, ae_int_t n);
fftr1d
function/************************************************************************* 1-dimensional real FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS F - DFT of an input array, array[0..N-1] F[j] = SUM(A[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) NOTE: F[] satisfies symmetry property F[k] = conj(F[N-k]), so just one half of the array is usually needed. But for convenience the subroutine returns the full complex array (with frequencies above N/2), so its result may be used by other FFT-related subroutines. -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void alglib::fftr1d(real_1d_array a, complex_1d_array& f); void alglib::fftr1d(real_1d_array a, ae_int_t n, complex_1d_array& f);
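The symmetry property F[k] = conj(F[N-k]) for real input is easy to observe numerically; a NumPy sketch (not ALGLIB), also showing that the non-redundant half has floor(N/2)+1 elements:

```python
import numpy as np

# Full complex DFT of a real signal.
a = np.array([1.0, 2.0, 3.0, 4.0])
f = np.fft.fft(a)
n = len(a)

# Conjugate symmetry: F[k] = conj(F[N-k]) for k = 1..N-1.
for k in range(1, n):
    assert np.isclose(f[k], np.conj(f[n - k]))

# numpy.fft.rfft returns exactly the non-redundant half (N//2 + 1 elements).
assert np.allclose(np.fft.rfft(a), f[: n // 2 + 1])
```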
fftr1dinv
function/************************************************************************* 1-dimensional real inverse FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS F - array[0..floor(N/2)] - frequencies from forward real FFT N - problem size OUTPUT PARAMETERS A - inverse DFT of an input array, array[0..N-1] NOTE: F[] should satisfy symmetry property F[k] = conj(F[N-k]), so just one half of frequencies array is needed - elements from 0 to floor(N/2). F[0] is ALWAYS real. If N is even F[floor(N/2)] is real too. If N is odd, then F[floor(N/2)] has no special properties. Relying on properties noted above, FFTR1DInv subroutine uses only elements from 0th to floor(N/2)-th. It ignores imaginary part of F[0], and in case N is even it ignores imaginary part of F[floor(N/2)] too. When you call this function using full arguments list - "FFTR1DInv(F,N,A)" - you can pass either a frequencies array with N elements or a reduced array with roughly N/2 elements - subroutine will successfully transform both. If you call this function using reduced arguments list - "FFTR1DInv(F,A)" - you must pass FULL array with N elements (although elements above floor(N/2) are still not used) because array size is used to automatically determine FFT length. -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/void alglib::fftr1dinv(complex_1d_array f, real_1d_array& a); void alglib::fftr1dinv(complex_1d_array f, ae_int_t n, real_1d_array& a);
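The half-spectrum reconstruction described above — N real samples recovered from the floor(N/2)+1 leading frequencies, with N passed explicitly because the half-array length alone is ambiguous — can be sketched with NumPy's `irfft` (not ALGLIB):

```python
import numpy as np

# Odd-length real signal; its reduced spectrum has floor(5/2)+1 = 3 elements.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
half = np.fft.rfft(x)
assert len(half) == 3

# Passing n explicitly (as in fftr1dinv(F, N, A)) recovers the signal;
# without it, a half-array length of 3 could mean N=4 or N=5.
assert np.allclose(np.fft.irfft(half, n=len(x)), x)
```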
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"
using namespace alglib;

int main(int argc, char **argv)
{
    //
    // first we demonstrate forward FFT:
    // [1i,1i,1i,1i] is converted to [4i, 0, 0, 0]
    //
    complex_1d_array z = "[1i,1i,1i,1i]";
    fftc1d(z);
    printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [4i,0,0,0]

    //
    // now we convert [4i, 0, 0, 0] back to [1i,1i,1i,1i]
    // with backward FFT
    //
    fftc1dinv(z);
    printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [1i,1i,1i,1i]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"
using namespace alglib;

int main(int argc, char **argv)
{
    //
    // first we demonstrate forward FFT:
    // [0,1,0,1i] is converted to [1+1i, -1-1i, -1-1i, 1+1i]
    //
    complex_1d_array z = "[0,1,0,1i]";
    fftc1d(z);
    printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [1+1i, -1-1i, -1-1i, 1+1i]

    //
    // now we convert result back with backward FFT
    //
    fftc1dinv(z);
    printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [0,1,0,1i]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"
using namespace alglib;

int main(int argc, char **argv)
{
    //
    // first we demonstrate forward FFT:
    // [1,1,1,1] is converted to [4, 0, 0, 0]
    //
    real_1d_array x = "[1,1,1,1]";
    complex_1d_array f;
    real_1d_array x2;
    fftr1d(x, f);
    printf("%s\n", f.tostring(3).c_str()); // EXPECTED: [4,0,0,0]

    //
    // now we convert [4, 0, 0, 0] back to [1,1,1,1]
    // with backward FFT
    //
    fftr1dinv(f, x2);
    printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,1,1,1]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"
using namespace alglib;

int main(int argc, char **argv)
{
    //
    // first we demonstrate forward FFT:
    // [1,2,3,4] is converted to [10, -2+2i, -2, -2-2i]
    //
    // note that output array is self-adjoint:
    // * f[0] = conj(f[0])
    // * f[1] = conj(f[3])
    // * f[2] = conj(f[2])
    //
    real_1d_array x = "[1,2,3,4]";
    complex_1d_array f;
    real_1d_array x2;
    fftr1d(x, f);
    printf("%s\n", f.tostring(3).c_str()); // EXPECTED: [10, -2+2i, -2, -2-2i]

    //
    // now we convert [10, -2+2i, -2, -2-2i] back to [1,2,3,4]
    //
    fftr1dinv(f, x2);
    printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,2,3,4]

    //
    // remember that F is self-adjoint? It means that we can pass just half
    // (slightly larger than half) of F to inverse real FFT and still get our result.
    //
    // I.e. instead of [10, -2+2i, -2, -2-2i] we pass just [10, -2+2i, -2] and everything works!
    //
    // NOTE: in this case we should explicitly pass array length (which is 4) to ALGLIB;
    // if not, it will automatically use array length to determine FFT size and
    // will erroneously make half-length FFT.
    //
    f = "[10, -2+2i, -2]";
    fftr1dinv(f, 4, x2);
    printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,2,3,4]
    return 0;
}
fht
subpackagefhtr1d
function/************************************************************************* 1-dimensional Fast Hartley Transform. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS A - FHT of an input array, array[0..N-1], A_out[k] = sum(A_in[j]*(cos(2*pi*j*k/N)+sin(2*pi*j*k/N)), j=0..N-1) -- ALGLIB -- Copyright 04.06.2009 by Bochkanov Sergey *************************************************************************/void alglib::fhtr1d(real_1d_array& a, ae_int_t n);
fhtr1dinv
function/************************************************************************* 1-dimensional inverse FHT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real array to be transformed N - problem size OUTPUT PARAMETERS A - inverse FHT of an input array, array[0..N-1] -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::fhtr1dinv(real_1d_array& a, ae_int_t n);
filters
subpackagefilters_d_ema | EMA(alpha) filter
filters_d_lrma | LRMA(k) filter
filters_d_sma | SMA(k) filter
filterema
function/************************************************************************* Filters: exponential moving averages. This filter replaces array by results of EMA(alpha) filter. EMA(alpha) is defined as filter which replaces X[] by S[]: S[0] = X[0] S[t] = alpha*X[t] + (1-alpha)*S[t-1] INPUT PARAMETERS: X - array[N], array to process. It can be larger than N, in this case only first N points are processed. N - points count, N>=0 alpha - 0<alpha<=1, smoothing parameter. OUTPUT PARAMETERS: X - array, whose first N elements were processed with EMA(alpha) NOTE 1: this function uses efficient in-place algorithm which does not allocate temporary arrays. NOTE 2: this algorithm uses BOTH previous points and current one, i.e. new value of X[i] depends on BOTH the previous points and X[i] itself. NOTE 3: technical analysis users quite often work with EMA coefficient expressed in DAYS instead of fractions. If you want to calculate EMA(N), where N is a number of days, you can use alpha=2/(N+1). -- ALGLIB -- Copyright 25.10.2011 by Bochkanov Sergey *************************************************************************/void alglib::filterema(real_1d_array& x, double alpha); void alglib::filterema(real_1d_array& x, ae_int_t n, double alpha);
Examples: [1]
filterlrma
function/************************************************************************* Filters: linear regression moving averages. This filter replaces array by results of LRMA(K) filter. LRMA(K) is defined as filter which, for each data point, builds a linear regression model using K previous points (point itself is included in these K points) and calculates the value of this linear model at the point in question. INPUT PARAMETERS: X - array[N], array to process. It can be larger than N, in this case only first N points are processed. N - points count, N>=0 K - K>=1 (K can be larger than N , such cases will be correctly handled). Window width. K=1 corresponds to identity transformation (nothing changes). OUTPUT PARAMETERS: X - array, whose first N elements were processed with LRMA(K) NOTE 1: this function uses efficient in-place algorithm which does not allocate temporary arrays. NOTE 2: this algorithm makes only one pass through array and uses running sum to speed-up calculation of the averages. Additional measures are taken to ensure that running sum on a long sequence of zero elements will be correctly reset to zero even in the presence of round-off error. NOTE 3: this is an unsymmetric version of the algorithm, which does NOT average points after the current one. Only X[i], X[i-1], ... are used when calculating new value of X[i]. We should also note that this algorithm uses BOTH previous points and current one, i.e. new value of X[i] depends on BOTH previous point and X[i] itself. -- ALGLIB -- Copyright 25.10.2011 by Bochkanov Sergey *************************************************************************/void alglib::filterlrma(real_1d_array& x, ae_int_t k); void alglib::filterlrma(real_1d_array& x, ae_int_t n, ae_int_t k);
Examples: [1]
filtersma
function/*************************************************************************
Filters: simple moving averages (unsymmetric).

This filter replaces the array by the results of an SMA(K) filter. SMA(K)
is defined as a filter which averages at most K previous points (previous -
not points AROUND the central point), or fewer in the case of the first
K-1 points.

INPUT PARAMETERS:
    X   -   array[N], array to process. It can be larger than N, in this
            case only the first N points are processed.
    N   -   points count, N>=0
    K   -   K>=1 (K can be larger than N, such cases will be correctly
            handled). Window width. K=1 corresponds to identity
            transformation (nothing changes).

OUTPUT PARAMETERS:
    X   -   array whose first N elements were processed with SMA(K)

NOTE 1: this function uses an efficient in-place algorithm which does not
        allocate temporary arrays.

NOTE 2: this algorithm makes only one pass through the array and uses a
        running sum to speed up calculation of the averages. Additional
        measures are taken to ensure that a running sum on a long sequence
        of zero elements will be correctly reset to zero even in the
        presence of round-off error.

NOTE 3: this is the unsymmetric version of the algorithm, which does NOT
        average points after the current one. Only X[i], X[i-1], ... are
        used when calculating the new value of X[i]. We should also note
        that this algorithm uses BOTH previous points and the current one,
        i.e. the new value of X[i] depends on BOTH the previous points and
        X[i] itself.

  -- ALGLIB --
     Copyright 25.10.2011 by Bochkanov Sergey
*************************************************************************/
void alglib::filtersma(real_1d_array& x, ae_int_t k);
void alglib::filtersma(real_1d_array& x, ae_int_t n, ae_int_t k);
Examples: [1]
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // Here we demonstrate EMA(0.5) filtering for time series.
    //
    real_1d_array x = "[5,6,7,8]";

    //
    // Apply filter.
    // We should get [5, 5.5, 6.25, 7.125] as result
    //
    filterema(x, 0.5);
    printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [5,5.5,6.25,7.125]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // Here we demonstrate LRMA(3) filtering for time series.
    //
    real_1d_array x = "[7,8,8,9,12,12]";

    //
    // Apply filter.
    // We should get [7.0000, 8.0000, 8.1667, 8.8333, 11.6667, 12.5000] as result
    //
    filterlrma(x, 3);
    printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [7.0000,8.0000,8.1667,8.8333,11.6667,12.5000]
    return 0;
}
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // Here we demonstrate SMA(k) filtering for time series.
    //
    real_1d_array x = "[5,6,7,8]";

    //
    // Apply filter.
    // We should get [5, 5.5, 6.5, 7.5] as result
    //
    filtersma(x, 2);
    printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [5,5.5,6.5,7.5]
    return 0;
}
fresnel
subpackagefresnelintegral
function/*************************************************************************
Fresnel integral

Evaluates the Fresnel integrals

            x
            -
           | |
    C(x) = |    cos(pi/2 t**2) dt,
         | |
          -
          0

            x
            -
           | |
    S(x) = |    sin(pi/2 t**2) dt.
         | |
          -
          0

The integrals are evaluated by a power series for x < 1. For x >= 1
auxiliary functions f(x) and g(x) are employed such that

    C(x) = 0.5 + f(x) sin( pi/2 x**2 ) - g(x) cos( pi/2 x**2 )
    S(x) = 0.5 - f(x) cos( pi/2 x**2 ) - g(x) sin( pi/2 x**2 )

ACCURACY:
    Relative error.

    Arithmetic  function   domain     # trials      peak         rms
      IEEE       S(x)      0, 10       10000       2.0e-15     3.2e-16
      IEEE       C(x)      0, 10       10000       1.8e-15     3.3e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier
*************************************************************************/
void alglib::fresnelintegral(double x, double& c, double& s);
gammafunc
subpackagegammafunction
function/*************************************************************************
Gamma function

Input parameters:
    X   -   argument

Domain:
    0 < X < 171.6
    -170 < X < 0, X is not an integer.

Relative error:
    arithmetic   domain      # trials      peak         rms
       IEEE    -170,  -33     20000       2.3e-15     3.3e-16
       IEEE     -33,   33     20000       9.4e-16     2.2e-16
       IEEE      33, 171.6    20000       2.3e-15     3.2e-16

Cephes Math Library Release 2.8:  June, 2000
Original copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier
Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007).
*************************************************************************/
double alglib::gammafunction(double x);
lngamma
function/*************************************************************************
Natural logarithm of the gamma function

Input parameters:
    X   -   argument

Result:
    logarithm of the absolute value of Gamma(X).

Output parameters:
    SgnGam  -   sign(Gamma(X))

Domain:
    0 < X < 2.55e305
    -2.55e305 < X < 0, X is not an integer.

ACCURACY:
    arithmetic      domain        # trials     peak        rms
       IEEE    0, 3                 28000    5.4e-16     1.1e-16
       IEEE    2.718, 2.556e305     40000    3.5e-16     8.3e-17

The error criterion was relative when the function magnitude was greater
than one but absolute when it was less than one. The following test used
the relative error criterion, though at certain points the relative error
could be much higher than indicated.
       IEEE    -200, -4             10000    4.8e-16     1.3e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier
Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007).
*************************************************************************/
double alglib::lngamma(double x, double& sgngam);
gkq
subpackagegkqgenerategaussjacobi
function/*************************************************************************
Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Jacobi quadrature on
[-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta).

INPUT PARAMETERS:
    N       -   number of Kronrod nodes, must be odd number, >=3.
    Alpha   -   power-law coefficient, Alpha>-1
    Beta    -   power-law coefficient, Beta>-1

OUTPUT PARAMETERS:
    Info    -   error code:
                * -5    no real and positive Gauss-Kronrod formula can be
                        created for such a weight function with the given
                        number of nodes.
                * -4    an error was detected when calculating
                        weights/nodes. Alpha or Beta are too close to -1 to
                        obtain weights/nodes with high enough accuracy, or
                        maybe N is too large. Try to use the multiple
                        precision version.
                * -3    internal eigenproblem solver hasn't converged
                * -1    incorrect N was passed
                * +1    OK
                * +2    OK, but the quadrature rule has exterior nodes,
                        x[0]<-1 or x[n-1]>+1
    X       -   array[0..N-1] - array of quadrature nodes, ordered in
                ascending order.
    WKronrod -  array[0..N-1] - Kronrod weights
    WGauss  -   array[0..N-1] - Gauss weights (interleaved with zeros
                corresponding to extended Kronrod nodes).

  -- ALGLIB --
     Copyright 12.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gkqgenerategaussjacobi(
    ae_int_t n,
    double alpha,
    double beta,
    ae_int_t& info,
    real_1d_array& x,
    real_1d_array& wkronrod,
    real_1d_array& wgauss);
gkqgenerategausslegendre
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Legendre quadrature with N points. GKQLegendreCalc (calculation) or GKQLegendreTbl (precomputed table) is used depending on machine precision and number of nodes. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::gkqgenerategausslegendre( ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& wkronrod, real_1d_array& wgauss);
gkqgeneraterec
function/*************************************************************************
Computation of nodes and weights of a Gauss-Kronrod quadrature formula

The algorithm generates the N-point Gauss-Kronrod quadrature formula with
weight function given by coefficients alpha and beta of a recurrence
relation which generates a system of orthogonal polynomials:

    P-1(x)   =  0
    P0(x)    =  1
    Pn+1(x)  =  (x-alpha(n))*Pn(x)  -  beta(n)*Pn-1(x)

and zeroth moment Mu0

    Mu0 = integral(W(x)dx,a,b)

INPUT PARAMETERS:
    Alpha   -   alpha coefficients, array[0..floor(3*K/2)].
    Beta    -   beta coefficients, array[0..ceil(3*K/2)]. Beta[0] is not
                used and may be arbitrary. Beta[I]>0.
    Mu0     -   zeroth moment of the weight function.
    N       -   number of nodes of the Gauss-Kronrod quadrature formula,
                N >= 3, N = 2*K+1.

OUTPUT PARAMETERS:
    Info    -   error code:
                * -5    no real and positive Gauss-Kronrod formula can be
                        created for such a weight function with the given
                        number of nodes.
                * -4    N is too large, the task may be ill conditioned -
                        x[i]=x[i+1] found.
                * -3    internal eigenproblem solver hasn't converged
                * -2    Beta[i]<=0
                * -1    incorrect N was passed
                * +1    OK
    X       -   array[0..N-1] - array of quadrature nodes, in ascending
                order.
    WKronrod -  array[0..N-1] - Kronrod weights
    WGauss  -   array[0..N-1] - Gauss weights (interleaved with zeros
                corresponding to extended Kronrod nodes).

  -- ALGLIB --
     Copyright 08.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gkqgeneraterec(
    real_1d_array alpha,
    real_1d_array beta,
    double mu0,
    ae_int_t n,
    ae_int_t& info,
    real_1d_array& x,
    real_1d_array& wkronrod,
    real_1d_array& wgauss);
gkqlegendrecalc
function/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points. Reduction to tridiagonal eigenproblem is used. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::gkqlegendrecalc( ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& wkronrod, real_1d_array& wgauss);
gkqlegendretbl
function/*************************************************************************
Returns Gauss and Gauss-Kronrod nodes for quadrature with N points using a
pre-calculated table. Nodes/weights were computed with accuracy up to
1.0E-32 (if the MPFR version of ALGLIB is used). In standard double
precision, accuracy reduces to about 2.0E-16 (depending on your compiler's
handling of long floating point constants).

INPUT PARAMETERS:
    N   -   number of Kronrod nodes. N can be 15, 21, 31, 41, 51, 61.

OUTPUT PARAMETERS:
    X       -   array[0..N-1] - array of quadrature nodes, ordered in
                ascending order.
    WKronrod -  array[0..N-1] - Kronrod weights
    WGauss  -   array[0..N-1] - Gauss weights (interleaved with zeros
                corresponding to extended Kronrod nodes).

  -- ALGLIB --
     Copyright 12.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gkqlegendretbl(
    ae_int_t n,
    real_1d_array& x,
    real_1d_array& wkronrod,
    real_1d_array& wgauss,
    double& eps);
gq
subpackagegqgenerategausshermite
function/*************************************************************************
Returns nodes/weights for Gauss-Hermite quadrature on (-inf,+inf) with
weight function W(x)=Exp(-x*x)

INPUT PARAMETERS:
    N   -   number of nodes, >=1

OUTPUT PARAMETERS:
    Info    -   error code:
                * -4    an error was detected when calculating
                        weights/nodes. Perhaps N is too large. Try to use
                        the multiple precision version.
                * -3    internal eigenproblem solver hasn't converged
                * -1    incorrect N/Alpha was passed
                * +1    OK
    X   -   array[0..N-1] - array of quadrature nodes, in ascending order.
    W   -   array[0..N-1] - array of quadrature weights.

  -- ALGLIB --
     Copyright 12.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gqgenerategausshermite(
    ae_int_t n,
    ae_int_t& info,
    real_1d_array& x,
    real_1d_array& w);
gqgenerategaussjacobi
function/*************************************************************************
Returns nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight
function W(x)=Power(1-x,Alpha)*Power(1+x,Beta).

INPUT PARAMETERS:
    N       -   number of nodes, >=1
    Alpha   -   power-law coefficient, Alpha>-1
    Beta    -   power-law coefficient, Beta>-1

OUTPUT PARAMETERS:
    Info    -   error code:
                * -4    an error was detected when calculating
                        weights/nodes. Alpha or Beta are too close to -1 to
                        obtain weights/nodes with high enough accuracy, or
                        maybe N is too large. Try to use the multiple
                        precision version.
                * -3    internal eigenproblem solver hasn't converged
                * -1    incorrect N/Alpha/Beta was passed
                * +1    OK
    X   -   array[0..N-1] - array of quadrature nodes, in ascending order.
    W   -   array[0..N-1] - array of quadrature weights.

  -- ALGLIB --
     Copyright 12.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gqgenerategaussjacobi(
    ae_int_t n,
    double alpha,
    double beta,
    ae_int_t& info,
    real_1d_array& x,
    real_1d_array& w);
gqgenerategausslaguerre
function/*************************************************************************
Returns nodes/weights for Gauss-Laguerre quadrature on [0,+inf) with weight
function W(x)=Power(x,Alpha)*Exp(-x)

INPUT PARAMETERS:
    N       -   number of nodes, >=1
    Alpha   -   power-law coefficient, Alpha>-1

OUTPUT PARAMETERS:
    Info    -   error code:
                * -4    an error was detected when calculating
                        weights/nodes. Alpha is too close to -1 to obtain
                        weights/nodes with high enough accuracy, or maybe N
                        is too large. Try to use the multiple precision
                        version.
                * -3    internal eigenproblem solver hasn't converged
                * -1    incorrect N/Alpha was passed
                * +1    OK
    X   -   array[0..N-1] - array of quadrature nodes, in ascending order.
    W   -   array[0..N-1] - array of quadrature weights.

  -- ALGLIB --
     Copyright 12.05.2009 by Bochkanov Sergey
*************************************************************************/
void alglib::gqgenerategausslaguerre(
    ae_int_t n,
    double alpha,
    ae_int_t& info,
    real_1d_array& x,
    real_1d_array& w);
gqgenerategausslegendre
function/************************************************************************* Returns nodes/weights for Gauss-Legendre quadrature on [-1,1] with N nodes. INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/void alglib::gqgenerategausslegendre( ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& w);
gqgenerategausslobattorec
function/************************************************************************* Computation of nodes and weights for a Gauss-Lobatto quadrature formula The algorithm generates the N-point Gauss-Lobatto quadrature formula with weight function given by coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-2], alpha coefficients Beta - array[0..N-2], beta coefficients. Zero-indexed element is not used, may be arbitrary. Beta[I]>0 Mu0 - zeroth moment of the weighting function. A - left boundary of the integration interval. B - right boundary of the integration interval. N - number of nodes of the quadrature formula, N>=3 (including the left and right boundary nodes). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void alglib::gqgenerategausslobattorec( real_1d_array alpha, real_1d_array beta, double mu0, double a, double b, ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& w);
gqgenerategaussradaurec
function/************************************************************************* Computation of nodes and weights for a Gauss-Radau quadrature formula The algorithm generates the N-point Gauss-Radau quadrature formula with weight function given by the coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-2], alpha coefficients. Beta - array[0..N-1], beta coefficients Zero-indexed element is not used. Beta[I]>0 Mu0 - zeroth moment of the weighting function. A - left boundary of the integration interval. N - number of nodes of the quadrature formula, N>=2 (including the left boundary node). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void alglib::gqgenerategaussradaurec( real_1d_array alpha, real_1d_array beta, double mu0, double a, ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& w);
gqgeneraterec
function/************************************************************************* Computation of nodes and weights for a Gauss quadrature formula The algorithm generates the N-point Gauss quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-1], alpha coefficients Beta - array[0..N-1], beta coefficients Zero-indexed element is not used and may be arbitrary. Beta[I]>0. Mu0 - zeroth moment of the weight function. N - number of nodes of the quadrature formula, N>=1 OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/void alglib::gqgeneraterec( real_1d_array alpha, real_1d_array beta, double mu0, ae_int_t n, ae_int_t& info, real_1d_array& x, real_1d_array& w);
hermite
subpackagehermitecalculate
function/************************************************************************* Calculation of the value of the Hermite polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial Hn at x *************************************************************************/double alglib::hermitecalculate(ae_int_t n, double x);
hermitecoefficients
function/************************************************************************* Representation of Hn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void alglib::hermitecoefficients(ae_int_t n, real_1d_array& c);
hermitesum
function/************************************************************************* Summation of Hermite polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*H0(x) + c[1]*H1(x) + ... + c[N]*HN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial at x *************************************************************************/double alglib::hermitesum(real_1d_array c, ae_int_t n, double x);
hqrnd
subpackagehqrndstate
class/************************************************************************* Portable high quality random number generator state. Initialized with HQRNDRandomize() or HQRNDSeed(). Fields: S1, S2 - seed values V - precomputed value MagicV - 'magic' value used to determine whether State structure was correctly initialized. *************************************************************************/class hqrndstate { };
hqrndcontinuous
function/*************************************************************************
This function generates a random number from a continuous distribution
given by the finite sample X.

INPUT PARAMETERS
    State   -   high quality random number generator, must be initialized
                with HQRNDRandomize() or HQRNDSeed().
    X       -   finite sample, array[N] (can be larger, in this case only
                the leading N elements are used). THIS ARRAY MUST BE SORTED
                IN ASCENDING ORDER.
    N       -   number of elements to use, N>=1

RESULT
    this function returns a random number from a continuous distribution
    which approximates X as closely as possible. min(X)<=Result<=max(X).

  -- ALGLIB --
     Copyright 08.11.2011 by Bochkanov Sergey
*************************************************************************/
double alglib::hqrndcontinuous(
    hqrndstate state,
    real_1d_array x,
    ae_int_t n);
hqrnddiscrete
function/*************************************************************************
This function generates a random number from a discrete distribution given
by the finite sample X.

INPUT PARAMETERS
    State   -   high quality random number generator, must be initialized
                with HQRNDRandomize() or HQRNDSeed().
    X       -   finite sample
    N       -   number of elements to use, N>=1

RESULT
    this function returns one of the X[i] for a random i=0..N-1

  -- ALGLIB --
     Copyright 08.11.2011 by Bochkanov Sergey
*************************************************************************/
double alglib::hqrnddiscrete(
    hqrndstate state,
    real_1d_array x,
    ae_int_t n);
hqrndexponential
function/************************************************************************* Random number generator: exponential distribution State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 11.08.2007 by Bochkanov Sergey *************************************************************************/double alglib::hqrndexponential(hqrndstate state, double lambdav);
hqrndnormal
function/************************************************************************* Random number generator: normal numbers This function generates one random number from normal distribution. Its performance is equal to that of HQRNDNormal2() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double alglib::hqrndnormal(hqrndstate state);
hqrndnormal2
function/************************************************************************* Random number generator: normal numbers This function generates two independent random numbers from normal distribution. Its performance is equal to that of HQRNDNormal() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::hqrndnormal2(hqrndstate state, double& x1, double& x2);
hqrndrandomize
function/************************************************************************* HQRNDState initialization with random values which come from standard RNG. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::hqrndrandomize(hqrndstate& state);
hqrndseed
function/************************************************************************* HQRNDState initialization with seed values -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::hqrndseed(ae_int_t s1, ae_int_t s2, hqrndstate& state);
hqrnduniformi
function/*************************************************************************
This function generates a random integer in [0, N)

1. State structure must be initialized with HQRNDRandomize() or HQRNDSeed()
2. N can be any positive number except for very large numbers:
   * close to 2^31 on 32-bit systems
   * close to 2^62 on 64-bit systems
   An exception will be generated if N is too large.

  -- ALGLIB --
     Copyright 02.12.2009 by Bochkanov Sergey
*************************************************************************/
ae_int_t alglib::hqrnduniformi(hqrndstate state, ae_int_t n);
hqrnduniformr
function/************************************************************************* This function generates random real number in (0,1), not including interval boundaries State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/double alglib::hqrnduniformr(hqrndstate state);
hqrndunit2
function/************************************************************************* Random number generator: random X and Y such that X^2+Y^2=1 State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::hqrndunit2(hqrndstate state, double& x, double& y);
ibetaf
subpackageincompletebeta
function/*************************************************************************
Incomplete beta integral

Returns incomplete beta integral of the arguments, evaluated from zero to
x. The function is defined as

                     x
      -             -
     | (a+b)       | |  a-1      b-1
   -----------     |    t   (1-t)    dt.
    -     -      | |
   | (a) | (b)    -
                   0

The domain of definition is 0 <= x <= 1. In this implementation a and b
are restricted to positive values. The integral from x to 1 may be
obtained by the symmetry relation

    1 - incbet( a, b, x )  =  incbet( b, a, 1-x ).

The integral is evaluated by a continued fraction expansion or, when b*x
is small, by a power series.

ACCURACY:
Tested at uniformly distributed random points (a,b,x) with a and b in
"domain" and x between 0 and 1.

                       Relative error
    arithmetic   domain     # trials      peak         rms
       IEEE      0,5         10000       6.9e-15     4.5e-16
       IEEE      0,85       250000       2.2e-13     1.7e-14
       IEEE      0,1000      30000       5.3e-12     6.3e-13
       IEEE      0,10000    250000       9.3e-11     7.1e-12
       IEEE      0,100000    10000       8.7e-10     4.8e-11

Outputs smaller than the IEEE gradual underflow threshold were excluded
from these statistics.

Cephes Math Library, Release 2.8:  June, 2000
Copyright 1984, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double alglib::incompletebeta(double a, double b, double x);
invincompletebeta
function/*************************************************************************
Inverse of incomplete beta integral

Given y, the function finds x such that

    incbet( a, b, x ) = y .

The routine performs interval halving or Newton iterations to find the
root of incbet(a,b,x) - y = 0.

ACCURACY:
                       Relative error:
                 x       a,b
    arithmetic  domain   domain    # trials     peak       rms
       IEEE     0,1    .5,10000     50000     5.8e-12   1.3e-13
       IEEE     0,1    .25,100     100000     1.8e-13   3.9e-15
       IEEE     0,1      0,5        50000     1.1e-12   5.5e-15

With a and b constrained to half-integer or integer values:
       IEEE     0,1    .5,10000     50000     5.8e-12   1.1e-13
       IEEE     0,1    .5,100      100000     1.7e-14   7.9e-16

With a = .5, b constrained to half-integer or integer values:
       IEEE     0,1    .5,10000     10000     8.3e-11   1.0e-11

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1996, 2000 by Stephen L. Moshier
*************************************************************************/
double alglib::invincompletebeta(double a, double b, double y);
idwint
subpackageidwinterpolant
class/************************************************************************* IDW interpolant. *************************************************************************/class idwinterpolant { };
idwbuildmodifiedshepard
function/*************************************************************************
IDW interpolant using the modified Shepard method for uniform point
distributions.

INPUT PARAMETERS:
    XY  -   X and Y values, array[0..N-1,0..NX]. First NX columns contain
            X-values, the last column contains Y-values.
    N   -   number of nodes, N>0.
    NX  -   space dimension, NX>=1.
    D   -   nodal function type, either:
            * 0     constant model. For demonstration only, worst model
                    ever.
            * 1     linear model, least squares fitting. Simple model for
                    datasets too small for quadratic models
            * 2     quadratic model, least squares fitting. Best model
                    available (if your dataset is large enough).
            * -1    "fast" linear model, use with caution!!! It is
                    significantly faster than linear/quadratic and better
                    than the constant model. But it is less robust
                    (especially in the presence of noise).
    NQ  -   number of points used to calculate nodal functions (ignored
            for constant models). NQ should be LARGER than:
            * max(1.5*(1+NX),2^NX+1) for the linear model,
            * max(3/4*(NX+2)*(NX+1),2^NX+1) for the quadratic model.
            Values less than this threshold will be silently increased.
    NW  -   number of points used to calculate weights and to interpolate.
            Required: >=2^NX+1, values less than this threshold will be
            silently increased. Recommended value: about 2*NQ

OUTPUT PARAMETERS:
    Z   -   IDW interpolant.

NOTES:
  * best results are obtained with quadratic models, worst - with constant
    models
  * when N is large, NQ and NW must be significantly smaller than N both
    to obtain optimal performance and to obtain optimal accuracy. In 2- or
    3-dimensional tasks NQ=15 and NW=25 are good values to start with.
  * NQ and NW may be greater than N. In such cases they will be
    automatically decreased.
  * this subroutine always succeeds (as long as correct parameters are
    passed).
  * see 'Multivariate Interpolation of Large Sets of Scattered Data' by
    Robert J. Renka for more information on this algorithm.
  * this subroutine assumes that the point distribution is uniform at the
    small scales. If it isn't - for example, points are concentrated along
    "lines", but the "lines" distribution is uniform at the larger scale -
    then you should use IDWBuildModifiedShepardR()

  -- ALGLIB PROJECT --
     Copyright 02.03.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::idwbuildmodifiedshepard(
    real_2d_array xy,
    ae_int_t n,
    ae_int_t nx,
    ae_int_t d,
    ae_int_t nq,
    ae_int_t nw,
    idwinterpolant& z);
idwbuildmodifiedshepardr
function/*************************************************************************
IDW interpolant using the modified Shepard method for non-uniform datasets.

This type of model uses constant nodal functions and interpolates using
all nodes which are closer than the user-specified radius R. It may be
used when the point distribution is non-uniform at the small scale, but
becomes uniform at distances as large as R.

INPUT PARAMETERS:
    XY  -   X and Y values, array[0..N-1,0..NX]. First NX columns contain
            X-values, the last column contains Y-values.
    N   -   number of nodes, N>0.
    NX  -   space dimension, NX>=1.
    R   -   radius, R>0

OUTPUT PARAMETERS:
    Z   -   IDW interpolant.

NOTES:
  * if there are fewer than IDWKMin points within the R-ball, the
    algorithm selects the IDWKMin closest ones, so that continuity
    properties of the interpolant are preserved even far from points.

  -- ALGLIB PROJECT --
     Copyright 11.04.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::idwbuildmodifiedshepardr(
    real_2d_array xy,
    ae_int_t n,
    ae_int_t nx,
    double r,
    idwinterpolant& z);
idwbuildnoisy
function/************************************************************************* IDW model for noisy data. This subroutine may be used to handle noisy data, i.e. data with noise in OUTPUT values. It differs from IDWBuildModifiedShepard() in the following aspects: * nodal functions are not constrained to pass through nodes: Qi(xi)<>yi, i.e. we have fitting instead of interpolation. * weights which are used during least squares fitting stage are all equal to 1.0 (independently of distance) * "fast"-linear or constant nodal functions are not supported (either not robust enough or too rigid) This problem requires far more complex tuning than interpolation problems. Below you can find some recommendations regarding this problem: * focus on tuning NQ; it controls noise reduction. As for NW, you can just make it equal to 2*NQ. * you can use cross-validation to determine optimal NQ. * optimal NQ is a result of complex tradeoff between noise level (more noise = larger NQ required) and underlying function complexity (given fixed N, larger NQ means smoothing of complex features in the data). For example, NQ=N will reduce noise to the minimum level possible, but you will end up with just constant/linear/quadratic (depending on D) least squares model for the whole dataset. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. D - nodal function degree, either: * 1 linear model, least squares fitting. Simple model for datasets too small for quadratic models (or for very noisy problems). * 2 quadratic model, least squares fitting. Best model available (if your dataset is large enough). NQ - number of points used to calculate nodal functions. 
NQ should be significantly larger than 1.5 times the number of coefficients in a nodal function to overcome effects of noise: * larger than 1.5*(1+NX) for linear model, * larger than 3/4*(NX+2)*(NX+1) for quadratic model. Values less than this threshold will be silently increased. NW - number of points used to calculate weights and to interpolate. Required: >=2^NX+1, values less than this threshold will be silently increased. Recommended value: about 2*NQ or larger OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * best results are obtained with quadratic models, linear models are not recommended unless you are pretty sure that it is what you want * this subroutine always succeeds (as long as correct parameters are passed). * see 'Multivariate Interpolation of Large Sets of Scattered Data' by Robert J. Renka for more information on this algorithm. -- ALGLIB PROJECT -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/void alglib::idwbuildnoisy( real_2d_array xy, ae_int_t n, ae_int_t nx, ae_int_t d, ae_int_t nq, ae_int_t nw, idwinterpolant& z);
idwcalc
function/************************************************************************* IDW interpolation INPUT PARAMETERS: Z - IDW interpolant built with one of model building subroutines. X - array[0..NX-1], interpolation point Result: IDW interpolant Z(X) -- ALGLIB -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/double alglib::idwcalc(idwinterpolant z, real_1d_array x);
igammaf
subpackageincompletegamma
function/************************************************************************* Incomplete gamma integral The function is defined by igam(a,x) = 1/Gamma(a) * integral from 0 to x of exp(-t) * t^(a-1) dt. In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 200000 3.6e-14 2.9e-15 IEEE 0,100 300000 9.9e-14 1.5e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::incompletegamma(double a, double x);
incompletegammac
function/************************************************************************* Complemented incomplete gamma integral The function is defined by igamc(a,x) = 1 - igam(a,x) = 1/Gamma(a) * integral from x to infinity of exp(-t) * t^(a-1) dt. In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Tested at random a, x. a x Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,100 200000 1.9e-14 1.7e-15 IEEE 0.01,0.5 0,100 200000 1.4e-13 1.6e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/double alglib::incompletegammac(double a, double x);
invincompletegammac
function/************************************************************************* Inverse of complemented incomplete gamma integral Given p, the function finds x such that igamc(a,x) = p. Starting with the approximate value x = a*t^3, where t = 1 - d - ndtri(p)*sqrt(d) and d = 1/(9a), the routine performs up to 10 Newton iterations to find the root of igamc(a,x) - p = 0. ACCURACY: Tested at random a, p in the intervals indicated. a p Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,0.5 100000 1.0e-14 1.7e-15 IEEE 0.01,0.5 0,0.5 100000 9.0e-14 3.4e-15 IEEE 0.5,10000 0,0.5 20000 2.3e-13 3.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/double alglib::invincompletegammac(double a, double y0);
inverseupdate
subpackagermatrixinvupdatecolumn
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a column of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdColumn - the column of A whose vector U was added. 0 <= UpdColumn <= N-1 U - the vector to be added to a column. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixinvupdatecolumn( real_2d_array& inva, ae_int_t n, ae_int_t updcolumn, real_1d_array u);
rmatrixinvupdaterow
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a row of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - the row of A whose vector V was added. 0 <= UpdRow <= N-1 V - the vector to be added to a row. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixinvupdaterow( real_2d_array& inva, ae_int_t n, ae_int_t updrow, real_1d_array v);
rmatrixinvupdatesimple
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a number to an element of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - row where the element to be updated is stored. UpdColumn - column where the element to be updated is stored. UpdVal - a number to be added to the element. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixinvupdatesimple( real_2d_array& inva, ae_int_t n, ae_int_t updrow, ae_int_t updcolumn, double updval);
rmatrixinvupdateuv
function/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm computes the inverse of matrix A+u*v' by using the given matrix A^-1 and the vectors u and v. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. U - the vector modifying the matrix. Array whose index ranges within [0..N-1]. V - the vector modifying the matrix. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of matrix A + u*v'. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixinvupdateuv( real_2d_array& inva, ae_int_t n, real_1d_array u, real_1d_array v);
jacobianelliptic
subpackagejacobianellipticfunctions
function/************************************************************************* Jacobian Elliptic Functions Evaluates the Jacobian elliptic functions sn(u|m), cn(u|m), and dn(u|m) of parameter m between 0 and 1, and real argument u. These functions are periodic, with quarter-period on the real axis equal to the complete elliptic integral ellpk(1.0-m). Relation to incomplete elliptic integral: If u = ellik(phi,m), then sn(u|m) = sin(phi), and cn(u|m) = cos(phi). Phi is called the amplitude of u. Computation is by means of the arithmetic-geometric mean algorithm, except when m is within 1e-9 of 0 or 1. In the latter case with m close to 1, the approximation applies only for phi < pi/2. ACCURACY: Tested at random points with u between 0 and 10, m between 0 and 1. Absolute error (* = relative error): arithmetic function # trials peak rms IEEE phi 10000 9.2e-16* 1.4e-16* IEEE sn 50000 4.1e-15 4.6e-16 IEEE cn 40000 3.6e-15 4.4e-16 IEEE dn 10000 1.3e-12 1.8e-14 Peak error observed in consistency check using addition theorem for sn(u+v) was 4e-16 (absolute). Also tested by the above relation to the incomplete elliptic integral. Accuracy deteriorates when u is large. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/void alglib::jacobianellipticfunctions( double u, double m, double& sn, double& cn, double& dn, double& ph);
jarquebera
subpackagejarqueberatest
function/************************************************************************* Jarque-Bera test This test checks the hypothesis that a given sample X is drawn from a normal random variable. Requirements: * the number of elements in the sample is not less than 5. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Output parameters: P - p-value for the test Accuracy of the approximation used (5<=N<=1951): p-value relative error (5<=N<=1951) [1, 0.1] < 1% [0.1, 0.01] < 2% [0.01, 0.001] < 6% [0.001, 0] wasn't measured For N>1951 accuracy wasn't measured but it should not differ sharply from the table values. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void alglib::jarqueberatest(real_1d_array x, ae_int_t n, double& p);
laguerre
subpackagelaguerrecalculate
function/************************************************************************* Calculation of the value of the Laguerre polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Laguerre polynomial Ln at x *************************************************************************/double alglib::laguerrecalculate(ae_int_t n, double x);
laguerrecoefficients
function/************************************************************************* Representation of Ln as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void alglib::laguerrecoefficients(ae_int_t n, real_1d_array& c);
laguerresum
function/************************************************************************* Summation of Laguerre polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*L0(x) + c[1]*L1(x) + ... + c[N]*LN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Laguerre polynomial at x *************************************************************************/double alglib::laguerresum(real_1d_array c, ae_int_t n, double x);
lda
subpackagefisherlda
function/************************************************************************* Multiclass Fisher LDA Subroutine finds coefficients of linear combination which optimally separates training set on classes. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. Best results are achieved for high-dimensional problems ! (NVars is at least 256). ! ! Multithreading is used to accelerate initial phase of LDA, which ! includes calculation of products of large matrices. Again, for best ! efficiency problem must be high-dimensional. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores number of class (from 0 to NClasses-1) which dataset element belongs to. Fractional values are rounded to nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. 
* -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was a multicollinearity in training set, but task has been solved. W - linear combination coefficients, array[0..NVars-1] -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/void alglib::fisherlda( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t& info, real_1d_array& w);
fisherldan
function/************************************************************************* N-dimensional multiclass Fisher LDA Subroutine finds coefficients of linear combinations which optimally separates training set on classes. It returns N-dimensional basis whose vectors are sorted by quality of training set separation (in descending order). COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. Best results are achieved for high-dimensional problems ! (NVars is at least 256). ! ! Multithreading is used to accelerate initial phase of LDA, which ! includes calculation of products of large matrices. Again, for best ! efficiency problem must be high-dimensional. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores number of class (from 0 to NClasses-1) which dataset element belongs to. Fractional values are rounded to nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. 
* -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was a multicollinearity in training set, but task has been solved. W - basis, array[0..NVars-1,0..NVars-1] columns of the matrix store basis vectors, sorted by quality of training set separation (in descending order) -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/void alglib::fisherldan( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t& info, real_2d_array& w); void alglib::smp_fisherldan( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t& info, real_2d_array& w);
legendre
subpackagelegendrecalculate
function/************************************************************************* Calculation of the value of the Legendre polynomial Pn. Parameters: n - degree, n>=0 x - argument Result: the value of the Legendre polynomial Pn at x *************************************************************************/double alglib::legendrecalculate(ae_int_t n, double x);
legendrecoefficients
function/************************************************************************* Representation of Pn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/void alglib::legendrecoefficients(ae_int_t n, real_1d_array& c);
legendresum
function/************************************************************************* Summation of Legendre polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*P0(x) + c[1]*P1(x) + ... + c[N]*PN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Legendre polynomial at x *************************************************************************/double alglib::legendresum(real_1d_array c, ae_int_t n, double x);
lincg
subpackagelincg_d_1 | Solution of sparse linear systems with CG |
lincgreport
class/************************************************************************* This object stores the report returned by the linear CG solver: iterations count, number of matrix-vector products (NMV), termination type and squared residual norm R2. See LinCGResults() for description of the termination codes. *************************************************************************/class lincgreport { ae_int_t iterationscount; ae_int_t nmv; ae_int_t terminationtype; double r2; };
lincgstate
class/************************************************************************* This object stores state of the linear CG method. You should use ALGLIB functions to work with this object. Never try to access its fields directly! *************************************************************************/class lincgstate { };
lincgcreate
function/************************************************************************* This function initializes linear CG Solver. This solver is used to solve symmetric positive definite problems. If you want to solve nonsymmetric (or non-positive definite) problem you may use LinLSQR solver provided by ALGLIB. USAGE: 1. User initializes algorithm state with LinCGCreate() call 2. User tunes solver parameters with LinCGSetCond() and other functions 3. Optionally, user sets starting point with LinCGSetStartingPoint() 4. User calls LinCGSolveSparse() function which takes algorithm state and SparseMatrix object. 5. User calls LinCGResults() to get solution 6. Optionally, user may call LinCGSolveSparse() again to solve another problem with different matrix and/or right part without reinitializing LinCGState structure. INPUT PARAMETERS: N - problem dimension, N>0 OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgcreate(ae_int_t n, lincgstate& state);
Examples: [1]
lincgresults
function/************************************************************************* CG-solver: results. This function must be called after LinCGSolveSparse() INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[N], solution Rep - optimization report: * Rep.TerminationType completion code: * -5 input matrix is either not positive definite, too large or too small * -4 overflow/underflow during solution (ill conditioned problem) * 1 ||residual||<=EpsF*||b|| * 5 MaxIts steps were taken * 7 rounding errors prevent further progress, best point found is returned * Rep.IterationsCount contains iterations count * Rep.NMV contains number of matrix-vector calculations -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgresults( lincgstate state, real_1d_array& x, lincgreport& rep);
Examples: [1]
lincgsetcond
function/************************************************************************* This function sets stopping criteria. INPUT PARAMETERS: EpsF - algorithm will be stopped if norm of residual is less than EpsF*||b||. MaxIts - algorithm will be stopped if number of iterations is more than MaxIts. OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: If both EpsF and MaxIts are zero, then EpsF will be set to a small default value. -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetcond(lincgstate state, double epsf, ae_int_t maxits);
lincgsetprecdiag
function/************************************************************************* This function changes preconditioning settings of LinCGSolveSparse() function. LinCGSolveSparse() will use diagonal of the system matrix as preconditioner. This preconditioning mode is active by default. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 19.11.2012 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetprecdiag(lincgstate state);
lincgsetprecunit
function/************************************************************************* This function changes preconditioning settings of LinCGSolveSparse() function. By default, SolveSparse() uses diagonal preconditioner, but if you want to use solver without preconditioning, you can call this function which forces solver to use unit matrix for preconditioning. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 19.11.2012 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetprecunit(lincgstate state);
lincgsetrestartfreq
function/************************************************************************* This function sets restart frequency. By default, the algorithm is restarted every N iterations. -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetrestartfreq(lincgstate state, ae_int_t srf);
lincgsetrupdatefreq
function/************************************************************************* This function sets frequency of residual recalculations. Algorithm updates residual r_k using iterative formula, but recalculates it from scratch every 10 iterations. It is done to avoid accumulation of numerical errors and to stop algorithm when r_k starts to grow. Such low update frequency (1/10) gives very little overhead, but makes algorithm a bit more robust against numerical errors. However, you may change this frequency. INPUT PARAMETERS: Freq - desired update frequency, Freq>=0. Zero value means that no updates will be done. -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetrupdatefreq(lincgstate state, ae_int_t freq);
lincgsetstartingpoint
function/************************************************************************* This function sets starting point. By default, zero starting point is used. INPUT PARAMETERS: X - starting point, array[N] OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetstartingpoint(lincgstate state, real_1d_array x);
lincgsetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinCGOptimize(). -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsetxrep(lincgstate state, bool needxrep);
lincgsolvesparse
function/************************************************************************* Procedure for solution of A*x=b with sparse A. INPUT PARAMETERS: State - algorithm state A - sparse matrix in the CRS format (you MUST convert it to CRS format by calling SparseConvertToCRS() function). IsUpper - whether upper or lower triangle of A is used: * IsUpper=True => only upper triangle is used and lower triangle is not referenced at all * IsUpper=False => only lower triangle is used and upper triangle is not referenced at all B - right part, array[N] RESULT: This function returns no result. You can get solution by calling LinCGResults() NOTE: this function uses lightweight preconditioning - multiplication by inverse of diag(A). If you want, you can turn preconditioning off by calling LinCGSetPrecUnit(). However, preconditioning cost is low and preconditioner is very important for solution of badly scaled problems. -- ALGLIB -- Copyright 14.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::lincgsolvesparse( lincgstate state, sparsematrix a, bool isupper, real_1d_array b);
Examples: [1]
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // This example illustrates solution of sparse linear systems with
    // conjugate gradient method.
    //
    // Suppose that we have linear system A*x=b with sparse symmetric
    // positive definite A (represented by sparsematrix object)
    //         [ 5 1       ]
    //         [ 1 7 2     ]
    //     A = [   2 8 1   ]
    //         [     1 4 1 ]
    //         [       1 4 ]
    // and right part b
    //         [  7 ]
    //         [ 17 ]
    //     b = [ 14 ]
    //         [ 10 ]
    //         [  6 ]
    // and we want to solve this system using sparse linear CG. In order
    // to do so, we have to create left part (sparsematrix object) and
    // right part (dense array).
    //
    // Initially, sparse matrix is created in the Hash-Table format,
    // which allows easy initialization, but does not allow the matrix to
    // be used in the linear solvers. So after construction you should
    // convert sparse matrix to CRS format (one suited for linear
    // operations).
    //
    // It is important to note that in our example we initialize full
    // matrix A, both lower and upper triangles. However, it is symmetric
    // and sparse solver needs just one half of the matrix. So you may
    // save about half of the space by filling only one of the triangles.
    //
    sparsematrix a;
    sparsecreate(5, 5, a);
    sparseset(a, 0, 0, 5.0);
    sparseset(a, 0, 1, 1.0);
    sparseset(a, 1, 0, 1.0);
    sparseset(a, 1, 1, 7.0);
    sparseset(a, 1, 2, 2.0);
    sparseset(a, 2, 1, 2.0);
    sparseset(a, 2, 2, 8.0);
    sparseset(a, 2, 3, 1.0);
    sparseset(a, 3, 2, 1.0);
    sparseset(a, 3, 3, 4.0);
    sparseset(a, 3, 4, 1.0);
    sparseset(a, 4, 3, 1.0);
    sparseset(a, 4, 4, 4.0);

    //
    // Now our matrix is fully initialized, but we have to do one more
    // step - convert it from Hash-Table format to CRS format (see
    // documentation on sparse matrices for more information about these
    // formats).
    //
    // If you omit this call, ALGLIB will generate exception on the first
    // attempt to use A in linear operations.
    //
    sparseconverttocrs(a);

    //
    // Initialization of the right part
    //
    real_1d_array b = "[7,17,14,10,6]";

    //
    // Now we have to create linear solver object and to use it for the
    // solution of the linear system.
    //
    // NOTE: lincgsolvesparse() accepts additional parameter which tells
    //       what triangle of the symmetric matrix should be used - upper
    //       or lower. Because we've filled both parts of the matrix, we
    //       can use any part - upper or lower.
    //
    lincgstate s;
    lincgreport rep;
    real_1d_array x;
    lincgcreate(5, s);
    lincgsolvesparse(s, a, true, b);
    lincgresults(s, x, rep);

    printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
    printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [1.000,2.000,1.000,2.000,1.000]
    return 0;
}
linlsqr
subpackagelinlsqr_d_1 | Solution of sparse linear systems with LSQR |
linlsqrreport
class/************************************************************************* This object stores the report returned by the LinLSQR solver: iterations count, number of matrix-vector products (NMV) and termination type. See LinLSQRResults() for description of the termination codes. *************************************************************************/class linlsqrreport { ae_int_t iterationscount; ae_int_t nmv; ae_int_t terminationtype; };
linlsqrstate
class/************************************************************************* This object stores state of the LinLSQR method. You should use ALGLIB functions to work with this object. *************************************************************************/class linlsqrstate { };
linlsqrcreate
function/************************************************************************* This function initializes linear LSQR Solver. This solver is used to solve non-symmetric (and, possibly, non-square) problems. Least squares solution is returned for incompatible systems. USAGE: 1. User initializes algorithm state with LinLSQRCreate() call 2. User tunes solver parameters with LinLSQRSetCond() and other functions 3. User calls LinLSQRSolveSparse() function which takes algorithm state and SparseMatrix object. 4. User calls LinLSQRResults() to get solution 5. Optionally, user may call LinLSQRSolveSparse() again to solve another problem with different matrix and/or right part without reinitializing LinLSQRState structure. INPUT PARAMETERS: M - number of rows in A N - number of variables, N>0 OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrcreate(ae_int_t m, ae_int_t n, linlsqrstate& state);
Examples: [1]
linlsqrresults
function/************************************************************************* LSQR solver: results. This function must be called after LinLSQRSolveSparse() INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[N], solution Rep - optimization report: * Rep.TerminationType completion code: * 1 ||Rk||<=EpsB*||B|| * 4 ||A^T*Rk||/(||A||*||Rk||)<=EpsA * 5 MaxIts steps were taken * 7 rounding errors prevent further progress, X contains best point found so far. (sometimes returned on singular systems) * Rep.IterationsCount contains iterations count * Rep.NMV contains number of matrix-vector calculations -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrresults( linlsqrstate state, real_1d_array& x, linlsqrreport& rep);
Examples: [1]
linlsqrsetcond
function/************************************************************************* This function sets stopping criteria. INPUT PARAMETERS: EpsA - algorithm will be stopped if ||A^T*Rk||/(||A||*||Rk||)<=EpsA. EpsB - algorithm will be stopped if ||Rk||<=EpsB*||B|| MaxIts - algorithm will be stopped if the number of iterations exceeds MaxIts. OUTPUT PARAMETERS: State - structure which stores algorithm state NOTE: if EpsA, EpsB and MaxIts are all zero, these parameters will be set to their default values. -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsetcond( linlsqrstate state, double epsa, double epsb, ae_int_t maxits);
linlsqrsetlambdai
function/************************************************************************* This function sets optional Tikhonov regularization coefficient. It is zero by default. INPUT PARAMETERS: LambdaI - regularization factor, LambdaI>=0 OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsetlambdai(linlsqrstate state, double lambdai);
linlsqrsetprecdiag
function/************************************************************************* This function changes preconditioning settings of the LinLSQRSolveSparse() function. LinLSQRSolveSparse() will use the diagonal of the system matrix as preconditioner. This preconditioning mode is active by default. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 19.11.2012 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsetprecdiag(linlsqrstate state);
linlsqrsetprecunit
function/************************************************************************* This function changes preconditioning settings of the LinLSQRSolveSparse() function. By default, SolveSparse() uses a diagonal preconditioner, but if you want to use the solver without preconditioning, you can call this function, which forces the solver to use the unit matrix for preconditioning. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 19.11.2012 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsetprecunit(linlsqrstate state);
linlsqrsetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, the algorithm will call the rep() callback function if it is provided to the solver. -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsetxrep(linlsqrstate state, bool needxrep);
linlsqrsolvesparse
function/************************************************************************* Procedure for solution of A*x=b with sparse A. INPUT PARAMETERS: State - algorithm state A - sparse M*N matrix in the CRS format (you MUST convert it to CRS format by calling the SparseConvertToCRS() function BEFORE you pass it to this function). B - right part, array[M] RESULT: This function returns no result. You can get the solution by calling LinLSQRResults() NOTE: this function uses lightweight preconditioning - multiplication by inverse of diag(A). If you want, you can turn preconditioning off by calling LinLSQRSetPrecUnit(). However, preconditioning cost is low and the preconditioner is very important for solution of badly scaled problems. -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/void alglib::linlsqrsolvesparse( linlsqrstate state, sparsematrix a, real_1d_array b);
Examples: [1]
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // This example illustrates solution of a sparse linear least squares
    // problem with the LSQR algorithm.
    //
    // Suppose that we have a least squares problem min|A*x-b| with sparse A
    // represented by a sparsematrix object
    //         [ 1 1 ]
    //         [ 1 1 ]
    //     A = [ 2 1 ]
    //         [ 1   ]
    //         [   1 ]
    // and right part b
    //         [ 4 ]
    //         [ 2 ]
    //     b = [ 4 ]
    //         [ 1 ]
    //         [ 2 ]
    // and we want to solve this system in the least squares sense using the
    // LSQR algorithm. In order to do so, we have to create the left part
    // (sparsematrix object) and the right part (dense array).
    //
    // Initially, the sparse matrix is created in the Hash-Table format,
    // which allows easy initialization, but does not allow the matrix to be
    // used in the linear solvers. So after construction you should convert
    // the sparse matrix to the CRS format (the one suited for linear
    // operations).
    //
    sparsematrix a;
    sparsecreate(5, 2, a);
    sparseset(a, 0, 0, 1.0);
    sparseset(a, 0, 1, 1.0);
    sparseset(a, 1, 0, 1.0);
    sparseset(a, 1, 1, 1.0);
    sparseset(a, 2, 0, 2.0);
    sparseset(a, 2, 1, 1.0);
    sparseset(a, 3, 0, 1.0);
    sparseset(a, 4, 1, 1.0);

    //
    // Now our matrix is fully initialized, but we have to do one more
    // step - convert it from Hash-Table format to CRS format (see
    // documentation on sparse matrices for more information about these
    // formats).
    //
    // If you omit this call, ALGLIB will generate an exception on the first
    // attempt to use A in linear operations.
    //
    sparseconverttocrs(a);

    //
    // Initialization of the right part
    //
    real_1d_array b = "[4,2,4,1,2]";

    //
    // Now we have to create the linear solver object and use it for the
    // solution of the linear system.
    //
    linlsqrstate s;
    linlsqrreport rep;
    real_1d_array x;
    linlsqrcreate(5, 2, s);
    linlsqrsolvesparse(s, a, b);
    linlsqrresults(s, x, rep);
    printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
    printf("%s\n", x.tostring(2).c_str());    // EXPECTED: [1.000,2.000]
    return 0;
}
linreg
subpackagelinreg_d_basic | Linear regression used to build the very basic model and unpack coefficients |
linearmodel
class/************************************************************************* This object stores a linear model built by the LRBuild family of functions. You should use ALGLIB functions to work with this object. *************************************************************************/class linearmodel { };
lrreport
class/************************************************************************* LRReport structure contains additional information about linear model: * C - covariance matrix, array[0..NVars,0..NVars]. C[i,j] = Cov(A[i],A[j]) * RMSError - root mean square error on a training set * AvgError - average error on a training set * AvgRelError - average relative error on a training set (excluding observations with zero function value). * CVRMSError - leave-one-out cross-validation estimate of generalization error. Calculated using fast algorithm with O(NVars*NPoints) complexity. * CVAvgError - cross-validation estimate of average error * CVAvgRelError - cross-validation estimate of average relative error All other fields of the structure are intended for internal use and should not be used outside ALGLIB. *************************************************************************/class lrreport { real_2d_array c; double rmserror; double avgerror; double avgrelerror; double cvrmserror; double cvavgerror; double cvavgrelerror; ae_int_t ncvdefects; integer_1d_array cvdefects; };
lravgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::lravgerror( linearmodel lm, real_2d_array xy, ae_int_t npoints);
lravgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average relative error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::lravgrelerror( linearmodel lm, real_2d_array xy, ae_int_t npoints);
lrbuild
function/************************************************************************* Linear regression Subroutine builds model: Y = A(0)*X[0] + ... + A(N-1)*X[N-1] + A(N) and returns the model in ALGLIB format, together with the covariance matrix, training set errors (rms, average, average relative) and the leave-one-out cross-validation estimate of the generalization error. The CV estimate is calculated using a fast algorithm with O(NPoints*NVars) complexity. When the covariance matrix is calculated, standard deviations of function values are assumed to be equal to the RMS error on the training set. INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * 1, if subroutine successfully finished LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrbuild( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t& info, linearmodel& lm, lrreport& ar);
Examples: [1]
lrbuilds
function/************************************************************************* Linear regression Variant of LRBuild which uses a vector of standard deviations (errors in function values). INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable S - standard deviations (errors in function values) array[0..NPoints-1], S[i]>0. NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * -2, if S[I]<=0 * 1, if subroutine successfully finished LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrbuilds( real_2d_array xy, real_1d_array s, ae_int_t npoints, ae_int_t nvars, ae_int_t& info, linearmodel& lm, lrreport& ar);
lrbuildz
function/************************************************************************* Like LRBuild but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrbuildz( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t& info, linearmodel& lm, lrreport& ar);
lrbuildzs
function/************************************************************************* Like LRBuildS, but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrbuildzs( real_2d_array xy, real_1d_array s, ae_int_t npoints, ae_int_t nvars, ae_int_t& info, linearmodel& lm, lrreport& ar);
lrpack
function/************************************************************************* "Packs" coefficients and creates linear model in ALGLIB format (LRUnpack reversed). INPUT PARAMETERS: V - coefficients, array[0..NVars] NVars - number of independent variables OUTPUT PARAMETERS: LM - linear model. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrpack(real_1d_array v, ae_int_t nvars, linearmodel& lm);
lrprocess
function/************************************************************************* Processing INPUT PARAMETERS: LM - linear model X - input vector, array[0..NVars-1]. Result: value of linear model regression estimate -- ALGLIB -- Copyright 03.09.2008 by Bochkanov Sergey *************************************************************************/double alglib::lrprocess(linearmodel lm, real_1d_array x);
lrrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: root mean square error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::lrrmserror( linearmodel lm, real_2d_array xy, ae_int_t npoints);
lrunpack
function/************************************************************************* Unpacks coefficients of linear model. INPUT PARAMETERS: LM - linear model in ALGLIB format OUTPUT PARAMETERS: V - coefficients, array[0..NVars] constant term (intercept) is stored in the V[NVars]. NVars - number of independent variables (one less than number of coefficients) -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/void alglib::lrunpack(linearmodel lm, real_1d_array& v, ae_int_t& nvars);
Examples: [1]
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    //
    // In this example we demonstrate linear fitting by f(x|a) = a*exp(0.5*x).
    //
    // We have:
    // * xy - matrix of basic function values (exp(0.5*x)) and expected values
    //
    real_2d_array xy = "[[0.606531,1.133719],[0.670320,1.306522],[0.740818,1.504604],[0.818731,1.554663],[0.904837,1.884638],[1.000000,2.072436],[1.105171,2.257285],[1.221403,2.534068],[1.349859,2.622017],[1.491825,2.897713],[1.648721,3.219371]]";
    ae_int_t info;
    ae_int_t nvars;
    linearmodel model;
    lrreport rep;
    real_1d_array c;

    lrbuildz(xy, 11, 1, info, model, rep);
    printf("%d\n", int(info)); // EXPECTED: 1
    lrunpack(model, c, nvars);
    printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.9865,0.0000]
    return 0;
}
logit
subpackagelogitmodel
class/************************************************************************* This object stores a logit model built by MNLTrainH(). You should use ALGLIB functions to work with this object. *************************************************************************/class logitmodel { };
mnlreport
class/************************************************************************* MNLReport structure contains information about training process: * NGrad - number of gradient calculations * NHess - number of Hessian calculations *************************************************************************/class mnlreport { ae_int_t ngrad; ae_int_t nhess; };
mnlavgce
function/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*ln(2)). -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/double alglib::mnlavgce( logitmodel lm, real_2d_array xy, ae_int_t npoints);
mnlavgerror
function/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::mnlavgerror( logitmodel lm, real_2d_array xy, ae_int_t npoints);
mnlavgrelerror
function/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average relative error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::mnlavgrelerror( logitmodel lm, real_2d_array xy, ae_int_t ssize);
mnlclserror
function/************************************************************************* Classification error on test set = MNLRelClsError*NPoints -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/ae_int_t alglib::mnlclserror( logitmodel lm, real_2d_array xy, ae_int_t npoints);
mnlpack
function/************************************************************************* "Packs" coefficients and creates logit model in ALGLIB format (MNLUnpack reversed). INPUT PARAMETERS: A - model (see MNLUnpack) NVars - number of independent variables NClasses - number of classes OUTPUT PARAMETERS: LM - logit model. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void alglib::mnlpack( real_2d_array a, ae_int_t nvars, ae_int_t nclasses, logitmodel& lm);
mnlprocess
function/************************************************************************* Processing INPUT PARAMETERS: LM - logit model, passed by non-constant reference (some fields of structure are used as temporaries when calculating model output). X - input vector, array[0..NVars-1]. Y - (possibly) preallocated buffer; if size of Y is less than NClasses, it will be reallocated. If it is large enough, it is NOT reallocated, so we can save some time on reallocation. OUTPUT PARAMETERS: Y - result, array[0..NClasses-1] Vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void alglib::mnlprocess(logitmodel lm, real_1d_array x, real_1d_array& y);
mnlprocessi
function/************************************************************************* 'interactive' variant of MNLProcess for languages like Python which support constructs like "Y = MNLProcess(LM,X)" and interactive mode of the interpreter This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void alglib::mnlprocessi( logitmodel lm, real_1d_array x, real_1d_array& y);
mnlrelclserror
function/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/double alglib::mnlrelclserror( logitmodel lm, real_2d_array xy, ae_int_t npoints);
mnlrmserror
function/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: root mean square error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/double alglib::mnlrmserror( logitmodel lm, real_2d_array xy, ae_int_t npoints);
mnltrainh
function/************************************************************************* This subroutine trains logit model. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars] First NVars columns store values of independent variables, next column stores number of class (from 0 to NClasses-1) which dataset element belongs to. Fractional values are rounded to nearest integer. NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1, NClasses<2). * 1, if task has been solved LM - model built Rep - training report -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void alglib::mnltrainh( real_2d_array xy, ae_int_t npoints, ae_int_t nvars, ae_int_t nclasses, ae_int_t& info, logitmodel& lm, mnlreport& rep);
mnlunpack
function/************************************************************************* Unpacks coefficients of logit model. Logit model has the form: P(class=i) = S(i) / (S(0) + S(1) + ... +S(M-1)) S(i) = Exp(A[i,0]*X[0] + ... + A[i,N-1]*X[N-1] + A[i,N]), when i<M-1 S(M-1) = 1 INPUT PARAMETERS: LM - logit model in ALGLIB format OUTPUT PARAMETERS: A - coefficients, array[0..NClasses-2,0..NVars] NVars - number of independent variables NClasses - number of classes -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/void alglib::mnlunpack( logitmodel lm, real_2d_array& a, ae_int_t& nvars, ae_int_t& nclasses);
lsfit
subpackagelsfit_d_lin | Unconstrained (general) linear least squares fitting with and without weights | |
lsfit_d_linc | Constrained (general) linear least squares fitting with and without weights | |
lsfit_d_nlf | Nonlinear fitting using function value only | |
lsfit_d_nlfb | Bound constrained nonlinear fitting using function value only | |
lsfit_d_nlfg | Nonlinear fitting using gradient | |
lsfit_d_nlfgh | Nonlinear fitting using gradient and Hessian | |
lsfit_d_nlscale | Nonlinear fitting with custom scaling and bound constraints | |
lsfit_d_pol | Unconstrained polynomial fitting | |
lsfit_d_polc | Constrained polynomial fitting | |
lsfit_d_spline | Unconstrained fitting by penalized regression spline | |
lsfit_t_4pl | 4-parameter logistic fitting | |
lsfit_t_5pl | 5-parameter logistic fitting |
barycentricfitreport
class/************************************************************************* Barycentric fitting report: RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error TaskRCond reciprocal of task's condition number *************************************************************************/class barycentricfitreport { double taskrcond; ae_int_t dbest; double rmserror; double avgerror; double avgrelerror; double maxerror; };
lsfitreport
class/************************************************************************* Least squares fitting report. This structure contains informational fields which are set by fitting functions provided by this unit. Different functions initialize different sets of fields, so you should read documentation on specific function you used in order to know which fields are initialized. TaskRCond reciprocal of task's condition number IterationsCount number of internal iterations VarIdx if user-supplied gradient contains errors which were detected by nonlinear fitter, this field is set to index of the first component of gradient which is suspected to be spoiled by bugs. RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error WRMSError weighted RMS error CovPar covariance matrix for parameters, filled by some solvers ErrPar vector of errors in parameters, filled by some solvers ErrCurve vector of fit errors - variability of the best-fit curve, filled by some solvers. Noise vector of per-point noise estimates, filled by some solvers. R2 coefficient of determination (non-weighted, non-adjusted), filled by some solvers. *************************************************************************/class lsfitreport { double taskrcond; ae_int_t iterationscount; ae_int_t varidx; double rmserror; double avgerror; double avgrelerror; double maxerror; double wrmserror; real_2d_array covpar; real_1d_array errpar; real_1d_array errcurve; real_1d_array noise; double r2; };
lsfitstate
class/************************************************************************* Nonlinear fitter. You should use ALGLIB functions to work with fitter. Never try to access its fields directly! *************************************************************************/class lsfitstate { };
polynomialfitreport
class/************************************************************************* Polynomial fitting report: TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/class polynomialfitreport { double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
barycentricfitfloaterhormann
function/************************************************************************* Rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9]. Equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, optimal D (least root mean square error) is chosen. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0. M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means non-convergence of internal SVD -3 means inconsistent constraints B - barycentric interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. 
Following fields are set: * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::barycentricfitfloaterhormann( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, barycentricinterpolant& b, barycentricfitreport& rep); void alglib::smp_barycentricfitfloaterhormann( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, barycentricinterpolant& b, barycentricfitreport& rep);
barycentricfitfloaterhormannwc
function/************************************************************************* Weighted rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9], with constraints and individual weights. Equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, optimal D (least WEIGHTED root mean square error) is chosen. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). SEE ALSO * BarycentricFitFloaterHormann(), "lightweight" fitting without individual weights and constraints. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. 
XC - points where function values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means non-convergence of internal SVD -3 means inconsistent constraints -1 means other errors in passed parameters (N<=0, for example) B - barycentric interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained barycentric interpolants: * excessive constraints can be inconsistent. Floater-Hormann basis functions aren't as flexible as splines (although they are very smooth). * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. 
* one of these cases is constraints on the function VALUES at the interval boundaries. Note that consistency of the constraints on the function DERIVATIVES is NOT guaranteed (in such cases you can use cubic splines, which are more flexible). * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::barycentricfitfloaterhormannwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, barycentricinterpolant& b, barycentricfitreport& rep); void alglib::smp_barycentricfitfloaterhormannwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, barycentricinterpolant& b, barycentricfitreport& rep);
logisticcalc4
function/************************************************************************* This function calculates value of four-parameter logistic (4PL) model at specified point X. 4PL model has following form: F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B)) INPUT PARAMETERS: X - current point, X>=0: * zero X is correctly handled even for B<=0 * negative X results in exception. A, B, C, D- parameters of 4PL model: * A is unconstrained * B is unconstrained; zero or negative values are handled correctly. * C>0, non-positive value results in exception * D is unconstrained RESULT: model value at X NOTE: if B=0, denominator is assumed to be equal to 2.0 even for zero X (strictly speaking, 0^0 is undefined). NOTE: this function also throws exception if all input parameters are correct, but overflow was detected during calculations. NOTE: this function performs a lot of checks; if you need really high performance, consider evaluating model yourself, without checking for degenerate cases. -- ALGLIB PROJECT -- Copyright 14.05.2014 by Bochkanov Sergey *************************************************************************/double alglib::logisticcalc4( double x, double a, double b, double c, double d);
Examples: [1]
logisticcalc5
function/************************************************************************* This function calculates value of five-parameter logistic (5PL) model at specified point X. 5PL model has following form: F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G) INPUT PARAMETERS: X - current point, X>=0: * zero X is correctly handled even for B<=0 * negative X results in exception. A, B, C, D, G- parameters of 5PL model: * A is unconstrained * B is unconstrained; zero or negative values are handled correctly. * C>0, non-positive value results in exception * D is unconstrained * G>0, non-positive value results in exception RESULT: model value at X NOTE: if B=0, denominator is assumed to be equal to Power(2.0,G) even for zero X (strictly speaking, 0^0 is undefined). NOTE: this function also throws exception if all input parameters are correct, but overflow was detected during calculations. NOTE: this function performs a lot of checks; if you need really high performance, consider evaluating model yourself, without checking for degenerate cases. -- ALGLIB PROJECT -- Copyright 14.05.2014 by Bochkanov Sergey *************************************************************************/double alglib::logisticcalc5( double x, double a, double b, double c, double d, double g);
Examples: [1]
logisticfit4
function/************************************************************************* This function fits four-parameter logistic (4PL) model to data provided by user. 4PL model has following form: F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B)) Here: * A, D - unconstrained (see LogisticFit4EC() for constrained 4PL) * B>=0 * C>0 IMPORTANT: output of this function is constrained in such way that B>0. Because 4PL model is symmetric with respect to B, there is no need to explore B<0. Constraining B makes algorithm easier to stabilize and debug. Users who for some reason prefer to work with negative B's should transform output themselves (swap A and D, replace B by -B). 4PL fitting is implemented as follows: * we perform small number of restarts from random locations which helps to solve problem of bad local extrema. Locations are only partially random - we use input data to determine good initial guess, but we include controlled amount of randomness. * we perform Levenberg-Marquardt fitting with very tight constraints on parameters B and C - it allows us to find good initial guess for the second stage without risk of running into "flat spot". * second Levenberg-Marquardt round is performed without excessive constraints. Results from the previous round are used as initial guess. * after fitting is done, we compare results with best values found so far, rewrite "best solution" if needed, and move to next random location. Overall algorithm is very stable and is not prone to bad local extrema. Furthermore, it automatically scales when input data have very large or very small range. INPUT PARAMETERS: X - array[N], stores X-values. MUST include only non-negative numbers (but may include zero values). Can be unsorted. Y - array[N], values to fit. N - number of points. If N is less than length of X/Y, only leading N elements are used. OUTPUT PARAMETERS: A, B, C, D- parameters of 4PL model Rep - fitting report. 
This structure has many fields, but ONLY ONES LISTED BELOW ARE SET: * Rep.IterationsCount - number of iterations performed * Rep.RMSError - root-mean-square error * Rep.AvgError - average absolute error * Rep.AvgRelError - average relative error (calculated for non-zero Y-values) * Rep.MaxError - maximum absolute error * Rep.R2 - coefficient of determination, R-squared. This coefficient is calculated as R2=1-RSS/TSS (in case of nonlinear regression there are multiple ways to define R2, each of them giving different results). NOTE: after you obtained coefficients, you can evaluate model with LogisticCalc4() function. NOTE: if you need better control over fitting process than provided by this function, you may use LogisticFit45X(). NOTE: step is automatically scaled according to scale of parameters being fitted before we compare its length with EpsX. Thus, this function can be used to fit data with very small or very large values without changing EpsX. -- ALGLIB PROJECT -- Copyright 14.02.2014 by Bochkanov Sergey *************************************************************************/void alglib::logisticfit4( real_1d_array x, real_1d_array y, ae_int_t n, double& a, double& b, double& c, double& d, lsfitreport& rep);
Examples: [1]
logisticfit45x
function/************************************************************************* This is "expert" 4PL/5PL fitting function, which can be used if you need better control over fitting process than provided by LogisticFit4() or LogisticFit5(). This function fits model of the form F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B)) (4PL model) or F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G) (5PL model) Here: * A, D - unconstrained * B>=0 for 4PL, unconstrained for 5PL * C>0 * G>0 (if present) INPUT PARAMETERS: X - array[N], stores X-values. MUST include only non-negative numbers (but may include zero values). Can be unsorted. Y - array[N], values to fit. N - number of points. If N is less than length of X/Y, only leading N elements are used. CnstrLeft- optional equality constraint for model value at the left boundary (at X=0). Specify NAN (Not-a-Number) if you do not need constraint on the model value at X=0 (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. CnstrRight- optional equality constraint for model value at X=infinity. Specify NAN (Not-a-Number) if you do not need constraint on the model value (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. Is4PL - whether 4PL or 5PL models are fitted LambdaV - regularization coefficient, LambdaV>=0. Set it to zero unless you know what you are doing. EpsX - stopping condition (step size), EpsX>=0. Zero value means that small step is automatically chosen. See notes below for more information. RsCnt - number of repeated restarts from random points. 4PL/5PL models are prone to problem of bad local extrema. Utilizing multiple random restarts allows us to improve algorithm convergence. RsCnt>=0. Zero value means that the function automatically chooses a small number of restarts (recommended). 
OUTPUT PARAMETERS: A, B, C, D- parameters of 4PL model G - parameter of 5PL model; for Is4PL=True, G=1 is returned. Rep - fitting report. This structure has many fields, but ONLY ONES LISTED BELOW ARE SET: * Rep.IterationsCount - number of iterations performed * Rep.RMSError - root-mean-square error * Rep.AvgError - average absolute error * Rep.AvgRelError - average relative error (calculated for non-zero Y-values) * Rep.MaxError - maximum absolute error * Rep.R2 - coefficient of determination, R-squared. This coefficient is calculated as R2=1-RSS/TSS (in case of nonlinear regression there are multiple ways to define R2, each of them giving different results). NOTE: after you obtained coefficients, you can evaluate model with LogisticCalc5() function. NOTE: step is automatically scaled according to scale of parameters being fitted before we compare its length with EpsX. Thus, this function can be used to fit data with very small or very large values without changing EpsX. EQUALITY CONSTRAINTS ON PARAMETERS 4PL/5PL solver supports equality constraints on model values at the left boundary (X=0) and right boundary (X=infinity). These constraints are completely optional and you can specify both of them, only one - or no constraints at all. Parameter CnstrLeft contains left constraint (or NAN for unconstrained fitting), and CnstrRight contains right one. For 4PL, left constraint ALWAYS corresponds to parameter A, and right one is ALWAYS constraint on D. That's because 4PL model is normalized in such way that B>=0. For 5PL model things are different. Unlike 4PL one, 5PL model is NOT symmetric with respect to change in sign of B. Thus, negative B's are possible, and left constraint may constrain parameter A (for positive B's) - or parameter D (for negative B's). The meaning of the right constraint changes similarly. You do not have to decide what parameter to constrain - algorithm will automatically determine correct parameters as fitting progresses. 
However, the issue highlighted above is important when you interpret fitting results. -- ALGLIB PROJECT -- Copyright 14.02.2014 by Bochkanov Sergey *************************************************************************/void alglib::logisticfit45x( real_1d_array x, real_1d_array y, ae_int_t n, double cnstrleft, double cnstrright, bool is4pl, double lambdav, double epsx, ae_int_t rscnt, double& a, double& b, double& c, double& d, double& g, lsfitreport& rep);
logisticfit4ec
function/************************************************************************* This function fits four-parameter logistic (4PL) model to data provided by user, with optional constraints on parameters A and D. 4PL model has following form: F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B)) Here: * A, D - with optional equality constraints * B>=0 * C>0 IMPORTANT: output of this function is constrained in such way that B>0. Because 4PL model is symmetric with respect to B, there is no need to explore B<0. Constraining B makes algorithm easier to stabilize and debug. Users who for some reason prefer to work with negative B's should transform output themselves (swap A and D, replace B by -B). 4PL fitting is implemented as follows: * we perform small number of restarts from random locations which helps to solve problem of bad local extrema. Locations are only partially random - we use input data to determine good initial guess, but we include controlled amount of randomness. * we perform Levenberg-Marquardt fitting with very tight constraints on parameters B and C - it allows us to find good initial guess for the second stage without risk of running into "flat spot". * second Levenberg-Marquardt round is performed without excessive constraints. Results from the previous round are used as initial guess. * after fitting is done, we compare results with best values found so far, rewrite "best solution" if needed, and move to next random location. Overall algorithm is very stable and is not prone to bad local extrema. Furthermore, it automatically scales when input data have very large or very small range. INPUT PARAMETERS: X - array[N], stores X-values. MUST include only non-negative numbers (but may include zero values). Can be unsorted. Y - array[N], values to fit. N - number of points. If N is less than length of X/Y, only leading N elements are used. CnstrLeft- optional equality constraint for model value at the left boundary (at X=0). 
Specify NAN (Not-a-Number) if you do not need constraint on the model value at X=0 (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. CnstrRight- optional equality constraint for model value at X=infinity. Specify NAN (Not-a-Number) if you do not need constraint on the model value (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. OUTPUT PARAMETERS: A, B, C, D- parameters of 4PL model Rep - fitting report. This structure has many fields, but ONLY ONES LISTED BELOW ARE SET: * Rep.IterationsCount - number of iterations performed * Rep.RMSError - root-mean-square error * Rep.AvgError - average absolute error * Rep.AvgRelError - average relative error (calculated for non-zero Y-values) * Rep.MaxError - maximum absolute error * Rep.R2 - coefficient of determination, R-squared. This coefficient is calculated as R2=1-RSS/TSS (in case of nonlinear regression there are multiple ways to define R2, each of them giving different results). NOTE: after you obtained coefficients, you can evaluate model with LogisticCalc4() function. NOTE: if you need better control over fitting process than provided by this function, you may use LogisticFit45X(). NOTE: step is automatically scaled according to scale of parameters being fitted before we compare its length with EpsX. Thus, this function can be used to fit data with very small or very large values without changing EpsX. EQUALITY CONSTRAINTS ON PARAMETERS 4PL/5PL solver supports equality constraints on model values at the left boundary (X=0) and right boundary (X=infinity). These constraints are completely optional and you can specify both of them, only one - or no constraints at all. Parameter CnstrLeft contains left constraint (or NAN for unconstrained fitting), and CnstrRight contains right one. 
For 4PL, left constraint ALWAYS corresponds to parameter A, and right one is ALWAYS constraint on D. That's because 4PL model is normalized in such way that B>=0. -- ALGLIB PROJECT -- Copyright 14.02.2014 by Bochkanov Sergey *************************************************************************/void alglib::logisticfit4ec( real_1d_array x, real_1d_array y, ae_int_t n, double cnstrleft, double cnstrright, double& a, double& b, double& c, double& d, lsfitreport& rep);
logisticfit5
function/************************************************************************* This function fits five-parameter logistic (5PL) model to data provided by user. 5PL model has following form: F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G) Here: * A, D - unconstrained * B - unconstrained * C>0 * G>0 IMPORTANT: unlike in 4PL fitting, output of this function is NOT constrained in such way that B is guaranteed to be positive. Furthermore, unlike 4PL, 5PL model is NOT symmetric with respect to B, so you can NOT transform model to equivalent one, with B having desired sign (>0 or <0). 5PL fitting is implemented as follows: * we perform small number of restarts from random locations which helps to solve problem of bad local extrema. Locations are only partially random - we use input data to determine good initial guess, but we include controlled amount of randomness. * we perform Levenberg-Marquardt fitting with very tight constraints on parameters B and C - it allows us to find good initial guess for the second stage without risk of running into "flat spot". Parameter G is fixed at G=1. * second Levenberg-Marquardt round is performed without excessive constraints on B and C, but with G still equal to 1. Results from the previous round are used as initial guess. * third Levenberg-Marquardt round relaxes constraints on G and tries two different models - one with B>0 and one with B<0. * after fitting is done, we compare results with best values found so far, rewrite "best solution" if needed, and move to next random location. Overall algorithm is very stable and is not prone to bad local extrema. Furthermore, it automatically scales when input data have very large or very small range. INPUT PARAMETERS: X - array[N], stores X-values. MUST include only non-negative numbers (but may include zero values). Can be unsorted. Y - array[N], values to fit. N - number of points. If N is less than length of X/Y, only leading N elements are used. 
OUTPUT PARAMETERS: A,B,C,D,G- parameters of 5PL model Rep - fitting report. This structure has many fields, but ONLY ONES LISTED BELOW ARE SET: * Rep.IterationsCount - number of iterations performed * Rep.RMSError - root-mean-square error * Rep.AvgError - average absolute error * Rep.AvgRelError - average relative error (calculated for non-zero Y-values) * Rep.MaxError - maximum absolute error * Rep.R2 - coefficient of determination, R-squared. This coefficient is calculated as R2=1-RSS/TSS (in case of nonlinear regression there are multiple ways to define R2, each of them giving different results). NOTE: after you obtained coefficients, you can evaluate model with LogisticCalc5() function. NOTE: if you need better control over fitting process than provided by this function, you may use LogisticFit45X(). NOTE: step is automatically scaled according to scale of parameters being fitted before we compare its length with EpsX. Thus, this function can be used to fit data with very small or very large values without changing EpsX. -- ALGLIB PROJECT -- Copyright 14.02.2014 by Bochkanov Sergey *************************************************************************/void alglib::logisticfit5( real_1d_array x, real_1d_array y, ae_int_t n, double& a, double& b, double& c, double& d, double& g, lsfitreport& rep);
Examples: [1]
logisticfit5ec
function/************************************************************************* This function fits five-parameter logistic (5PL) model to data provided by user, subject to optional equality constraints on parameters A and D. 5PL model has following form: F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G) Here: * A, D - with optional equality constraints * B - unconstrained * C>0 * G>0 IMPORTANT: unlike in 4PL fitting, output of this function is NOT constrained in such way that B is guaranteed to be positive. Furthermore, unlike 4PL, 5PL model is NOT symmetric with respect to B, so you can NOT transform model to equivalent one, with B having desired sign (>0 or <0). 5PL fitting is implemented as follows: * we perform small number of restarts from random locations which helps to solve problem of bad local extrema. Locations are only partially random - we use input data to determine good initial guess, but we include controlled amount of randomness. * we perform Levenberg-Marquardt fitting with very tight constraints on parameters B and C - it allows us to find good initial guess for the second stage without risk of running into "flat spot". Parameter G is fixed at G=1. * second Levenberg-Marquardt round is performed without excessive constraints on B and C, but with G still equal to 1. Results from the previous round are used as initial guess. * third Levenberg-Marquardt round relaxes constraints on G and tries two different models - one with B>0 and one with B<0. * after fitting is done, we compare results with best values found so far, rewrite "best solution" if needed, and move to next random location. Overall algorithm is very stable and is not prone to bad local extrema. Furthermore, it automatically scales when input data have very large or very small range. INPUT PARAMETERS: X - array[N], stores X-values. MUST include only non-negative numbers (but may include zero values). Can be unsorted. Y - array[N], values to fit. N - number of points. 
If N is less than length of X/Y, only leading N elements are used. CnstrLeft- optional equality constraint for model value at the left boundary (at X=0). Specify NAN (Not-a-Number) if you do not need constraint on the model value at X=0 (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. CnstrRight- optional equality constraint for model value at X=infinity. Specify NAN (Not-a-Number) if you do not need constraint on the model value (in C++ you can pass alglib::fp_nan as parameter, in C# it will be Double.NaN). See below, section "EQUALITY CONSTRAINTS" for more information about constraints. OUTPUT PARAMETERS: A,B,C,D,G- parameters of 5PL model Rep - fitting report. This structure has many fields, but ONLY ONES LISTED BELOW ARE SET: * Rep.IterationsCount - number of iterations performed * Rep.RMSError - root-mean-square error * Rep.AvgError - average absolute error * Rep.AvgRelError - average relative error (calculated for non-zero Y-values) * Rep.MaxError - maximum absolute error * Rep.R2 - coefficient of determination, R-squared. This coefficient is calculated as R2=1-RSS/TSS (in case of nonlinear regression there are multiple ways to define R2, each of them giving different results). NOTE: after you obtained coefficients, you can evaluate model with LogisticCalc5() function. NOTE: if you need better control over fitting process than provided by this function, you may use LogisticFit45X(). NOTE: step is automatically scaled according to scale of parameters being fitted before we compare its length with EpsX. Thus, this function can be used to fit data with very small or very large values without changing EpsX. EQUALITY CONSTRAINTS ON PARAMETERS 5PL solver supports equality constraints on model values at the left boundary (X=0) and right boundary (X=infinity). 
These constraints are completely optional and you can specify both of them, only one - or no constraints at all. Parameter CnstrLeft contains left constraint (or NAN for unconstrained fitting), and CnstrRight contains right one. Unlike 4PL one, 5PL model is NOT symmetric with respect to change in sign of B. Thus, negative B's are possible, and left constraint may constrain parameter A (for positive B's) - or parameter D (for negative B's). The meaning of the right constraint changes similarly. You do not have to decide what parameter to constrain - algorithm will automatically determine correct parameters as fitting progresses. However, the issue highlighted above is important when you interpret fitting results. -- ALGLIB PROJECT -- Copyright 14.02.2014 by Bochkanov Sergey *************************************************************************/void alglib::logisticfit5ec( real_1d_array x, real_1d_array y, ae_int_t n, double cnstrleft, double cnstrright, double& a, double& b, double& c, double& d, double& g, lsfitreport& rep);
lsfitcreatef
function/************************************************************************* Nonlinear least squares fitting using function values only. Combination of numerical differentiation and secant updates is used to obtain function Jacobian. Nonlinear task min(F(c)) is solved, where F(c) = (f(c,x[0])-y[0])^2 + ... + (f(c,x[n-1])-y[n-1])^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]). INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted DiffStep- numerical differentiation step; should not be very small or large; large = loss of accuracy small = growth of round-off errors OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 18.10.2008 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatef( real_2d_array x, real_1d_array y, real_1d_array c, double diffstep, lsfitstate& state); void alglib::lsfitcreatef( real_2d_array x, real_1d_array y, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, double diffstep, lsfitstate& state);
lsfitcreatefg
function/************************************************************************* Nonlinear least squares fitting using gradient only, without individual weights. Nonlinear task min(F(c)) is solved, where F(c) = ((f(c,x[0])-y[0]))^2 + ... + ((f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]) and its gradient. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted CheapFG - boolean flag, which is: * True if both function and gradient calculation complexity are less than O(M^2). An improved algorithm can be used which corresponds to FGJ scheme from MINLM unit. * False otherwise. Standard Jacobian-based Levenberg-Marquardt algorithm will be used (FJ scheme). OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatefg( real_2d_array x, real_1d_array y, real_1d_array c, bool cheapfg, lsfitstate& state); void alglib::lsfitcreatefg( real_2d_array x, real_1d_array y, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, bool cheapfg, lsfitstate& state);
Examples: [1]
lsfitcreatefgh
function/************************************************************************* Nonlinear least squares fitting using gradient/Hessian, without individual weights. Nonlinear task min(F(c)) is solved, where F(c) = ((f(c,x[0])-y[0]))^2 + ... + ((f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses f(c,x[i]), its gradient and its Hessian. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatefgh( real_2d_array x, real_1d_array y, real_1d_array c, lsfitstate& state); void alglib::lsfitcreatefgh( real_2d_array x, real_1d_array y, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, lsfitstate& state);
Examples: [1]
lsfitcreatewf
function/************************************************************************* Weighted nonlinear least squares fitting using function values only. Combination of numerical differentiation and secant updates is used to obtain function Jacobian. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]). INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted DiffStep- numerical differentiation step; should not be very small or large; large = loss of accuracy small = growth of round-off errors OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 18.10.2008 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatewf( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, double diffstep, lsfitstate& state); void alglib::lsfitcreatewf( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, double diffstep, lsfitstate& state);
lsfitcreatewfg
function/************************************************************************* Weighted nonlinear least squares fitting using gradient only. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]) and its gradient. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted CheapFG - boolean flag, which is: * True if both function and gradient calculation complexity are less than O(M^2). An improved algorithm can be used which corresponds to FGJ scheme from MINLM unit. * False otherwise. Standard Jacobian-based Levenberg-Marquardt algorithm will be used (FJ scheme). OUTPUT PARAMETERS: State - structure which stores algorithm state See also: LSFitResults LSFitCreateFG (fitting without weights) LSFitCreateWFGH (fitting using Hessian) LSFitCreateFGH (fitting using Hessian, without weights) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatewfg( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, bool cheapfg, lsfitstate& state); void alglib::lsfitcreatewfg( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, bool cheapfg, lsfitstate& state);
Examples: [1]
lsfitcreatewfgh
function/************************************************************************* Weighted nonlinear least squares fitting using gradient/Hessian. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses f(c,x[i]), its gradient and its Hessian. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitcreatewfgh( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, lsfitstate& state); void alglib::lsfitcreatewfgh( real_2d_array x, real_1d_array y, real_1d_array w, real_1d_array c, ae_int_t n, ae_int_t m, ae_int_t k, lsfitstate& state);
Examples: [1]
lsfitfit
function/************************************************************************* This family of functions is used to launch iterations of the nonlinear fitter. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x hess - callback which calculates function (or merit function) value func, gradient grad and Hessian hess at given point x rep - optional callback which is called after each iteration, can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep, can be NULL NOTES: 1. this algorithm is somewhat unusual because it works with parameterized function f(C,X), where X is a function argument (we have many points which are characterized by different argument values), and C is a parameter to fit. For example, if we want to do linear fit by f(c0,c1,x) = c0*x+c1, then x will be argument, and {c0,c1} will be parameters. It is important to understand that this algorithm finds minimum in the space of function PARAMETERS (not arguments), so it needs derivatives of f() with respect to C, not X. In the example above it will need f=c0*x+c1 and {df/dc0,df/dc1} = {x,1} instead of {df/dx} = {c0}. 2. Callback functions accept C as the first parameter, and X as the second 3. If state was created with LSFitCreateFG(), algorithm needs just function and its gradient, but if state was created with LSFitCreateFGH(), algorithm will need function, gradient and Hessian. As follows from the above, there are several versions of this function, which accept different sets of callbacks. This flexibility opens the way to subtle errors - you may create state with LSFitCreateFGH() (optimization using Hessian), but call a version which does not accept Hessian. So when the algorithm requests the Hessian, there will be no callback to call. In this case an exception will be thrown. 
Be careful to avoid such errors because there is no way to find them at compile time - you can see them at runtime only. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void lsfitfit(lsfitstate &state, void (*func)(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &c, double func, void *ptr) = NULL, void *ptr = NULL); void lsfitfit(lsfitstate &state, void (*func)(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr), void (*grad)(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &c, double func, void *ptr) = NULL, void *ptr = NULL); void lsfitfit(lsfitstate &state, void (*func)(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr), void (*grad)(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*hess)(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, real_2d_array &hess, void *ptr), void (*rep)(const real_1d_array &c, double func, void *ptr) = NULL, void *ptr = NULL);
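NOTE 1 above is the most common source of confusion, so here is a minimal numeric sketch of it in plain C++ (not ALGLIB code): for the linear model f(c0,c1,x) = c0*x + c1 the fitter needs derivatives with respect to the PARAMETERS, {df/dc0, df/dc1} = {x, 1}, not the derivative df/dx = c0 with respect to the argument.

```cpp
#include <cassert>
#include <cmath>

// Linear model from NOTE 1: f(c0,c1,x) = c0*x + c1.
double f(double c0, double c1, double x) { return c0 * x + c1; }

// Derivatives with respect to the PARAMETERS c0 and c1 - this is what the
// grad callback must return; note that c0 and c1 themselves do not appear.
double df_dc0(double x) { return x; }    // df/dc0 = x
double df_dc1()         { return 1.0; }  // df/dc1 = 1
```

A finite-difference check in the parameter directions confirms the formulas, and makes the distinction between perturbing C and perturbing X concrete.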
lsfitlinear
function/************************************************************************* Linear least squares fitting. QR decomposition is used to reduce task to MxM, then triangular solver or SVD-based solver is used depending on condition number of the system. This maximizes speed while retaining decent accuracy. IMPORTANT: if you want to perform polynomial fitting, it may be more convenient to use PolynomialFit() function. This function gives best results on polynomial problems and solves numerical stability issues which arise when you fit high-degree polynomials to your data. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. FMatrix[I, J] - value of J-th basis function in I-th point. N - number of points used. N>=1. M - number of basis functions, M>=1. 
OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * Rep.TaskRCond reciprocal of condition number * R2 non-adjusted coefficient of determination (non-weighted) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED ERRORS IN PARAMETERS This solver also calculates different kinds of errors in parameters and fills corresponding fields of report: * Rep.CovPar covariance matrix for parameters, array[K,K]. * Rep.ErrPar errors in parameters, array[K], errpar = sqrt(diag(CovPar)) * Rep.ErrCurve vector of fit errors - standard deviations of empirical best-fit curve from "ideal" best-fit curve built with infinite number of samples, array[N]. errcurve = sqrt(diag(F*CovPar*F')), where F is functions matrix. * Rep.Noise vector of per-point estimates of noise, array[N] NOTE: noise in the data is estimated as follows: * for fitting without user-supplied weights all points are assumed to have same level of noise, which is estimated from the data * for fitting with user-supplied weights we assume that noise level in I-th point is inversely proportional to I-th weight. Coefficient of proportionality is estimated from the data. NOTE: we apply small amount of regularization when we invert squared Jacobian and calculate covariance matrix. It guarantees that algorithm won't divide by zero during inversion, but skews error estimates a bit (fractional error is about 10^-9). However, we believe that this difference is insignificant for all practical purposes except for the situation when you want to compare ALGLIB results with "reference" implementation up to the last significant digit. 
NOTE: covariance matrix is estimated using correction for degrees of freedom (covariances are divided by N-M instead of dividing by N). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitlinear( real_1d_array y, real_2d_array fmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::lsfitlinear( real_1d_array y, real_2d_array fmatrix, ae_int_t n, ae_int_t m, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinear( real_1d_array y, real_2d_array fmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinear( real_1d_array y, real_2d_array fmatrix, ae_int_t n, ae_int_t m, ae_int_t& info, real_1d_array& c, lsfitreport& rep);
Examples: [1]
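The problem lsfitlinear solves - find c minimizing |FMatrix*c - y| - can be illustrated for the tiny M=2 case in self-contained C++. This sketch solves the 2x2 normal equations by Cramer's rule purely for illustration; ALGLIB itself uses QR and falls back to SVD for ill-conditioned systems, which is numerically safer:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustration only: least squares fit with M=2 basis functions via the
// 2x2 normal equations (F'F)c = F'y, solved by Cramer's rule.
// F has one row per point: F[i] = {f0(x_i), f1(x_i)}.
std::vector<double> fit2(const std::vector<std::vector<double>>& F,
                         const std::vector<double>& y) {
    double a00 = 0, a01 = 0, a11 = 0, b0 = 0, b1 = 0;
    for (std::size_t i = 0; i < y.size(); i++) {
        a00 += F[i][0] * F[i][0];   // accumulate F'F
        a01 += F[i][0] * F[i][1];
        a11 += F[i][1] * F[i][1];
        b0  += F[i][0] * y[i];      // accumulate F'y
        b1  += F[i][1] * y[i];
    }
    double det = a00 * a11 - a01 * a01;
    return { (b0 * a11 - b1 * a01) / det, (a00 * b1 - a01 * b0) / det };
}
```

With the basis {1, x} and data lying exactly on y = 3 + 2x, the fit recovers c = {3, 2}, matching what lsfitlinear would report in C.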
lsfitlinearc
function/************************************************************************* Constrained linear least squares fitting. This is variation of LSFitLinear(), which searches for min|A*x-b| given that K additional constraints C*x=bc are satisfied. It reduces original task to modified one: min|B*y-d| WITHOUT constraints, then LSFitLinear() is called. IMPORTANT: if you want to perform polynomial fitting, it may be more convenient to use PolynomialFit() function. This function gives best results on polynomial problems and solves numerical stability issues which arise when you fit high-degree polynomials to your data. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. FMatrix[I,J] - value of J-th basis function in I-th point. CMatrix - a table of constraints, array[0..K-1,0..M]. I-th row of CMatrix corresponds to I-th linear constraint: CMatrix[I,0]*C[0] + ... 
+ CMatrix[I,M-1]*C[M-1] = CMatrix[I,M] N - number of points used. N>=1. M - number of basis functions, M>=1. K - number of constraints, 0 <= K < M K=0 corresponds to absence of constraints. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -3 either too many constraints (M or more), degenerate constraints (some constraints are repeated twice) or inconsistent constraints were specified. * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * R2 non-adjusted coefficient of determination (non-weighted) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. ERRORS IN PARAMETERS This solver also calculates different kinds of errors in parameters and fills corresponding fields of report: * Rep.CovPar covariance matrix for parameters, array[K,K]. * Rep.ErrPar errors in parameters, array[K], errpar = sqrt(diag(CovPar)) * Rep.ErrCurve vector of fit errors - standard deviations of empirical best-fit curve from "ideal" best-fit curve built with infinite number of samples, array[N]. errcurve = sqrt(diag(F*CovPar*F')), where F is functions matrix. * Rep.Noise vector of per-point estimates of noise, array[N] IMPORTANT: errors in parameters are calculated without taking into account boundary/linear constraints! Presence of constraints changes distribution of errors, but there is no easy way to account for constraints when you calculate covariance matrix. 
NOTE: noise in the data is estimated as follows: * for fitting without user-supplied weights all points are assumed to have same level of noise, which is estimated from the data * for fitting with user-supplied weights we assume that noise level in I-th point is inversely proportional to I-th weight. Coefficient of proportionality is estimated from the data. NOTE: we apply small amount of regularization when we invert squared Jacobian and calculate covariance matrix. It guarantees that algorithm won't divide by zero during inversion, but skews error estimates a bit (fractional error is about 10^-9). However, we believe that this difference is insignificant for all practical purposes except for the situation when you want to compare ALGLIB results with "reference" implementation up to the last significant digit. NOTE: covariance matrix is estimated using correction for degrees of freedom (covariances are divided by N-M instead of dividing by N). -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitlinearc( real_1d_array y, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::lsfitlinearc( real_1d_array y, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t n, ae_int_t m, ae_int_t k, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearc( real_1d_array y, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearc( real_1d_array y, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t n, ae_int_t m, ae_int_t k, ae_int_t& info, real_1d_array& c, lsfitreport& rep);
Examples: [1]
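The reduction described above - rewriting the constrained task as an unconstrained one before solving - can be shown for the smallest interesting case in plain C++ (an illustration only; ALGLIB's general reduction handles arbitrary K constraints). Here a single equality constraint c0 + c1 = 1 is eliminated by substituting c1 = 1 - c0, turning a constrained 2-parameter fit into an unconstrained 1-parameter one:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fit y ~ c0*f0 + c1*f1 subject to c0 + c1 = 1.
// Substituting c1 = 1 - c0 gives the unconstrained 1-variable problem
// min sum_i ((f0_i - f1_i)*c0 + f1_i - y_i)^2, whose closed-form solution
// is returned below. (c1 is then recovered as 1 - c0.)
double fit_constrained_c0(const std::vector<double>& f0,
                          const std::vector<double>& f1,
                          const std::vector<double>& y) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < y.size(); i++) {
        double d = f0[i] - f1[i];          // basis of the reduced problem
        num += (y[i] - f1[i]) * d;
        den += d * d;
    }
    return num / den;
}
```

This mirrors the "reduce, then solve without constraints" structure of the real routine: the constrained minimizer is exact in the eliminated variable, so the constraint is satisfied exactly, just as documented.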
lsfitlinearw
function/************************************************************************* Weighted linear least squares fitting. QR decomposition is used to reduce task to MxM, then triangular solver or SVD-based solver is used depending on condition number of the system. This maximizes speed while retaining decent accuracy. IMPORTANT: if you want to perform polynomial fitting, it may be more convenient to use PolynomialFit() function. This function gives best results on polynomial problems and solves numerical stability issues which arise when you fit high-degree polynomials to your data. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. FMatrix[I, J] - value of J-th basis function in I-th point. N - number of points used. N>=1. 
M - number of basis functions, M>=1. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -1 incorrect N/M were specified * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * Rep.TaskRCond reciprocal of condition number * R2 non-adjusted coefficient of determination (non-weighted) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED ERRORS IN PARAMETERS This solver also calculates different kinds of errors in parameters and fills corresponding fields of report: * Rep.CovPar covariance matrix for parameters, array[K,K]. * Rep.ErrPar errors in parameters, array[K], errpar = sqrt(diag(CovPar)) * Rep.ErrCurve vector of fit errors - standard deviations of empirical best-fit curve from "ideal" best-fit curve built with infinite number of samples, array[N]. errcurve = sqrt(diag(F*CovPar*F')), where F is functions matrix. * Rep.Noise vector of per-point estimates of noise, array[N] NOTE: noise in the data is estimated as follows: * for fitting without user-supplied weights all points are assumed to have same level of noise, which is estimated from the data * for fitting with user-supplied weights we assume that noise level in I-th point is inversely proportional to I-th weight. Coefficient of proportionality is estimated from the data. NOTE: we apply small amount of regularization when we invert squared Jacobian and calculate covariance matrix. It guarantees that algorithm won't divide by zero during inversion, but skews error estimates a bit (fractional error is about 10^-9). However, we believe that this difference is insignificant for all practical purposes except for the situation when you want to compare ALGLIB results with "reference" implementation up to the last significant digit. 
NOTE: covariance matrix is estimated using correction for degrees of freedom (covariances are divided by N-M instead of dividing by N). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitlinearw( real_1d_array y, real_1d_array w, real_2d_array fmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::lsfitlinearw( real_1d_array y, real_1d_array w, real_2d_array fmatrix, ae_int_t n, ae_int_t m, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearw( real_1d_array y, real_1d_array w, real_2d_array fmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearw( real_1d_array y, real_1d_array w, real_2d_array fmatrix, ae_int_t n, ae_int_t m, ae_int_t& info, real_1d_array& c, lsfitreport& rep);
Examples: [1]
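How the weights enter the problem - each residual is multiplied by its weight before squaring - has a simple closed form when there is a single basis function. The following self-contained C++ sketch illustrates just that (it is not how lsfitlinearw is implemented; the general M-basis case goes through QR/SVD):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Weighted fit with one basis function f: minimizing
// sum_i (w_i*(c*f_i - y_i))^2 over the scalar c gives
// c = sum(w_i^2 * f_i * y_i) / sum(w_i^2 * f_i^2).
// Note that the weights enter squared, as stated in the W parameter
// description above.
double weighted_fit1(const std::vector<double>& f,
                     const std::vector<double>& y,
                     const std::vector<double>& w) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < f.size(); i++) {
        num += w[i] * w[i] * f[i] * y[i];
        den += w[i] * w[i] * f[i] * f[i];
    }
    return num / den;
}
```

With equal weights the result is the ordinary least squares answer; raising one point's weight pulls the fit toward that point, which is the whole purpose of the W array.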
lsfitlinearwc
function/************************************************************************* Weighted constrained linear least squares fitting. This is variation of LSFitLinearW(), which searches for min|A*x-b| given that K additional constraints C*x=bc are satisfied. It reduces original task to modified one: min|B*y-d| WITHOUT constraints, then LSFitLinearW() is called. IMPORTANT: if you want to perform polynomial fitting, it may be more convenient to use PolynomialFit() function. This function gives best results on polynomial problems and solves numerical stability issues which arise when you fit high-degree polynomials to your data. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. FMatrix - a table of basis function values, array[0..N-1, 0..M-1]. 
FMatrix[I,J] - value of J-th basis function in I-th point. CMatrix - a table of constraints, array[0..K-1,0..M]. I-th row of CMatrix corresponds to I-th linear constraint: CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M] N - number of points used. N>=1. M - number of basis functions, M>=1. K - number of constraints, 0 <= K < M K=0 corresponds to absence of constraints. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -3 either too many constraints (M or more), degenerate constraints (some constraints are repeated twice) or inconsistent constraints were specified. * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * R2 non-adjusted coefficient of determination (non-weighted) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. ERRORS IN PARAMETERS This solver also calculates different kinds of errors in parameters and fills corresponding fields of report: * Rep.CovPar covariance matrix for parameters, array[K,K]. * Rep.ErrPar errors in parameters, array[K], errpar = sqrt(diag(CovPar)) * Rep.ErrCurve vector of fit errors - standard deviations of empirical best-fit curve from "ideal" best-fit curve built with infinite number of samples, array[N]. errcurve = sqrt(diag(F*CovPar*F')), where F is functions matrix. * Rep.Noise vector of per-point estimates of noise, array[N] IMPORTANT: errors in parameters are calculated without taking into account boundary/linear constraints! Presence of constraints changes distribution of errors, but there is no easy way to account for constraints when you calculate covariance matrix. 
NOTE: noise in the data is estimated as follows: * for fitting without user-supplied weights all points are assumed to have same level of noise, which is estimated from the data * for fitting with user-supplied weights we assume that noise level in I-th point is inversely proportional to I-th weight. Coefficient of proportionality is estimated from the data. NOTE: we apply small amount of regularization when we invert squared Jacobian and calculate covariance matrix. It guarantees that algorithm won't divide by zero during inversion, but skews error estimates a bit (fractional error is about 10^-9). However, we believe that this difference is insignificant for all practical purposes except for the situation when you want to compare ALGLIB results with "reference" implementation up to the last significant digit. NOTE: covariance matrix is estimated using correction for degrees of freedom (covariances are divided by N-M instead of dividing by N). -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitlinearwc( real_1d_array y, real_1d_array w, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::lsfitlinearwc( real_1d_array y, real_1d_array w, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t n, ae_int_t m, ae_int_t k, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearwc( real_1d_array y, real_1d_array w, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t& info, real_1d_array& c, lsfitreport& rep); void alglib::smp_lsfitlinearwc( real_1d_array y, real_1d_array w, real_2d_array fmatrix, real_2d_array cmatrix, ae_int_t n, ae_int_t m, ae_int_t k, ae_int_t& info, real_1d_array& c, lsfitreport& rep);
Examples: [1]
lsfitresults
function/************************************************************************* Nonlinear least squares fitting results. Called after return from LSFitFit(). INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: Info - completion code: * -8 optimizer detected NAN/INF in the target function and/or gradient * -7 gradient verification failed. See LSFitSetGradientCheck() for more information. * -3 inconsistent constraints * 2 relative step is no more than EpsX. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible C - array[0..K-1], solution Rep - optimization report. On success following fields are set: * R2 non-adjusted coefficient of determination (non-weighted) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED * WRMSError weighted rms error on the (X,Y). ERRORS IN PARAMETERS This solver also calculates different kinds of errors in parameters and fills corresponding fields of report: * Rep.CovPar covariance matrix for parameters, array[K,K]. * Rep.ErrPar errors in parameters, array[K], errpar = sqrt(diag(CovPar)) * Rep.ErrCurve vector of fit errors - standard deviations of empirical best-fit curve from "ideal" best-fit curve built with infinite number of samples, array[N]. errcurve = sqrt(diag(J*CovPar*J')), where J is Jacobian matrix. * Rep.Noise vector of per-point estimates of noise, array[N] IMPORTANT: errors in parameters are calculated without taking into account boundary/linear constraints! Presence of constraints changes distribution of errors, but there is no easy way to account for constraints when you calculate covariance matrix. 
NOTE: noise in the data is estimated as follows: * for fitting without user-supplied weights all points are assumed to have same level of noise, which is estimated from the data * for fitting with user-supplied weights we assume that noise level in I-th point is inversely proportional to I-th weight. Coefficient of proportionality is estimated from the data. NOTE: we apply small amount of regularization when we invert squared Jacobian and calculate covariance matrix. It guarantees that algorithm won't divide by zero during inversion, but skews error estimates a bit (fractional error is about 10^-9). However, we believe that this difference is insignificant for all practical purposes except for the situation when you want to compare ALGLIB results with "reference" implementation up to the last significant digit. NOTE: covariance matrix is estimated using correction for degrees of freedom (covariances are divided by N-M instead of dividing by N). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitresults( lsfitstate state, ae_int_t& info, real_1d_array& c, lsfitreport& rep);
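The degrees-of-freedom correction mentioned in the last NOTE is easy to state concretely. A minimal sketch (not ALGLIB's internal code) of the unweighted noise estimate underlying Rep.Noise and the covariance scaling, where the residual variance is divided by N-M rather than N:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Unweighted case: estimate the per-point noise level sigma from the fit
// residuals, using the degrees-of-freedom correction sigma^2 = RSS/(N-M),
// where N is the number of points and M the number of fitted parameters.
// Errors in parameters are then square roots of the covariance diagonal.
double noise_sigma(const std::vector<double>& residuals, std::size_t m) {
    double rss = 0.0;
    for (double r : residuals) rss += r * r;   // residual sum of squares
    return std::sqrt(rss / (residuals.size() - m));
}
```

Dividing by N-M instead of N makes the estimator unbiased: M parameters were spent fitting the data, so only N-M residual degrees of freedom remain.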
lsfitsetbc
function/************************************************************************* This function sets boundary constraints for underlying optimizer Boundary constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[K]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow solver to use better algorithm). BndU - upper bounds, array[K]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow solver to use better algorithm). NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: unlike other constrained optimization algorithms, this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetbc( lsfitstate state, real_1d_array bndl, real_1d_array bndu);
lsfitsetcond
function/************************************************************************* Stopping conditions for nonlinear least squares fitting. INPUT PARAMETERS: State - structure which stores algorithm state EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by LSFitSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Only Levenberg-Marquardt iterations are counted (L-BFGS/CG iterations are NOT counted because their cost is very low compared to that of LM). NOTE Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (according to the scheme used by MINLM unit). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetcond(lsfitstate state, double epsx, ae_int_t maxits);
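The EpsX condition above can be written out directly. A minimal C++ sketch of the test (an illustration of the formula, not ALGLIB's internal code): the step dx is divided componentwise by the scales s before its Euclidean norm is compared with EpsX, so a large step in a large-scale variable can still count as "small":

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// EpsX stopping test from lsfitsetcond: stop when the scaled step
// v[i] = dx[i]/s[i] satisfies |v| <= epsx in the Euclidean norm.
bool step_small_enough(const std::vector<double>& dx,
                       const std::vector<double>& s, double epsx) {
    double nrm2 = 0.0;
    for (std::size_t i = 0; i < dx.size(); i++) {
        double v = dx[i] / s[i];   // scaled step component
        nrm2 += v * v;
    }
    return std::sqrt(nrm2) <= epsx;
}
```

This is why LSFitSetScale() matters: with s = {1, 1000} a step of 2e-4 in the second variable passes EpsX = 1e-6, while with unit scales the same step does not.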
lsfitsetgradientcheck
function/************************************************************************* This subroutine turns on verification of the user-supplied analytic gradient: * user calls this subroutine before fitting begins * LSFitFit() is called * prior to actual fitting, for each point in data set X_i and each component of parameters being fitted C_j algorithm performs following steps: * two trial steps are made to C_j-TestStep*S[j] and C_j+TestStep*S[j], where C_j is j-th parameter and S[j] is a scale of j-th parameter * if needed, steps are bounded with respect to constraints on C[] * F(X_i|C) is evaluated at these trial points * we perform one more evaluation in the middle point of the interval * we build cubic model using function values and derivatives at trial points and we compare its prediction with actual value in the middle point * in case difference between prediction and actual value is higher than some predetermined threshold, algorithm stops with completion code -7; Rep.VarIdx is set to index of the parameter with incorrect derivative. * after verification is over, algorithm proceeds to the actual optimization. NOTE 1: verification needs N*K (points count * parameters count) gradient evaluations. It is very costly and you should use it only for low dimensional problems, when you want to be sure that you've correctly calculated analytic derivatives. You should not use it in the production code (unless you want to check derivatives provided by some third party). NOTE 2: you should carefully choose TestStep. Value which is too large (so large that function behaviour is significantly non-cubic) will lead to false alarms. You may use different step for different parameters by means of setting scale with LSFitSetScale(). NOTE 3: this function may lead to false positives. 
In case it reports that I-th derivative was calculated incorrectly, you may decrease test step and try one more time - maybe your function changes too sharply and your step is too large for such a rapidly changing function. NOTE 4: this function works only for optimizers created with LSFitCreateWFG() or LSFitCreateFG() constructors. INPUT PARAMETERS: State - structure used to store algorithm state TestStep - verification step: * TestStep=0 turns verification off * TestStep>0 activates verification -- ALGLIB -- Copyright 15.06.2012 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetgradientcheck(lsfitstate state, double teststep);
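The cubic-model test described above can be sketched in one dimension. This is a simplified illustration of the idea (not ALGLIB's actual verification code): a cubic Hermite model is built from function values and derivatives at c-h and c+h, and its midpoint prediction is compared with the true function value. If the supplied derivative is correct, any cubic is reproduced exactly; a wrong derivative skews the prediction:

```cpp
#include <cassert>
#include <cmath>

// One-dimensional sketch of the gradient check: build the cubic Hermite
// interpolant from (value, derivative) pairs at a = c-h and b = c+h, and
// compare its prediction at the midpoint c with the actual f(c).
// The Hermite midpoint value is (f(a)+f(b))/2 + (b-a)*(f'(a)-f'(b))/8.
bool derivative_looks_ok(double (*f)(double), double (*df)(double),
                         double c, double h, double tol) {
    double a = c - h, b = c + h, L = b - a;
    double pred = 0.5 * (f(a) + f(b)) + L * (df(a) - df(b)) / 8.0;
    return std::fabs(pred - f(c)) <= tol;
}
```

For f(x) = x^3 the check passes with the correct derivative 3x^2 and fails with a wrong one, which is exactly the behaviour that produces completion code -7 in the full algorithm.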
lsfitsetlc
function/************************************************************************* This function sets linear constraints for underlying optimizer Linear constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetLC() call. INPUT PARAMETERS: State - structure stores algorithm state C - linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below): * first N elements correspond to coefficients, * last element corresponds to the right part. All elements of C (including right part) must be finite. CT - type of constraints, array[K]: * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1] * if CT[i]=0, then I-th constraint is C[i,*]*x = C[i,n+1] * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1] K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT IMPORTANT: if you have linear constraints, it is strongly recommended to set scale of variables with lsfitsetscale(). QP solver which is used to calculate linearly constrained steps heavily relies on good scaling of input problems. NOTE: linear (non-box) constraints are satisfied only approximately - there always exists some violation due to numerical errors and algorithmic limitations. NOTE: general linear constraints add significant overhead to solution process. Although solver performs roughly the same number of iterations (when compared with similar box-only constrained problem), each iteration now involves solution of linearly constrained QP subproblem, which requires ~3-5 times more Cholesky decompositions. Thus, if you can reformulate your problem in such a way that it has only box constraints, it may be beneficial to do so. 
-- ALGLIB -- Copyright 29.04.2017 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetlc( lsfitstate state, real_2d_array c, integer_1d_array ct); void alglib::lsfitsetlc( lsfitstate state, real_2d_array c, integer_1d_array ct, ae_int_t k);
lsfitsetscale
function/************************************************************************* This function sets scaling coefficients for underlying optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But LM optimizer is unique in that it uses scaling matrix both in the stopping condition tests and as Marquardt damping factor. Proper scaling is very important for the algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetscale(lsfitstate state, real_1d_array s);
Examples: [1]
lsfitsetstpmax
function/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes steps which are too large and lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to possible overflow) without actually calculating function value at the point x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetstpmax(lsfitstate state, double stpmax);
lsfitsetxrep
function/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not When reports are needed, State.C (current parameters) and State.F (current value of fitting function) are reported. -- ALGLIB -- Copyright 15.08.2010 by Bochkanov Sergey *************************************************************************/void alglib::lsfitsetxrep(lsfitstate state, bool needxrep);
lstfitpiecewiselinearrdp
function/************************************************************************* This subroutine fits piecewise linear curve to points with Ramer-Douglas-Peucker algorithm, which stops after achieving desired precision. IMPORTANT: * it does NOT perform least-squares fitting; it builds curve, but this curve does not minimize any least squares metric. See description of RDP algorithm (say, in Wikipedia) for more details on WHAT is performed. * this function does NOT work with parametric curves (i.e. curves which can be represented as {X(t),Y(t)}). It works with curves which can be represented as Y(X). Thus, it is impossible to model figures like circles with this function. If you want to work with parametric curves, you should use ParametricRDPFixed() function provided by "Parametric" subpackage of "Interpolation" package. INPUT PARAMETERS: X - array of X-coordinates: * at least N elements * can be unordered (points are automatically sorted) * this function may accept non-distinct X (see below for more information on handling of such inputs) Y - array of Y-coordinates: * at least N elements N - number of elements in X/Y Eps - positive number, desired precision. OUTPUT PARAMETERS: X2 - X-values of corner points for piecewise approximation, has length NSections+1 or zero (for NSections=0). Y2 - Y-values of corner points, has length NSections+1 or zero (for NSections=0). NSections- number of sections found by algorithm, NSections can be zero for degenerate datasets (N<=1 or all X[] are non-distinct). NOTE: X2/Y2 are ordered arrays, i.e. (X2[0],Y2[0]) is the first point of curve, (X2[NSections],Y2[NSections]) is the last point. -- ALGLIB -- Copyright 02.10.2014 by Bochkanov Sergey *************************************************************************/void alglib::lstfitpiecewiselinearrdp( real_1d_array x, real_1d_array y, ae_int_t n, double eps, real_1d_array& x2, real_1d_array& y2, ae_int_t& nsections);
lstfitpiecewiselinearrdpfixed
function/************************************************************************* This subroutine fits piecewise linear curve to points with Ramer-Douglas-Peucker algorithm, which stops after generating specified number of linear sections. IMPORTANT: * it does NOT perform least-squares fitting; it builds curve, but this curve does not minimize any least squares metric. See description of RDP algorithm (say, in Wikipedia) for more details on WHAT is performed. * this function does NOT work with parametric curves (i.e. curves which can be represented as {X(t),Y(t)}). It works with curves which can be represented as Y(X). Thus, it is impossible to model figures like circles with this function. If you want to work with parametric curves, you should use ParametricRDPFixed() function provided by "Parametric" subpackage of "Interpolation" package. INPUT PARAMETERS: X - array of X-coordinates: * at least N elements * can be unordered (points are automatically sorted) * this function may accept non-distinct X (see below for more information on handling of such inputs) Y - array of Y-coordinates: * at least N elements N - number of elements in X/Y M - desired number of sections: * at most M sections are generated by this function * less than M sections can be generated if we have N<M (or some X are non-distinct). OUTPUT PARAMETERS: X2 - X-values of corner points for piecewise approximation, has length NSections+1 or zero (for NSections=0). Y2 - Y-values of corner points, has length NSections+1 or zero (for NSections=0). NSections- number of sections found by algorithm, NSections<=M, NSections can be zero for degenerate datasets (N<=1 or all X[] are non-distinct). NOTE: X2/Y2 are ordered arrays, i.e. (X2[0],Y2[0]) is the first point of curve, (X2[NSections],Y2[NSections]) is the last point. 
-- ALGLIB -- Copyright 02.10.2014 by Bochkanov Sergey *************************************************************************/void alglib::lstfitpiecewiselinearrdpfixed( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, real_1d_array& x2, real_1d_array& y2, ae_int_t& nsections);
polynomialfit
function/************************************************************************* Fitting by polynomials in barycentric form. This function provides a simple interface for unconstrained unweighted fitting. See PolynomialFitWC() if you need constrained fitting. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO: PolynomialFitWC() COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0 * if given, only leading N elements of X/Y are used * if not given, automatically determined from sizes of X/Y M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD P - interpolant in barycentric form. Rep - report, same format as in LSFitLinearW() subroutine. 
Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from POLINT subpackage. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::polynomialfit( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::polynomialfit( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::smp_polynomialfit( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::smp_polynomialfit( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep);
Examples: [1]
polynomialfitwc
function/************************************************************************* Weighted fitting by polynomials in barycentric form, with constraints on function values or first derivatives. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO: PolynomialFit() COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. * if given, only leading N elements of X/Y/W are used * if not given, automatically determined from sizes of X/Y/W XC - points where polynomial values/derivatives are constrained, array[0..K-1]. 
YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that P(XC[i])=YC[i] * DC[i]=1 means that P'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints P - interpolant in barycentric form. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from POLINT subpackage. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * even simple constraints can be inconsistent, see Wikipedia article on this subject: http://en.wikipedia.org/wiki/Birkhoff_interpolation * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in one special case, however, we can guarantee consistency. This case is: M>1 and constraints on the function values (NOT DERIVATIVES) Our final recommendation is to use constraints WHEN AND ONLY when you can't solve your task without them. 
Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/void alglib::polynomialfitwc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::polynomialfitwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::smp_polynomialfitwc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep); void alglib::smp_polynomialfitwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, barycentricinterpolant& p, polynomialfitreport& rep);
Examples: [1]
spline1dfitcubic
function/************************************************************************* Least squares fitting by cubic spline. This subroutine is "lightweight" alternative for more complex and feature- rich Spline1DFitCubicWC(). See Spline1DFitCubicWC() for more information about subroutine parameters (we don't duplicate it here because of length) COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::spline1dfitcubic( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::spline1dfitcubic( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfitcubic( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfitcubic( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep);
spline1dfitcubicwc
function/************************************************************************* Weighted fitting by cubic spline, with constraints on function values or derivatives. Equidistant grid with M-2 nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are cubic splines with continuous second derivatives and non-fixed first derivatives at interval ends. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitHermiteWC() - fitting by Hermite splines (more flexible, less smooth) Spline1DFitCubic() - "lightweight" fitting by cubic splines, without individual weights and constraints COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. 
W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions ( = number_of_nodes+2), M>=4. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints S - spline interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. 
Splines are piecewise cubic functions, and it is easy to create an example where a large number of constraints concentrated in a small area will result in inconsistency, simply because the spline is not flexible enough to satisfy all of them, while the same constraints spread across [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function values AND/OR its derivatives at the interval boundaries. * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::spline1dfitcubicwc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::spline1dfitcubicwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfitcubicwc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfitcubicwc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep);
spline1dfithermite
function/************************************************************************* Least squares fitting by Hermite spline. This subroutine is "lightweight" alternative for more complex and feature- rich Spline1DFitHermiteWC(). See Spline1DFitHermiteWC() description for more information about subroutine parameters (we don't duplicate it here because of length). COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::spline1dfithermite( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::spline1dfithermite( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfithermite( real_1d_array x, real_1d_array y, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfithermite( real_1d_array x, real_1d_array y, ae_int_t n, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep);
spline1dfithermitewc
function/************************************************************************* Weighted fitting by Hermite spline, with constraints on function values or first derivatives. Equidistant grid with M nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are Hermite splines. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitCubicWC() - fitting by Cubic splines (less flexible, more smooth) Spline1DFitHermite() - "lightweight" Hermite fitting, without individual weights and constraints COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multithreading support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Speed-up provided by multithreading greatly depends on problem size ! - only large problems (number of coefficients is more than 500) can be ! efficiently multithreaded. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. 
Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions (= 2 * number of nodes), M>=4, M IS EVEN! OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints -2 means odd M was passed (which is not supported) -1 means other errors in parameters passed (N<=0, for example) S - spline interpolant. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. IMPORTANT: this subroutine supports only even M's ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. 
Splines are piecewise cubic functions, and it is easy to create an example where a large number of constraints concentrated in a small area will result in inconsistency, simply because the spline is not flexible enough to satisfy all of them, while the same constraints spread across [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in several special cases, however, we can guarantee consistency. * one of these cases is M>=4 and constraints on the function value (AND/OR its derivative) at the interval boundaries. * another special case is M>=4 and ONE constraint on the function value (OR, BUT NOT AND, derivative) anywhere in [min(x),max(x)] Our final recommendation is to use constraints WHEN AND ONLY when you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/void alglib::spline1dfithermitewc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::spline1dfithermitewc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfithermitewc( real_1d_array x, real_1d_array y, real_1d_array w, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep); void alglib::smp_spline1dfithermitewc( real_1d_array x, real_1d_array y, real_1d_array w, ae_int_t n, real_1d_array xc, real_1d_array yc, integer_1d_array dc, ae_int_t k, ae_int_t m, ae_int_t& info, spline1dinterpolant& s, spline1dfitreport& rep);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"
using namespace alglib;

int main(int argc, char **argv)
{
    //
    // In this example we demonstrate linear fitting by f(x|a) = a*exp(0.5*x).
    //
    // We have:
    // * y       - vector of experimental data
    // * fmatrix - matrix of basis functions calculated at sample points
    //             Actually, we have only one basis function F0 = exp(0.5*x).
    //
    real_2d_array fmatrix = "[[0.606531],[0.670320],[0.740818],[0.818731],[0.904837],[1.000000],[1.105171],[1.221403],[1.349859],[1.491825],[1.648721]]";
    real_1d_array y = "[1.133719, 1.306522, 1.504604, 1.554663, 1.884638, 2.072436, 2.257285, 2.534068, 2.622017, 2.897713, 3.219371]";
    ae_int_t info;
    real_1d_array c;
    lsfitreport rep;

    //
    // Linear fitting without weights
    //
    lsfitlinear(y, fmatrix, info, c, rep);
    printf("%d\n", int(info)); // EXPECTED: 1
    printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.98650]

    //
    // Linear fitting with individual weights.
    // Slightly different result is returned.
    //
    real_1d_array w = "[1.414213, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]";
    lsfitlinearw(y, w, fmatrix, info, c, rep);
    printf("%d\n", int(info)); // EXPECTED: 1
    printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.983354]
    return 0;
}
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { // // In this example we demonstrate linear fitting by f(x|a,b) = a*x+b // with simple constraint f(0)=0. // // We have: // * y - vector of experimental data // * fmatrix - matrix of basis functions sampled at [0,1] with step 0.2: // [ 1.0 0.0 ] // [ 1.0 0.2 ] // [ 1.0 0.4 ] // [ 1.0 0.6 ] // [ 1.0 0.8 ] // [ 1.0 1.0 ] // first column contains value of first basis function (constant term) // second column contains second basis function (linear term) // * cmatrix - matrix of linear constraints: // [ 1.0 0.0 0.0 ] // first two columns contain coefficients before basis functions, // last column contains desired value of their sum. // So [1,0,0] means "1*constant_term + 0*linear_term = 0" // real_1d_array y = "[0.072436,0.246944,0.491263,0.522300,0.714064,0.921929]"; real_2d_array fmatrix = "[[1,0.0],[1,0.2],[1,0.4],[1,0.6],[1,0.8],[1,1.0]]"; real_2d_array cmatrix = "[[1,0,0]]"; ae_int_t info; real_1d_array c; lsfitreport rep; // // Constrained fitting without weights // lsfitlinearc(y, fmatrix, cmatrix, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [0,0.932933] // // Constrained fitting with individual weights // real_1d_array w = "[1, 1.414213, 1, 1, 1, 1]"; lsfitlinearwc(y, w, fmatrix, cmatrix, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [0,0.938322] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) // where x is a position on X-axis and c is adjustable parameter func = exp(-c[0]*pow(x[0],2)); } int main(int argc, char **argv) { // // In this example we demonstrate exponential fitting // by f(x) = exp(-c*x^2) // using function value only. // // Gradient is estimated using combination of numerical differences // and secant updates. diffstep variable stores differentiation step // (we have to tell algorithm what step to use). // real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]"; real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]"; real_1d_array c = "[0.3]"; double epsx = 0.000001; ae_int_t maxits = 0; ae_int_t info; lsfitstate state; lsfitreport rep; double diffstep = 0.0001; // // Fitting without weights // lsfitcreatef(x, y, c, diffstep, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] // // Fitting with weights // (you can change weights and see how it changes result) // real_1d_array w = "[1,1,1,1,1,1,1,1,1,1,1]"; lsfitcreatewf(x, y, w, c, diffstep, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) // where x is a position on X-axis and c is adjustable parameter func = exp(-c[0]*pow(x[0],2)); } int main(int argc, char **argv) { // // In this example we demonstrate exponential fitting by // f(x) = exp(-c*x^2) // subject to bound constraints // 0.0 <= c <= 1.0 // using function value only. // // Gradient is estimated using combination of numerical differences // and secant updates. diffstep variable stores differentiation step // (we have to tell algorithm what step to use). // // Unconstrained solution is c=1.5, but because of constraints we should // get c=1.0 (at the boundary). // real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]"; real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]"; real_1d_array c = "[0.3]"; real_1d_array bndl = "[0.0]"; real_1d_array bndu = "[1.0]"; double epsx = 0.000001; ae_int_t maxits = 0; ae_int_t info; lsfitstate state; lsfitreport rep; double diffstep = 0.0001; lsfitcreatef(x, y, c, diffstep, state); lsfitsetbc(state, bndl, bndu); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func); lsfitresults(state, info, c, rep); printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.0] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) // where x is a position on X-axis and c is adjustable parameter func = exp(-c[0]*pow(x[0],2)); } void function_cx_1_grad(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) and gradient G={df/dc[i]} // where x is a position on X-axis and c is adjustable parameter. // IMPORTANT: gradient is calculated with respect to C, not to X func = exp(-c[0]*pow(x[0],2)); grad[0] = -pow(x[0],2)*func; } int main(int argc, char **argv) { // // In this example we demonstrate exponential fitting // by f(x) = exp(-c*x^2) // using function value and gradient (with respect to c). // real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]"; real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]"; real_1d_array c = "[0.3]"; double epsx = 0.000001; ae_int_t maxits = 0; ae_int_t info; lsfitstate state; lsfitreport rep; // // Fitting without weights // lsfitcreatefg(x, y, c, true, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] // // Fitting with weights // (you can change weights and see how it changes result) // real_1d_array w = "[1,1,1,1,1,1,1,1,1,1,1]"; lsfitcreatewfg(x, y, w, c, true, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) // where x is a position on X-axis and c is adjustable parameter func = exp(-c[0]*pow(x[0],2)); } void function_cx_1_grad(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)) and gradient G={df/dc[i]} // where x is a position on X-axis and c is adjustable parameter. // IMPORTANT: gradient is calculated with respect to C, not to X func = exp(-c[0]*pow(x[0],2)); grad[0] = -pow(x[0],2)*func; } void function_cx_1_hess(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, real_2d_array &hess, void *ptr) { // this callback calculates f(c,x)=exp(-c0*sqr(x0)), gradient G={df/dc[i]} and Hessian H={d2f/(dc[i]*dc[j])} // where x is a position on X-axis and c is adjustable parameter. // IMPORTANT: gradient/Hessian are calculated with respect to C, not to X func = exp(-c[0]*pow(x[0],2)); grad[0] = -pow(x[0],2)*func; hess[0][0] = pow(x[0],4)*func; } int main(int argc, char **argv) { // // In this example we demonstrate exponential fitting // by f(x) = exp(-c*x^2) // using function value, gradient and Hessian (with respect to c) // real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]"; real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]"; real_1d_array c = "[0.3]"; double epsx = 0.000001; ae_int_t maxits = 0; ae_int_t info; lsfitstate state; lsfitreport rep; // // Fitting without weights // lsfitcreatefgh(x, y, c, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad, function_cx_1_hess); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] // // Fitting with weights // (you can change weights and see how it changes result) // real_1d_array w = "[1,1,1,1,1,1,1,1,1,1,1]"; lsfitcreatewfgh(x, y, w, c, state); lsfitsetcond(state, epsx, maxits); alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad, function_cx_1_hess); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; void function_debt_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) { // // this callback calculates f(c,x)=c[0]*(1+c[1]*(pow(x[0]-1999,c[2])-1)) // func = c[0]*(1+c[1]*(pow(x[0]-1999,c[2])-1)); } int main(int argc, char **argv) { // // In this example we demonstrate fitting by // f(x) = c[0]*(1+c[1]*((x-1999)^c[2]-1)) // subject to bound constraints // -INF < c[0] < +INF // -10 <= c[1] <= +10 // 0.1 <= c[2] <= 2.0 // Data we want to fit are time series of Japan national debt // collected from 2000 to 2008 measured in USD (dollars, not // millions of dollars). // // Our variables are: // c[0] - debt value at initial moment (2000), // c[1] - direction coefficient (growth or decrease), // c[2] - curvature coefficient. // You may see that our variables are badly scaled - the first one // is of order 10^12, while the next two are somewhere about 1 in // magnitude. Such a problem is difficult to solve without some // kind of scaling. // That is exactly where the lsfitsetscale() function can be used. // We set the scale of our variables to [1.0E12, 1, 1], which allows // us to easily solve this problem. // // You can try commenting out the lsfitsetscale() call - and you will // see that the algorithm will fail to converge. // real_2d_array x = "[[2000],[2001],[2002],[2003],[2004],[2005],[2006],[2007],[2008]]"; real_1d_array y = "[4323239600000.0, 4560913100000.0, 5564091500000.0, 6743189300000.0, 7284064600000.0, 7050129600000.0, 7092221500000.0, 8483907600000.0, 8625804400000.0]"; real_1d_array c = "[1.0e+13, 1, 1]"; double epsx = 1.0e-5; real_1d_array bndl = "[-inf, -10, 0.1]"; real_1d_array bndu = "[+inf, +10, 2.0]"; real_1d_array s = "[1.0e+12, 1, 1]"; ae_int_t maxits = 0; ae_int_t info; lsfitstate state; lsfitreport rep; double diffstep = 1.0e-5; lsfitcreatef(x, y, c, diffstep, state); lsfitsetcond(state, epsx, maxits); lsfitsetbc(state, bndl, bndu); lsfitsetscale(state, s); alglib::lsfitfit(state, function_debt_func); lsfitresults(state, info, c, rep); printf("%d\n", int(info)); // EXPECTED: 2 printf("%s\n", c.tostring(-2).c_str()); // EXPECTED: [4.142560E+12, 0.434240, 0.565376] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { // // This example demonstrates polynomial fitting. // // Fitting is done by two (M=2) functions from polynomial basis: // f0 = 1 // f1 = x // Basically, it is just a linear fit; more complex polynomials may be used // (e.g. parabolas with M=3, cubics with M=4), but even such a simple fit allows // us to demonstrate the polynomialfit() function in action. // // We have: // * x set of abscissas // * y experimental data // // Additionally we demonstrate weighted fitting, where the second point has // more weight than the other ones. // real_1d_array x = "[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]"; real_1d_array y = "[0.00,0.05,0.26,0.32,0.33,0.43,0.60,0.60,0.77,0.98,1.02]"; ae_int_t m = 2; double t = 2; ae_int_t info; barycentricinterpolant p; polynomialfitreport rep; double v; // // Fitting without individual weights // // NOTE: result is returned as barycentricinterpolant structure. // if you want to get representation in the power basis, // you can use the polynomialbar2pow() function to convert // from barycentric to power representation (see docs for // POLINT subpackage for more info). // polynomialfit(x, y, m, info, p, rep); v = barycentriccalc(p, t); printf("%.2f\n", double(v)); // EXPECTED: 2.011 // // Fitting with individual weights // // NOTE: slightly different result is returned // real_1d_array w = "[1,1.414213562,1,1,1,1,1,1,1,1,1]"; real_1d_array xc = "[]"; real_1d_array yc = "[]"; integer_1d_array dc = "[]"; polynomialfitwc(x, y, w, xc, yc, dc, m, info, p, rep); v = barycentriccalc(p, t); printf("%.2f\n", double(v)); // EXPECTED: 2.023 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { // // This example demonstrates polynomial fitting. // // Fitting is done by two (M=2) functions from polynomial basis: // f0 = 1 // f1 = x // with simple constraint on function value // f(0) = 0 // Basically, it is just a linear fit; more complex polynomials may be used // (e.g. parabolas with M=3, cubics with M=4), but even such a simple fit allows // us to demonstrate the polynomialfitwc() function in action. // // We have: // * x set of abscissas // * y experimental data // * xc points where constraints are placed // * yc constraint values // * dc derivative indices // (0 means function itself, 1 means first derivative) // real_1d_array x = "[1.0,1.0]"; real_1d_array y = "[0.9,1.1]"; real_1d_array w = "[1,1]"; real_1d_array xc = "[0]"; real_1d_array yc = "[0]"; integer_1d_array dc = "[0]"; double t = 2; ae_int_t m = 2; ae_int_t info; barycentricinterpolant p; polynomialfitreport rep; double v; polynomialfitwc(x, y, w, xc, yc, dc, m, info, p, rep); v = barycentriccalc(p, t); printf("%.2f\n", double(v)); // EXPECTED: 2.000 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { // // In this example we demonstrate penalized spline fitting of noisy data // // We have: // * x - abscissas // * y - vector of experimental data, straight line with small noise // real_1d_array x = "[0.00,0.10,0.20,0.30,0.40,0.50,0.60,0.70,0.80,0.90]"; real_1d_array y = "[0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95]"; ae_int_t info; double v; spline1dinterpolant s; spline1dfitreport rep; double rho; // // Fit with VERY small amount of smoothing (rho = -5.0) // and large number of basis functions (M=50). // // With such small regularization penalized spline almost fully reproduces function values // rho = -5.0; spline1dfitpenalized(x, y, 50, rho, info, s, rep); printf("%d\n", int(info)); // EXPECTED: 1 v = spline1dcalc(s, 0.0); printf("%.1f\n", double(v)); // EXPECTED: 0.10 // // Fit with VERY large amount of smoothing (rho = 10.0) // and large number of basis functions (M=50). // // With such regularization our spline should become close to the straight line fit. // We will compare its value in x=1.0 with results obtained from such fit. // rho = +10.0; spline1dfitpenalized(x, y, 50, rho, info, s, rep); printf("%d\n", int(info)); // EXPECTED: 1 v = spline1dcalc(s, 1.0); printf("%.2f\n", double(v)); // EXPECTED: 0.969 // // In real life applications you may need some moderate degree of fitting, // so we try to fit once more with rho=3.0. // rho = +3.0; spline1dfitpenalized(x, y, 50, rho, info, s, rep); printf("%d\n", int(info)); // EXPECTED: 1 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { real_1d_array x = "[1,2,3,4,5,6,7,8]"; real_1d_array y = "[0.06313223,0.44552624,0.61838364,0.71385108,0.77345838,0.81383140,0.84280033,0.86449822]"; ae_int_t n = 8; double a; double b; double c; double d; lsfitreport rep; // // Test logisticfit4() on carefully designed data with a priori known answer. // logisticfit4(x, y, n, a, b, c, d, rep); printf("%.1f\n", double(a)); // EXPECTED: -1.000 printf("%.1f\n", double(b)); // EXPECTED: 1.200 printf("%.1f\n", double(c)); // EXPECTED: 0.900 printf("%.1f\n", double(d)); // EXPECTED: 1.000 // // Evaluate model at point x=0.5 // double v; v = logisticcalc4(0.5, a, b, c, d); printf("%.2f\n", double(v)); // EXPECTED: -0.33874308 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "interpolation.h" using namespace alglib; int main(int argc, char **argv) { real_1d_array x = "[1,2,3,4,5,6,7,8]"; real_1d_array y = "[0.1949776139,0.5710060208,0.726002637,0.8060434158,0.8534547965,0.8842071579,0.9054773317,0.9209088299]"; ae_int_t n = 8; double a; double b; double c; double d; double g; lsfitreport rep; // // Test logisticfit5() on carefully designed data with a priori known answer. // logisticfit5(x, y, n, a, b, c, d, g, rep); printf("%.1f\n", double(a)); // EXPECTED: -1.000 printf("%.1f\n", double(b)); // EXPECTED: 1.200 printf("%.1f\n", double(c)); // EXPECTED: 0.900 printf("%.1f\n", double(d)); // EXPECTED: 1.000 printf("%.1f\n", double(g)); // EXPECTED: 1.200 // // Evaluate model at point x=0.5 // double v; v = logisticcalc5(0.5, a, b, c, d, g); printf("%.2f\n", double(v)); // EXPECTED: -0.2354656824 return 0; }
mannwhitneyu
subpackage
mannwhitneyutest
function/************************************************************************* Mann-Whitney U-test This test checks hypotheses about whether X and Y are samples of two continuous distributions of the same shape and same median or whether their medians are different. The following tests are performed: * two-tailed test (null hypothesis - the medians are equal) * left-tailed test (null hypothesis - the median of the first sample is greater than or equal to the median of the second sample) * right-tailed test (null hypothesis - the median of the first sample is less than or equal to the median of the second sample). Requirements: * the samples are independent * X and Y are continuous distributions (or discrete distributions well- approximating continuous distributions) * distributions of X and Y have the same shape. The only possible difference is their position (i.e. the value of the median) * the number of elements in each sample is not less than 5 * the scale of measurement should be ordinal, interval or ratio (i.e. the test could not be applied to nominal variables). The test is non-parametric and doesn't require distributions to be normal. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Y - sample 2. Array whose index goes from 0 to M-1. M - size of the sample. M>=5 Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. To calculate p-values, special approximation is used. This method lets us calculate p-values with satisfactory accuracy in interval [0.0001, 1]. There is no approximation outside the [0.0001, 1] interval. 
Therefore, if the significance level lies outside this interval, the test returns 0.0001. Relative precision of approximation of p-value: N M Max.err. Rms.err. 5..10 N..10 1.4e-02 6.0e-04 5..10 N..100 2.2e-02 5.3e-06 10..15 N..15 1.0e-02 3.2e-04 10..15 N..100 1.0e-02 2.2e-05 15..100 N..100 6.1e-03 2.7e-06 For N,M>100 accuracy was not checked experimentally, but given the characteristics of the asymptotic approximation used, precision should not differ sharply from the values for the interval [5, 100]. NOTE: P-value approximation was optimized for 0.0001<=p<=0.2500. Thus, p-values outside this interval are clamped to these bounds; you may quite often get P equal to exactly 0.25 or 0.0001. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/void alglib::mannwhitneyutest( real_1d_array x, ae_int_t n, real_1d_array y, ae_int_t m, double& bothtails, double& lefttail, double& righttail);
matdet
subpackage
matdet_d_1 | Determinant calculation, real matrix, short form
matdet_d_2 | Determinant calculation, real matrix, full form
matdet_d_3 | Determinant calculation, complex matrix, short form
matdet_d_4 | Determinant calculation, complex matrix, full form
matdet_d_5 | Determinant calculation, complex matrix with zero imaginary part, short form
cmatrixdet
function/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/alglib::complex alglib::cmatrixdet(complex_2d_array a); alglib::complex alglib::cmatrixdet(complex_2d_array a, ae_int_t n);
cmatrixludet
function/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of RMatrixLU subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/alglib::complex alglib::cmatrixludet( complex_2d_array a, integer_1d_array pivots); alglib::complex alglib::cmatrixludet( complex_2d_array a, integer_1d_array pivots, ae_int_t n);
rmatrixdet
function/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/double alglib::rmatrixdet(real_2d_array a); double alglib::rmatrixdet(real_2d_array a, ae_int_t n);
rmatrixludet
function/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of RMatrixLU subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/double alglib::rmatrixludet(real_2d_array a, integer_1d_array pivots); double alglib::rmatrixludet( real_2d_array a, integer_1d_array pivots, ae_int_t n);
spdmatrixcholeskydet
function/************************************************************************* Determinant calculation of the matrix given by the Cholesky decomposition. Input parameters: A - Cholesky decomposition, output of SMatrixCholesky subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) As the determinant is equal to the product of squares of diagonal elements, it's not necessary to specify which triangle - lower or upper - the matrix is stored in. Result: matrix determinant. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double alglib::spdmatrixcholeskydet(real_2d_array a); double alglib::spdmatrixcholeskydet(real_2d_array a, ae_int_t n);
spdmatrixdet
function/************************************************************************* Determinant calculation of the symmetric positive definite matrix. Input parameters: A - matrix. Array with elements [0..N-1, 0..N-1]. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) IsUpper - (optional) storage type: * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by function * if not given, both lower and upper triangles must be filled. Result: determinant of matrix A. If matrix A is not positive definite, exception is thrown. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/double alglib::spdmatrixdet(real_2d_array a); double alglib::spdmatrixdet(real_2d_array a, ae_int_t n, bool isupper);
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { real_2d_array b = "[[1,2],[2,1]]"; double a; a = rmatrixdet(b); printf("%.3f\n", double(a)); // EXPECTED: -3 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { real_2d_array b = "[[5,4],[4,5]]"; double a; a = rmatrixdet(b, 2); printf("%.3f\n", double(a)); // EXPECTED: 9 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { complex_2d_array b = "[[1+1i,2],[2,1-1i]]"; alglib::complex a; a = cmatrixdet(b); printf("%s\n", a.tostring(3).c_str()); // EXPECTED: -2 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { alglib::complex a; complex_2d_array b = "[[5i,4],[4i,5]]"; a = cmatrixdet(b, 2); printf("%s\n", a.tostring(3).c_str()); // EXPECTED: 9i return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { alglib::complex a; complex_2d_array b = "[[9,1],[2,1]]"; a = cmatrixdet(b); printf("%s\n", a.tostring(3).c_str()); // EXPECTED: 7 return 0; }
matgen
subpackage
cmatrixrndcond
function/************************************************************************* Generation of random NxN complex matrix with given condition number C and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixrndcond(ae_int_t n, double c, complex_2d_array& a);
cmatrixrndorthogonal
function/************************************************************************* Generation of a random Haar distributed orthogonal complex matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] NOTE: this function uses the algorithm described in Stewart, G. W. (1980), "The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators". In short, to generate an (N+1)x(N+1) orthogonal matrix, it: * takes an NxN one * takes a uniformly distributed unit vector of dimension N+1 * constructs a Householder reflection from the vector, then applies it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner). -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixrndorthogonal(ae_int_t n, complex_2d_array& a);
cmatrixrndorthogonalfromtheleft
function/************************************************************************* Multiplication of MxN complex matrix by MxM random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixrndorthogonalfromtheleft( complex_2d_array& a, ae_int_t m, ae_int_t n);
cmatrixrndorthogonalfromtheright
function/************************************************************************* Multiplication of MxN complex matrix by NxN random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::cmatrixrndorthogonalfromtheright( complex_2d_array& a, ae_int_t m, ae_int_t n);
hmatrixrndcond
function/************************************************************************* Generation of random NxN Hermitian matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::hmatrixrndcond(ae_int_t n, double c, complex_2d_array& a);
hmatrixrndmultiply
function/************************************************************************* Hermitian multiplication of NxN matrix by random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q^H*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::hmatrixrndmultiply(complex_2d_array& a, ae_int_t n);
hpdmatrixrndcond
function/************************************************************************* Generation of random NxN Hermitian positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random HPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::hpdmatrixrndcond(ae_int_t n, double c, complex_2d_array& a);
rmatrixrndcond
function/************************************************************************* Generation of random NxN matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixrndcond(ae_int_t n, double c, real_2d_array& a);
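The construction behind rmatrixrndcond-style generators is standard: sandwich a fixed singular spectrum between two random orthogonal factors. As an illustration of that idea (a NumPy sketch, not ALGLIB code; `random_orthogonal` and `rnd_cond` are hypothetical helper names), a matrix with norm2(A)=1 and cond(A)=C can be built like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    # QR of a Gaussian matrix gives a Haar-distributed orthogonal factor
    # once the signs of R's diagonal are folded into Q's columns.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def rnd_cond(n, c, rng):
    # Singular values spaced between 1 and 1/c, so that
    # norm2(A) = max(s) = 1 and cond2(A) = max(s)/min(s) = c.
    s = np.geomspace(1.0, 1.0 / c, n)
    return random_orthogonal(n, rng) @ np.diag(s) @ random_orthogonal(n, rng)

a = rnd_cond(50, 1e3, rng)
norm = np.linalg.norm(a, 2)   # ~1.0
cond = np.linalg.cond(a, 2)   # ~1e3
```

The orthogonal factors do not change the singular values, which is why the norm and condition number can be prescribed exactly.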
rmatrixrndorthogonal
function/************************************************************************* Generation of a random uniformly distributed (Haar) orthogonal matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] NOTE: this function uses the algorithm described in Stewart, G. W. (1980), "The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators". In short, to generate an (N+1)x(N+1) orthogonal matrix, it: * takes an NxN one * takes a uniformly distributed unit vector of dimension N+1 * constructs a Householder reflection from the vector, then applies it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner). -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixrndorthogonal(ae_int_t n, real_2d_array& a);
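Stewart's growth step described in the note above can be sketched directly in NumPy (an illustration of the published algorithm, not ALGLIB's implementation; `haar_orthogonal` is a hypothetical helper name):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_orthogonal(n, rng):
    # Stewart (1980): grow an orthogonal matrix one dimension at a time.
    q = np.array([[1.0 if rng.random() < 0.5 else -1.0]])
    for k in range(2, n + 1):
        # Embed the (k-1)x(k-1) matrix with a 1 at the bottom right corner.
        qk = np.eye(k)
        qk[:-1, :-1] = q
        # Uniformly distributed unit vector of dimension k.
        v = rng.standard_normal(k)
        v /= np.linalg.norm(v)
        # Householder reflection H = I - 2*v*v^T, applied to the embedded matrix.
        q = (np.eye(k) - 2.0 * np.outer(v, v)) @ qk
    return q

q = haar_orthogonal(6, rng)
orth_err = np.linalg.norm(q.T @ q - np.eye(6))  # should be ~machine epsilon
```

Each step multiplies by an orthogonal reflection, so orthogonality is preserved exactly up to rounding.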
rmatrixrndorthogonalfromtheleft
function/************************************************************************* Multiplication of MxN matrix by MxM random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixrndorthogonalfromtheleft( real_2d_array& a, ae_int_t m, ae_int_t n);
rmatrixrndorthogonalfromtheright
function/************************************************************************* Multiplication of MxN matrix by NxN random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::rmatrixrndorthogonalfromtheright( real_2d_array& a, ae_int_t m, ae_int_t n);
smatrixrndcond
function/************************************************************************* Generation of random NxN symmetric matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::smatrixrndcond(ae_int_t n, double c, real_2d_array& a);
smatrixrndmultiply
function/************************************************************************* Symmetric multiplication of NxN matrix by random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q'*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::smatrixrndmultiply(real_2d_array& a, ae_int_t n);
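Because Q'*A*Q is an orthogonal similarity transform, it preserves both the symmetry and the eigenvalues of A. A NumPy sketch of that property (an illustration of the math, not ALGLIB code):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

# Random symmetric test matrix.
m = rng.standard_normal((n, n))
a = (m + m.T) / 2

# Haar-distributed orthogonal Q via sign-fixed QR of a Gaussian matrix.
q, r = np.linalg.qr(rng.standard_normal((n, n)))
q = q * np.sign(np.diag(r))

# The two-sided update Q'*A*Q, as performed by smatrixrndmultiply.
b = q.T @ a @ q

sym_err = np.linalg.norm(b - b.T)                       # symmetry preserved
eig_err = np.linalg.norm(np.linalg.eigvalsh(a)
                         - np.linalg.eigvalsh(b))       # spectrum preserved
```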
spdmatrixrndcond
function/************************************************************************* Generation of random NxN symmetric positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random SPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/void alglib::spdmatrixrndcond(ae_int_t n, double c, real_2d_array& a);
matinv
subpackage
matinv_d_c1 | Complex matrix inverse
matinv_d_hpd1 | HPD matrix inverse
matinv_d_r1 | Real matrix inverse
matinv_d_spd1 | SPD matrix inverse
matinvreport
class/************************************************************************* Matrix inverse report: * R1 reciprocal of condition number in 1-norm * RInf reciprocal of condition number in inf-norm *************************************************************************/class matinvreport { double r1; double rinf; };
cmatrixinverse
function/************************************************************************* Inversion of a general matrix. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives an approximately constant (with respect to the number of ! worker threads) acceleration factor which depends on the CPU being used, ! the problem size and the "baseline" ALGLIB edition used for ! comparison. ! ! Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than the "pure C#" edition of ALGLIB ! The difference in performance will be more striking on newer CPUs with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N larger than 512. ! ! The commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that matrix inversion is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which cannot be avoided. However, ! parallelism starts to be profitable from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use the commercial version of ALGLIB ! * call this function with the "smp_" prefix, which indicates that ! the multicore code will be used ! ! We recommend reading the 'Working with commercial version' section of ! the ALGLIB Reference Manual to find out how to use the performance- ! related features provided by the commercial edition of ALGLIB. Input parameters: A - matrix N - size of matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten; other elements are unchanged. 
* if not given, size is automatically determined from matrix size (A must be square matrix) Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::cmatrixinverse( complex_2d_array& a, ae_int_t n, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixinverse( complex_2d_array& a, ae_int_t n, ae_int_t& info, matinvreport& rep);
Examples: [1]
cmatrixluinverse
function/************************************************************************* Inversion of a matrix given by its LU decomposition. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that matrix inversion is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: A - LU decomposition of the matrix (output of CMatrixLU subroutine). Pivots - table of permutations (the output of CMatrixLU subroutine). 
N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) OUTPUT PARAMETERS: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/void alglib::cmatrixluinverse( complex_2d_array& a, integer_1d_array pivots, ae_int_t& info, matinvreport& rep); void alglib::cmatrixluinverse( complex_2d_array& a, integer_1d_array pivots, ae_int_t n, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixluinverse( complex_2d_array& a, integer_1d_array pivots, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixluinverse( complex_2d_array& a, integer_1d_array pivots, ae_int_t n, ae_int_t& info, matinvreport& rep);
cmatrixtrinverse
function/************************************************************************* Triangular matrix inverse (complex) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that triangular inverse is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! 
In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - True, if the matrix is upper triangular. IsUnit - diagonal type (optional): * if True, matrix has unit diagonal (a[i,i] are NOT used) * if False, matrix diagonal is arbitrary * if not given, False is assumed Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/void alglib::cmatrixtrinverse( complex_2d_array& a, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::cmatrixtrinverse( complex_2d_array& a, ae_int_t n, bool isupper, bool isunit, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixtrinverse( complex_2d_array& a, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_cmatrixtrinverse( complex_2d_array& a, ae_int_t n, bool isupper, bool isunit, ae_int_t& info, matinvreport& rep);
hpdmatrixcholeskyinverse
function/************************************************************************* Inversion of a Hermitian positive definite matrix which is given by Cholesky decomposition. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. However, Cholesky inversion is a "difficult" ! algorithm - it has lots of internal synchronization points which ! prevents efficient parallelization of algorithm. Only very large ! problems (N=thousands) can be efficiently parallelized. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U'*U or A = L*L'. Output of HPDMatrixCholesky subroutine. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. 
* if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by function * if not given, lower half is used. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void alglib::hpdmatrixcholeskyinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::hpdmatrixcholeskyinverse( complex_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_hpdmatrixcholeskyinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_hpdmatrixcholeskyinverse( complex_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep);
hpdmatrixinverse
function/************************************************************************* Inversion of a Hermitian positive definite matrix. Given an upper or lower triangle of a Hermitian positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. However, Cholesky inversion is a "difficult" ! algorithm - it has lots of internal synchronization points which ! prevents efficient parallelization of algorithm. Only very large ! problems (N=thousands) can be efficiently parallelized. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. 
* if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by function * if not given, both lower and upper triangles must be filled. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void alglib::hpdmatrixinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::hpdmatrixinverse( complex_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_hpdmatrixinverse( complex_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_hpdmatrixinverse( complex_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep);
Examples: [1]
rmatrixinverse
function/************************************************************************* Inversion of a general matrix. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that matrix inversion is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - matrix. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. 
* if not given, size is automatically determined from the matrix size (A must be a square matrix) Output parameters: Info - return code, same as in RMatrixLUInverse (Info=-3 indicates a singular or near-singular matrix) Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::rmatrixinverse( real_2d_array& a, ae_int_t n, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixinverse( real_2d_array& a, ae_int_t n, ae_int_t& info, matinvreport& rep);
Examples: [1]
rmatrixluinverse
function/************************************************************************* Inversion of a matrix given by its LU decomposition. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that matrix inversion is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations (the output of RMatrixLU subroutine). 
N - size of matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten; other elements are unchanged. * if not given, size is automatically determined from the matrix size (A must be a square matrix) OUTPUT PARAMETERS: Info - return code: * -3 A is singular, or VERY close to singular. It is filled by zeros in such cases. * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - solver report, see below for more info A - inverse of matrix A. Array whose indices range within [0..N-1, 0..N-1]. SOLVER REPORT The subroutine sets the following fields of the Rep structure: * R1 reciprocal of condition number: 1/cond(A), 1-norm. * RInf reciprocal of condition number: 1/cond(A), inf-norm. -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/void alglib::rmatrixluinverse( real_2d_array& a, integer_1d_array pivots, ae_int_t& info, matinvreport& rep); void alglib::rmatrixluinverse( real_2d_array& a, integer_1d_array pivots, ae_int_t n, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixluinverse( real_2d_array& a, integer_1d_array pivots, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixluinverse( real_2d_array& a, integer_1d_array pivots, ae_int_t n, ae_int_t& info, matinvreport& rep);
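The report fields R1 and RInf are the reciprocals of the condition numbers in the 1-norm and inf-norm. A NumPy sketch of the quantities they describe (a direct computation for illustration; ALGLIB's internal estimator is cheaper and does not form the exact norms this way):

```python
import numpy as np

rng = np.random.default_rng(3)

# Diagonally dominated test matrix, so it is comfortably nonsingular.
a = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)
inv_a = np.linalg.inv(a)

# Reciprocals of the condition numbers, as reported in
# matinvreport.r1 and matinvreport.rinf: 1 / (||A|| * ||A^-1||).
r1 = 1.0 / (np.linalg.norm(a, 1) * np.linalg.norm(inv_a, 1))
rinf = 1.0 / (np.linalg.norm(a, np.inf) * np.linalg.norm(inv_a, np.inf))

id_err = np.linalg.norm(a @ inv_a - np.eye(6))
```

Since cond(A) >= 1 in any induced norm, both reciprocals lie in (0, 1]; values near 0 flag an ill-conditioned inverse.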
rmatrixtrinverse
function/************************************************************************* Triangular matrix inverse (real) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. We should note that triangular inverse is harder to ! parallelize than, say, matrix-matrix product - this algorithm has ! many internal synchronization points which can not be avoided. However ! parallelism starts to be profitable starting from N=1024, achieving ! near-linear speedup for N=4096 or higher. ! ! 
In order to use multicore features you have to: ! * use commercial version of ALGLIB ! * call this function with "smp_" prefix, which indicates that ! multicore code will be used (for multicore support) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - True, if the matrix is upper triangular. IsUnit - diagonal type (optional): * if True, matrix has unit diagonal (a[i,i] are NOT used) * if False, matrix diagonal is arbitrary * if not given, False is assumed Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/void alglib::rmatrixtrinverse( real_2d_array& a, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::rmatrixtrinverse( real_2d_array& a, ae_int_t n, bool isupper, bool isunit, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixtrinverse( real_2d_array& a, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_rmatrixtrinverse( real_2d_array& a, ae_int_t n, bool isupper, bool isunit, ae_int_t& info, matinvreport& rep);
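Two structural facts stated above - the inverse of an upper (lower) triangular matrix is again upper (lower) triangular, and a unit diagonal is preserved - can be checked with a NumPy sketch (an illustration only; it uses the generic `np.linalg.inv` rather than ALGLIB's triangular solver):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

# Upper triangular test matrix with unit diagonal (strict upper part random).
u = np.triu(rng.standard_normal((n, n)), 1) + np.eye(n)
u_inv = np.linalg.inv(u)

upper_err = np.linalg.norm(np.tril(u_inv, -1))   # inverse stays upper triangular
diag_err = np.linalg.norm(np.diag(u_inv) - 1.0)  # unit diagonal is preserved
id_err = np.linalg.norm(u @ u_inv - np.eye(n))
```

This is why the IsUnit variant can skip the diagonal entirely: the diagonal of the result is known in advance.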
spdmatrixcholeskyinverse
function/************************************************************************* Inversion of a symmetric positive definite matrix which is given by Cholesky decomposition. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Say, on SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than "pure C#" edition of ALGLIB ! Difference in performance will be more striking on newer CPU's with ! support for newer SIMD instructions. Generally, MKL accelerates any ! problem whose size is at least 128, with best efficiency achieved for ! N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. However, Cholesky inversion is a "difficult" ! algorithm - it has lots of internal synchronization points which ! prevents efficient parallelization of algorithm. Only very large ! problems (N=thousands) can be efficiently parallelized. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U'*U or A = L*L'. Output of SPDMatrixCholesky subroutine. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. 
* if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by function * if not given, lower half is used. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void alglib::spdmatrixcholeskyinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::spdmatrixcholeskyinverse( real_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_spdmatrixcholeskyinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_spdmatrixcholeskyinverse( real_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep);
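The identity this routine exploits is that if A = L*L', then A^-1 = L^-T * L^-1, so the inverse can be assembled from the triangular factor alone. A NumPy sketch of that math (an illustration, not ALGLIB's blocked in-place algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6

# Random SPD matrix and its lower Cholesky factor, A = L @ L.T.
g = rng.standard_normal((n, n))
a = g @ g.T + n * np.eye(n)
l = np.linalg.cholesky(a)

# Inverse reconstructed from the factor: A^-1 = L^-T @ L^-1.
l_inv = np.linalg.inv(l)
a_inv = l_inv.T @ l_inv

id_err = np.linalg.norm(a @ a_inv - np.eye(n))
sym_err = np.linalg.norm(a_inv - a_inv.T)
```

Note the result is symmetric by construction, which matches the routine's contract of touching only one triangle of A.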
spdmatrixinverse
function/************************************************************************* Inversion of a symmetric positive definite matrix. Given an upper or lower triangle of a symmetric positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes two important improvements of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! * multicore support ! ! Intel MKL gives an approximately constant (with respect to the number ! of worker threads) acceleration factor which depends on the CPU being ! used, problem size and the "baseline" ALGLIB edition which is used ! for comparison. ! ! Say, on an SSE2-capable CPU with N=1024, HPC ALGLIB will be: ! * about 2-3x faster than ALGLIB for C++ without MKL ! * about 7-10x faster than the "pure C#" edition of ALGLIB ! The difference in performance will be more striking on newer CPUs ! with support for newer SIMD instructions. Generally, MKL accelerates ! any problem whose size is at least 128, with best efficiency achieved ! for N's larger than 512. ! ! Commercial edition of ALGLIB also supports multithreaded acceleration ! of this function. However, Cholesky inversion is a "difficult" ! algorithm - it has lots of internal synchronization points which ! prevent efficient parallelization of the algorithm. Only very large ! problems (N=thousands) can be efficiently parallelized. ! ! We recommend that you read the 'Working with commercial version' ! section of the ALGLIB Reference Manual to find out how to use the ! performance-related features provided by the commercial edition of ! ALGLIB. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten. Other elements are unchanged. 
* if not given, size is automatically determined from matrix size (A must be a square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by the function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by the function * if not given, both lower and upper triangles must be filled. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/void alglib::spdmatrixinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::spdmatrixinverse( real_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep); void alglib::smp_spdmatrixinverse( real_2d_array& a, ae_int_t& info, matinvreport& rep); void alglib::smp_spdmatrixinverse( real_2d_array& a, ae_int_t n, bool isupper, ae_int_t& info, matinvreport& rep);
Examples: [1]
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { complex_2d_array a = "[[1i,-1],[1i,1]]"; ae_int_t info; matinvreport rep; cmatrixinverse(a, info, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[-0.5i,-0.5i],[-0.5,0.5]] printf("%.4f\n", double(rep.r1)); // EXPECTED: 0.5 printf("%.4f\n", double(rep.rinf)); // EXPECTED: 0.5 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { complex_2d_array a = "[[2,1],[1,2]]"; ae_int_t info; matinvreport rep; hpdmatrixinverse(a, info, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.666666,-0.333333],[-0.333333,0.666666]] return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { real_2d_array a = "[[1,-1],[1,1]]"; ae_int_t info; matinvreport rep; rmatrixinverse(a, info, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.5,0.5],[-0.5,0.5]] printf("%.4f\n", double(rep.r1)); // EXPECTED: 0.5 printf("%.4f\n", double(rep.rinf)); // EXPECTED: 0.5 return 0; }
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "linalg.h" using namespace alglib; int main(int argc, char **argv) { real_2d_array a = "[[2,1],[1,2]]"; ae_int_t info; matinvreport rep; spdmatrixinverse(a, info, rep); printf("%d\n", int(info)); // EXPECTED: 1 printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.666666,-0.333333],[-0.333333,0.666666]] return 0; }
mcpd
subpackagemcpd_simple1 | Simple unconstrained MCPD model (no entry/exit states)
mcpd_simple2 | Simple MCPD model (no entry/exit states) with equality constraints
mcpdreport
class/************************************************************************* This structure is an MCPD training report: InnerIterationsCount - number of inner iterations of the underlying optimization algorithm OuterIterationsCount - number of outer iterations of the underlying optimization algorithm NFEV - number of merit function evaluations TerminationType - termination type (same as for MinBLEIC optimizer, positive values denote success, negative ones - failure) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/class mcpdreport { ae_int_t inneriterationscount; ae_int_t outeriterationscount; ae_int_t nfev; ae_int_t terminationtype; };
mcpdstate
class/************************************************************************* This structure is an MCPD (Markov Chains for Population Data) solver. You should use the ALGLIB functions to work with this object. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/class mcpdstate { };
mcpdaddbc
function/************************************************************************* This function is used to add bound constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a boundary constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD a bound constraint for one element of P without changing constraints for other elements. You can also use the MCPDSetBC() function, which allows you to place bound constraints on an arbitrary subset of elements of P. The set of constraints is specified by BndL/BndU matrices, which may contain an arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is an internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the bound constraints. 
INPUT PARAMETERS: S - solver I - row index of element being constrained J - column index of element being constrained BndL - lower bound BndU - upper bound -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdaddbc( mcpdstate s, ae_int_t i, ae_int_t j, double bndl, double bndu);
mcpdaddec
function/************************************************************************* This function is used to add equality constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a boundary constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD an equality constraint for one element of P without changing constraints for other elements. You can also use the MCPDSetEC() function, which allows you to specify an arbitrary set of equality constraints in one call. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is an internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver I - row index of element being constrained J - column index of element being constrained C - value (constraint for P[I,J]). Can be either NAN (no constraint) or a finite value from [0,1]. NOTES: 1. Infinite values of C will lead to an exception being thrown. Values less than 0.0 or greater than 1.0 will lead to an error code being returned after a call to MCPDSolve(). 
-- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdaddec(mcpdstate s, ae_int_t i, ae_int_t j, double c);
Examples: [1]
mcpdaddtrack
function/************************************************************************* This function is used to add a track - a sequence of system states at different moments of its evolution. You may add one or several tracks to the MCPD solver. In case you have several tracks, they won't overwrite each other. For example, if you pass two tracks, A1-A2-A3 (system at t=A+1, t=A+2 and t=A+3) and B1-B2-B3, then the solver will try to model transitions from t=A+1 to t=A+2, t=A+2 to t=A+3, t=B+1 to t=B+2, t=B+2 to t=B+3. But it WON'T mix these two tracks - i.e. it won't try to model a transition from t=A+3 to t=B+1. INPUT PARAMETERS: S - solver XY - track, array[K,N]: * I-th row is a state at t=I * elements of XY must be non-negative (an exception will be thrown on negative elements) K - number of points in a track * if given, only the leading K rows of XY are used * if not given, automatically determined from the size of XY NOTES: 1. A track may contain either proportional or population data: * with proportional data all rows of XY must sum to 1.0, i.e. we have proportions instead of absolute population values * with population data rows of XY contain population counts and generally do not sum to 1.0 (although they still must be non-negative) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdaddtrack(mcpdstate s, real_2d_array xy); void alglib::mcpdaddtrack(mcpdstate s, real_2d_array xy, ae_int_t k);
mcpdcreate
function/************************************************************************* DESCRIPTION: This function creates an MCPD (Markov Chains for Population Data) solver. This solver can be used to find the transition matrix P for an N-dimensional prediction problem where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional population vectors (components of each X are non-negative), and P is an N*N transition matrix (elements of P are non-negative, each column sums to 1.0). Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is constant, i.e. there are no new individuals and no one leaves the population * you want to model transitions of individuals from one state into another USAGE: Here we give a very brief outline of the MCPD. We strongly recommend reading the examples in the ALGLIB Reference Manual and the ALGLIB User Guide on data analysis which is available at http://www.alglib.net/dataanalysis/ 1. User initializes algorithm state with MCPDCreate() call 2. User adds one or more tracks - sequences of states which describe evolution of a system being modelled from different starting conditions 3. User may add optional boundary, equality and/or linear constraints on the coefficients of P by calling one of the following functions: * MCPDSetEC() to set equality constraints * MCPDSetBC() to set bound constraints * MCPDSetLC() to set linear constraints 4. Optionally, user may set custom weights for prediction errors (by default, the algorithm assigns non-equal, automatically chosen weights for errors in the prediction of different components of X). It can be done with a call of the MCPDSetPredictionWeights() function. 5. User calls the MCPDSolve() function, which solves the problem using the tracks and constraints specified so far 6. 
User calls MCPDResults() to get the solution INPUT PARAMETERS: N - problem dimension, N>=1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdcreate(ae_int_t n, mcpdstate& s);
mcpdcreateentry
function/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read its comments for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Entry-state" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is an N*N transition matrix and one selected component of X[] is called the "entry" state and is treated in a special way: the system state always transits from the "entry" state to some other state the system state cannot transit from any state into the "entry" state Such conditions basically mean that the row of P which corresponds to the "entry" state is zero. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn, but still no one leaves the population * you want to model transitions of individuals from one state into another * but you do NOT want to predict the amount of "new" individuals because it does not depend on individuals already present (hence the system cannot transit INTO the entry state - it can only transit FROM it). This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more information). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of entry state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdcreateentry( ae_int_t n, ae_int_t entrystate, mcpdstate& s);
mcpdcreateentryexit
function/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read its comments for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Entry-Exit-states" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is an N*N transition matrix one selected component of X[] is called the "entry" state and is treated in a special way: the system state always transits from the "entry" state to some other state the system state cannot transit from any state into the "entry" state and another component of X[] is called the "exit" state and is treated in a special way too: the system state can transit from any state into the "exit" state the system state cannot transit from the "exit" state into any other state the transition operator discards the "exit" state (makes it zero at each turn) Such conditions basically mean that: the row of P which corresponds to the "entry" state is zero the column of P which corresponds to the "exit" state is zero Multiplication by such P may decrease the sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant * at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn * some individuals can move (predictably) into the "exit" state and leave the population at the next turn * you want to model transitions of individuals from one state into another, including transitions from the "entry" state and into the "exit" state 
* but you do NOT want to predict the amount of "new" individuals because it does not depend on individuals already present (hence the system cannot transit INTO the entry state - it can only transit FROM it). This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more information). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of entry state, in 0..N-1 ExitState- index of exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdcreateentryexit( ae_int_t n, ae_int_t entrystate, ae_int_t exitstate, mcpdstate& s);
mcpdcreateexit
function/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read its comments for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Exit-state" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is an N*N transition matrix and one selected component of X[] is called the "exit" state and is treated in a special way: the system state can transit from any state into the "exit" state the system state cannot transit from the "exit" state into any other state the transition operator discards the "exit" state (makes it zero at each turn) Such conditions basically mean that the column of P which corresponds to the "exit" state is zero. Multiplication by such P may decrease the sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - individuals can move into the "exit" state and leave the population at the next turn, but there are no new individuals * the amount of individuals which leave the population can be predicted * you want to model transitions of individuals from one state into another (including transitions into the "exit" state) This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more information). INPUT PARAMETERS: N - problem dimension, N>=2 ExitState- index of exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdcreateexit(ae_int_t n, ae_int_t exitstate, mcpdstate& s);
mcpdresults
function/************************************************************************* MCPD results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: P - array[N,N], transition matrix Rep - optimization report. You should check Rep.TerminationType in order to distinguish successful termination from an unsuccessful one. In short, positive values denote success, negative ones are failures. More information about the fields of this structure can be found in the comments on the MCPDReport datatype. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdresults(mcpdstate s, real_2d_array& p, mcpdreport& rep);
mcpdsetbc
function/************************************************************************* This function is used to set bound constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a boundary constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place bound constraints on an arbitrary subset of elements of P. The set of constraints is specified by BndL/BndU matrices, which may contain an arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). You can also use the MCPDAddBC() function, which allows you to ADD a bound constraint for one element of P without changing constraints for other elements. These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is an internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the bound constraints. INPUT PARAMETERS: S - solver BndL - lower bounds constraints, array[N,N]. Elements of BndL can be finite numbers or -INF. BndU - upper bounds constraints, array[N,N]. Elements of BndU can be finite numbers or +INF. 
-- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsetbc( mcpdstate s, real_2d_array bndl, real_2d_array bndu);
mcpdsetec
function/************************************************************************* This function is used to set equality constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a boundary constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place equality constraints on an arbitrary subset of elements of P. The set of constraints is specified by EC, which may contain either NAN's or finite numbers from [0,1]. NAN denotes absence of constraint, a finite number denotes an equality constraint on the specific element of P. You can also use the MCPDAddEC() function, which allows you to ADD an equality constraint for one element of P without changing constraints for other elements. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is an internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver EC - equality constraints, array[N,N]. Elements of EC can be either NAN's or finite numbers from [0,1]. NAN denotes absence of constraints, while a finite value denotes an equality constraint on the corresponding element of P. NOTES: 1. 
Infinite values of EC will lead to an exception being thrown. Values less than 0.0 or greater than 1.0 will lead to an error code being returned after a call to MCPDSolve(). -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsetec(mcpdstate s, real_2d_array ec);
mcpdsetlc
function/************************************************************************* This function is used to set linear equality/inequality constraints on the elements of the transition matrix P. This function can be used to set one or several general linear constraints on the elements of P. Two types of constraints are supported: * equality constraints * inequality constraints (both less-or-equal and greater-or-equal) Coefficients of constraints are specified by the matrix C (one of the parameters). One row of C corresponds to one constraint. Because the transition matrix P has N*N elements, we need N*N columns to store all coefficients (they are stored row by row), and one more column to store the right part - hence C has N*N+1 columns. Constraint kind is stored in the CT array. Thus, the I-th linear constraint is P[0,0]*C[I,0] + P[0,1]*C[I,1] + .. + P[0,N-1]*C[I,N-1] + + P[1,0]*C[I,N] + P[1,1]*C[I,N+1] + ... + + P[N-1,N-1]*C[I,N*N-1] ?=? C[I,N*N] where ?=? can be either "=" (CT[i]=0), "<=" (CT[i]<0) or ">=" (CT[i]>0). Your constraint may involve only some subset of P (less than N*N elements). For example it can be something like P[0,0] + P[0,1] = 0.5 In this case you still should pass a matrix with N*N+1 columns, but all its elements (except for C[0,0], C[0,1] and the right part C[0,N*N]) will be zero. INPUT PARAMETERS: S - solver C - array[K,N*N+1] - coefficients of constraints (see above for complete description) CT - array[K] - constraint types (see above for complete description) K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsetlc(mcpdstate s, real_2d_array c, integer_1d_array ct); void alglib::mcpdsetlc( mcpdstate s, real_2d_array c, integer_1d_array ct, ae_int_t k);
mcpdsetpredictionweights
function/************************************************************************* This function is used to change prediction weights. The MCPD solver scales prediction errors as follows: Error(P) = ||W*(y-P*x)||^2 where x is a system state at time t y is a system state at time t+1 P is a transition matrix W is a diagonal scaling matrix By default, weights are chosen in order to minimize the relative prediction error rather than the absolute one. For example, if one component of the state is about 0.5 in magnitude and another one is about 0.05, then the algorithm will make the corresponding weights equal to 2.0 and 20.0. INPUT PARAMETERS: S - solver PW - array[N], weights: * must be non-negative values (an exception will be thrown otherwise) * zero values will be replaced by automatically chosen values -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsetpredictionweights(mcpdstate s, real_1d_array pw);
mcpdsetprior
function/************************************************************************* This function allows you to set the prior values used for regularization of your problem. By default, the regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is the transition matrix, prior_P is the identity matrix, and ||X||^2 is a sum of squared elements of X. This function allows you to change the prior values prior_P. You can also change r with the MCPDSetTikhonovRegularizer() function. INPUT PARAMETERS: S - solver PP - array[N,N], matrix of prior values: 1. elements must be real numbers from [0,1] 2. columns must sum to 1.0. The first property is checked (an exception is thrown otherwise), while the second one is not checked/enforced. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsetprior(mcpdstate s, real_2d_array pp);
mcpdsettikhonovregularizer
function/************************************************************************* This function allows you to tune the amount of Tikhonov regularization applied to your problem. By default, the regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is the transition matrix, prior_P is the identity matrix, and ||X||^2 is a sum of squared elements of X. This function allows you to change the coefficient r. You can also change the prior values with the MCPDSetPrior() function. INPUT PARAMETERS: S - solver V - regularization coefficient, finite non-negative value. It is not recommended to specify a zero value unless you are pretty sure that you want it. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsettikhonovregularizer(mcpdstate s, double v);
mcpdsolve
function/************************************************************************* This function is used to start solving the MCPD problem. After returning from this function, you can use MCPDResults() to get the solution and completion code. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/void alglib::mcpdsolve(mcpdstate s);
#include "stdafx.h" #include <stdlib.h> #include <stdio.h> #include <math.h> #include "dataanalysis.h" using namespace alglib; int main(int argc, char **argv) { // // The very simple MCPD example // // We have a loan portfolio. Our loans can be in one of two states: // * normal loans ("good" ones) // * past due loans ("bad" ones) // // We assume that: // * loans can transition from any state to any other state. In // particular, a past due loan can become a "good" one at any moment // with the same (fixed) probability. Not realistic, but it is a toy example :) // * portfolio size does not change over time // // Thus, we have the following model // state_new = P*state_old // where // ( p00 p01 ) // P = ( ) // ( p10 p11 ) // // We want to model transitions between these two states using the MCPD // approach (Markov Chains for Proportional/Population Data), i.e. // to restore the hidden transition matrix P using actual portfolio data. // We have: // * proportional data, i.e. proportions of loans in the normal and past // due states (not portfolio size measured in some currency, although // it is possible to work with population data too) // * two tracks, i.e. two sequences which describe portfolio // evolution from two different starting states: [1,0] (all loans // are "good") and [0.8,0.2] (only 80% of portfolio is in the "good" // state) // mcpdstate s; mcpdreport rep; real_2d_array p; real_2d_array track0 = "[[1.00000,0.00000],[0.95000,0.05000],[0.92750,0.07250],[0.91738,0.08263],[0.91282,0.08718]]"; real_2d_array track1 = "[[0.80000,0.20000],[0.86000,0.14000],[0.88700,0.11300],[0.89915,0.10085]]"; mcpdcreate(2, s); mcpdaddtrack(s, track0); mcpdaddtrack(s, track1); mcpdsolve(s); mcpdresults(s, p, rep); // // Hidden matrix P is equal to // ( 0.95 0.50 ) // ( ) // ( 0.05 0.50 ) // which means that "good" loans can become "bad" with 5% probability, // while "bad" loans will return to the good state with 50% probability. 
// printf("%s\n", p.tostring(2).c_str()); // EXPECTED: [[0.95,0.50],[0.05,0.50]] return 0; }
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;


int main(int argc, char **argv)
{
    //
    // Simple MCPD example
    //
    // We have a loan portfolio. Our loans can be in one of three states:
    // * normal loans
    // * past due loans
    // * charged off loans
    //
    // We assume that:
    // * a normal loan can stay normal or become past due (but not charged off)
    // * a past due loan can stay past due, become normal or charged off
    // * a charged off loan will stay charged off for the rest of eternity
    // * portfolio size does not change over time
    // Not realistic, but it is a toy example :)
    //
    // Thus, we have the following model
    //     state_new = P*state_old
    // where
    //         ( p00  p01  0 )
    //     P = ( p10  p11  0 )
    //         (  0   p21  1 )
    // i.e. four elements of P are known a priori. Although it is possible
    // (given enough data) to recover these elements from the tracks alone,
    // in order to enforce this property we set equality constraints on
    // these elements.
    //
    // We want to model transitions between these three states using the MCPD
    // approach (Markov Chains for Proportional/Population Data), i.e.
    // to restore the hidden transition matrix P using actual portfolio data.
    // We have:
    // * proportional data, i.e. proportions of loans in each of the three
    //   states (not portfolio size measured in some currency, although
    //   it is possible to work with population data too)
    // * two tracks, i.e. two sequences which describe portfolio
    //   evolution from two different starting states: [1,0,0] (all loans
    //   are "good") and [0.8,0.2,0.0] (only 80% of the portfolio is in the
    //   "good" state)
    //
    mcpdstate s;
    mcpdreport rep;
    real_2d_array p;
    real_2d_array track0 = "[[1.000000,0.000000,0.000000],[0.950000,0.050000,0.000000],[0.927500,0.060000,0.012500],[0.911125,0.061375,0.027500],[0.896256,0.060900,0.042844]]";
    real_2d_array track1 = "[[0.800000,0.200000,0.000000],[0.860000,0.090000,0.050000],[0.862000,0.065500,0.072500],[0.851650,0.059475,0.088875],[0.838805,0.057451,0.103744]]";

    mcpdcreate(3, s);
    mcpdaddtrack(s, track0);
    mcpdaddtrack(s, track1);
    mcpdaddec(s, 0, 2, 0.0);
    mcpdaddec(s, 1, 2, 0.0);
    mcpdaddec(s, 2, 2, 1.0);
    mcpdaddec(s, 2, 0, 0.0);
    mcpdsolve(s);
    mcpdresults(s, p, rep);

    //
    // The hidden matrix P is equal to
    //     ( 0.95  0.50  0.00 )
    //     ( 0.05  0.25  0.00 )
    //     ( 0.00  0.25  1.00 )
    // which means that "good" loans can become past due with 5% probability,
    // while past due loans will become charged off with 25% probability or
    // return back to the normal state with 50% probability.
    //
    printf("%s\n", p.tostring(2).c_str()); // EXPECTED: [[0.95,0.50,0.00],[0.05,0.25,0.00],[0.00,0.25,1.00]]
    return 0;
}
minbc subpackage

minbc_d_1 | Nonlinear optimization with box constraints
minbc_numdiff | Nonlinear optimization with bound constraints and numerical differentiation
minbcreport
class
/*************************************************************************
This structure stores the optimization report:
* IterationsCount       number of iterations
* NFEV                  number of gradient evaluations
* TerminationType       termination type (see below)

TERMINATION CODES

The TerminationType field contains the completion code, which can be:
  -8    internal integrity control detected infinite or NAN values in
        function/gradient. Abnormal termination signalled.
  -7    gradient verification failed.
        See MinBCSetGradientCheck() for more information.
  -3    inconsistent constraints.
   1    relative function improvement is no more than EpsF.
   2    relative step is no more than EpsX.
   4    gradient norm is no more than EpsG.
   5    MaxIts steps were taken.
   7    stopping conditions are too stringent, further improvement is
        impossible, X contains the best point found so far.
   8    terminated by the user who called minbcrequesttermination().
        X contains the point which was "current accepted" when the
        termination request was submitted.

ADDITIONAL FIELDS

There are additional fields which can be used for debugging:
* DebugEqErr    error in the equality constraints (2-norm)
* DebugFS       f, calculated at projection of initial point to the
                feasible set
* DebugFF       f, calculated at the final point
* DebugDX       |X_start-X_final|
*************************************************************************/
class minbcreport
{
    ae_int_t iterationscount;
    ae_int_t nfev;
    ae_int_t varidx;
    ae_int_t terminationtype;
};
minbcstate
class
/*************************************************************************
This object stores nonlinear optimizer state. You should use functions
provided by the MinBC subpackage to work with this object.
*************************************************************************/
class minbcstate
{
};
minbccreate
function
/*************************************************************************
                     BOX CONSTRAINED OPTIMIZATION
          WITH FAST ACTIVATION OF MULTIPLE BOX CONSTRAINTS

DESCRIPTION:

The subroutine minimizes function F(x) of N arguments subject to box
constraints (with some of the box constraints actually being equality
ones).

This optimizer uses an algorithm similar to that of MinBLEIC (optimizer
with general linear constraints), but the presence of box-only constraints
allows us to use faster constraint activation strategies. On large-scale
problems, with multiple constraints active at the solution, this optimizer
can be several times faster than BLEIC.

REQUIREMENTS:
* user must provide function value and gradient
* starting point X0 must be feasible or not too far away from the feasible
  set
* grad(f) must be Lipschitz continuous on a level set:
  L = { x : f(x)<=f(x0) }
* function must be defined everywhere on the feasible set F

USAGE:

Constrained optimization is far more complex than the unconstrained one.
Here we give a very brief outline of the BC optimizer. We strongly
recommend you to read the examples in the ALGLIB Reference Manual and the
ALGLIB User Guide on optimization, which is available at
http://www.alglib.net/optimization/

1. User initializes algorithm state with MinBCCreate() call
2. User adds box constraints by calling MinBCSetBC() function.
3. User sets stopping conditions with MinBCSetCond().
4. User calls MinBCOptimize() function which takes algorithm state and
   pointer (delegate, etc.) to callback function which calculates F/G.
5. User calls MinBCResults() to get solution
6. Optionally user may call MinBCRestartFrom() to solve another problem
   with same N but another starting point. MinBCRestartFrom() allows to
   reuse already initialized structure.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case the algorithm
                  will try to find a feasible point first, using X as an
                  initial approximation.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void alglib::minbccreate(real_1d_array x, minbcstate& state);
void alglib::minbccreate(ae_int_t n, real_1d_array x, minbcstate& state);
Examples: [1]
minbccreatef
function
/*************************************************************************
The subroutine is a finite difference variant of MinBCCreate(). It uses
finite differences in order to differentiate the target function.

The description below contains information which is specific to this
function only. We recommend reading the comments on MinBCCreate() in order
to get more information about creation of the BC optimizer.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[0..N-1].
    DiffStep-   differentiation step, >0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTES:
1. algorithm uses 4-point central formula for differentiation.
2. differentiation step along I-th axis is equal to DiffStep*S[I] where
   S[] is a scaling vector which can be set by MinBCSetScale() call.
3. we recommend you to use moderate values of the differentiation step.
   Too large a step will result in too large truncation errors, while too
   small a step will result in too large numerical errors. 1.0E-6 can be
   a good value to start with.
4. Numerical differentiation is very inefficient - one gradient
   calculation needs 4*N function evaluations. This function will work
   for any N - either small (1...10), moderate (10...100) or large
   (100...). However, the performance penalty will be too severe for any
   N except small ones. We should also say that code which relies on
   numerical differentiation is less robust and precise: an imprecise
   gradient may slow down convergence, especially on highly nonlinear
   problems. Thus we recommend using this function for fast prototyping
   on small-dimensional problems only, and implementing the analytical
   gradient as soon as possible.

  -- ALGLIB --
     Copyright 16.05.2011 by Bochkanov Sergey
*************************************************************************/
void alglib::minbccreatef(
    real_1d_array x,
    double diffstep,
    minbcstate& state);
void alglib::minbccreatef(
    ae_int_t n,
    real_1d_array x,
    double diffstep,
    minbcstate& state);
Examples: [1]
minbcoptimize
function/*********************************************