# Sparse matrix operations (BLAS)

Support for sparse linear algebra (and other operations) is an important part of any numerical package. The ALGLIB package includes a highly optimized sparse matrix class which supports a rich set of operations and can be used from several programming languages, including:

• ALGLIB for C++, a high-performance C++ library with great portability across hardware and software platforms
• ALGLIB for C#, a highly optimized C# library with two alternative backends: a pure C# implementation (100% managed code) and a high-performance native implementation (Windows, Linux) with the same C# interface

Our implementation of sparse matrices has the following distinctive features (see below for a more detailed discussion):

• high-performance C++/C# code
• various storage formats: well known CRS and SKS (Skyline) formats, and a flexible hash-table-based format for easy initialization
• rich set of sparse BLAS functions working with general, symmetric and triangular sparse matrices
• triangular factorizations, sparse linear solvers and other functionality

# Getting started and examples

## Getting started

The most important pieces of sparse matrix functionality are provided by the sparse subpackage of ALGLIB. This subpackage implements the sparse matrix class, basic operations like matrix initialization/modification, and a rich set of linear algebra operations (sparse BLAS).

Everything starts with the creation of a sparse matrix instance. You have to decide which sparse matrix storage format to use for initialization: CRS, SKS or HTS (hash table based storage). Each format has its benefits and drawbacks:

• HTS (hash table based storage) is the most flexible one - you do not have to specify the non-zero pattern in advance. The downside is a relatively large storage and performance overhead. Furthermore, this format is not intended for efficient linear algebra operations, so you will have to convert the matrix to one of the formats below in any case.
• CRS (compressed row storage) and SKS (skyline storage, also known as variable bandwidth) formats support more efficient initialization and storage at the cost of some inflexibility - you have to specify the non-zero pattern during matrix creation.

A successfully initialized matrix can be used in one of the following ways:

• you may access (read/write) individual elements with sparseget and sparseset functions
• you may pass it to one of the Level 2 BLAS functions described below
• you may pass it to triangular factorization functions
• or you may pass it to one of the sparse solvers (direct or iterative)
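
The overall workflow - flexible initialization, conversion to a computation-friendly format, then linear algebra - can be pictured with a plain-Python sketch. This is not ALGLIB code; all names here are our own, chosen purely for illustration:

```python
# Illustrative sketch of the sparse workflow (not ALGLIB API):
# build the matrix in a flexible dictionary-of-keys form, convert it
# to CRS, then run a matrix-vector product on the CRS representation.

def dok_to_crs(dok, n_rows):
    """Convert {(i, j): value} to CRS arrays (row_ptr, col_idx, values)."""
    items = sorted(dok.items())          # order by row, then by column
    row_ptr = [0] * (n_rows + 1)
    col_idx, values = [], []
    for (i, j), v in items:
        row_ptr[i + 1] += 1              # count elements per row
        col_idx.append(j)
        values.append(v)
    for i in range(n_rows):              # prefix sums -> row boundaries
        row_ptr[i + 1] += row_ptr[i]
    return row_ptr, col_idx, values

def crs_matvec(row_ptr, col_idx, values, x):
    """y = A·x for a matrix stored in CRS form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Elements may be inserted in arbitrary order at this stage.
a = {(0, 0): 2.0, (1, 1): 3.0, (0, 1): 1.0}
crs = dok_to_crs(a, 2)
print(crs_matvec(*crs, [1.0, -1.0]))     # [1.0, -3.0]
```

The same three-step shape (flexible build, conversion, BLAS call) is what the ALGLIB functions described below implement, with far better performance.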

## Examples

The following examples (in C++, C# and other languages supported by ALGLIB) show how to work with sparse matrices in ALGLIB:

• sparse_d_1 - a very simple example of sparse matrix creation and BLAS operations
• sparse_d_crs - creation of CRS matrix
• solvesks_d_1 - using a direct sparse solver

The sections below discuss these examples in more detail.

# Sparse storage formats

Several popular storage formats exist for sparse matrices, each with its own benefits and drawbacks. The ALGLIB package implements three of them: HTS (hash table storage, also known as dictionary of keys), CRS (compressed row storage) and SKS (skyline storage).

## HTS (hash table storage)

A sparse matrix can be stored as a hash table which maps keys (row/column indexes) to the values of the corresponding elements. This storage format has the following features:

• matrix creation - flexible, storage is automatically resized if needed.
• time to insert element - O(1). Elements can be inserted in arbitrary order.
• memory efficiency - inefficient, roughly 6·sizeof(double) bytes per element.
• time to read element - O(1).
• BLAS operations - not supported. A hash table stores matrix elements in random order, which prevents efficient implementation of linear algebra operations. Thus, in order to use the matrix in such operations you have to convert it to CRS or SKS format.

As you can see, HTS is good for fast prototyping (flexible initialization, no need to know the non-zero pattern in advance), but in order to actually perform computations with the matrix you have to convert it to one of the more efficient formats.
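
The behavior described above is easy to picture with a hash table keyed by (row, column) pairs, much like a Python dict. This is an illustration of the storage idea, not of ALGLIB internals:

```python
# Hash-table (dictionary-of-keys) storage: O(1) insertion and lookup,
# arbitrary insertion order - but no element ordering that a BLAS
# kernel could exploit.
a = {}
a[(2, 2)] = 5.0             # elements may arrive in any order
a[(0, 1)] = 1.0
a[(1, 0)] = -2.0
print(a.get((0, 1), 0.0))   # stored element -> 1.0
print(a.get((0, 2), 0.0))   # absent element reads as zero -> 0.0
```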

## CRS representation

CRS (compressed row storage) is another representation which can be used to store a sparse matrix. Matrix elements are ordered by rows (from top to bottom) and by columns within each row (from left to right). This storage format has the following features:

• matrix creation - you have to know in advance how many nonzero elements are stored in each row of the matrix.
• time to insert element - O(1). Elements must be inserted in strict order: from left to right, from top to bottom.
• memory efficiency - 2·sizeof(double) bytes per element.
• time to read element - O(log(NonzeroElementsInRow)) (binary search is used).
• BLAS operations - supported.

The only downside of the CRS representation is that it does not allow flexible HTS-like initialization: you have to tell in advance how many elements are located in each row. On the other hand, this format has better memory efficiency and supports sparse BLAS operations.
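
The O(log(NonzeroElementsInRow)) read time quoted above comes from binary search over the sorted column indexes of a row. A minimal sketch of CRS element access (our own names, not ALGLIB code):

```python
from bisect import bisect_left

# CRS storage of A = [[2, 0, 1],
#                     [0, 3, 0]]
row_ptr = [0, 2, 3]        # row i occupies values[row_ptr[i]:row_ptr[i+1]]
col_idx = [0, 2, 1]        # column of each stored element, sorted per row
values  = [2.0, 1.0, 3.0]

def crs_get(i, j):
    """Read A[i, j]: binary search among the columns stored in row i."""
    lo, hi = row_ptr[i], row_ptr[i + 1]
    k = bisect_left(col_idx, j, lo, hi)
    if k < hi and col_idx[k] == j:
        return values[k]
    return 0.0             # position outside the non-zero pattern

print(crs_get(0, 2))       # 1.0
print(crs_get(1, 0))       # 0.0
```

Note that the three flat arrays cost roughly two machine words per stored element, which is where the 2·sizeof(double) memory estimate comes from.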

## SKS representation

SKS (skyline storage, also known as variable bandwidth storage) is a representation optimized for low-profile matrices. All elements which fall within the skyline are stored in the matrix.

This storage format has the following features:

• matrix creation - you have to know the skyline pattern in advance.
• time to insert element - O(1), very fast, in arbitrary order.
• memory efficiency - sizeof(double) bytes per element.
• time to read element - O(1), very fast.
• BLAS operations - supported.

The skyline storage format has the following interesting property: its shape is preserved by Cholesky decomposition, i.e. no fill-in occurs during factorization.
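
A lower-triangular skyline can be pictured as one contiguous strip per row, running from the row's first nonzero column to the diagonal - exactly one double per stored element, with O(1) addressing. A sketch of the idea (illustration only, not ALGLIB internals):

```python
# Skyline (SKS) storage of the lower triangle of
# A = [[4, 0, 0],
#      [1, 5, 0],
#      [0, 2, 6]]
first_col = [0, 0, 1]                    # where each row's skyline starts
rows = [[4.0], [1.0, 5.0], [2.0, 6.0]]   # contiguous strips up to the diagonal

def sks_get(i, j):
    """O(1) read: direct offset into row i's strip."""
    if j < first_col[i] or j > i:
        return 0.0                       # outside the stored skyline
    return rows[i][j - first_col[i]]

print(sks_get(2, 1))   # 2.0
print(sks_get(1, 0))   # 1.0
print(sks_get(2, 0))   # 0.0  (outside the skyline)
```

Because element addresses are computed directly from the row offsets, both reads and writes within the skyline are constant-time, in any order.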

# Basic operations with sparse matrices

## Creation, copying and conversion between storage formats

A sparse matrix object can be created with one of the following functions:

• sparsecreate - creates matrix in HTS format
• sparsecreatecrs - creates matrix in CRS format
• sparsecreatesks - creates matrix in Skyline (SKS) format
• sparsecreatesksband - creates SKS matrix with fixed bandwidth

If you have an already allocated sparse matrix object and want to reuse previously allocated storage as much as possible, you can use buffered versions of the functions above: sparsecreatebuf, sparsecreatecrsbuf, sparsecreatesksbuf or sparsecreatesksbandbuf. If you want to immediately free the memory associated with a sparse matrix object (but for some reason cannot destroy the class instance), you can use the sparsefree function.

A copy of a sparse matrix instance can be created with the sparsecopy (or sparsecopybuf) function. An instance of the sparse matrix class can also be copied and converted to another storage format (it can also be converted in place, although such "in-place" conversion often involves allocation of additional memory):

• the sparseconverttocrs function performs in-place conversion to CRS format, while sparsecopytocrs and sparsecopytocrsbuf combine copying with conversion.
• sparseconverttosks, sparsecopytosks and sparsecopytosksbuf do the same for SKS format
• sparseconverttohash, sparsecopytohash and sparsecopytohashbuf perform the same operations for HTS format.
• finally, sparseconvertto and sparsecopytobuf allow you to select the target storage format on the fly.

Other operations with sparse matrices include:

• sparsetransposesks, which performs in-place transposition of an SKS matrix
• sparseswap, which swaps the contents of two instances of the sparse matrix class

Sparse matrix modification can be performed with one of the following functions:

• sparseset and sparseadd, which set or modify some element A[i,j]
• sparserewriteexisting, which supports thread-safe modification of the matrix

You can read contents of the sparse matrix with functions below:

• sparseget, which retrieves element A[i,j]
• sparsegetrow and sparsegetcompressedrow, which return an entire row of a CRS/SKS matrix
• sparseenumerate, which enumerates the non-zero elements of the matrix

# Sparse BLAS

## General (unsymmetric) sparse matrices

One of the most important sparse matrix operations is the calculation of a matrix-vector product. The following functions are provided:

• sparsemv, which calculates A·x
• sparsemtv, which calculates Aᵀ·x
• sparsemv2, which calculates A·x and Aᵀ·x simultaneously
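
Computing both products together needs only one pass over the stored elements, which is why a combined routine like sparsemv2 can be cheaper than two separate calls. A sketch of the idea over CRS arrays (our own names, not ALGLIB code):

```python
# One sweep over a CRS matrix produces both y1 = A·x and y2 = Aᵀ·x:
# each stored element a_ij contributes a_ij*x[j] to y1[i]
# and a_ij*x[i] to y2[j].  For simplicity A is square here,
# so a single x is valid for both products.

def crs_mv2(row_ptr, col_idx, values, x, n_cols):
    y1 = [0.0] * (len(row_ptr) - 1)   # A·x
    y2 = [0.0] * n_cols               # Aᵀ·x
    for i in range(len(y1)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            y1[i] += values[k] * x[j]
            y2[j] += values[k] * x[i]
    return y1, y2

# A = [[2, 1], [0, 3]], x = [1, -1]
y1, y2 = crs_mv2([0, 2, 3], [0, 1, 1], [2.0, 1.0, 3.0], [1.0, -1.0], 2)
print(y1)   # [1.0, -3.0]
print(y2)   # [2.0, -2.0]
```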

Sometimes you want to calculate matrix-matrix product A·X, where A is sparse and X is dense:

• sparsemm calculates A·X
• sparsemtm calculates Aᵀ·X
• sparsemm2 calculates A·X and Aᵀ·X simultaneously

## Symmetric sparse matrices

Another important case is a sparse symmetric matrix: although such matrices can be handled just like general ones, you can save both memory and time by storing just one half of the matrix (either the upper or the lower triangle) and explicitly accounting for its structure. The following functions work with sparse symmetric matrices:

• sparsesmv calculates the matrix-vector product A·x (which is the same as Aᵀ·x, because A is symmetric)
• sparsesmm calculates matrix-matrix product A·X
• sparsevsmv can be used to calculate the two-sided product xᵀ·A·x
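
Storing only one triangle halves the memory; during a product, each stored off-diagonal element then simply contributes twice. A sketch of a symmetric matrix-vector product and the two-sided product from half storage (illustration, not ALGLIB code):

```python
# Upper triangle of the symmetric matrix
# A = [[4, 1, 0],
#      [1, 5, 2],
#      [0, 2, 6]]
upper = {(0, 0): 4.0, (0, 1): 1.0, (1, 1): 5.0, (1, 2): 2.0, (2, 2): 6.0}

def sym_mv(upper, n, x):
    """y = A·x using only the stored upper triangle."""
    y = [0.0] * n
    for (i, j), v in upper.items():
        y[i] += v * x[j]
        if i != j:                 # mirror the off-diagonal element
            y[j] += v * x[i]
    return y

def sym_quad(upper, n, x):
    """Two-sided product xᵀ·A·x (the quantity sparsevsmv computes)."""
    y = sym_mv(upper, n, x)
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 0.0, -1.0]
print(sym_mv(upper, 3, x))    # [4.0, -1.0, -6.0]
print(sym_quad(upper, 3, x))  # 10.0
```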

## Triangular sparse matrices

Finally, triangular sparse matrices are one more important case:

• sparsetrmv can be used to calculate the matrix-vector product A·x
• sparsetrsv solves the triangular system A·x=b
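
A lower-triangular solve A·x=b reduces to forward substitution over the stored elements, visiting each nonzero exactly once. A sketch over CRS-style row storage (our own names, not ALGLIB code):

```python
# Forward substitution for a sparse lower-triangular system L·x = b.
# Each row is a list of (column, value) pairs in ascending column order,
# so the diagonal entry is always last.

def sparse_trsv_lower(rows, b):
    x = [0.0] * len(b)
    for i, row in enumerate(rows):
        s = b[i]
        for j, v in row[:-1]:          # subtract already-known terms
            s -= v * x[j]
        diag_col, diag_val = row[-1]   # row ends at the diagonal
        x[i] = s / diag_val
    return x

# L = [[2, 0, 0],
#      [1, 4, 0],
#      [0, 2, 5]]
rows = [[(0, 2.0)], [(0, 1.0), (1, 4.0)], [(1, 2.0), (2, 5.0)]]
print(sparse_trsv_lower(rows, [2.0, 6.0, 7.0]))   # [1.0, 1.25, 0.9]
```

An upper-triangular solve is the mirror image: backward substitution from the last row up.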

# Other functions

## Factorizations

Triangular factorizations of sparse matrices are implemented in the trfac subpackage. You can find more information on the subject on the corresponding page of the ALGLIB User Guide.

## Linear systems

ALGLIB includes several direct and iterative sparse solvers: the linear conjugate gradient method, the LSQR iterative least squares solver, and a direct SKS-based solver. More information on the subject can be found in the following sections of the ALGLIB Reference Manual:

• directsparsesolvers section includes information about direct sparse solvers
• linlsqr section includes information about LSQR solver
• lincg section describes linear CG solver
