We recommend using the strumpack::structured::StructuredMatrix interface instead of using the HODLR classes directly. See Dense Solvers.
HODLR, or Hierarchically Off-Diagonal Low Rank, is a rank-structured format similar to HSS, but simpler. It uses the same weak admissibility, i.e., all off-diagonal blocks are low rank, but it does not use nested bases. Compared to HSS, HODLR has worse asymptotic complexity in theory, but its algorithms can be faster in practice for medium-sized problems.
STRUMPACK's HODLR code uses the external ButterflyPACK library, which can be found at https://github.com/liuyangzhuan/ButterflyPACK
See the Installation and Requirements instructions for how to configure and compile STRUMPACK with support for HODLR.
The HODLR include files are installed in the include/HODLR/ subdirectory, or in src/HODLR/. All HODLR code is in the namespace strumpack::HODLR. The main class for sequential/multithreaded as well as distributed memory HODLR matrices is strumpack::HODLR::HODLRMatrix.
The strumpack::DenseMatrix class is a simple wrapper around a column-major matrix. See the documentation of that class for more information.
There are currently 3 ways to construct an HODLR matrix:
Use the constructor taking an element extraction routine, i.e., a routine that returns individual entries A(i,j) of the matrix.
For example, to construct an HODLR approximation of a Toeplitz matrix:
Use the constructor taking a matrix-vector multiplication routine, for matrix-free compression.
The strumpack::MPIComm object is a simple wrapper around an MPI communicator. The partition or cluster tree data structure is the same as for HSS matrices. See the Dense Matrices section for how to construct this tree.
We have an optimized HODLR construction algorithm for so-called kernel matrices, which arise in several applications, such as kernel ridge regression in machine learning. One can use the dedicated strumpack::HODLR::HODLRMatrix constructor taking a kernel::Kernel object.
However, for kernel ridge regression, the strumpack::kernel::Kernel class provides easy-to-use driver routines; see the documentation of that class.
There is also a Python interface to these kernel regression routines, compatible with scikit-learn, see install/python/STRUMPACKKernel.py and examples/dense/KernelRegressionMPI.py.
TODO discuss parallel storage, mult, factor, solve, inv_mult, etc.