strumpack::MPIComm Class Reference

Wrapper class around an MPI_Comm object. More...

#include <MPIWrapper.hpp>

Public Member Functions

 MPIComm ()
 
 MPIComm (MPI_Comm c)
 
 MPIComm (const MPIComm &c)
 
 MPIComm (MPIComm &&c) noexcept
 
virtual ~MPIComm ()
 
MPIComm & operator= (const MPIComm &c)
 
MPIComm & operator= (MPIComm &&c) noexcept
 
MPI_Comm comm () const
 
bool is_null () const
 
int rank () const
 
int size () const
 
bool is_root () const
 
void barrier () const
 
template<typename T >
void broadcast (std::vector< T > &sbuf) const
 
template<typename T >
void broadcast_from (std::vector< T > &sbuf, int src) const
 
template<typename T , std::size_t N>
void broadcast (std::array< T, N > &sbuf) const
 
template<typename T >
void broadcast (T &data) const
 
template<typename T >
void broadcast_from (T &data, int src) const
 
template<typename T >
void broadcast (T *sbuf, std::size_t ssize) const
 
template<typename T >
void broadcast_from (T *sbuf, std::size_t ssize, int src) const
 
template<typename T >
void all_gather (T *buf, std::size_t rsize) const
 
template<typename T >
void all_gather_v (T *buf, const int *rcnts, const int *displs) const
 
template<typename T >
void gather (T *sbuf, int ssize, int *rbuf, int rsize, int root) const
 
template<typename T >
void gather_v (T *sbuf, int scnts, T *rbuf, const int *rcnts, const int *displs, int root) const
 
template<typename T >
MPIRequest isend (const std::vector< T > &sbuf, int dest, int tag) const
 
template<typename T >
void isend (const std::vector< T > &sbuf, int dest, int tag, MPI_Request *req) const
 
template<typename T >
void isend (const T *sbuf, std::size_t ssize, int dest, int tag, MPI_Request *req) const
 
template<typename T >
void send (const T *sbuf, std::size_t ssize, int dest, int tag) const
 
template<typename T >
void isend (const T &buf, int dest, int tag, MPI_Request *req) const
 
template<typename T >
void send (const std::vector< T > &sbuf, int dest, int tag) const
 
template<typename T >
std::vector< T > recv (int src, int tag) const
 
template<typename T >
std::pair< int, std::vector< T > > recv_any_src (int tag) const
 
template<typename T >
T recv_one (int src, int tag) const
 
template<typename T >
void irecv (const T *rbuf, std::size_t rsize, int src, int tag, MPI_Request *req) const
 
template<typename T >
void recv (const T *rbuf, std::size_t rsize, int src, int tag) const
 
template<typename T >
T all_reduce (T t, MPI_Op op) const
 
template<typename T >
T reduce (T t, MPI_Op op) const
 
template<typename T >
void all_reduce (T *t, int ssize, MPI_Op op) const
 
template<typename T >
void all_reduce (std::vector< T > &t, MPI_Op op) const
 
template<typename T >
void reduce (T *t, int ssize, MPI_Op op, int dest=0) const
 
template<typename T >
void all_to_all (const T *sbuf, int scnt, T *rbuf) const
 
template<typename T , typename A = std::allocator<T>>
std::vector< T, A > all_to_allv (const T *sbuf, int *scnts, int *sdispls, int *rcnts, int *rdispls) const
 
template<typename T >
void all_to_allv (const T *sbuf, int *scnts, int *sdispls, T *rbuf, int *rcnts, int *rdispls) const
 
template<typename T , typename A = std::allocator<T>>
void all_to_all_v (std::vector< std::vector< T >> &sbuf, std::vector< T, A > &rbuf, std::vector< T * > &pbuf) const
 
template<typename T , typename A = std::allocator<T>>
std::vector< T, A > all_to_all_v (std::vector< std::vector< T >> &sbuf) const
 
template<typename T , typename A = std::allocator<T>>
void all_to_all_v (std::vector< std::vector< T >> &sbuf, std::vector< T, A > &rbuf, std::vector< T * > &pbuf, const MPI_Datatype Ttype) const
 
MPIComm sub (int P0, int P, int stride=1) const
 
MPIComm sub_self (int p) const
 

Static Public Member Functions

static void control_start (const std::string &name)
 
static void control_stop (const std::string &name)
 
static bool initialized ()
 

Detailed Description

Wrapper class around an MPI_Comm object.

This is a simple wrapper around an MPI_Comm object. The main reason for this class is to simplify resource management of the MPI_Comm object. An object of class MPIComm owns the MPI_Comm that it stores, and is responsible for freeing it (in the destructor).

A number of simple wrappers around basic MPI calls are provided.
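
A minimal usage sketch (the header name follows the #include listed at the top of this page; MPI is initialized and finalized explicitly):

#include <iostream>
#include <mpi.h>
#include <MPIWrapper.hpp>   // header shown at the top of this page

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);
  {
    strumpack::MPIComm comm;     // default constructor wraps MPI_COMM_WORLD
    if (comm.is_root())
      std::cout << "running on " << comm.size() << " ranks" << std::endl;
    comm.barrier();              // collective on all ranks in comm
  }                              // destructor runs here; MPI_COMM_WORLD itself is never freed
  MPI_Finalize();
  return 0;
}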

Constructor & Destructor Documentation

◆ MPIComm() [1/4]

strumpack::MPIComm::MPIComm ( )
inline

Default constructor. This will initialize the encapsulated MPI_Comm to MPI_COMM_WORLD.

◆ MPIComm() [2/4]

strumpack::MPIComm::MPIComm ( MPI_Comm  c)
inline

Constructor using an MPI_Comm. This will DUPLICATE the input communicator c!

Parameters
c: the input MPI communicator, this will be duplicated internally and can thus be freed immediately
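
A sketch of the intended usage (the MPI_Comm_split call is only illustrative; MPI is assumed to be initialized):

MPI_Comm row;
MPI_Comm_split(MPI_COMM_WORLD, /*color*/ 0, /*key*/ 0, &row);
strumpack::MPIComm comm(row);   // duplicates row internally
MPI_Comm_free(&row);            // safe: the MPIComm owns its own duplicate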

◆ MPIComm() [3/4]

strumpack::MPIComm::MPIComm ( const MPIComm &  c)
inline

Copy constructor. Will DUPLICATE the underlying MPI_Comm object!

Parameters
c: will be copied, not changed

◆ MPIComm() [4/4]

strumpack::MPIComm::MPIComm ( MPIComm &&  c)
inlinenoexcept

Move constructor.

Parameters
c: will be moved from, will be reset to MPI_COMM_NULL
See also
operator=(MPIComm&& c)

◆ ~MPIComm()

virtual strumpack::MPIComm::~MPIComm ( )
inlinevirtual

Virtual destructor. Free the MPI_Comm object, unless it is MPI_COMM_NULL or MPI_COMM_WORLD.

Member Function Documentation

◆ all_reduce() [1/2]

template<typename T >
void strumpack::MPIComm::all_reduce ( T *  t,
int  ssize,
MPI_Op  op 
) const
inline

Compute the reduction of op(t[]_i) over all processes i, t[] is an array, and where op can be any MPI_Op, on all processes. This routine is collective on all ranks in this communicator. See documentation for MPI_Allreduce. Every element of the array will be reduced over the different processes. The operation is performed in-place.

Template Parameters
T: type of variables to reduce, should have a corresponding mpi_type<T>() implementation
Parameters
t: pointer to array of variables to reduce
ssize: size of array to reduce
op: reduction operator
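
For example, summing a small per-rank array element-wise (a sketch; comm is an existing MPIComm):

std::vector<double> partial(3, 1.0);   // per-rank partial results
comm.all_reduce(partial.data(), static_cast<int>(partial.size()), MPI_SUM);
// every rank now sees the element-wise sum over all ranks, in-place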

◆ all_reduce() [2/2]

template<typename T >
T strumpack::MPIComm::all_reduce ( T  t,
MPI_Op  op 
) const
inline

Compute the reduction of op(t_i) over all processes i, where op can be any MPI_Op, on all processes. This routine is collective on all ranks in this communicator. See documentation for MPI_Allreduce.

Template Parameters
T: type of variable to reduce, should have a corresponding mpi_type<T>() implementation
Parameters
t: variable to reduce, passed by value
op: reduction operator
Returns
variable of type T, result of the reduction, available on each rank
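
For example (a sketch; comm is an existing MPIComm, and the choice of MPI_SUM and MPI_MAX is illustrative):

double local = static_cast<double>(comm.rank());
double sum = comm.all_reduce(local, MPI_SUM);   // identical result on every rank
double max = comm.all_reduce(local, MPI_MAX);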

◆ all_to_all_v() [1/3]

template<typename T , typename A = std::allocator<T>>
std::vector<T,A> strumpack::MPIComm::all_to_all_v ( std::vector< std::vector< T >> &  sbuf) const
inline

Perform an MPI_Alltoallv. Each rank sends sbuf[i] to process i. The results are received in a single contiguous vector which is returned. This is collective on this MPI communicator.

Template Parameters
T: type of data to send, this should have a corresponding mpi_type<T>() implementation or should define T::mpi_type()
A: allocator to be used for the receive buffer
Parameters
sbuf: send buffers (should be size this->size())
Returns
receive buffer
See also
all_to_all_v
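
A sketch of the typical calling pattern (comm is an existing MPIComm; the payload is illustrative):

std::vector<std::vector<int>> sbuf(comm.size());
for (int p = 0; p < comm.size(); p++)
  sbuf[p].push_back(comm.rank());               // send my rank number to every process
std::vector<int> rbuf = comm.all_to_all_v(sbuf);
// rbuf holds all received data in a single contiguous vector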

◆ all_to_all_v() [2/3]

template<typename T , typename A = std::allocator<T>>
void strumpack::MPIComm::all_to_all_v ( std::vector< std::vector< T >> &  sbuf,
std::vector< T, A > &  rbuf,
std::vector< T * > &  pbuf 
) const
inline

Perform an MPI_Alltoallv. Each rank sends sbuf[i] to process i. The results are received in a single contiguous vector rbuf. pbuf has pointers into rbuf, with pbuf[i] pointing to the data received from rank i. This is collective on this MPI communicator.

Template Parameters
T: type of data to send, this should have a corresponding mpi_type<T>() implementation or should define T::mpi_type()
A: allocator to be used for the rbuf
Parameters
sbuf: send buffers (should be size this->size())
rbuf: receive buffer, can be empty, will be allocated
pbuf: pointers (to positions in rbuf) to where data received from different ranks start
See also
all_to_all_v
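
A sketch of the variant with per-rank pointers (comm is an existing MPIComm; the fill step is omitted):

std::vector<std::vector<double>> sbuf(comm.size());
// ... fill sbuf[p] with the data destined for rank p ...
std::vector<double> rbuf;
std::vector<double*> pbuf;
comm.all_to_all_v(sbuf, rbuf, pbuf);
// pbuf[i] points into rbuf, at the start of the data received from rank i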

◆ all_to_all_v() [3/3]

template<typename T , typename A = std::allocator<T>>
void strumpack::MPIComm::all_to_all_v ( std::vector< std::vector< T >> &  sbuf,
std::vector< T, A > &  rbuf,
std::vector< T * > &  pbuf,
const MPI_Datatype  Ttype 
) const
inline

Perform an MPI_Alltoallv. Each rank sends sbuf[i] to process i. The results are received in a single contiguous vector rbuf. pbuf has pointers into rbuf, with pbuf[i] pointing to the data received from rank i. This is collective on this MPI communicator.

Template Parameters
T: type of data to send
A: allocator to be used for the receive buffer
Parameters
sbuf: send buffers (should be size this->size())
rbuf: receive buffer, can be empty, will be allocated
pbuf: pointers (to positions in rbuf) to where data received from different ranks start
Ttype: MPI_Datatype corresponding to the template parameter T
See also
all_to_all_v

◆ barrier()

void strumpack::MPIComm::barrier ( ) const
inline

Perform a barrier operation. This operation is collective on all the ranks in this communicator.

◆ comm()

MPI_Comm strumpack::MPIComm::comm ( ) const
inline

Returns the underlying MPI_Comm object.

◆ control_start()

static void strumpack::MPIComm::control_start ( const std::string &  name)
inlinestatic

Call MPI_Pcontrol with level 1, and string name

◆ control_stop()

static void strumpack::MPIComm::control_stop ( const std::string &  name)
inlinestatic

Call MPI_Pcontrol with level -1, and string name
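
For example, to bracket a code region for an MPI profiling tool (the region name is arbitrary):

strumpack::MPIComm::control_start("factorization");  // MPI_Pcontrol with level 1
// ... region to be profiled ...
strumpack::MPIComm::control_stop("factorization");   // MPI_Pcontrol with level -1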

◆ is_null()

bool strumpack::MPIComm::is_null ( ) const
inline

Checks whether the underlying MPI_Comm object is MPI_COMM_NULL.

◆ is_root()

bool strumpack::MPIComm::is_root ( ) const
inline

Check whether the current process is the root of this MPI communicator.

◆ isend() [1/2]

template<typename T >
MPIRequest strumpack::MPIComm::isend ( const std::vector< T > &  sbuf,
int  dest,
int  tag 
) const
inline

Non-blocking send of a vector to a destination process, with a certain tag.

Template Parameters
T: template type of the send buffer, should have a corresponding mpi_type<T>() implementation
Parameters
sbuf: buffer of type T to be sent
dest: rank of destination process in this MPI communicator
tag: tag to use in MPI message
Returns
request object, use this to wait for completion of the non-blocking send

◆ isend() [2/2]

template<typename T >
void strumpack::MPIComm::isend ( const std::vector< T > &  sbuf,
int  dest,
int  tag,
MPI_Request *  req 
) const
inline

Non-blocking send of a vector to a destination process, with a certain tag.

Template Parameters
T: template type of the send buffer, should have a corresponding mpi_type<T>() implementation
Parameters
sbuf: buffer of type T to be sent
dest: rank of destination process in this MPI communicator
tag: tag to use in MPI message
req: MPI request object
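
A sketch combining this overload with a plain MPI_Wait (comm is an existing MPIComm; dest and tag values are arbitrary, and the matching receive is omitted):

std::vector<float> data(100, 1.0f);
MPI_Request req;
comm.isend(data, /*dest*/ 1, /*tag*/ 42, &req);
// ... overlap communication with other work; data must remain valid until completion ...
MPI_Wait(&req, MPI_STATUS_IGNORE);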

◆ operator=() [1/2]

MPIComm& strumpack::MPIComm::operator= ( const MPIComm &  c)
inline

Assignment operator. This will DUPLICATE the MPI_Comm object.

Parameters
c: object to copy from, will not be modified.

◆ operator=() [2/2]

MPIComm& strumpack::MPIComm::operator= ( MPIComm &&  c)
inlinenoexcept

Move operator.

Parameters
c: the object to be moved from, will be reset to MPI_COMM_NULL

◆ rank()

int strumpack::MPIComm::rank ( ) const
inline

Return the current rank in this communicator.

◆ recv()

template<typename T >
std::vector<T> strumpack::MPIComm::recv ( int  src,
int  tag 
) const
inline

Receive a vector of T's from process src, with tag. The message size does not need to be known in advance.

Template Parameters
T: template parameter of vector to receive, should have a corresponding mpi_type<T>() implementation
Parameters
src: process to receive from
tag: tag to match the message
Returns
std::vector<T> with the data to be received.
See also
isend, send
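
A sketch pairing recv with send between ranks 0 and 1 (comm is an existing MPIComm with at least two ranks; the tag value is arbitrary):

if (comm.rank() == 0) {
  std::vector<int> msg{1, 2, 3};
  comm.send(msg, /*dest*/ 1, /*tag*/ 7);
} else if (comm.rank() == 1) {
  std::vector<int> msg = comm.recv<int>(/*src*/ 0, /*tag*/ 7);
  // msg.size() == 3; the size did not need to be known in advance
}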

◆ reduce() [1/2]

template<typename T >
void strumpack::MPIComm::reduce ( T *  t,
int  ssize,
MPI_Op  op,
int  dest = 0 
) const
inline

Compute the reduction of op(t[]_i) over all processes i, t[] is an array, and where op can be any MPI_Op, on the root process. This routine is collective on all ranks in this communicator. See documentation for MPI_Reduce. Every element of the array will be reduced over the different processes. The operation is performed in-place.

Template Parameters
T: type of variables to reduce, should have a corresponding mpi_type<T>() implementation
Parameters
t: pointer to array of variables to reduce
ssize: size of array to reduce
op: reduction operator
dest: where to reduce to
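
For example, reducing an array onto the root (a sketch; comm is an existing MPIComm):

std::vector<int> counts(4, 1);
comm.reduce(counts.data(), static_cast<int>(counts.size()), MPI_SUM);  // dest defaults to 0
if (comm.is_root()) {
  // counts now holds the element-wise sums over all ranks
}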

◆ reduce() [2/2]

template<typename T >
T strumpack::MPIComm::reduce ( T  t,
MPI_Op  op 
) const
inline

Compute the reduction of op(t_i) over all processes i, where op can be any MPI_Op, on the root. This routine is collective on all ranks in this communicator. See documentation for MPI_Reduce (with dest == 0).

Template Parameters
T: type of variable to reduce, should have a corresponding mpi_type<T>() implementation
Parameters
t: variable to reduce, passed by value
op: reduction operator
Returns
variable of type T, result of the reduction, available only on the root process.

◆ send()

template<typename T >
void strumpack::MPIComm::send ( const std::vector< T > &  sbuf,
int  dest,
int  tag 
) const
inline

Blocking send of a vector to a destination process, with a certain tag.

Template Parameters
T: template type of the send buffer, should have a corresponding mpi_type<T>() implementation
Parameters
sbuf: buffer of type T to be sent
dest: rank of destination process in this MPI communicator
tag: tag to use in MPI message
See also
recv, isend

◆ size()

int strumpack::MPIComm::size ( ) const
inline

Return the size of, i.e., the number of processes in, this communicator. This communicator should not be MPI_COMM_NULL.

◆ sub()

MPIComm strumpack::MPIComm::sub ( int  P0,
int  P,
int  stride = 1 
) const
inline

Return a subcommunicator with P ranks, starting from rank P0, using stride stride. I.e., ranks (relative to this communicator) [P0:stride:P0+stride*P) will be included in the new communicator. This operation is collective on all the processes in this communicator.

Parameters
P0: first rank in the new communicator
P: number of ranks in the new communicator
stride: stride between ranks in this communicator determining which ranks will go into the new communicator
Returns
new communicator containing P ranks from this communicator [P0:stride:P0+stride*P)
See also
sub_self
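
For example, grouping the even ranks into a sub-communicator (a sketch; that excluded ranks receive a communicator wrapping MPI_COMM_NULL is an assumption here, by analogy with sub_self):

int P = (comm.size() + 1) / 2;                       // number of even ranks
strumpack::MPIComm evens = comm.sub(/*P0*/ 0, P, /*stride*/ 2);
if (!evens.is_null()) {                              // assumption: excluded ranks get MPI_COMM_NULL
  int sub_rank = evens.rank();                       // rank within the new communicator
  // ... work restricted to the even ranks ...
}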

◆ sub_self()

MPIComm strumpack::MPIComm::sub_self ( int  p) const
inline

Returns a communicator with only rank p, or an MPIComm wrapping MPI_COMM_NULL if my rank in the current MPIComm != p. This is collective on the current communicator.

Parameters
p: rank to be included in new MPIComm
Returns
a new communicator containing only rank p (from the original MPIComm)
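
A sketch (every rank must make the call, since the operation is collective; comm is an existing MPIComm):

strumpack::MPIComm self = comm.sub_self(0);   // every rank calls this
if (!self.is_null()) {
  // only rank 0 reaches this point; self.size() == 1
}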

The documentation for this class was generated from the following file:
MPIWrapper.hpp