GLAC 1.0
Parallel::Communicator Class Reference

#include <communicator.h>

Public Member Functions

 Communicator ()
 
 ~Communicator ()
 

Static Public Member Functions

static void init (int *numberOfArguments, char ***cmdLineArguments)
 Parallel::Communicator::init initializes the communicator and sets up the lattice geometry.

static void initializeSubLattice ()
 Parallel::Communicator::initializeSubLattice sets up the sublattices, either from the Parameters class or manually.

static SU3 getPositiveLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int SU3Dir)
 Parallel::Communicator::getPositiveLink fetches a link in the positive direction.

static SU3 getNegativeLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int SU3Dir)
 Parallel::Communicator::getNegativeLink fetches a link in the negative direction.

static SU3 getNeighboursNeighbourLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int nu, int *nuIndex, int SU3Dir)
 Parallel::Communicator::getNeighboursNeighbourLink fetches a neighbour's neighbour link.

static SU3 getNeighboursNeighbourNegativeLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int nu, int *nuIndex, int SU3Dir)
 Parallel::Communicator::getNeighboursNeighbourNegativeLink fetches a neighbour's neighbour link in the negative direction.

static int getProcessRank ()

static int getNumProc ()

static void setN (std::vector< unsigned int > N)
 Parallel::Communicator::setN sets the lattice dimensions in the Parallel::Communicator class.

static void MPIExit (std::string message)
 Parallel::Communicator::MPIExit exits the program, freeing the MPI groups before it exits.

static void MPIPrint (std::string message)
 Parallel::Communicator::MPIPrint prints a message from rank 0. Includes barriers.

static void setBarrier ()
 Parallel::Communicator::setBarrier sets an MPI_Barrier for all processors.

static void setBarrierActive ()
 Parallel::Communicator::setBarrierActive sets an MPI_Barrier for only the active processors.

static void gatherDoubleResults (double *data, unsigned int N)
 Parallel::Communicator::gatherDoubleResults reduces N data points across processors.

static void freeMPIGroups ()
 Parallel::Communicator::freeMPIGroups frees MPI groups and communicators.

static void reduceToTemporalDimension (std::vector< double > &obsResults, std::vector< double > obs)
 Parallel::Communicator::reduceToTemporalDimension reduces the results to the temporal dimension, i.e. Euclidean time.

static void checkProcessorValidity ()
 Parallel::Communicator::checkProcessorValidity checks that we do not have an odd number of processors.

static void checkSubLatticeDimensionsValidity ()
 Parallel::Communicator::checkSubLatticeDimensionsValidity ensures that the sublattice dimensions are valid, exiting if any is 2 or smaller.

static void checkSubLatticeValidity ()
 Parallel::Communicator::checkSubLatticeValidity runs a series of tests to ensure that the sublattices have been correctly set up.

static void checkLattice (Lattice< SU3 > *lattice, std::string message)
 Parallel::Communicator::checkLattice checks if the lattice contains invalid numbers (NaN).
 

Constructor & Destructor Documentation

◆ Communicator()

Communicator::Communicator ( )

◆ ~Communicator()

Communicator::~Communicator ( )

Member Function Documentation

◆ checkLattice()

void Communicator::checkLattice (Lattice< SU3 > *lattice, std::string message)
static

Parallel::Communicator::checkLattice checks if the lattice contains invalid numbers (NaN).

Parameters
    lattice    the lattice to check.
    message    message to print if the lattice contains NaN values.
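
As a usage sketch (the calling context is an assumption; only the signature above is given on this page), one might validate the gauge field after an update step:

    #include <communicator.h>

    // Minimal sketch, assuming Lattice<SU3> is available from the GLAC
    // headers. Aborts the run with the given message if any element of
    // the lattice is NaN.
    void validateGaugeField(Lattice< SU3 > *lattice)
    {
        Parallel::Communicator::checkLattice(lattice, "NaN detected in gauge field");
    }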

◆ checkProcessorValidity()

void Communicator::checkProcessorValidity ( )
static

Parallel::Communicator::checkProcessorValidity checks that we do not have an odd number of processors.

Todo:
This could probably be changed to simply leave out the one processor that is left over.

◆ checkSubLatticeDimensionsValidity()

void Communicator::checkSubLatticeDimensionsValidity ( )
static

Parallel::Communicator::checkSubLatticeDimensionsValidity.

Ensures that the sublattice dimensions are valid; if any dimension is 2 or smaller, we exit.

◆ checkSubLatticeValidity()

void Communicator::checkSubLatticeValidity ( )
static

Parallel::Communicator::checkSubLatticeValidity runs a series of tests to ensure that the sublattices have been correctly set up.


◆ freeMPIGroups()

void Communicator::freeMPIGroups ( )
static

Parallel::Communicator::freeMPIGroups.

Frees MPI groups and communicators.


◆ gatherDoubleResults()

void Communicator::gatherDoubleResults (double *data, unsigned int N)
static

Parallel::Communicator::gatherDoubleResults reduces N data points across processors.

Parameters
    data    data to reduce.
    N       number of points in data to reduce.
Todo:
Remove the unsigned long int and instead use just unsigned long? Change globally to only use long?
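
A hedged usage sketch (the buffer name and its contents are hypothetical; the exact MPI reduction operation is not stated on this page):

    #include <communicator.h>
    #include <vector>

    // Sketch: reduce the per-rank observable values in data across all
    // processors. data is a hypothetical buffer of local measurements.
    void gatherLocalResults(std::vector<double> &data)
    {
        Parallel::Communicator::gatherDoubleResults(
            data.data(), static_cast<unsigned int>(data.size()));
    }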

◆ getNegativeLink()

SU3 Communicator::getNegativeLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int SU3Dir)
static

Parallel::Communicator::getNegativeLink fetches a link in the negative direction.

Parameters
    lattice    a lattice pointer for all four dimensions.
    n          position in the lattice to fetch the link from.
    mu         direction to shift in; always negative in the x, y, z and t directions. The index is the same as the step direction of muIndex.
    muIndex    a unit vector containing a step in the sharing direction. It is \(\hat{\mu}\) in \(U_\nu(n - \hat{\mu})\).
    SU3Dir     the index of the tensor \(\mu\) in the link \(U_\mu\).
Returns
    the fetched SU3 matrix.

◆ getNeighboursNeighbourLink()

SU3 Communicator::getNeighboursNeighbourLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int nu, int *nuIndex, int SU3Dir)
static

Parallel::Communicator::getNeighboursNeighbourLink fetches a neighbour's neighbour link.

Fetches the link when it is given as \(U_\mu(n + \hat{\mu} - \hat{\nu})\).

Parameters
    lattice    a lattice pointer for all four dimensions.
    n          position in the lattice to fetch the link from.
    mu         index of the muIndex vector we are sharing.
    muIndex    a unit vector containing a step in the direction of sharing.
    nu         index of the nuIndex vector we are sharing.
    nuIndex    a unit vector containing a step in the direction of sharing.
    SU3Dir     the index of the tensor \(\mu\) in the link \(U_\mu\).
Returns
the fetched SU3 matrix.
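
For illustration, a sketch of fetching \(U_\mu(n + \hat{\mu} - \hat{\nu})\), the link that appears e.g. in the lower plaquette staple. Treating muIndex and nuIndex as 4-element unit-step arrays is an assumption based on the parameter descriptions above:

    #include <communicator.h>
    #include <vector>

    // Sketch: fetch U_mu(n + mu_hat - nu_hat) across processor boundaries.
    SU3 fetchLowerStapleLink(Lattice< SU3 > *lattice, std::vector<int> site,
                             int mu, int nu)
    {
        int muIndex[4] = {0, 0, 0, 0};
        int nuIndex[4] = {0, 0, 0, 0};
        muIndex[mu] = 1; // step +mu_hat
        nuIndex[nu] = 1; // step -nu_hat (sign handled internally, assumed)
        return Parallel::Communicator::getNeighboursNeighbourLink(
            lattice, site, mu, muIndex, nu, nuIndex, mu);
    }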

◆ getNeighboursNeighbourNegativeLink()

SU3 Communicator::getNeighboursNeighbourNegativeLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int nu, int *nuIndex, int SU3Dir)
static

Parallel::Communicator::getNeighboursNeighbourNegativeLink fetches a neighbour's neighbour link in the negative direction.

Fetches the link when it is given as \(U_\mu(n - \hat{\mu} - \hat{\nu})\).

Parameters
    lattice    a lattice pointer for all four dimensions.
    n          position in the lattice to fetch the link from.
    mu         index of the muIndex vector we are sharing.
    muIndex    a unit vector containing a step in the direction of sharing.
    nu         index of the nuIndex vector we are sharing.
    nuIndex    a unit vector containing a step in the direction of sharing.
    SU3Dir     the index of the tensor \(\mu\) in the link \(U_\mu\).
Returns
the fetched SU3 matrix.
Todo:
Should probably pass n by reference, and set all elements as const.

◆ getNumProc()

static int Parallel::Communicator::getNumProc ( )
inline static

◆ getPositiveLink()

SU3 Communicator::getPositiveLink (Lattice< SU3 > *lattice, std::vector< int > n, int mu, int *muIndex, int SU3Dir)
static

Parallel::Communicator::getPositiveLink fetches a link in the positive direction.

Parameters
    lattice    a lattice pointer for all four dimensions.
    n          position in the lattice to fetch the link from.
    mu         direction to shift in; always positive in the x, y, z and t directions. The index is the same as the step direction of muIndex.
    muIndex    a unit vector containing a step in the sharing direction. It is \(\hat{\mu}\) in \(U_\nu(n + \hat{\mu})\).
    SU3Dir     the index of the tensor \(\mu\) in the link \(U_\mu\).
Returns
    the fetched SU3 matrix.
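
As a sketch of the convention described above (treating muIndex as a 4-element unit-step array is an assumption based on the parameter description), fetching \(U_\nu(n + \hat{\mu})\) might look as follows; getNegativeLink follows the same pattern:

    #include <communicator.h>
    #include <vector>

    // Sketch: fetch U_nu(n + mu_hat), the nu-link one step in the positive
    // mu direction, transparently crossing processor boundaries if needed.
    SU3 fetchPositiveLink(Lattice< SU3 > *lattice, std::vector<int> site,
                          int mu, int nu)
    {
        int muIndex[4] = {0, 0, 0, 0};
        muIndex[mu] = 1; // unit step hat{mu}
        return Parallel::Communicator::getPositiveLink(lattice, site, mu,
                                                       muIndex, nu);
    }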

◆ getProcessRank()

static int Parallel::Communicator::getProcessRank ( )
inline static

◆ init()

void Communicator::init (int *numberOfArguments, char ***cmdLineArguments)
static

Parallel::Communicator::init initializes the communicator and sets up the lattice geometry.

Parameters
    numberOfArguments    number of command line arguments.
    cmdLineArguments     command line arguments.
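
A minimal start-up sketch assembled from the functions on this page. The call order, the 16^3 x 32 dimensions, and whether freeMPIGroups also finalizes MPI are assumptions, not GLAC's canonical main:

    #include <communicator.h>
    #include <vector>

    int main(int numberOfArguments, char **cmdLineArguments)
    {
        // Initialize MPI and set up the lattice geometry.
        Parallel::Communicator::init(&numberOfArguments, &cmdLineArguments);

        // Global lattice dimensions (illustrative 16^3 x 32 lattice).
        Parallel::Communicator::setN({16, 16, 16, 32});
        Parallel::Communicator::checkProcessorValidity();
        Parallel::Communicator::initializeSubLattice();

        Parallel::Communicator::MPIPrint("Geometry initialized.");

        // ... configuration generation / gradient flow would run here ...

        // Release MPI groups and communicators before shutting down.
        Parallel::Communicator::freeMPIGroups();
        return 0;
    }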

◆ initializeSubLattice()

void Communicator::initializeSubLattice ( )
static

Parallel::Communicator::initializeSubLattice.

Sets up the sublattices, either by retrieving the dimensions from the Parameters class or by setting them up manually.


◆ MPIExit()

void Communicator::MPIExit ( std::string  message)
static

Parallel::Communicator::MPIExit exits the program. Frees MPI groups before it exits.

Parameters
    message    message to print before exiting.

◆ MPIPrint()

void Communicator::MPIPrint ( std::string  message)
static

Parallel::Communicator::MPIPrint prints a message from rank 0. Includes barriers.

Parameters
    message    message to print.
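
A short sketch combining MPIPrint with MPIExit (the validity flag is a hypothetical placeholder):

    #include <communicator.h>
    #include <string>

    // Sketch: rank-0 logging plus a guarded abort.
    void reportAndGuard(bool dimensionsValid)
    {
        Parallel::Communicator::MPIPrint("Starting configuration generation.");
        if (!dimensionsValid)
        {
            // Prints the message, frees the MPI groups and exits.
            Parallel::Communicator::MPIExit("Error: invalid lattice dimensions.");
        }
    }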

◆ reduceToTemporalDimension()

void Communicator::reduceToTemporalDimension (std::vector< double > &obsResults, std::vector< double > obs)
static

Parallel::Communicator::reduceToTemporalDimension reduces the results to the temporal dimension, i.e. Euclidean time.

Parameters
    obsResults    contiguous vector that the results will be placed in.
    obs           vector holding the results we are gathering.
Todo:
Should probably pass obs by reference.
Should probably make obs const.
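
A hedged usage sketch (pre-sizing obsResults to the temporal extent NT is an assumption; NT and obs are hypothetical):

    #include <communicator.h>
    #include <vector>

    // Sketch: collapse a per-site observable to a function of Euclidean time.
    std::vector<double> reduceToEuclideanTime(const std::vector<double> &obs,
                                              unsigned int NT)
    {
        std::vector<double> obsResults(NT, 0.0);
        Parallel::Communicator::reduceToTemporalDimension(obsResults, obs);
        return obsResults;
    }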


◆ setBarrier()

void Communicator::setBarrier ( )
static

Parallel::Communicator::setBarrier.

An MPI_Barrier for all processors.


◆ setBarrierActive()

void Communicator::setBarrierActive ( )
static

Parallel::Communicator::setBarrierActive.

An MPI_Barrier for only the active processors, i.e. those used in flow or configuration generation.


◆ setN()

void Communicator::setN ( std::vector< unsigned int >  N)
static

Parallel::Communicator::setN sets the lattice dimensions in the Parallel::Communicator class.

Parameters
    N    vector of global lattice dimensions.
