Epetra_MpiSmpComm Class Reference

Epetra_MpiSmpComm: The Epetra MPI Shared Memory Parallel Communication Class. More...

#include <Epetra_MpiSmpComm.h>

Inheritance diagram for Epetra_MpiSmpComm.
Collaboration diagram for Epetra_MpiSmpComm.

Public Member Functions

Epetra_MpiSmpComm & operator= (const Epetra_MpiSmpComm &Comm)
 Assignment Operator.
Constructor/Destructor Methods
 Epetra_MpiSmpComm (MPI_Comm comm)
 Epetra_MpiSmpComm MPI Constructor.
 Epetra_MpiSmpComm (const Epetra_MpiSmpComm &Comm)
 Epetra_MpiSmpComm Copy Constructor.
Epetra_Comm * Clone () const
 Clone method.
virtual ~Epetra_MpiSmpComm ()
 Epetra_MpiSmpComm Destructor.
Barrier Methods
void Barrier () const
 Epetra_MpiSmpComm Barrier function.
Broadcast Methods
int Broadcast (double *MyVals, int Count, int Root) const
 Epetra_MpiSmpComm Broadcast function.
int Broadcast (int *MyVals, int Count, int Root) const
 Epetra_MpiSmpComm Broadcast function.
Gather Methods
int GatherAll (double *MyVals, double *AllVals, int Count) const
 Epetra_MpiSmpComm All Gather function.
int GatherAll (int *MyVals, int *AllVals, int Count) const
 Epetra_MpiSmpComm All Gather function.
Sum Methods
int SumAll (double *PartialSums, double *GlobalSums, int Count) const
 Epetra_MpiSmpComm Global Sum function.
int SumAll (int *PartialSums, int *GlobalSums, int Count) const
 Epetra_MpiSmpComm Global Sum function.
Max/Min Methods
int MaxAll (double *PartialMaxs, double *GlobalMaxs, int Count) const
 Epetra_MpiSmpComm Global Max function.
int MaxAll (int *PartialMaxs, int *GlobalMaxs, int Count) const
 Epetra_MpiSmpComm Global Max function.
int MinAll (double *PartialMins, double *GlobalMins, int Count) const
 Epetra_MpiSmpComm Global Min function.
int MinAll (int *PartialMins, int *GlobalMins, int Count) const
 Epetra_MpiSmpComm Global Min function.
Parallel Prefix Methods
int ScanSum (double *MyVals, double *ScanSums, int Count) const
 Epetra_MpiSmpComm Scan Sum function.
int ScanSum (int *MyVals, int *ScanSums, int Count) const
 Epetra_MpiSmpComm Scan Sum function.
Attribute Accessor Methods
MPI_Comm Comm () const
 Extract MPI Communicator from an Epetra_MpiSmpComm object.
int MyPID () const
 Return my process ID.
int NumProc () const
 Returns total number of processes.
Gather/Scatter and Directory Constructors
Epetra_Distributor * CreateDistributor () const
 Create a distributor object.
Epetra_Directory * CreateDirectory (const Epetra_BlockMap &Map) const
 Create a directory object for the given Epetra_BlockMap.
MPI-specific Methods
int GetMpiTag () const
 Acquire an MPI tag from the Epetra range of 24050-24099, increment tag.
MPI_Comm GetMpiComm () const
 Get the MPI communicator from an Epetra_MpiSmpComm object.
Experimental SMP cluster methods (not rigorously implemented)
void NodeBarrier () const
 Epetra_MpiSmpComm Node Barrier function.
int MyThreadID () const
 Return my thread ID.
int MyNodeID () const
 Return my node ID.
int SetNumThreads (int NumThreads)
 Set number of threads on this node.
int NumThreads () const
 Get number of threads on this node.
int SetMyThreadID (int ThreadID)
 Set my thread ID.
int SetMyNodeID (int NodeID)
 Set my node ID.
Print object to an output stream
void Print (ostream &os) const
 Print method that implements Epetra_Object virtual Print method.
void PrintInfo (ostream &os) const
 Print method that implements Epetra_Comm virtual PrintInfo method.

Detailed Description

Epetra_MpiSmpComm: The Epetra MPI Shared Memory Parallel Communication Class.

The Epetra_MpiSmpComm class is an implementation of Epetra_Comm that encapsulates the general information and services needed for other Epetra classes to run on a parallel computer using MPI and shared memory threads.

Warning:
This is an experimental class that only marginally supports nested shared memory parallelism within MPI processes.
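
As a sketch of typical usage (assuming an MPI-enabled build of Epetra; Epetra_Map and its (NumGlobalElements, IndexBase, Comm) constructor are standard Epetra classes not documented on this page, and the specific values are illustrative):

  #include <iostream>
  #include <mpi.h>
  #include "Epetra_MpiSmpComm.h"
  #include "Epetra_Map.h"

  int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    {
      Epetra_MpiSmpComm Comm(MPI_COMM_WORLD);  // wrap the MPI communicator

      // Any Epetra class that accepts an Epetra_Comm can use it, e.g. a map
      // of 100 global elements distributed across all processes.
      Epetra_Map Map(100, 0, Comm);

      std::cout << "Process " << Comm.MyPID() << " of " << Comm.NumProc()
                << " owns " << Map.NumMyElements() << " elements" << std::endl;
    } // Epetra objects are destroyed before MPI is finalized
    MPI_Finalize();
    return 0;
  }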


Constructor & Destructor Documentation

Epetra_MpiSmpComm::Epetra_MpiSmpComm ( MPI_Comm comm )
 

Epetra_MpiSmpComm MPI Constructor.

Creates an Epetra_MpiSmpComm instance for use with MPI. If no specialized MPI communicator is needed, this constructor can be called with the argument MPI_COMM_WORLD.
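
A minimal construction sketch showing both cases; the MPI_Comm_split grouping is illustrative and not part of the Epetra documentation:

  #include <mpi.h>
  #include "Epetra_MpiSmpComm.h"

  int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // A specialized communicator: split the ranks into two groups.
    MPI_Comm Split;
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &Split);
    {
      Epetra_MpiSmpComm WorldComm(MPI_COMM_WORLD); // the common case
      Epetra_MpiSmpComm SplitComm(Split);          // specialized communicator

      WorldComm.Barrier(); // wait for every process in MPI_COMM_WORLD
    } // communicator objects destroyed before the MPI communicators
    MPI_Comm_free(&Split);
    MPI_Finalize();
    return 0;
  }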

Epetra_MpiSmpComm::Epetra_MpiSmpComm ( const Epetra_MpiSmpComm & Comm )
 

Epetra_MpiSmpComm Copy Constructor.

Makes an exact copy of an existing Epetra_MpiSmpComm instance.

virtual Epetra_MpiSmpComm::~Epetra_MpiSmpComm ( ) [virtual]
 

Epetra_MpiSmpComm Destructor.

Completely deletes an Epetra_MpiSmpComm object.

Warning:
Note: All objects that depend on an Epetra_MpiSmpComm instance should be destroyed prior to calling this function.


Member Function Documentation

void Epetra_MpiSmpComm::Barrier ( ) const [virtual]
 

Epetra_MpiSmpComm Barrier function.

Causes each processor in the communicator to wait until all processors have arrived.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::Broadcast ( int * MyVals, int Count, int Root ) const [virtual]
 

Epetra_MpiSmpComm Broadcast function.

Takes a list of input values from the root processor and sends it to all other processors.

Parameters:
Values InOut On entry, the root processor contains the list of values. On exit, all processors will have the same list of values. Note that values must be allocated on all processors before the broadcast.
Count In On entry, contains the length of the list of Values.
Root In On entry, contains the processor from which all processors will receive a copy of Values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::Broadcast ( double * MyVals, int Count, int Root ) const [virtual]
 

Epetra_MpiSmpComm Broadcast function.

Takes a list of input values from the root processor and sends it to all other processors.

Parameters:
Values InOut On entry, the root processor contains the list of values. On exit, all processors will have the same list of values. Note that values must be allocated on all processors before the broadcast.
Count In On entry, contains the length of the list of Values.
Root In On entry, contains the processor from which all processors will receive a copy of Values.

Implements Epetra_Comm.
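
A hedged sketch of a Broadcast call (buffer names and values are illustrative); the buffer must be allocated on every process, and only the contents on the Root process are significant on entry:

  #include "Epetra_MpiSmpComm.h"

  void BroadcastExample(const Epetra_MpiSmpComm& Comm) {
    const int Count = 3;
    int Vals[Count] = {0, 0, 0};
    if (Comm.MyPID() == 0) {           // the root process fills in the values
      Vals[0] = 10; Vals[1] = 20; Vals[2] = 30;
    }
    int ierr = Comm.Broadcast(Vals, Count, 0 /* Root */);
    // After the call every process holds {10, 20, 30}; ierr is 0 on success.
    (void) ierr;
  }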

int Epetra_MpiSmpComm::GatherAll ( int * MyVals, int * AllVals, int Count ) const [virtual]
 

Epetra_MpiSmpComm All Gather function.

Takes a list of input values from all processors in the communicator and creates an ordered contiguous list of those values on each processor.

Parameters:
MyVals In On entry, contains the list of values, to be sent to all processors.
AllVals Out On exit, contains the list of values from all processors. Must be of size NumProc*Count.
Count In On entry, contains the length of the list of MyVals.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::GatherAll ( double * MyVals, double * AllVals, int Count ) const [virtual]
 

Epetra_MpiSmpComm All Gather function.

Takes a list of input values from all processors in the communicator and creates an ordered contiguous list of those values on each processor.

Parameters:
MyVals In On entry, contains the list of values, to be sent to all processors.
AllVals Out On exit, contains the list of values from all processors. Must be of size NumProc*Count.
Count In On entry, contains the length of the list of MyVals.

Implements Epetra_Comm.
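
A hedged sketch of GatherAll with illustrative values; the receive buffer is sized to NumProc()*Count entries as required:

  #include <vector>
  #include "Epetra_MpiSmpComm.h"

  void GatherExample(const Epetra_MpiSmpComm& Comm) {
    const int Count = 2;
    int MyVals[Count] = {Comm.MyPID(), Comm.MyPID() + 100}; // per-process data
    std::vector<int> AllVals(Comm.NumProc() * Count);       // NumProc*Count entries
    Comm.GatherAll(MyVals, &AllVals[0], Count);
    // AllVals now holds the Count entries of process 0, then process 1, and so on.
  }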

int Epetra_MpiSmpComm::MaxAll ( int * PartialMaxs, int * GlobalMaxs, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Max function.

Takes a list of input values from all processors in the communicator, computes the max, and returns the max to all processors.

Parameters:
PartialMaxs In On entry, contains the list of values, usually partial maxs computed locally, whose maximum is to be computed across all processors.
GlobalMaxs Out On exit, contains the list of maximum values computed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::MaxAll ( double * PartialMaxs, double * GlobalMaxs, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Max function.

Takes a list of input values from all processors in the communicator, computes the max, and returns the max to all processors.

Parameters:
PartialMaxs In On entry, contains the list of values, usually partial maxs computed locally, whose maximum is to be computed across all processors.
GlobalMaxs Out On exit, contains the list of maximum values computed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::MinAll ( int * PartialMins, int * GlobalMins, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Min function.

Take list of input values from all processors in the communicator, computes the max and returns the max to all processors.

Parameters:
PartialMins In On entry, contains the list of values, usually partial mins computed locally, whose minimum is to be computed across all processors.
GlobalMins Out On exit, contains the list of minimum values computed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::MinAll ( double * PartialMins, double * GlobalMins, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Min function.

Takes a list of input values from all processors in the communicator, computes the min, and returns the min to all processors.

Parameters:
PartialMins In On entry, contains the list of values, usually partial mins computed locally, whose minimum is to be computed across all processors.
GlobalMins Out On exit, contains the list of minimum values computed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.
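
A hedged sketch combining MaxAll and MinAll over a short list of illustrative local values:

  #include "Epetra_MpiSmpComm.h"

  void MaxMinExample(const Epetra_MpiSmpComm& Comm) {
    const int Count = 2;
    double Local[Count] = {1.0 * Comm.MyPID(), -1.0 * Comm.MyPID()};
    double GlobalMax[Count];
    double GlobalMin[Count];

    Comm.MaxAll(Local, GlobalMax, Count); // element-wise max over all processes
    Comm.MinAll(Local, GlobalMin, Count); // element-wise min over all processes
    // With P processes: GlobalMax = {P-1, 0} and GlobalMin = {0, -(P-1)}.
  }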

int Epetra_MpiSmpComm::MyNodeID ( ) const [inline]
 

Return my node ID.

If SetMyNodeID was called to set a node value, this function returns the node ID of the calling process. Otherwise it returns the same value as MyPID().

int Epetra_MpiSmpComm::MyPID ( ) const [inline, virtual]
 

Return my process ID.

In MPI mode returns the rank of the calling process. In serial mode returns 0.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::MyThreadID ( ) const [inline]
 

Return my thread ID.

If SetMyThreadID was called to set a thread value, this function returns the thread ID of the calling process. Otherwise returns 0.

void Epetra_MpiSmpComm::NodeBarrier ( ) const
 

Epetra_MpiSmpComm Node Barrier function.

A no-op for a serial communicator. For MPI, it causes each process on a given node in the communicator to wait until all processes on that node have arrived.

This function can be used to select a subset of MPI processes that are associated with a group of threaded processes and synchronize only with this subset.

int Epetra_MpiSmpComm::NumProc ( ) const [inline, virtual]
 

Returns total number of processes.

In MPI mode returns the size of the MPI communicator. In serial mode returns 1.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::NumThreads ( ) const [inline]
 

Get number of threads on this node.

Returns the number of threads on the node that owns the calling process. By default the number of threads is 1.

int Epetra_MpiSmpComm::ScanSum ( int * MyVals, int * ScanSums, int Count ) const [virtual]
 

Epetra_MpiSmpComm Scan Sum function.

Takes a list of input values from all processors in the communicator, computes the scan sum, and returns it to all processors such that processor i contains the sum of values from processor 0 up to and including processor i.

Parameters:
MyVals In On entry, contains the list of values to be summed across all processors.
ScanSums Out On exit, contains the list of values summed across processors 0 through i.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::ScanSum ( double * MyVals, double * ScanSums, int Count ) const [virtual]
 

Epetra_MpiSmpComm Scan Sum function.

Takes a list of input values from all processors in the communicator, computes the scan sum, and returns it to all processors such that processor i contains the sum of values from processor 0 up to and including processor i.

Parameters:
MyVals In On entry, contains the list of values to be summed across all processors.
ScanSums Out On exit, contains the list of values summed across processors 0 through i.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.
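
A hedged sketch of ScanSum; each process contributes 1 here, so the inclusive prefix sum on process i is i+1:

  #include "Epetra_MpiSmpComm.h"

  void ScanExample(const Epetra_MpiSmpComm& Comm) {
    int MyVal = 1;    // every process contributes 1
    int MyScan = 0;
    Comm.ScanSum(&MyVal, &MyScan, 1);
    // MyScan is the inclusive prefix sum: 1 on process 0, 2 on process 1, ...
  }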

int Epetra_MpiSmpComm::SetMyNodeID ( int NodeID ) [inline]
 

Set my node ID.

Sets the node ID for the calling process. Can be used to facilitate threaded programming across an MPI application by associating several MPI processes with a single node. By default, each MPI process is associated with a single node with the same ID.

int Epetra_MpiSmpComm::SetMyThreadID ( int ThreadID ) [inline]
 

Set my thread ID.

Sets the thread ID for the calling process. Can be used to facilitate threaded programming across an MPI application by allowing multiple MPI processes to be considered threads of a virtual shared memory process. Threads and nodes should be used together. By default the thread ID is zero.

int Epetra_MpiSmpComm::SetNumThreads ( int NumThreads ) [inline]
 

Set number of threads on this node.

Sets the number of threads on the node that owns the calling process. By default the number of threads is 1.
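
A hedged sketch of the experimental SMP settings used together (these methods are described above as not rigorously implemented); the grouping of consecutive MPI ranks into two-thread nodes is purely illustrative:

  #include "Epetra_MpiSmpComm.h"

  void SmpExample(Epetra_MpiSmpComm& Comm) {
    int pid = Comm.MyPID();
    Comm.SetNumThreads(2);        // treat each node as having two threads
    Comm.SetMyNodeID(pid / 2);    // ranks 0,1 -> node 0; ranks 2,3 -> node 1; ...
    Comm.SetMyThreadID(pid % 2);  // alternate thread IDs within a node

    Comm.NodeBarrier();           // synchronize only the processes on this node
  }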

int Epetra_MpiSmpComm::SumAll ( int * PartialSums, int * GlobalSums, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Sum function.

Takes a list of input values from all processors in the communicator, computes the sum, and returns the sum to all processors.

Parameters:
PartialSums In On entry, contains the list of values, usually partial sums computed locally, to be summed across all processors.
GlobalSums Out On exit, contains the list of values summed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.

int Epetra_MpiSmpComm::SumAll ( double * PartialSums, double * GlobalSums, int Count ) const [virtual]
 

Epetra_MpiSmpComm Global Sum function.

Takes a list of input values from all processors in the communicator, computes the sum, and returns the sum to all processors.

Parameters:
PartialSums In On entry, contains the list of values, usually partial sums computed locally, to be summed across all processors.
GlobalSums Out On exit, contains the list of values summed across all processors.
Count In On entry, contains the length of the list of values.

Implements Epetra_Comm.
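
A hedged sketch of SumAll with an illustrative per-process contribution:

  #include "Epetra_MpiSmpComm.h"

  void SumExample(const Epetra_MpiSmpComm& Comm) {
    double PartialSum = 0.5 * (Comm.MyPID() + 1); // locally computed contribution
    double GlobalSum = 0.0;
    Comm.SumAll(&PartialSum, &GlobalSum, 1);
    // GlobalSum is identical on every process.
  }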


The documentation for this class was generated from the following file:
Epetra_MpiSmpComm.h
Generated on Thu Sep 18 12:43:16 2008 for Epetra by doxygen 1.3.9.1