Tpetra Matrix/Vector Services

One or more distributed dense vectors.
#include <Tpetra_KokkosRefactor_MultiVector_decl.hpp>
Public Types  
Typedefs to facilitate template metaprogramming.  
typedef Scalar  scalar_type 
The type of entries in the vector(s).  
typedef LocalOrdinal  local_ordinal_type 
The type of local indices.  
typedef GlobalOrdinal  global_ordinal_type 
The type of global indices.  
typedef Node  node_type 
The Kokkos Node type.  
typedef Scalar  dot_type 
The type of the result of inner (dot) products.  
typedef DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >  DO 
Typedefs  
typedef Scalar  packet_type 
The type of each datum being sent or received in an Import or Export.  
Public Member Functions  
virtual void  removeEmptyProcessesInPlace (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &newMap) 
Remove processes owning zero rows from the Map and their communicator.  
void  setCopyOrView (const Teuchos::DataAccess copyOrView) 
Set whether this has copy (copyOrView = Teuchos::Copy) or view (copyOrView = Teuchos::View) semantics.  
Teuchos::DataAccess  getCopyOrView () const 
Get whether this has copy (copyOrView = Teuchos::Copy) or view (copyOrView = Teuchos::View) semantics.  
Constructors and destructor  
MultiVector ()  
Default constructor: makes a MultiVector with no rows or columns.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, size_t NumVectors, bool zeroOut=true)  
Basic constructor.  
MultiVector (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &source)  
Copy constructor.  
MultiVector (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &source, const Teuchos::DataAccess copyOrView)  
Copy constructor, with the option to do a shallow copy and mark the result as having "view semantics".  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const Teuchos::ArrayView< const Scalar > &A, size_t LDA, size_t NumVectors)  
Create multivector by copying two-dimensional array of local data.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const Teuchos::ArrayView< const Teuchos::ArrayView< const Scalar > > &ArrayOfPtrs, size_t NumVectors)  
Create multivector by copying array of views of local data.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const Teuchos::ArrayRCP< Scalar > &data, const size_t LDA, const size_t numVectors)  
Expert mode constructor.  
template<class Node2 >  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node2 > >  clone (const Teuchos::RCP< Node2 > &node2) const 
Create a cloned MultiVector for a different node type.  
virtual  ~MultiVector () 
Destructor (virtual for memory safety of derived classes).  
Post-construction modification routines  
void  replaceGlobalValue (GlobalOrdinal globalRow, size_t vectorIndex, const Scalar &value) 
Replace value, using global (row) index.  
void  sumIntoGlobalValue (GlobalOrdinal globalRow, size_t vectorIndex, const Scalar &value) 
Add value to existing value, using global (row) index.  
void  replaceLocalValue (LocalOrdinal myRow, size_t vectorIndex, const Scalar &value) 
Replace value, using local (row) index.  
void  sumIntoLocalValue (LocalOrdinal myRow, size_t vectorIndex, const Scalar &value) 
Add value to existing value, using local (row) index.  
void  putScalar (const Scalar &value) 
Set all values in the multivector to the given value.  
void  randomize () 
Set all values in the multivector to pseudorandom numbers.  
void  replaceMap (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map) 
Replace the underlying Map in place.  
void  reduce () 
Sum values of a locally replicated multivector across all processes.  
MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  operator= (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &source) 
Assignment operator.  
Data copy and view methods  
These methods are used to get the data underlying the MultiVector. They return data in one of two forms: either as a copy of the data, or as a view that aliases the MultiVector's data.
Not all of these methods are valid for a particular MultiVector. For instance, calling a method that accesses a view of the data in a 1D format (i.e., get1dView) requires that the target MultiVector has constant stride.  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subCopy (const Teuchos::Range1D &colRng) const 
Return a MultiVector with copies of selected columns.  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subCopy (const Teuchos::ArrayView< const size_t > &cols) const 
Return a MultiVector with copies of selected columns.  
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subView (const Teuchos::Range1D &colRng) const 
Return a MultiVector with const views of selected columns.  
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subView (const Teuchos::ArrayView< const size_t > &cols) const 
Return a const MultiVector with const views of selected columns.  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subViewNonConst (const Teuchos::Range1D &colRng) 
Return a MultiVector with views of selected columns.  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  subViewNonConst (const Teuchos::ArrayView< const size_t > &cols) 
Return a MultiVector with views of selected columns.  
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > >  offsetView (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &subMap, size_t offset) const 
Return a const view of a subset of rows.  
Teuchos::RCP< MultiVector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  offsetViewNonConst (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &subMap, size_t offset) 
Return a nonconst view of a subset of rows.  
Teuchos::RCP< const Vector < Scalar, LocalOrdinal, GlobalOrdinal, Node > >  getVector (size_t j) const 
Return a Vector which is a const view of column j.  
Teuchos::RCP< Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node > >  getVectorNonConst (size_t j) 
Return a Vector which is a nonconst view of column j.  
Teuchos::ArrayRCP< const Scalar >  getData (size_t j) const 
Const view of the local values in a particular vector of this multivector.  
Teuchos::ArrayRCP< Scalar >  getDataNonConst (size_t j) 
View of the local values in a particular vector of this multivector.  
void  get1dCopy (Teuchos::ArrayView< Scalar > A, size_t LDA) const 
Fill the given array with a copy of this multivector's local values.  
void  get2dCopy (Teuchos::ArrayView< const Teuchos::ArrayView< Scalar > > ArrayOfPtrs) const 
Fill the given array with a copy of this multivector's local values.  
Teuchos::ArrayRCP< const Scalar >  get1dView () const 
Const persisting (1D) view of this multivector's local values.  
Teuchos::ArrayRCP < Teuchos::ArrayRCP< const Scalar > >  get2dView () const 
Return const persisting pointers to values.  
Teuchos::ArrayRCP< Scalar >  get1dViewNonConst () 
Nonconst persisting (1D) view of this multivector's local values.  
Teuchos::ArrayRCP < Teuchos::ArrayRCP< Scalar > >  get2dViewNonConst () 
Return nonconst persisting pointers to values.  
KokkosClassic::MultiVector < Scalar, Node >  getLocalMV () const 
A view of the underlying KokkosClassic::MultiVector object.  
TEUCHOS_DEPRECATED KokkosClassic::MultiVector < Scalar, Node > &  getLocalMVNonConst () 
A nonconst reference to a view of the underlying KokkosClassic::MultiVector object.  
Mathematical methods  
void  dot (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A, const Teuchos::ArrayView< Scalar > &dots) const 
Compute the dot product of each corresponding pair of vectors (columns) in *this and A.  
void  abs (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A) 
Put the element-wise absolute values of the input MultiVector in the target: this(i,j) = abs(A(i,j)).  
void  reciprocal (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A) 
Put the element-wise reciprocals of the input MultiVector in the target: this(i,j) = 1/A(i,j).  
void  scale (const Scalar &alpha) 
Scale in place: this = alpha*this .  
void  scale (Teuchos::ArrayView< const Scalar > alpha) 
Scale each column in place: this[j] = alpha[j]*this[j] .  
void  scale (const Scalar &alpha, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A) 
Scale in place: this = alpha * A .  
void  update (const Scalar &alpha, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A, const Scalar &beta) 
Update: this = beta*this + alpha*A .  
void  update (const Scalar &alpha, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A, const Scalar &beta, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &B, const Scalar &gamma) 
Update: this = gamma*this + alpha*A + beta*B .  
void  norm1 (const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &norms) const 
Compute the one-norm of each vector (column).  
void  norm2 (const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &norms) const 
Compute the two-norm of each vector (column).  
void  normInf (const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &norms) const 
Compute the infinity-norm of each vector (column).  
void  normWeighted (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &weights, const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &norms) const 
Compute the weighted two-norm of each vector (column), using the given multivector of weights.  
void  meanValue (const Teuchos::ArrayView< Scalar > &means) const 
Compute the mean (average) value of each vector in the multivector. The outcome of this routine is undefined for non-floating-point scalar types (e.g., int).  
void  multiply (Teuchos::ETransp transA, Teuchos::ETransp transB, const Scalar &alpha, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &B, const Scalar &beta) 
Matrix-matrix multiplication: this = beta*this + alpha*op(A)*op(B) .  
void  elementWiseMultiply (Scalar scalarAB, const Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &A, const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &B, Scalar scalarThis) 
Multiply a Vector A elementwise by a MultiVector B.  
Attribute access functions  
size_t  getNumVectors () const 
Number of columns in the multivector.  
size_t  getLocalLength () const 
Local number of rows on the calling process.  
global_size_t  getGlobalLength () const 
Global number of rows in the multivector.  
size_t  getStride () const 
Stride between columns in the multivector.  
bool  isConstantStride () const 
Whether this multivector has constant stride between columns.  
Overridden from Teuchos::Describable  
virtual std::string  description () const 
A simple oneline description of this object.  
virtual void  describe (Teuchos::FancyOStream &out, const Teuchos::EVerbosityLevel verbLevel=Teuchos::Describable::verbLevel_default) const 
Print the object with the given verbosity level to a FancyOStream.  
Public methods for redistributing data  
void  doImport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM) 
Import data into this object using an Import object ("forward mode").  
void  doImport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM) 
Import data into this object using an Export object ("reverse mode").  
void  doExport (const SrcDistObject &source, const Export< LocalOrdinal, GlobalOrdinal, Node > &exporter, CombineMode CM) 
Export data into this object using an Export object ("forward mode").  
void  doExport (const SrcDistObject &source, const Import< LocalOrdinal, GlobalOrdinal, Node > &importer, CombineMode CM) 
Export data into this object using an Import object ("reverse mode").  
Attribute accessor methods  
bool  isDistributed () const 
Whether this is a globally distributed object.  
virtual Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > >  getMap () const 
The Map describing the parallel distribution of this object.  
I/O methods  
void  print (std::ostream &os) const 
Print this object to the given output stream.  
Protected Member Functions  
virtual void  doTransfer (const SrcDistObject &src, CombineMode CM, size_t numSameIDs, const Teuchos::ArrayView< const LocalOrdinal > &permuteToLIDs, const Teuchos::ArrayView< const LocalOrdinal > &permuteFromLIDs, const Teuchos::ArrayView< const LocalOrdinal > &remoteLIDs, const Teuchos::ArrayView< const LocalOrdinal > &exportLIDs, Distributor &distor, ReverseOption revOp) 
Redistribute data across memory images.  
Protected Attributes  
KMV  lclMV_ 
The KokkosClassic::MultiVector containing the compute buffer of data.  
Array< size_t >  whichVectors_ 
Indices of columns this multivector is viewing.  
bool  hasViewSemantics_ 
Whether this MultiVector has view semantics.  
Teuchos::RCP< const Map < LocalOrdinal, GlobalOrdinal, Node > >  map_ 
The Map over which this object is distributed.  
Teuchos::Array< Scalar >  imports_ 
Buffer into which packed data are imported (received from other processes).  
Teuchos::Array< size_t >  numImportPacketsPerLID_ 
Number of packets to receive for each receive operation.  
Teuchos::Array< Scalar >  exports_ 
Buffer from which packed data are exported (sent to other processes).  
Teuchos::Array< size_t >  numExportPacketsPerLID_ 
Number of packets to send for each send operation.  
Friends  
Methods for use only by experts  
void  removeEmptyProcessesInPlace (Teuchos::RCP< Tpetra::DistObject< PT, LO, GO, NT > > &input, const Teuchos::RCP< const Map< LO, GO, NT > > &newMap) 
void  removeEmptyProcessesInPlace (Teuchos::RCP< Tpetra::DistObject< PT, LO, GO, NT > > &input) 
Related Functions  
(Note that these are not member functions.)  
template<class Scalar , class LocalOrdinal , class GlobalOrdinal , class Node >  
MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >  createCopy (const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &src) 
Return a deep copy of the MultiVector src .  
template<class DS , class DL , class DG , class DN , class SS , class SL , class SG , class SN >  
void  deep_copy (MultiVector< DS, DL, DG, DN > &dst, const MultiVector< SS, SL, SG, SN > &src) 
Copy the contents of the MultiVector src into dst .  
View constructors, used only by nonmember constructors.  
struct  Details::CreateMultiVectorFromView 
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const Teuchos::ArrayRCP< Scalar > &view, size_t LDA, size_t NumVectors, EPrivateHostViewConstructor)  
View constructor with user-allocated data.  
bool  vectorIndexOutOfRange (size_t VectorIndex) const 
template<class T >  
ArrayRCP< T >  getSubArrayRCP (ArrayRCP< T > arr, size_t j) const 
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, Teuchos::ArrayRCP< Scalar > data, size_t LDA, Teuchos::ArrayView< const size_t > whichVectors, EPrivateComputeViewConstructor)  
Advanced constructor for noncontiguous views.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, Teuchos::ArrayRCP< Scalar > data, size_t LDA, size_t NumVectors, EPrivateComputeViewConstructor)  
Advanced constructor for contiguous views.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const KokkosClassic::MultiVector< Scalar, Node > &localMultiVector, EPrivateComputeViewConstructor)  
Advanced constructor for contiguous views.  
MultiVector (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &map, const KokkosClassic::MultiVector< Scalar, Node > &localMultiVector, Teuchos::ArrayView< const size_t > whichVectors, EPrivateComputeViewConstructor)  
Advanced constructor for noncontiguous views.  
Implementation of Tpetra::DistObject  
ArrayRCP< Scalar >  ncview_ 
Nonconst host view created in createViewsNonConst().  
ArrayRCP< const Scalar >  cview_ 
Const host view created in createViews().  
virtual bool  checkSizes (const SrcDistObject &sourceObj) 
Whether data redistribution between sourceObj and this object is legal.  
virtual size_t  constantNumberOfPackets () const 
Number of packets to send per LID.  
virtual void  copyAndPermute (const SrcDistObject &sourceObj, size_t numSameIDs, const ArrayView< const LocalOrdinal > &permuteToLIDs, const ArrayView< const LocalOrdinal > &permuteFromLIDs) 
Perform copies and permutations that are local to this process.  
virtual void  packAndPrepare (const SrcDistObject &sourceObj, const ArrayView< const LocalOrdinal > &exportLIDs, Array< Scalar > &exports, const ArrayView< size_t > &numExportPacketsPerLID, size_t &constantNumPackets, Distributor &distor) 
Perform any packing or preparation required for communication.  
virtual void  unpackAndCombine (const ArrayView< const LocalOrdinal > &importLIDs, const ArrayView< const Scalar > &imports, const ArrayView< size_t > &numPacketsPerLID, size_t constantNumPackets, Distributor &distor, CombineMode CM) 
Perform any unpacking and combining after communication.  
void  createViews () const 
Hook for creating a const view.  
void  createViewsNonConst (KokkosClassic::ReadWriteOption rwo) 
Hook for creating a nonconst view.  
void  releaseViews () const 
Hook for releasing views. 
One or more distributed dense vectors.
A "multivector" contains one or more dense vectors. All the vectors in a multivector have the same distribution of rows in parallel over the communicator used to create the multivector. Multivectors containing more than one vector are useful for algorithms that solve multiple linear systems at once, or that solve for a cluster of eigenvalues and their corresponding eigenvectors at once. These "block" algorithms often have accuracy or performance advantages over corresponding algorithms that solve for only one vector at a time. For example, working with multiple vectors at a time allows Tpetra to use faster BLAS 3 routines for local computations. It may also reduce the number of parallel reductions.
The Vector class implements the MultiVector interface, so if you only wish to work with a single vector at a time, you may simply use Vector instead of MultiVector. However, if you are writing solvers or preconditioners, you would do better to write to the MultiVector interface and always assume that each MultiVector contains more than one vector. This will make your solver or preconditioner more compatible with other Trilinos packages, and it will also let you exploit the performance optimizations mentioned above.
Scalar  The type of the numerical entries of the vector(s). (You may use real-valued or complex-valued types here, unlike in Epetra, where the scalar type is always double .) The default is double (real, double-precision floating-point type). 
LocalOrdinal  The type of local indices. Same as the LocalOrdinal template parameter of Map objects used by this matrix. (In Epetra, this is just int .) The default type is int , which should suffice for most users. This type must be big enough to store the local (per process) number of rows. 
GlobalOrdinal  The type of global indices. Same as the GlobalOrdinal template parameter of Map objects used by this matrix. (In Epetra, this is just int . One advantage of Tpetra over Epetra is that you can use a 64-bit integer type here if you want to solve big problems.) The default type is LocalOrdinal . This type must be big enough to store the global (over all processes in the communicator) number of rows or columns. 
Node  A class implementing onnode sharedmemory parallel operations. It must implement the Kokkos Node API. The default Node type should suffice for most users. The actual default type depends on your Trilinos build options. 
If you use the default GlobalOrdinal type, which is int, then the global number of rows or columns in the matrix may be no more than INT_MAX, which for a typical 32-bit int is 2^31 - 1 (about two billion). If you want to solve larger problems, you must use a 64-bit integer type here.
Before reading the rest of this documentation, it helps to know something about the Teuchos memory management classes, in particular Teuchos::RCP, Teuchos::ArrayRCP, and Teuchos::ArrayView. You may also want to know about the differences between BLAS 1, 2, and 3 operations, and learn a little bit about MPI (the Message Passing Interface for distributed-memory programming). You won't have to use MPI directly to use MultiVector, but it helps to be familiar with the general idea of distributed storage of data over a communicator.
A multivector could be a view of some subset of another multivector's columns and rows. A view is like a pointer; it provides access to the original multivector's data without copying the data. There are no public constructors for creating a view, but any instance method with "view" in the name that returns an RCP<MultiVector> serves as a view constructor.
The subset of columns in a view need not be contiguous. For example, given a multivector X with 43 columns, it is possible to have a multivector Y which is a view of columns 1, 3, and 42 (zero-based indices) of X. We call such multivectors noncontiguous. They have the property that isConstantStride() returns false.
Noncontiguous multivectors lose some performance advantages. For example, local computations may be slower, since Tpetra cannot use BLAS 3 routines (e.g., matrix-matrix multiply) on a noncontiguous multivector without copying into temporary contiguous storage. Noncontiguous multivectors also affect the ability to access the data in certain ways, which we will explain below.
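The noncontiguous case can be illustrated without Tpetra. The following is a standalone sketch, not Tpetra code: `LocalMV`, `ColumnView`, and their members are hypothetical names, modeling a column-major local array plus a view that selects an arbitrary subset of its columns.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a "multivector" stored column-major, plus a
// noncontiguous view that selects an arbitrary subset of its columns.
struct LocalMV {
  std::vector<double> values; // column-major, stride = numRows
  std::size_t numRows;
  std::size_t numCols;
  double& at(std::size_t i, std::size_t j) { return values[i + j * numRows]; }
};

struct ColumnView {
  LocalMV* parent;
  std::vector<std::size_t> whichCols; // e.g., {1, 3, 42}: not contiguous
  // A view of arbitrarily selected columns has no single constant
  // stride between consecutive columns, so BLAS 3 kernels that assume
  // one cannot run on it without first copying to contiguous storage.
  bool isConstantStride() const {
    if (whichCols.size() < 2) return true;
    const std::size_t d = whichCols[1] - whichCols[0];
    for (std::size_t k = 2; k < whichCols.size(); ++k)
      if (whichCols[k] - whichCols[k-1] != d) return false;
    return d == 1; // only adjacent columns keep the parent's stride
  }
  double& at(std::size_t i, std::size_t k) {
    return parent->at(i, whichCols[k]); // no data copied: same storage
  }
};
```

Writing through the view modifies the parent's data in place, which is the sense in which a view is "like a pointer."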
We have unfortunately overloaded the term "view." In the section above, we explained the idea of a "multivector which is a view of another multivector." This section is about "views of a multivector's data." If you want to read or write the actual values in a multivector, this is what you want. All the instance methods which return an ArrayRCP of Scalar data, or an ArrayRCP of ArrayRCP of Scalar data, return views to the data. These data are always local data, meaning that the corresponding rows of the multivector are owned by the calling process. You can't use these methods to access remote data (rows that do not belong to the calling process).
Data views may be either onedimensional (1D) or twodimensional (2D). A 1D view presents the data as a dense matrix in columnmajor order, returned as a single array. On the calling process, the matrix has getLocalLength() rows, getNumVectors() columns, and column stride getStride(). You may not get a 1D view of a noncontiguous multivector. If you need the data of a noncontiguous multivector in a 1D format, you may get a copy by calling get1dCopy(). A 2D view presents the data as an array of arrays, one array per column (i.e., vector in the multivector). The entries in each column are stored contiguously. You may get a 2D view of any multivector, whether or not it is noncontiguous.
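The two layouts can be sketched with plain standard C++: in the 1D (column-major) form, entry (i, j) lives at index i + j*LDA, where the column stride LDA may exceed the number of rows; the 2D form is one array per column. The function names here are illustrative only, not part of Tpetra.

```cpp
#include <cstddef>
#include <vector>

// Column-major 1D layout: entry (i, j) of the local matrix lives at
// index i + j*LDA. The column stride LDA may exceed the number of
// rows, e.g. when the data are a row-subset view of a larger array.
double entry1d(const std::vector<double>& A, std::size_t LDA,
               std::size_t i, std::size_t j) {
  return A[i + j * LDA];
}

// A 2D view is an array of per-column arrays; each column is
// contiguous on its own, even if the columns came from strided storage.
std::vector<std::vector<double>>
make2dView(const std::vector<double>& A, std::size_t LDA,
           std::size_t numRows, std::size_t numCols) {
  std::vector<std::vector<double>> cols(numCols);
  for (std::size_t j = 0; j < numCols; ++j)
    cols[j].assign(A.begin() + j * LDA, A.begin() + j * LDA + numRows);
  return cols; // (a copy here; a real 2D view would alias the data)
}
```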
Views are not necessarily just encapsulated pointers. The meaning of view depends in part on the Kokkos Node type (the Node template parameter). This matters in particular if you are running on a Graphics Processing Unit (GPU) device. You can tell at compile time whether you are running on a GPU by looking at the Kokkos Node type. (Currently, the only GPU Node type we provide is KokkosClassic::ThrustGPUNode. All other types are CPU Nodes.) If the Kokkos Node is a GPU Node type, then views always reside in host (CPU) memory, rather than device (GPU) memory. When you ask for a view, it copies data from the device to the host.
What happens next to your view depends on whether the view is const (readonly) or nonconst (read and write). Const views disappear (their host memory is deallocated) when the corresponding reference count (of the ArrayRCP) goes to zero. (Since the data were not changed, there is no need to update the original copy on the device.) When a nonconst view's reference count goes to zero, the view's data are copied back to device memory, thus "pushing" your changes on the host back to the device.
These device-host-device copy semantics on GPUs mean that we can only promise that a view is a snapshot of the multivector's data at the time the view was created. If you create a const view, then a nonconst view, then modify the nonconst view, the contents of the const view are undefined. For host-only (CPU only, no GPU) Kokkos Nodes, views may be just encapsulated pointers to the data, so modifying a nonconst view will change the original data. For GPU Nodes, modifying a nonconst view will not change the original data until the view's reference count goes to zero. Furthermore, if the nonconst view's reference count never goes to zero, the nonconst view will never be copied back to device memory, and thus the original data will never be changed.
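The copy-back rule above can be modeled with a toy sketch. This is not Tpetra code: the "device" here is just another std::vector, destruction stands in for the reference count reaching zero, and all class names are hypothetical.

```cpp
#include <vector>

// Toy stand-in for memory that the host cannot touch directly.
struct DeviceVector {
  std::vector<double> device;
};

// A nonconst host view: copies device -> host on creation, and pushes
// host -> device when the last reference goes away (modeled here by
// the destructor). Until then, host changes are invisible on "device".
class NonConstHostView {
  DeviceVector* owner_;
public:
  std::vector<double> host;
  explicit NonConstHostView(DeviceVector& v)
    : owner_(&v), host(v.device) {}             // device-to-host copy
  ~NonConstHostView() { owner_->device = host; } // host-to-device copy-back
};

// A const host view is just a snapshot; nothing is written back, and
// later changes made through a nonconst view are not reflected in it.
class ConstHostView {
public:
  const std::vector<double> host;
  explicit ConstHostView(const DeviceVector& v) : host(v.device) {}
};
```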
Tpetra was designed to allow different data representations underneath the same interface. This lets Tpetra run correctly and efficiently on many different kinds of hardware, including single-core CPUs, multicore CPUs with Non-Uniform Memory Access (NUMA), and even "discrete compute accelerators" like Graphics Processing Units (GPUs). These different kinds of hardware all have certain performance characteristics in common.
These conclusions have practical consequences for the MultiVector interface. In particular, we have deliberately made it difficult for you to access data directly by raw pointer. This is because the underlying layout may not be what you expect. In some cases, you are not even allowed to dereference the raw pointer (for example, if it resides in GPU device memory, and you are working on the host CPU). This is why we require accessing the data through views.
The above section also explains why we do not offer a Scalar& operator[] to access each entry of a vector directly. Direct access on GPUs would require implicitly creating an internal host copy of the device data. This would consume host memory, and it would not be clear when to write the resulting host data back to device memory. The resulting operator would violate users' performance expectations, since it would be much slower than raw array access. We have preferred in our design to expose what is expensive, by exposing data views and letting users control when to copy data between host and device.
"Directly" here means without views, using a device kernel if the data reside on the GPU.
There are two different options for direct access to the multivector's data. One is to use the optional RTI (Reduction / Transformation Interface) subpackage of Tpetra. You may enable this at Trilinos configure time by setting the CMake Boolean option Tpetra_ENABLE_RTI to ON. Be aware that building and using RTI requires that your C++ compiler support the language features in the new C++11 standard. RTI allows you to implement arbitrary element-wise operations over a vector, followed by arbitrary reductions over the elements of that vector. We recommend RTI for most users.
Another option is to access the local data through its Kokkos container data structure, KokkosClassic::MultiVector, and then use the Kokkos Node API to implement arbitrary operations on the data. We do not recommend this approach for most users. In particular, the local data structures are likely to change over the next few releases. If you find yourself wanting to try this option, please contact the Tpetra developers for recommendations. We will be happy to work with you.
A MultiVector's rows are distributed over processes in its (row) Map's communicator. A MultiVector is a DistObject; the Map of the DistObject tells which process in the communicator owns which rows. This means that you may use Import and Export operations to migrate between different distributions. Please refer to the documentation of Map, Import, and Export for more information.
MultiVector includes methods that perform parallel allreduces. These include inner products and various kinds of norms. All of these methods have the same blocking semantics as MPI_Allreduce.
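The reduction structure of norm2(), for example, can be sketched in plain C++: each process computes a sum of squares over its locally owned rows, the partial sums are combined (in MPI, via MPI_Allreduce), and every process takes the square root. The simulation below replaces the processes with per-"process" row blocks; the function name is hypothetical.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Simulate the two-norm of each column of a distributed multivector.
// blocks[p] holds the rows owned by "process" p; every row is a vector
// of numCols entries. In real MPI code, the inner accumulation would
// run on each process and the partial sums would be combined with
// MPI_Allreduce before taking square roots.
std::vector<double>
norm2AllReduce(const std::vector<std::vector<std::vector<double>>>& blocks,
               std::size_t numCols) {
  std::vector<double> sumSq(numCols, 0.0);
  for (const auto& localRows : blocks)     // one entry per "process"
    for (const auto& row : localRows)      // locally owned rows
      for (std::size_t j = 0; j < numCols; ++j)
        sumSq[j] += row[j] * row[j];       // local partial sums, then
                                           // combined ("allreduce")
  for (double& s : sumSq) s = std::sqrt(s);
  return sumSq;
}
```

Because every process needs the result, the combine step is an allreduce rather than a reduce, which is why these methods block like MPI_Allreduce does.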
A "multivector" contains one or more dense vectors. All the vectors in a multivector have the same distribution of rows in parallel over the communicator used to create the multivector. Multivectors containing more than one vector are useful for algorithms that solve multiple linear systems at once, or that solve for a cluster of eigenvalues and their corresponding eigenvectors at once. These "block" algorithms often have accuracy or performance advantages over corresponding algorithms that solve for only one vector at a time. For example, working with multiple vectors at a time allows Tpetra to use faster BLAS 3 routines for local computations. It may also reduce the number of parallel reductions.
The Vector class implements the MultiVector interface, so if you only wish to work with a single vector at a time, you may simply use Vector instead of MultiVector. However, if you are writing solvers or preconditioners, you would do better to write to the MultiVector interface and always assume that each MultiVector contains more than one vector. This will make your solver or preconditioner more compatible with other Trilinos packages, and it will also let you exploit the performance optimizations mentioned above.
Scalar  The type of the numerical entries of the vector(s). (You may use realvalued or complexvalued types here, unlike in Epetra, where the scalar type is always double .) The default is double (real, doubleprecision floatingpoint type). 
LocalOrdinal  The type of local indices. Same as the LocalOrdinal template parameter of Map objects used by this matrix. (In Epetra, this is just int .) The default type is int , which should suffice for most users. This type must be big enough to store the local (per process) number of rows. 
GlobalOrdinal  The type of global indices. Same as the GlobalOrdinal template parameter of Map objects used by this matrix. (In Epetra, this is just int . One advantage of Tpetra over Epetra is that you can use a 64bit integer type here if you want to solve big problems.) The default type is LocalOrdinal . This type must be big enough to store the global (over all processes in the communicator) number of rows or columns. 
Node  A class implementing onnode sharedmemory parallel operations. It must implement the Kokkos Node API. The default Node type should suffice for most users. The actual default type depends on your Trilinos build options. 
GlobalOrdinal
type, which is int
, then the global number of rows or columns in the matrix may be no more than INT_MAX
, which for typical 32bit int
is (about two billion). If you want to solve larger problems, you must use a 64bit integer type here.Before reading the rest of this documentation, it helps to know something about the Teuchos memory management classes, in particular Teuchos::RCP, Teuchos::ArrayRCP, and Teuchos::ArrayView. You may also want to know about the differences between BLAS 1, 2, and 3 operations, and learn a little bit about MPI (the Message Passing Interface for distributedmemory programming). You won't have to use MPI directly to use MultiVector, but it helps to be familiar with the general idea of distributed storage of data over a communicator.
A multivector could be a view of some subset of another multivector's columns and rows. A view is like a pointer; it provides access to the original multivector's data without copying the data. There are no public constructors for creating a view, but any instance method with "view" in the name that returns an RCP<MultiVector> serves as a view constructor.
The subset of columns in a view need not be contiguous. For example, given a multivector X with 43 columns, it is possible to have a multivector Y which is a view of columns 1, 3, and 42 (zerobased indices) of X. We call such multivectors noncontiguous. They have the the property that isConstantStride() returns false.
Noncontiguous multivectors lose some performance advantages. For example, local computations may be slower, since Tpetra cannot use BLAS 3 routines (e.g., matrix-matrix multiply) on a noncontiguous multivector without copying into temporary contiguous storage. Noncontiguous multivectors also affect the ability to access the data in certain ways, which we will explain below.
We have unfortunately overloaded the term "view." In the section above, we explained the idea of a "multivector which is a view of another multivector." This section is about "views of a multivector's data." If you want to read or write the actual values in a multivector, this is what you want. All the instance methods which return an ArrayRCP of Scalar data, or an ArrayRCP of ArrayRCP of Scalar data, return views to the data. These data are always local data, meaning that the corresponding rows of the multivector are owned by the calling process. You can't use these methods to access remote data (rows that do not belong to the calling process).
Data views may be either one-dimensional (1D) or two-dimensional (2D). A 1D view presents the data as a dense matrix in column-major order, returned as a single array. On the calling process, the matrix has getLocalLength() rows, getNumVectors() columns, and column stride getStride(). You may not get a 1D view of a noncontiguous multivector. If you need the data of a noncontiguous multivector in a 1D format, you may get a copy by calling get1dCopy(). A 2D view presents the data as an array of arrays, one array per column (i.e., vector in the multivector). The entries in each column are stored contiguously. You may get a 2D view of any multivector, whether or not it is noncontiguous.
Views are not necessarily just encapsulated pointers. The meaning of view depends in part on the Kokkos Node type (the Node template parameter). This matters in particular if you are running on a Graphics Processing Unit (GPU) device. You can tell at compile time whether you are running on a GPU by looking at the Kokkos Node type. (Currently, the only GPU Node type we provide is KokkosClassic::ThrustGPUNode. All other types are CPU Nodes.) If the Kokkos Node is a GPU Node type, then views always reside in host (CPU) memory, rather than device (GPU) memory. When you ask for a view, it copies data from the device to the host.
What happens next to your view depends on whether the view is const (readonly) or nonconst (read and write). Const views disappear (their host memory is deallocated) when the corresponding reference count (of the ArrayRCP) goes to zero. (Since the data were not changed, there is no need to update the original copy on the device.) When a nonconst view's reference count goes to zero, the view's data are copied back to device memory, thus "pushing" your changes on the host back to the device.
These device-host-device copy semantics on GPUs mean that we can only promise that a view is a snapshot of the multivector's data at the time the view was created. If you create a const view, then a nonconst view, then modify the nonconst view, the contents of the const view are undefined. For host-only (CPU only, no GPU) Kokkos Nodes, views may be just encapsulated pointers to the data, so modifying a nonconst view will change the original data. For GPU Nodes, modifying a nonconst view will not change the original data until the view's reference count goes to zero. Furthermore, if the nonconst view's reference count never goes to zero, the nonconst view will never be copied back to device memory, and thus the original data will never be changed.
Tpetra was designed to allow different data representations underneath the same interface. This lets Tpetra run correctly and efficiently on many different kinds of hardware, including single-core CPUs, multi-core CPUs with Non-Uniform Memory Access (NUMA), and even "discrete compute accelerators" like Graphics Processing Units (GPUs). These different kinds of hardware all have in common the following:
These conclusions have practical consequences for the MultiVector interface. In particular, we have deliberately made it difficult for you to access data directly by raw pointer. This is because the underlying layout may not be what you expect. In some cases, you are not even allowed to dereference the raw pointer (for example, if it resides in GPU device memory, and you are working on the host CPU). This is why we require accessing the data through views.
The above section also explains why we do not offer a Scalar& operator[] to access each entry of a vector directly. Direct access on GPUs would require implicitly creating an internal host copy of the device data. This would consume host memory, and it would not be clear when to write the resulting host data back to device memory. The resulting operator would violate users' performance expectations, since it would be much slower than raw array access. We have preferred in our design to expose what is expensive, by exposing data views and letting users control when to copy data between host and device.
"Directly" here means without views, using a device kernel if the data reside on the GPU.
There are two different options for direct access to the multivector's data. One is to use the optional RTI (Reduction / Transformation Interface) subpackage of Tpetra. You may enable this at Trilinos configure time by setting the CMake Boolean option Tpetra_ENABLE_RTI to ON. Be aware that building and using RTI requires that your C++ compiler support the language features in the C++11 standard. RTI allows you to implement arbitrary element-wise operations over a vector, followed by arbitrary reductions over the elements of that vector. We recommend RTI for most users.
Another option is to access the local data through its Kokkos container data structure, KokkosClassic::MultiVector, and then use the Kokkos Node API to implement arbitrary operations on the data. We do not recommend this approach for most users. In particular, the local data structures are likely to change over the next few releases. If you find yourself wanting to try this option, please contact the Tpetra developers for recommendations. We will be happy to work with you.
A MultiVector's rows are distributed over processes in its (row) Map's communicator. A MultiVector is a DistObject; the Map of the DistObject tells which process in the communicator owns which rows. This means that you may use Import and Export operations to migrate between different distributions. Please refer to the documentation of Map, Import, and Export for more information.
MultiVector includes methods that perform parallel allreduces. These include inner products and various kinds of norms. All of these methods have the same blocking semantics as MPI_Allreduce.
Definition at line 356 of file Tpetra_MultiVector_decl.hpp.
typedef Scalar Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::scalar_type 
The type of entries in the vector(s).
Reimplemented in Tpetra::BlockMultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 368 of file Tpetra_MultiVector_decl.hpp.
typedef LocalOrdinal Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::local_ordinal_type 
The type of local indices.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Reimplemented in Tpetra::BlockMultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 370 of file Tpetra_MultiVector_decl.hpp.
typedef GlobalOrdinal Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::global_ordinal_type 
The type of global indices.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Reimplemented in Tpetra::BlockMultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 372 of file Tpetra_MultiVector_decl.hpp.
typedef Node Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::node_type 
The Kokkos Node type.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Reimplemented in Tpetra::BlockMultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 374 of file Tpetra_MultiVector_decl.hpp.
typedef Scalar Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::dot_type 
The type for inner product (dot) products.
This is not used and exists here purely for backwards compatibility with KokkosRefactor.
Reimplemented in Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 380 of file Tpetra_MultiVector_decl.hpp.
typedef Scalar Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::packet_type [inherited] 
The type of each datum being sent or received in an Import or Export.
Note that this type does not always correspond to the Scalar template parameter of subclasses.
Definition at line 183 of file Tpetra_DistObject_decl.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  ) 
Default constructor: makes a MultiVector with no rows or columns.
Definition at line 68 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
size_t  NumVectors,  
bool  zeroOut = true 

) 
Basic constructor.
map  [in] Map describing the distribution of rows. 
NumVectors  [in] Number of vectors (columns). 
zeroOut  [in] Whether to initialize all the entries of the MultiVector to zero. 
Definition at line 76 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source  ) 
Copy constructor.
Whether this does a deep copy or a shallow copy depends on whether source
has "view semantics." See discussion in the documentation of the twoargument copy constructor below.
Definition at line 110 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source, 
const Teuchos::DataAccess  copyOrView  
) 
Copy constructor, with option to do shallow copy and mark the result as having "view semantics".
If copyOrView is Teuchos::View, this constructor marks the result as having "view semantics." This means that copy construction or assignment (operator=) with the resulting object will always do a shallow copy, and will transmit view semantics to the result of the shallow copy. If copyOrView is Teuchos::Copy, this constructor always does a deep copy and marks the result as not having view semantics, whether or not source
has view semantics.
View semantics are a "forwards compatibility" measure for porting to the Kokkos refactor version of Tpetra. The latter only ever has view semantics. The "classic" version of Tpetra does not currently have view semantics by default, but this will change.
Definition at line 168 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const Teuchos::ArrayView< const Scalar > &  A,  
size_t  LDA,  
size_t  NumVectors  
) 
Create multivector by copying two-dimensional array of local data.
map  [in] The Map describing the distribution of rows of the multivector. 
A  [in] View of column-major dense matrix data. The calling process will make a deep copy of this data. 
LDA  [in] The leading dimension (a.k.a. "stride") of the column-major input data. 
NumVectors  [in] The number of columns in the input data. This will be the number of vectors in the returned multivector. 
Definition at line 397 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const Teuchos::ArrayView< const Teuchos::ArrayView< const Scalar > > &  ArrayOfPtrs,  
size_t  NumVectors  
) 
Create multivector by copying array of views of local data.
map  [in] The Map describing the distribution of rows of the multivector. 
ArrayOfPtrs  [in/out] Array of views of each column's data. The calling process will make a deep copy of this data. 
NumVectors  [in] The number of columns in the input data, and the number of elements in ArrayOfPtrs. This will be the number of vectors in the returned multivector. 
Definition at line 448 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const Teuchos::ArrayRCP< Scalar > &  data,  
const size_t  LDA,  
const size_t  numVectors  
) 
Expert mode constructor.
map  [in] Map describing the distribution of rows. 
data  [in] Device pointer to the data (column-major) 
LDA  [in] Leading dimension (stride) of the data 
numVectors  [in] Number of vectors (columns) 
Definition at line 343 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::~MultiVector  (  )  [virtual] 
Destructor (virtual for memory safety of derived classes).
Definition at line 491 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const Teuchos::ArrayRCP< Scalar > &  view,  
size_t  LDA,  
size_t  NumVectors,  
EPrivateHostViewConstructor  
)  [protected] 
View constructor with userallocated data.
Please consider this constructor DEPRECATED.
The tag says that views of the MultiVector are always host views, that is, they do not live on a separate device memory space (for example, on a GPU).
This member constructor is meant to be called by its nonmember constructor friend; it is not meant to be called by users (hence it is protected).
Definition at line 248 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
Teuchos::ArrayRCP< Scalar >  data,  
size_t  LDA,  
Teuchos::ArrayView< const size_t >  whichVectors,  
EPrivateComputeViewConstructor  
)  [protected] 
Advanced constructor for noncontiguous views.
Please consider this constructor DEPRECATED.
Definition at line 303 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
Teuchos::ArrayRCP< Scalar >  data,  
size_t  LDA,  
size_t  NumVectors,  
EPrivateComputeViewConstructor  
)  [protected] 
Advanced constructor for contiguous views.
Please consider this constructor DEPRECATED.
Definition at line 276 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const KokkosClassic::MultiVector< Scalar, Node > &  localMultiVector,  
EPrivateComputeViewConstructor  
)  [protected] 
Advanced constructor for contiguous views.
This version of the contiguous view constructor takes a previously constructed KokkosClassic::MultiVector, which views the local data. The local multivector should have been made using the appropriate offsetView* method of KokkosClassic::MultiVector.
Definition at line 229 of file Tpetra_MultiVector_def.hpp.
Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::MultiVector  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map, 
const KokkosClassic::MultiVector< Scalar, Node > &  localMultiVector,  
Teuchos::ArrayView< const size_t >  whichVectors,  
EPrivateComputeViewConstructor  
)  [protected] 
Advanced constructor for noncontiguous views.
This version of the noncontiguous view constructor takes a previously constructed KokkosClassic::MultiVector, which is the correct view of the local data. The local multivector should have been made using the appropriate offsetView* method of KokkosClassic::MultiVector.
Definition at line 360 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node2 > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::clone  (  const Teuchos::RCP< Node2 > &  node2  )  const 
Create a cloned MultiVector for a different node type.
Definition at line 1514 of file Tpetra_MultiVector_decl.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::replaceGlobalValue  (  GlobalOrdinal  globalRow, 
size_t  vectorIndex,  
const Scalar &  value  
) 
Replace value, using global (row) index.
Replace the current value at row globalRow (a global index) and column vectorIndex with the given value. The column index is zero based.
globalRow must be a valid global element on this process, according to the row Map.
Definition at line 2894 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::sumIntoGlobalValue  (  GlobalOrdinal  globalRow, 
size_t  vectorIndex,  
const Scalar &  value  
) 
Add value to existing value, using global (row) index.
Add the given value to the existing value at row globalRow (a global index) and column vectorIndex. The column index is zero based.
globalRow must be a valid global element on this process, according to the row Map.
Definition at line 2918 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::replaceLocalValue  (  LocalOrdinal  myRow, 
size_t  vectorIndex,  
const Scalar &  value  
) 
Replace value, using local (row) index.
Replace the current value at row myRow (a local index) and column vectorIndex with the given value. The column index is zero based.
myRow must be a valid local element on this process, according to the row Map.
Definition at line 2836 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::sumIntoLocalValue  (  LocalOrdinal  myRow, 
size_t  vectorIndex,  
const Scalar &  value  
) 
Add value to existing value, using local (row) index.
Add the given value to the existing value at row myRow (a local index) and column vectorIndex. The column index is zero based.
myRow must be a valid local element on this process, according to the row Map.
Definition at line 2865 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::putScalar  (  const Scalar &  value  ) 
Set all values in the multivector with the given value.
Definition at line 1559 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::randomize  (  ) 
Set all values in the multivector to pseudorandom numbers.
The implementation uses srand() and rand().
Definition at line 1538 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::replaceMap  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  map  ) 
Replace the underlying Map in place.
map->isCompatible (this->getMap ()). "Similar" means that the communicators have the same number of processes, though these need not be in the same order (have the same assignments of ranks) or represent the same communication contexts. It means the same thing as the MPI_SIMILAR return value of MPI_COMM_COMPARE. See MPI 3.0 Standard, Section 6.4.1.
This method replaces this object's Map with the given Map. This relabels the rows of the multivector using the global IDs in the input Map. Thus, it implicitly applies a permutation, without actually moving data. If the new Map's communicator has more processes than the original Map's communicator, it "projects" the MultiVector onto the new Map by filling in missing rows with zeros. If the new Map's communicator has fewer processes than the original Map's communicator, the method "forgets about" any rows that do not exist in the new Map. (In mathematical terms, if one considers a MultiVector as a function from one vector space to another, this operation restricts the range.)
This method must always be called collectively on the communicator with the largest number of processes: either this object's current communicator (this->getMap ()->getComm ()), or the new Map's communicator (map->getComm ()). If the new Map's communicator has fewer processes, then the new Map must be null on processes excluded from the original communicator, and the current Map must be nonnull on all processes. If the new Map has more processes, then it must be nonnull on all those processes, and the original Map must be null on those processes which are not in the new Map's communicator. (The latter case can only happen to a MultiVector to which a replaceMap() operation has happened before.)
this->getMap ()->getComm (). We reserve the right to do checking in debug mode that requires this method to be called collectively in order not to deadlock.
Definition at line 1580 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::reduce  (  ) 
Sum values of a locally replicated multivector across all processes.
Definition at line 2769 of file Tpetra_MultiVector_def.hpp.
MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > & Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::operator=  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source  ) 
Assignment operator.
If this MultiVector (the left-hand side of the assignment) has view semantics (getCopyOrView () == Teuchos::View), then this does a shallow copy. Otherwise, it does a deep copy. The latter is the default behavior.
A deep copy has the following prerequisites:
this->getMap ()->isCompatible (source.getMap ());
Definition at line 2014 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subCopy  (  const Teuchos::Range1D &  colRng  )  const 
Return a MultiVector with copies of selected columns.
colRng  [in] Inclusive, contiguous range of columns. [colRng.lbound(), colRng.ubound()] defines the range. 
Definition at line 2118 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subCopy  (  const Teuchos::ArrayView< const size_t > &  cols  )  const 
Return a MultiVector with copies of selected columns.
Definition at line 2092 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subView  (  const Teuchos::Range1D &  colRng  )  const 
Return a MultiVector with const views of selected columns.
colRng  [in] Inclusive, contiguous range of columns. [colRng.lbound(), colRng.ubound()] defines the range. 
Definition at line 2275 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subView  (  const Teuchos::ArrayView< const size_t > &  cols  )  const 
Return a const MultiVector with const views of selected columns.
Definition at line 2222 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subViewNonConst  (  const Teuchos::Range1D &  colRng  ) 
Return a MultiVector with views of selected columns.
colRng  [in] Inclusive, contiguous range of columns. [colRng.lbound(), colRng.ubound()] defines the range. 
Definition at line 2335 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::subViewNonConst  (  const Teuchos::ArrayView< const size_t > &  cols  ) 
Return a MultiVector with views of selected columns.
Definition at line 2309 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::offsetView  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  subMap, 
size_t  offset  
)  const 
Return a const view of a subset of rows.
Return a const (nonmodifiable) view of this MultiVector consisting of a subset of the rows, as specified by an offset and a subset Map of this MultiVector's current row Map. If you want X1 or X2 to be nonconst (modifiable) views, use offsetViewNonConst() with the same arguments. "View" means "alias": if the original (this) MultiVector's data change, the view will see the changed data.
subMap  [in] The row Map for the new MultiVector. This must be a subset Map of this MultiVector's row Map. 
offset  [in] The local row offset at which to start the view. 
Suppose that you have a MultiVector X, and you want to view X, on all processes in X's (MPI) communicator, as split into two row blocks X1 and X2. One could express this in Matlab notation as X = [X1; X2], except that here, X1 and X2 are views into X, rather than copies of X's data. This method assumes that the local indices of X1 and X2 are each contiguous, and that the local indices of X2 follow those of X1. If that is not the case, you cannot use views to divide X into blocks like this; you must instead use the Import or Export functionality, which copies the relevant rows of X.
Here is how you would construct the views X1 and X2.
using Teuchos::RCP;
typedef Tpetra::Map<LO, GO, Node> map_type;
typedef Tpetra::MultiVector<Scalar, LO, GO, Node> MV;

MV X (...); // the input MultiVector
// ... fill X with data ...

// Map that on each process in X's communicator,
// contains the global indices of the rows of X1.
RCP<const map_type> map1 (new map_type (...));
// Map that on each process in X's communicator,
// contains the global indices of the rows of X2.
RCP<const map_type> map2 (new map_type (...));

// Create the first view X1.  The second argument, the offset,
// is the index of the local row at which to start the view.
// X1 is the topmost block of X, so the offset is zero.
RCP<const MV> X1 = X.offsetView (map1, 0);

// Create the second view X2.  X2 is directly below X1 in X,
// so the offset is the local number of rows in X1.  This is
// the same as the local number of entries in map1.
RCP<const MV> X2 = X.offsetView (map2, X1->getLocalLength ());
It is legal, in the above example, for X1 or X2 to have zero local rows on any or all process(es). In that case, the corresponding Map must have zero local entries on that / those process(es). In particular, if X2 has zero local rows on a process, then the corresponding offset on that process would be the number of local rows in X (and therefore in X1) on that process. This is the only case in which the sum of the local number of entries in subMap (in this case, zero) and the offset may equal the number of local entries in *this.
Reimplemented in Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 2144 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::offsetViewNonConst  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  subMap, 
size_t  offset  
) 
Return a nonconst view of a subset of rows.
Return a nonconst (modifiable) view of this MultiVector consisting of a subset of the rows, as specified by an offset and a subset Map of this MultiVector's current row Map. If you want X1 or X2 to be const (nonmodifiable) views, use offsetView() with the same arguments. "View" means "alias": if the original (this) MultiVector's data change, the view will see the changed data, and if the view's data change, the original MultiVector will see the changed data.
subMap  [in] The row Map for the new MultiVector. This must be a subset Map of this MultiVector's row Map. 
offset  [in] The local row offset at which to start the view. 
See the documentation of offsetView() for a code example and an explanation of edge cases.
Reimplemented in Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 2183 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< const Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getVector  (  size_t  j  )  const 
Return a Vector which is a const view of column j.
Definition at line 2367 of file Tpetra_MultiVector_def.hpp.
Teuchos::RCP< Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getVectorNonConst  (  size_t  j  ) 
Return a Vector which is a nonconst view of column j.
Definition at line 2392 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< const Scalar > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getData  (  size_t  j  )  const 
Const view of the local values in a particular vector of this multivector.
Definition at line 1994 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< Scalar > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getDataNonConst  (  size_t  j  ) 
View of the local values in a particular vector of this multivector.
Definition at line 2004 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get1dCopy  (  Teuchos::ArrayView< Scalar >  A, 
size_t  LDA  
)  const 
Fill the given array with a copy of this multivector's local values.
A  [out] View of the array to fill. We consider A as a matrix with columnmajor storage. 
LDA  [in] Leading dimension of the matrix A. 
Definition at line 2413 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get2dCopy  (  Teuchos::ArrayView< const Teuchos::ArrayView< Scalar > >  ArrayOfPtrs  )  const 
Fill the given array with a copy of this multivector's local values.
ArrayOfPtrs  [out] Array of arrays, one for each column of the multivector. On output, we fill ArrayOfPtrs[j] with the data for column j of this multivector. 
Definition at line 2449 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< const Scalar > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get1dView  (  )  const 
Const persisting (1D) view of this multivector's local values.
This method assumes that the columns of the multivector are stored contiguously. If not, this method throws std::runtime_error.
Definition at line 2484 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< Teuchos::ArrayRCP< const Scalar > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get2dView  (  )  const 
Return const persisting pointers to values.
Definition at line 2565 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< Scalar > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get1dViewNonConst  (  ) 
Nonconst persisting (1D) view of this multivector's local values.
This method assumes that the columns of the multivector are stored contiguously. If not, this method throws std::runtime_error.
Definition at line 2505 of file Tpetra_MultiVector_def.hpp.
Teuchos::ArrayRCP< Teuchos::ArrayRCP< Scalar > > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::get2dViewNonConst  (  ) 
Return nonconst persisting pointers to values.
Definition at line 2527 of file Tpetra_MultiVector_def.hpp.
KokkosClassic::MultiVector< Scalar, Node > Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getLocalMV  (  )  const 
A view of the underlying KokkosClassic::MultiVector object.
This method is for expert users only. It may change or be removed at any time.
Definition at line 2960 of file Tpetra_MultiVector_def.hpp.
TEUCHOS_DEPRECATED KokkosClassic::MultiVector< Scalar, Node > & Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getLocalMVNonConst  (  ) 
A nonconst reference to a view of the underlying KokkosClassic::MultiVector object.
This method is for expert users only. It may change or be removed at any time.
Definition at line 2967 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::dot  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A, 
const Teuchos::ArrayView< Scalar > &  dots  
)  const 
Compute the dot product of each corresponding pair of vectors (columns) in *this and A.
The "dot product" is the standard Euclidean inner product. If the type of entries of the vectors (scalar_type) is complex, then A is transposed, not *this. For example, if x and y each have one column, then x.dot (y, dots)
computes the scalar y^H x = conj(y)^T x.
Precondition: *this and A have the same number of columns (vectors).
Precondition: dots has at least as many entries as the number of columns in A.
Postcondition: dots[j] == (this->getVector(j))->dot (* (A.getVector(j)))
Definition at line 1244 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::abs  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A  ) 
Put the element-wise absolute values of the input MultiVector into the target (*this): this(i,j) = abs(A(i,j)).
Definition at line 1863 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::reciprocal  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A  ) 
Put the element-wise reciprocals of the input MultiVector into the target (*this): this(i,j) = 1/A(i,j).
Definition at line 1817 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::scale  (  const Scalar &  alpha  ) 
Scale in place: this = alpha*this.
Replace this MultiVector with alpha times this MultiVector. This method will always multiply, even if alpha is zero. That means, for example, that if *this
contains NaN entries before calling this method, the NaN entries will remain after this method finishes.
Definition at line 1674 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::scale  (  Teuchos::ArrayView< const Scalar >  alpha  ) 
Scale each column in place: this[j] = alpha[j]*this[j].
Replace each column j of this MultiVector with alpha[j]
times the current column j of this MultiVector. This method will always multiply, even if all the entries of alpha are zero. That means, for example, that if *this
contains NaN entries before calling this method, the NaN entries will remain after this method finishes.
Definition at line 1704 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::scale  (  const Scalar &  alpha, 
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A  
) 
Scale in place: this = alpha * A.
Replace this MultiVector with scaled values of A. This method will always multiply, even if alpha is zero. That means, for example, that if *this
contains NaN entries before calling this method, the NaN entries will remain after this method finishes. It is legal for the input A to alias this MultiVector.
Definition at line 1775 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::update  (  const Scalar &  alpha, 
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A,  
const Scalar &  beta  
) 
Update: this = beta*this + alpha*A.
Update this MultiVector with scaled values of A. If beta is zero, overwrite *this
unconditionally, even if it contains NaN entries. It is legal for the input A to alias this MultiVector.
Definition at line 1900 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::update  (  const Scalar &  alpha, 
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A,  
const Scalar &  beta,  
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  B,  
const Scalar &  gamma  
) 
Update: this = gamma*this + alpha*A + beta*B.
Update this MultiVector with scaled values of A and B. If gamma is zero, overwrite *this
unconditionally, even if it contains NaN entries. It is legal for the inputs A or B to alias this MultiVector.
Definition at line 1943 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::norm1  (  const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &  norms  )  const 
Compute the one-norm of each vector (column).
The one-norm of a vector is the sum of the magnitudes of the vector's entries. On exit, norms[k] is the one-norm of column k of this MultiVector.
Definition at line 1413 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::norm2  (  const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &  norms  )  const 
Compute the two-norm of each vector (column).
The two-norm of a vector is the standard Euclidean norm, the square root of the sum of squares of the magnitudes of the vector's entries. On exit, norms[k] is the two-norm of column k of this MultiVector.
Definition at line 1295 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::normInf  (  const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &  norms  )  const 
Compute the infinity-norm of each vector (column).
The infinity-norm of a vector is the maximum of the magnitudes of the vector's entries. On exit, norms[k] is the infinity-norm of column k of this MultiVector.
Definition at line 1451 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::normWeighted  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  weights, 
const Teuchos::ArrayView< typename Teuchos::ScalarTraits< Scalar >::magnitudeType > &  norms  
)  const 
Compute the weighted 2-norm (RMS norm) of each vector in the multivector. The outcome of this routine is undefined for non-floating-point scalar types (e.g., int).
Definition at line 1341 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::meanValue  (  const Teuchos::ArrayView< Scalar > &  means  )  const 
Compute the mean (average) value of each vector in the multivector. The outcome of this routine is undefined for non-floating-point scalar types (e.g., int).
Definition at line 1487 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::multiply  (  Teuchos::ETransp  transA, 
Teuchos::ETransp  transB,  
const Scalar &  alpha,  
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A,  
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  B,  
const Scalar &  beta  
) 
Matrix-matrix multiplication: this = beta*this + alpha*op(A)*op(B).
If beta is zero, overwrite *this
unconditionally, even if it contains NaN entries. This imitates the semantics of analogous BLAS routines like DGEMM.
Definition at line 2604 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::elementWiseMultiply  (  Scalar  scalarAB, 
const Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  A,  
const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  B,  
Scalar  scalarThis  
) 
Multiply a Vector A elementwise by a MultiVector B.
Compute this = scalarThis * this + scalarAB * B @ A, where @ denotes element-wise multiplication. In pseudocode, if C denotes *this MultiVector:
C(i,j) = scalarThis * C(i,j) + scalarAB * B(i,j) * A(i,1);
for all rows i and columns j of C.
B must have the same dimensions as *this, while A must have the same number of rows but a single column.
We do not require that A, B, and *this have compatible Maps, as long as the number of rows in A, B, and *this on each process is the same. For example, one or more of these vectors might have a locally replicated Map, or a Map with a local communicator (MPI_COMM_SELF). This case may occur in block relaxation algorithms when applying a diagonal scaling.
Definition at line 2739 of file Tpetra_MultiVector_def.hpp.
size_t Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getNumVectors  (  )  const [inline] 
Number of columns in the multivector.
Definition at line 1231 of file Tpetra_MultiVector_def.hpp.
size_t Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getLocalLength  (  )  const 
Local number of rows on the calling process.
Definition at line 503 of file Tpetra_MultiVector_def.hpp.
global_size_t Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getGlobalLength  (  )  const 
Global number of rows in the multivector.
Definition at line 514 of file Tpetra_MultiVector_def.hpp.
size_t Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getStride  (  )  const 
Stride between columns in the multivector.
This is only meaningful if isConstantStride()
returns true.
Definition at line 525 of file Tpetra_MultiVector_def.hpp.
bool Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::isConstantStride  (  )  const 
Whether this multivector has constant stride between columns.
Definition at line 496 of file Tpetra_MultiVector_def.hpp.
std::string Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::description  (  )  const [virtual] 
A simple oneline description of this object.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Reimplemented in Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 2973 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::describe  (  Teuchos::FancyOStream &  out, 
const Teuchos::EVerbosityLevel  verbLevel = Teuchos::Describable::verbLevel_default 

)  const [virtual] 
Print the object with the given verbosity level to a FancyOStream.
out  [out] Output stream to which to print. For verbosity levels VERB_LOW and lower, only the process with rank 0 ("Proc 0") in the MultiVector's communicator prints. For verbosity levels strictly higher than VERB_LOW, all processes in the communicator need to be able to print to the output stream. 
verbLevel  [in] Verbosity level. The default verbosity (verbLevel=VERB_DEFAULT) is VERB_LOW. 
The amount and content of what this method prints depends on the verbosity level. In the list below, each higher level includes all the content of the previous levels, as well as its own content.
VERB_LOW: the one-line description().
Higher levels: whether the multivector has constant stride (isConstantStride()), and if so, what that stride is. (Stride may differ on different processes.)
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Reimplemented in Tpetra::Vector< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 2992 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::removeEmptyProcessesInPlace  (  const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &  newMap  )  [virtual] 
Remove processes owning zero rows from the Map and their communicator.
newMap  [in] This must be the result of calling the removeEmptyProcesses() method on the row Map. If it is not, this method's behavior is undefined. This pointer will be null on excluded processes. 
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 3136 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::setCopyOrView  (  const Teuchos::DataAccess  copyOrView  )  [inline] 
Set whether this has copy (copyOrView = Teuchos::Copy) or view (copyOrView = Teuchos::View) semantics.
Definition at line 1113 of file Tpetra_MultiVector_decl.hpp.
Teuchos::DataAccess Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::getCopyOrView  (  )  const [inline] 
Get whether this has copy (copyOrView = Teuchos::Copy) or view (copyOrView = Teuchos::View) semantics.
Definition at line 1122 of file Tpetra_MultiVector_decl.hpp.
bool Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::checkSizes  (  const SrcDistObject &  sourceObj  )  [protected, virtual] 
Whether data redistribution between sourceObj
and this object is legal.
This method is called in DistObject::doTransfer() to check whether data redistribution between the two objects is legal.
Implements Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 536 of file Tpetra_MultiVector_def.hpp.
size_t Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::constantNumberOfPackets  (  )  const [protected, virtual] 
Number of packets to send per LID.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 559 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::copyAndPermute  (  const SrcDistObject &  source, 
size_t  numSameIDs,  
const ArrayView< const LocalOrdinal > &  permuteToLIDs,  
const ArrayView< const LocalOrdinal > &  permuteFromLIDs  
)  [protected, virtual] 
Perform copies and permutations that are local to this process.
source  [in] On entry, the source object, from which we are distributing. We distribute to the destination object, which is *this object. 
numSameIDs  [in] The number of elements that are the same on the source and destination (this) objects. These elements are owned by the same process in both the source and destination objects. No permutation occurs. 
numPermuteIDs  [in] The number of elements that are locally permuted between the source and destination objects. 
permuteToLIDs  [in] List of the elements that are permuted. They are listed by their LID in the destination object. 
permuteFromLIDs  [in] List of the elements that are permuted. They are listed by their LID in the source object. 
Implements Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 916 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::packAndPrepare  (  const SrcDistObject &  source, 
const ArrayView< const LocalOrdinal > &  exportLIDs,  
Array< Scalar > &  exports,  
const ArrayView< size_t > &  numPacketsPerLID,  
size_t &  constantNumPackets,  
Distributor &  distor  
)  [protected, virtual] 
Perform any packing or preparation required for communication.
source  [in] Source object for the redistribution. 
exportLIDs  [in] List of the entries (as local IDs in the source object) we will be sending to other images. 
exports  [out] On exit, the buffer for data to send. 
numPacketsPerLID  [out] On exit, the implementation of this method must do one of two things: set numPacketsPerLID[i] to contain the number of packets to be exported for exportLIDs[i] and set constantNumPackets to zero, or set constantNumPackets to a nonzero value. If the latter, the implementation need not fill numPacketsPerLID. 
constantNumPackets  [out] On exit, 0 if numPacketsPerLID has variable contents (different size for each LID). If nonzero, then it is expected that the number of packets per LID is constant, and that constantNumPackets is that value. 
distor  [in] The Distributor object we are using. 
Implements Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 1022 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::unpackAndCombine  (  const ArrayView< const LocalOrdinal > &  importLIDs, 
const ArrayView< const Scalar > &  imports,  
const ArrayView< size_t > &  numPacketsPerLID,  
size_t  constantNumPackets,  
Distributor &  distor,  
CombineMode  CM  
)  [protected, virtual] 
Perform any unpacking and combining after communication.
importLIDs  [in] List of the entries (as LIDs in the destination object) we received from other images. 
imports  [in] Buffer containing data we received. 
numPacketsPerLID  [in] If constantNumPackets is zero, then numPacketsPerLID[i] contains the number of packets imported for importLIDs[i]. 
constantNumPackets  [in] If nonzero, then numPacketsPerLID is constant (same value in all entries) and constantNumPackets is that value. If zero, then numPacketsPerLID[i] is the number of packets imported for importLIDs[i]. 
distor  [in] The Distributor object we are using. 
CM  [in] The combine mode to use when combining the imported entries with existing entries. 
Implements Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 1104 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::createViews  (  )  const [protected, virtual] 
Hook for creating a const view.
doTransfer() calls this on the source object. By default, it does nothing, but the source object can use this as a hint to fetch data from a compute buffer on an offCPU device (such as a GPU) into host memory.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 3093 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::createViewsNonConst  (  KokkosClassic::ReadWriteOption  rwo  )  [protected, virtual] 
Hook for creating a nonconst view.
doTransfer() calls this on the destination (*this
) object. By default, it does nothing, but the destination object can use this as a hint to fetch data from a compute buffer on an offCPU device (such as a GPU) into host memory.
rwo  [in] Whether to create a writeonly or a readandwrite view. For Kokkos Node types where compute buffers live in a separate memory space (e.g., in the device memory of a discrete accelerator like a GPU), a writeonly view only requires copying from host memory to the compute buffer, whereas a readandwrite view requires copying both ways (once to read, from the compute buffer to host memory, and once to write, back to the compute buffer). 
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 3105 of file Tpetra_MultiVector_def.hpp.
void Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::releaseViews  (  )  const [protected, virtual] 
Hook for releasing views.
doTransfer() calls this on both the source and destination objects, once it no longer needs to access that object's data. By default, this method does nothing. Implementations may use this as a hint to free host memory which is a view of a compute buffer, once the host memory view is no longer needed. Some implementations may prefer to mirror compute buffers in host memory; for these implementations, releaseViews() may do nothing.
Reimplemented from Tpetra::DistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node >.
Definition at line 3116 of file Tpetra_MultiVector_def.hpp.
void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::doImport  (  const SrcDistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source, 
const Import< LocalOrdinal, GlobalOrdinal, Node > &  importer,  
CombineMode  CM  
)  [inherited] 
Import data into this object using an Import object ("forward mode").
The input DistObject is always the source of the data redistribution operation, and the *this
object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use this method with your precomputed Import object if you want to do an Import, else use doExport() with a precomputed Export object.
source  [in] The "source" object for redistribution. 
importer  [in] Precomputed data redistribution plan. Its source Map must be the same as the input DistObject's Map, and its target Map must be the same as this->getMap(). 
CM  [in] How to combine incoming data with the same global index. 
void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::doImport  (  const SrcDistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source, 
const Export< LocalOrdinal, GlobalOrdinal, Node > &  exporter,  
CombineMode  CM  
)  [inherited] 
Import data into this object using an Export object ("reverse mode").
The input DistObject is always the source of the data redistribution operation, and the *this
object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use the version of doImport() that takes a precomputed Import object in that case.
source  [in] The "source" object for redistribution. 
exporter  [in] Precomputed data redistribution plan. Its target Map must be the same as the input DistObject's Map, and its source Map must be the same as this->getMap(). (Note the difference from forward mode.) 
CM  [in] How to combine incoming data with the same global index. 
void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::doExport  (  const SrcDistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source, 
const Export< LocalOrdinal, GlobalOrdinal, Node > &  exporter,  
CombineMode  CM  
)  [inherited] 
Export data into this object using an Export object ("forward mode").
The input DistObject is always the source of the data redistribution operation, and the *this
object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use this method with your precomputed Export object if you want to do an Export, else use doImport() with a precomputed Import object.
source  [in] The "source" object for redistribution. 
exporter  [in] Precomputed data redistribution plan. Its source Map must be the same as the input DistObject's Map, and its target Map must be the same as this->getMap(). 
CM  [in] How to combine incoming data with the same global index. 
void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::doExport  (  const SrcDistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  source, 
const Import< LocalOrdinal, GlobalOrdinal, Node > &  importer,  
CombineMode  CM  
)  [inherited] 
Export data into this object using an Import object ("reverse mode").
The input DistObject is always the source of the data redistribution operation, and the *this
object is always the target.
If you don't know the difference between forward and reverse mode, then you probably want forward mode. Use the version of doExport() that takes a precomputed Export object in that case.
source  [in] The "source" object for redistribution. 
importer  [in] Precomputed data redistribution plan. Its target Map must be the same as the input DistObject's Map, and its source Map must be the same as this->getMap(). (Note the difference from forward mode.) 
CM  [in] How to combine incoming data with the same global index. 
bool Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::isDistributed  (  )  const [inherited] 
Whether this is a globally distributed object.
For a definition of "globally distributed" (and its opposite, "locally replicated"), see the documentation of Map's isDistributed() method.
virtual Teuchos::RCP<const Map<LocalOrdinal,GlobalOrdinal,Node> > Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::getMap  (  )  const [inline, virtual, inherited] 
The Map describing the parallel distribution of this object.
Note that some Tpetra objects might be distributed using multiple Map objects. For example, CrsMatrix has both a row Map and a column Map. It is up to the subclass to decide which Map to use when invoking the DistObject constructor.
Definition at line 316 of file Tpetra_DistObject_decl.hpp.
void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::print  (  std::ostream &  os  )  const [inherited] 
Print this object to the given output stream.
We generally assume that all MPI processes can print to the given stream.
virtual void Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::doTransfer  (  const SrcDistObject< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  src, 
CombineMode  CM,  
size_t  numSameIDs,  
const Teuchos::ArrayView< const LocalOrdinal > &  permuteToLIDs,  
const Teuchos::ArrayView< const LocalOrdinal > &  permuteFromLIDs,  
const Teuchos::ArrayView< const LocalOrdinal > &  remoteLIDs,  
const Teuchos::ArrayView< const LocalOrdinal > &  exportLIDs,  
Distributor &  distor,  
ReverseOption  revOp  
)  [protected, virtual, inherited] 
Redistribute data across memory images.
src  [in] The source object, to redistribute into the target object, which is *this object. 
CM  [in] The combine mode that describes how to combine values that map to the same global ID on the same process. 
permuteToLIDs  [in] See copyAndPermute(). 
permuteFromLIDs  [in] See copyAndPermute(). 
remoteLIDs  [in] List of entries (as local IDs) in the destination object to receive from other processes. 
exportLIDs  [in] See packAndPrepare(). 
distor  [in/out] The Distributor object that knows how to redistribute data. 
revOp  [in] Whether to do a forward or reverse mode redistribution. 
MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > createCopy  (  const MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node > &  src  )  [related] 
Return a deep copy of the MultiVector src.
Regarding Copy or View semantics: the returned MultiVector is always a deep copy of src, but always has the same semantics as src. That is, if src has View semantics, then the returned MultiVector has View semantics, and if src has Copy semantics, then the returned MultiVector has Copy semantics.
You may call src.getCopyOrView() to test the semantics of the input MultiVector src. For example, the following will never trigger an assert:
MultiVector<double> dst = createCopy (src);
assert (dst.getCopyOrView () == src.getCopyOrView ());
In the Kokkos refactor version of Tpetra, MultiVector always has View semantics. However, the above remarks still apply.
void deep_copy  (  MultiVector< DS, DL, DG, DN > &  dst, 
const MultiVector< SS, SL, SG, SN > &  src  
)  [related] 
Copy the contents of the MultiVector src into dst.
src must be compatible with the Map of dst. ("Copy the contents" means the same thing as "deep copy.") The two MultiVectors need not necessarily have the same template parameters, but the assignment of their entries must make sense. Furthermore, their Maps must be compatible, that is, the MultiVectors' local dimensions must be the same on all processes.
This method must always be called as a collective operation on all processes over which the multivector is distributed. This is because the method reserves the right to check for compatibility of the two Maps, at least in debug mode, and throw if they are not compatible.
KMV Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::lclMV_ [protected] 
The KokkosClassic::MultiVector containing the compute buffer of data.
Definition at line 1135 of file Tpetra_MultiVector_decl.hpp.
Array<size_t> Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::whichVectors_ [protected] 
Indices of columns this multivector is viewing.
If this array has nonzero size, then this multivector is a view of another multivector (the "original" multivector). In that case, whichVectors_ contains the indices of the columns of the original multivector. Furthermore, isConstantStride() returns false in this case.
If this array has zero size, then this multivector is not a view of any other multivector. Furthermore, the stride between columns of this multivector is a constant: thus, isConstantStride() returns true.
Definition at line 1149 of file Tpetra_MultiVector_decl.hpp.
bool Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::hasViewSemantics_ [protected] 
Whether this MultiVector has view semantics.
"View semantics" means that if this MultiVector is on the right side of an operator=, the left side gets a shallow copy, and acquires view semantics. The Kokkos refactor version of MultiVector only ever has view semantics. The "classic" version of MultiVector currently does not have view semantics by default, but this will change.
You can set this for now by calling one of the constructors that accepts a Teuchos::DataAccess enum value.
Definition at line 1162 of file Tpetra_MultiVector_decl.hpp.
ArrayRCP<Scalar> Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::ncview_ [mutable, protected] 
Nonconst host view created in createViewsNonConst().
Definition at line 1309 of file Tpetra_MultiVector_decl.hpp.
ArrayRCP<const Scalar> Tpetra::MultiVector< Scalar, LocalOrdinal, GlobalOrdinal, Node >::cview_ [mutable, protected] 
Const host view created in createViews().
Definition at line 1312 of file Tpetra_MultiVector_decl.hpp.
Teuchos::RCP<const Map<LocalOrdinal,GlobalOrdinal,Node> > Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::map_ [protected, inherited] 
The Map over which this object is distributed.
Definition at line 611 of file Tpetra_DistObject_decl.hpp.
Teuchos::Array<Scalar > Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::imports_ [protected, inherited] 
Buffer into which packed data are imported (received from other processes).
Definition at line 615 of file Tpetra_DistObject_decl.hpp.
Teuchos::Array<size_t> Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::numImportPacketsPerLID_ [protected, inherited] 
Number of packets to receive for each receive operation.
This array is used in Distributor::doPosts() (and doReversePosts()) when starting the ireceive operation.
This may be ignored in doTransfer() if constantNumPackets is nonzero, indicating a constant number of packets per LID. (For example, MultiVector sets the constantNumPackets output argument of packAndPrepare() to the number of columns in the multivector.)
Definition at line 627 of file Tpetra_DistObject_decl.hpp.
Teuchos::Array<Scalar > Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::exports_ [protected, inherited] 
Buffer from which packed data are exported (sent to other processes).
Definition at line 630 of file Tpetra_DistObject_decl.hpp.
Teuchos::Array<size_t> Tpetra::DistObject< Scalar , LocalOrdinal, GlobalOrdinal, Node >::numExportPacketsPerLID_ [protected, inherited] 
Number of packets to send for each send operation.
This array is used in Distributor::doPosts() (and doReversePosts()) for preparing for the send operation.
This may be ignored in doTransfer() if constantNumPackets is nonzero, indicating a constant number of packets per LID. (For example, MultiVector sets the constantNumPackets output argument of packAndPrepare() to the number of columns in the multivector.)
Definition at line 642 of file Tpetra_DistObject_decl.hpp.