Tpetra Matrix/Vector Services

Tpetra implements linear algebra objects, such as sparse matrices and dense vectors. Tpetra is "hybrid parallel," meaning that it uses at least two levels of parallelism: MPI (the Message Passing Interface) for distributed-memory parallelism between processes, and a shared-memory parallel programming model within each process.
We say "distributed linear algebra" because Tpetra objects may be distributed over one or more parallel MPI processes. The shared-memory programming models that Tpetra may use within a process include OpenMP, POSIX Threads, and CUDA (for NVIDIA GPUs).
Tpetra differs from Epetra, Trilinos' previous distributed linear algebra package, in the following ways:
Tpetra has native support for solving very large problems (with over 2 billion unknowns).
Tpetra lets you construct matrices and vectors with different kinds of data, such as floating-point types of different precision, or complex-valued types. Our goal is for Tpetra objects to be able to contain any type of data that implements a minimal compile-time interface. Epetra objects only support double-precision floating-point data (of type double).
All classes in Tpetra are templated, which allows the user to specify any data type they want. In some cases, the choice of data type enables increased functionality. For example, 64-bit ordinals allow problem sizes to break the two-billion-element barrier present in Epetra, whereas complex scalar types allow the native description and solution of complex-valued problems.
Most of the classes in Tpetra are templated on the data types that constitute the class. These are the following:
Scalar:
A Scalar is the type of the values in the sparse matrix or dense vector. This is the type most likely to be changed by many users. The most common use cases are float, double, std::complex<float>, and std::complex<double>. However, many other data types can be used, as long as they have specializations for Teuchos::ScalarTraits and Teuchos::SerializationTraits, and support the necessary arithmetic operations, such as addition, subtraction, division, and multiplication.
LocalOrdinal:
A LocalOrdinal is used to store indices representing local IDs. The standard use case, as well as the default for most classes, is int. Any signed built-in integer type may be used. The reason why local and global ordinals may have different types is efficiency. If the application allows it, using smaller local ordinals requires less storage and may improve performance of computational kernels such as sparse matrix-vector multiply.
GlobalOrdinal:
A GlobalOrdinal is used to store global indices and to describe global properties of a distributed object (e.g., the global number of entries in a sparse matrix, or the global number of rows in a vector). The GlobalOrdinal type therefore dictates the maximum size of a distributed object.
Node:
Computational classes in Tpetra are also templated on a Node type. This node fulfills the Kokkos Node API, and allows Tpetra objects to perform parallel computation on one of a number of shared-memory nodes, including multicore CPUs and GPUs. You can set the Node type to control which shared-memory parallel programming model Tpetra will use.
Most Tpetra users will want to learn about the following classes.
Parallel distributions: Tpetra::Map - Contains information used to distribute vectors, matrices, and other objects. This class is analogous to Epetra's Epetra_Map class.
Distributed dense vectors: Tpetra::MultiVector, Tpetra::Vector - Provide vector services such as scaling, norms, and dot products.
Distributed sparse matrices: Tpetra::RowMatrix, Tpetra::CrsMatrix - Tpetra::RowMatrix is an abstract interface for row-distributed sparse matrices. Tpetra::CrsMatrix is a specific implementation of Tpetra::RowMatrix, using the compressed row storage format. Both of these classes derive from Tpetra::Operator, the base class for linear operators.
Import/Export classes: Tpetra::Import and Tpetra::Export - Allow efficient transfer of objects built using one mapping to a new object with a new mapping. Support local and global permutations, overlapping Schwarz operations, and many other data movement operations.
Tpetra uses the Teuchos::Comm interface as its communicator abstraction. The default communicator wraps MPI_COMM_WORLD if building with MPI, and provides stub communication functionality if not building with MPI. Tpetra can be used mostly as a standalone package, with explicit dependencies on Teuchos and Kokkos. There are adapters allowing the use of Tpetra operators and multivectors in both the Belos linear solver package and the Anasazi eigensolver package.
Tpetra includes five lessons. The first shows how to initialize an application using Tpetra, with or without MPI. Following lessons demonstrate how to create and modify Tpetra data structures, and how to use Tpetra's abstractions to move data between different parallel distributions. The lessons include both sections meant for reading and example code that builds and runs. In fact, the code passes nightly tests on many different platforms.