Trilinos/PETSc Interface

Introduction

The Epetra_PETScAIJMatrix class is a lightweight wrapper class for encapsulating PETSc serial and parallel AIJ matrices. Its intended audience is PETSc users who would like to build and apply Trilinos preconditioners. This class derives from the Epetra_RowMatrix class.

Epetra_PETScAIJMatrix implements as many of its access and apply functions as possible as callbacks to the underlying PETSc matrix. In particular, fetching individual rows and matrix-vector multiplication are callbacks to PETSc's native implementations.
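
For instance, once a PETSc matrix has been wrapped, it can be passed to any Trilinos code that expects an Epetra_RowMatrix. The fragment below is a minimal sketch (the vector setup and names are illustrative, not taken from the example later in this page); it assumes A is an already-assembled PETSc AIJ matrix:

  #include "EpetraExt_PETScAIJMatrix.h"
  #include "Epetra_Vector.h"

  Epetra_PETScAIJMatrix epA(A);   /* lightweight wrap of the PETSc Mat A */

  /* Vectors laid out compatibly with the wrapped matrix. */
  Epetra_Vector x(epA.OperatorDomainMap());
  Epetra_Vector y(epA.OperatorRangeMap());
  x.PutScalar(1.0);

  /* y = A*x; internally this is a callback to PETSc's native multiply. */
  epA.Multiply(false, x, y);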

Requirements

This class is available in Trilinos release 9.0. It has been tested in serial and parallel with PETSc version 2.3.3, built in static library mode.

Configuring and building the Trilinos libraries

We assume that you have already built PETSc in your home area.

In order to use this feature, you must specify PETSC_DIR, PETSC_ARCH, and "--with-petsc" when configuring Trilinos. You may optionally supply PETSC_LIB, which contains all link-time dependencies for PETSc.

Here are two ways of doing this. We'll assume that the Trilinos source directory is ~/Trilinos, the Trilinos build directory is ~/TrilinosBuild, and that your MPI compilers are in /usr/local/mpich-1.2.7p1.

setenv PETSC_DIR /home/joeblow/petsc-2.3.3-p12
setenv PETSC_ARCH linux-gnu-c-debug
cd ~/TrilinosBuild
../Trilinos/configure --with-mpi-compilers=/usr/local/mpich-1.2.7p1/bin --with-petsc \
                      --disable-default-packages --enable-amesos --enable-aztecoo --enable-epetra \
                      --enable-epetraext --enable-ifpack --enable-ml --enable-teuchos

or

cd ~/TrilinosBuild
../Trilinos/configure PETSC_DIR=/home/joeblow/petsc-2.3.3-p12 PETSC_ARCH=linux-gnu-c-debug \
                      --with-mpi-compilers=/usr/local/mpich-1.2.7p1/bin --with-petsc \
                      --disable-default-packages --enable-amesos --enable-aztecoo --enable-epetra \
                      --enable-epetraext --enable-ifpack --enable-ml --enable-teuchos

The following Trilinos packages must be enabled: Epetra (basic linear algebra), EpetraExt (contains the PETSc interface), and Teuchos (parameter lists, smart pointers, and other useful utility classes). We also strongly suggest enabling ML (algebraic multigrid), Ifpack (SOR, incomplete factorizations, and domain decomposition methods), and Amesos (sparse direct solvers) for a richer preconditioner set.

Linking a PETSc application to the Trilinos libraries

We expect most applications to link to Trilinos as a set of third-party libraries. To this end, the build directory of each Trilinos package contains a stub file called "Makefile.export.XX", where XX is the package name. The stub defines variables that contain all of the include and link dependencies for that particular package. For example, "Makefile.export.epetraext" defines EPETRAEXT_INCLUDES and EPETRAEXT_LIBS, which contain EpetraExt's include and link dependencies, respectively. An application developer simply includes this stub in her Makefile and uses the variables on the appropriate compile and link lines.

The example in the following section can be built as an "external" application with this simple Makefile, assuming that Trilinos was built with ML enabled:

include /home/joeblow/TrilinosBuild/packages/ml/Makefile.export.ml
all:
    mpicxx -c -DHAVE_CONFIG_H ${ML_INCLUDES} petsc.cpp
    mpicxx -o petsc.exe petsc.o ${ML_LIBS}

Example: Solving a PETSc linear system with PETSc's CG preconditioned by Trilinos' algebraic multigrid

We now dissect the example Trilinos/packages/epetraext/examples/petsc/petsc.cpp. In this example, a PETSc AIJ matrix corresponding to the 2D Laplacian and a right-hand side constructed so that the exact solution is the vector of all ones are created. The resulting linear system is solved twice: the first time with CG from AztecOO, the second time with CG from PETSc. In both cases, the preconditioner is algebraic multigrid (AMG) from the Trilinos package ML.

The include file "ml_config.h" contains definitions for all the appropriate Trilinos preprocessor macros. Of the following include files, "EpetraExt_PETScAIJMatrix.h" and "ml_MultiLevelPreconditioner.h" are required to wrap the PETSc matrix and create the ML preconditioner.

#include "ml_config.h"
#if defined(HAVE_PETSC) && defined(HAVE_ML_EPETRA) && defined(HAVE_ML_TEUCHOS) && defined(HAVE_ML_AZTECOO)
#include "petscksp.h"
#include "EpetraExt_PETScAIJMatrix.h"
#include "Epetra_Vector.h"
#include "Epetra_Map.h"
#include "ml_MultiLevelPreconditioner.h"
#include "AztecOO.h"
#include "Epetra_LinearProblem.h"

Here, we have omitted the details of the AIJ matrix construction. After the PETSc matrix is constructed, it is wrapped as an Epetra_PETScAIJMatrix:

  Epetra_PETScAIJMatrix epA(A);

Note that this is a lightweight wrapper -- no matrix data is copied!
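
For readers unfamiliar with PETSc, the omitted construction follows the usual assembly pattern for a distributed AIJ matrix. The sketch below is purely illustrative -- it is not the code in petsc.cpp, and the grid dimension n is an arbitrary choice -- but it builds a 5-point 2D Laplacian in the same spirit:

  Mat       A;
  PetscInt  n = 32, N = n*n, Istart, Iend;
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
  for (PetscInt I=Istart; I<Iend; I++) {
    PetscInt    i = I/n, j = I%n, J;
    PetscScalar v = -1.0;
    /* off-diagonal (neighbor) entries of the 5-point stencil */
    if (i>0)   { J = I-n; ierr = MatSetValues(A,1,&I,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr); }
    if (i<n-1) { J = I+n; ierr = MatSetValues(A,1,&I,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr); }
    if (j>0)   { J = I-1; ierr = MatSetValues(A,1,&I,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr); }
    if (j<n-1) { J = I+1; ierr = MatSetValues(A,1,&I,1,&J,&v,INSERT_VALUES);CHKERRQ(ierr); }
    /* diagonal entry */
    v = 4.0; ierr = MatSetValues(A,1,&I,1,&I,&v,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);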

The parameter list for the multigrid preconditioner is created and populated. (For more information on multigrid options, please see the ML user's guide.)

  Teuchos::ParameterList mlList;
  /* Set recommended defaults for Poisson-like problems. */
  ML_Epetra::SetDefaults("SA",mlList);
  /* Specify how much information ML prints to screen.
     0 is the minimum (no output), 10 is the maximum. */
  mlList.set("ML output",10);

In this case, we are going to use symmetric Gauss-Seidel as the fine-grid smoother. Since this requires access to individual matrix rows, it is much more efficient to use the PETSc implementation, which is optimized for the underlying AIJ data structure. (There is no performance penalty associated with Trilinos smoothers whose kernel is a matrix-vector multiply, as the multiply is simply a callback to PETSc's native implementation.) To do this, the KSP object kspSmoother is created and populated:

    ierr = KSPCreate(A->comm,&kspSmoother);CHKERRQ(ierr);
    ierr = KSPSetOperators(kspSmoother,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(kspSmoother,KSPRICHARDSON);CHKERRQ(ierr);
    /* maxits = 1: one Richardson iteration per smoother application */
    ierr = KSPSetTolerances(kspSmoother,1e-12,1e-50,1e7,1);CHKERRQ(ierr);
    ierr = KSPSetInitialGuessNonzero(kspSmoother,PETSC_TRUE);CHKERRQ(ierr);
    ierr = KSPGetPC(kspSmoother,&pc);CHKERRQ(ierr);
    /* each iteration is preconditioned by one processor-local symmetric SOR
       (symmetric Gauss-Seidel) sweep */
    ierr = PCSetType(pc, PCSOR);CHKERRQ(ierr);
    ierr = PCSORSetSymmetric(pc,SOR_LOCAL_SYMMETRIC_SWEEP);CHKERRQ(ierr);
    ierr = PCSetFromOptions(pc);CHKERRQ(ierr);
    ierr = KSPSetUp(kspSmoother);CHKERRQ(ierr);

The fine-grid smoother (level 0) is set to be of PETSc type

    mlList.set("smoother: type (level 0)","petsc");

and kspSmoother is placed in the parameter list:

    mlList.set("smoother: petsc ksp (level 0)",(void*)kspSmoother);

Note that trying to set a PETSc smoother on any other level will generate an error. Also note that ML has many other smoothers available, e.g., polynomial and incomplete factorizations. Please see the ML User's Guide, available from the ML homepage or in Trilinos/packages/ml/doc/mlguide.pdf, for more information.

Finally, the ML AMG preconditioner is constructed.

  ML_Epetra::MultiLevelPreconditioner *Prec = new ML_Epetra::MultiLevelPreconditioner(epA,mlList);

The linear system is solved first using CG from the Trilinos package AztecOO.

  /* Zero the initial guess. */
  epu.PutScalar(0.0);
  Epetra_LinearProblem Problem(&epA, &epu, &epb);
  AztecOO solver(Problem);
  solver.SetAztecOption(AZ_solver, AZ_cg);
  /* Print residual information every iteration. */
  solver.SetAztecOption(AZ_output, 1);
  /* Base convergence on the unscaled residual norm. */
  solver.SetAztecOption(AZ_conv, AZ_noscaled);
  solver.SetPrecOperator(Prec);
  /* At most 30 iterations, residual tolerance 1e-12. */
  solver.Iterate(30, 1e-12);

The system is solved a second time using CG from PETSc. The AMG preconditioner Prec must be wrapped as a PETSc shell preconditioner

  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCSHELL);CHKERRQ(ierr);
  ierr = PCShellSetApply(pc,ShellApplyML);CHKERRQ(ierr);
  ierr = PCShellSetContext(pc,(void*)Prec);CHKERRQ(ierr);
  ierr = PCShellSetName(pc,"ML AMG");CHKERRQ(ierr);

where the apply method is given by

PetscErrorCode ShellApplyML(void *ctx,Vec x,Vec y)
{
  PetscErrorCode  ierr;
  ML_Epetra::MultiLevelPreconditioner *mlp = (ML_Epetra::MultiLevelPreconditioner*)ctx;

  /* Wrap x and y as Epetra_Vectors. */
  PetscScalar *xvals,*yvals;
  ierr = VecGetArray(x,&xvals);CHKERRQ(ierr);
  Epetra_Vector epx(View,mlp->OperatorDomainMap(),xvals);
  ierr = VecGetArray(y,&yvals);CHKERRQ(ierr);
  Epetra_Vector epy(View,mlp->OperatorRangeMap(),yvals);

  /* Apply ML. */
  mlp->ApplyInverse(epx,epy);
  
  /* Clean up and return. */
  ierr = VecRestoreArray(x,&xvals);CHKERRQ(ierr);
  ierr = VecRestoreArray(y,&yvals);CHKERRQ(ierr);
  return 0;
} /*ShellApplyML*/

Other preconditioners, such as those provided by Ifpack or AztecOO, can be wrapped in a similar fashion.
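
For instance, an Ifpack incomplete factorization could be used in place of ML through the same shell mechanism, because Ifpack preconditioners, like ML's, implement Epetra_Operator::ApplyInverse. The sketch below is illustrative only; the preconditioner type, parameter choices, and the function name ShellApplyEpetraOp are assumptions, not part of the example above:

  #include "Ifpack.h"

  /* Local (processor-based) ILU(0) preconditioner on the wrapped PETSc matrix. */
  Ifpack Factory;
  Ifpack_Preconditioner *IfPrec = Factory.Create("ILU", &epA, 0);
  Teuchos::ParameterList ifpackList;
  ifpackList.set("fact: level-of-fill", 0);
  IfPrec->SetParameters(ifpackList);
  IfPrec->Initialize();
  IfPrec->Compute();

  /* Hand it to PETSc exactly as before. */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCSHELL);CHKERRQ(ierr);
  ierr = PCShellSetApply(pc,ShellApplyEpetraOp);CHKERRQ(ierr);
  ierr = PCShellSetContext(pc,(void*)IfPrec);CHKERRQ(ierr);
  ierr = PCShellSetName(pc,"Ifpack ILU");CHKERRQ(ierr);

Here ShellApplyEpetraOp would be identical to ShellApplyML above, except that the context is cast to the common base class Epetra_Operator*, whose OperatorDomainMap, OperatorRangeMap, and ApplyInverse methods are all that the apply function uses.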

