Introduction
Meshes and Geometry
    PAMGEN: Inline Meshing
    phdMesh: Unstructured Mesh Database
    ABMesh: Array-Based Mesh Database
    TUCASA: Parallel Mesh File Interface
Load Balancing Capabilities
    Isorropia: Matrix Partitioning, Ordering and Coloring
    Zoltan: Dynamic Load Balancing, Partitioning, Ordering and Coloring
Trilinos' initial efforts in geometry and meshing capabilities will address three key components of mesh use in applications. Inline meshing (PAMGEN) allows simple geometries to be meshed on the fly within a parallel application, rather than requiring mesh generation as a file-based preprocessing step. More complex meshes stored in files must be efficiently read in parallel and initially distributed to processors (TUCASA). And databases that efficiently store and manage mesh data greatly simplify the bookkeeping burden of parallel applications (phdMesh).
phdMesh: Unstructured Mesh Database
Point of Contact: H.C. Edwards (SNL)
Status: phdMesh was released in Trilinos 9.0 in September 2008.

Multiphysics parallel applications with mixed discretization schemes, adaptive unstructured meshes, and parallel distributed data sets have inherent complexity that must be managed. phdMesh is a compact, flexible software component designed to manage parallel, heterogeneous, and dynamic unstructured meshes. phdMesh was developed as part of the Mantevo project as a mini-application that approximates and assesses the performance of much larger mesh-based applications. The formal mesh model in phdMesh accommodates problem specifications through application-defined parts and fields; heterogeneous discretizations are accommodated through application-defined entities and connections. Computational efficiency is achieved by partitioning data into homogeneous kernels that may be operated on through a contiguous block of memory.
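The kernel-partitioning idea can be sketched in a few lines: entities that share the same part membership and element topology are grouped together, so each group's field data can be stored and traversed as one contiguous block. This is a conceptual sketch in Python, not phdMesh's C++ interface; all names here are illustrative.

```python
from collections import defaultdict

def bucket_into_kernels(entities):
    """Group mesh entities into homogeneous 'kernels'.

    Each entity is a dict with a 'topology' name and a set of 'parts'
    it belongs to.  Entities with identical (parts, topology) keys land
    in one kernel, so per-kernel field data can be laid out and operated
    on as a contiguous block of memory.
    """
    kernels = defaultdict(list)
    for ent in entities:
        key = (frozenset(ent["parts"]), ent["topology"])
        kernels[key].append(ent["id"])
    return dict(kernels)

entities = [
    {"id": 0, "topology": "hex8", "parts": {"block_1"}},
    {"id": 1, "topology": "tet4", "parts": {"block_2"}},
    {"id": 2, "topology": "hex8", "parts": {"block_1"}},
]
# Two kernels: both hex8/block_1 elements together, tet4/block_2 separate.
kernels = bucket_into_kernels(entities)
```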


ABMesh: Array-Based Mesh Database
Point of Contact: R. Drake (SNL)
Status: ABMesh will be included in Trilinos 10.0 in September 2009.

ABMesh provides a mesh database with efficient parallel array-based data structures compatible with many mesh file formats (e.g., Exodus, Nemesis). ABMesh supports a large number of element types in two and three dimensions, element blocking for multi-material simulations, and nodal and elemental boundary conditions. It will provide convenient native interfaces as well as ITAPS-compatible interfaces for greater interoperability within applications.
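The array-based layout common to Exodus-style databases stores node coordinates in one flat array and, per element block, a flat connectivity array indexed by elements-per-block and nodes-per-element. A minimal Python sketch of that layout; the class and field names are illustrative assumptions, not ABMesh's actual interfaces.

```python
class ArrayMesh:
    """Minimal array-based mesh: flat coordinate and connectivity arrays."""

    def __init__(self, coords, blocks):
        # coords: list of (x, y) or (x, y, z) node coordinates.
        self.coords = coords
        # blocks: {block_name: (nodes_per_elem, flat_connectivity_list)},
        # one entry per element block (e.g., per material).
        self.blocks = blocks

    def element(self, block, i):
        """Return the node indices of element i within a block."""
        npe, conn = self.blocks[block]
        return conn[i * npe:(i + 1) * npe]

# Two quad elements sharing an edge, stored in a single element block.
mesh = ArrayMesh(
    coords=[(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)],
    blocks={"block_1": (4, [0, 1, 4, 3, 1, 2, 5, 4])},
)
```

Because connectivity is one flat array per block, the whole block can be read or written with a single bulk file operation, which is what makes this layout a good match for parallel Exodus/Nemesis I/O.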

TUCASA: Parallel Mesh File Interface
Point of Contact: R. Drake (SNL)
Status: TUCASA will be included in Trilinos 10.0 in September 2009.

The capabilities to be provided in Trilinos include matrix partitioning for row-based, column-based, and nonzero-based distributions of matrix data (Isorropia), as well as general partitioning and repartitioning capabilities for a wide variety of data, including mesh entities (Zoltan).
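For a row-based distribution, balancing work means balancing nonzeros rather than row counts, since sparse matrix-vector work is proportional to nonzeros per row. The idea can be sketched with a greedy contiguous splitter; this is a self-contained illustration of the concept, not Isorropia's actual algorithm or API.

```python
def partition_rows(row_nnz, num_procs):
    """Assign contiguous row ranges to processors, balancing nonzeros.

    row_nnz[i] is the number of nonzeros in row i.  Rows are walked in
    order and a range is closed whenever its running nonzero count
    reaches the ideal per-processor share.  Returns a list of
    (first_row, last_row_exclusive) pairs, one per processor.
    """
    target = sum(row_nnz) / num_procs
    ranges, start, acc, used = [], 0, 0, 0
    for i, nnz in enumerate(row_nnz):
        acc += nnz
        # Close the range once it has its share, keeping at least one
        # row in reserve for each remaining processor.
        left = num_procs - used - 1
        if acc >= target and len(row_nnz) - (i + 1) >= left and used < num_procs - 1:
            ranges.append((start, i + 1))
            start, acc, used = i + 1, 0, used + 1
    ranges.append((start, len(row_nnz)))
    return ranges

# A naive 3/3 row split would put loads of 4 vs 20 nonzeros on the two
# processors; nonzero balancing moves the cut to give 14 vs 10.
parts = partition_rows([1, 1, 1, 1, 10, 10], 2)
```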
Isorropia: Matrix Partitioning, Ordering and Coloring
Point of Contact: E. Boman (SNL)
Isorropia Page: http://trilinos.sandia.gov/packages/isorropia/
Status: Isorropia was enhanced and released in Trilinos v9.0.

Isorropia is a repartitioning/rebalancing package that redistributes matrices and matrix graphs in a parallel execution setting to enable more efficient matrix computations. Isorropia is the package to use for distributing Epetra data structures. Through an interface to the Zoltan library, it computes parallel matrix distributions that have balanced computational work and low interprocessor communication costs for common matrix operations. It also creates Epetra maps for redistributing matrices according to the computed data distributions, and migrates matrix data to the new distribution. Current development efforts include matrix ordering and two-dimensional matrix partitioning interfaces.

Isorropia also contains interfaces to Zoltan's parallel coloring and matrix ordering capabilities. Parallel matrix ordering can reduce matrix fill during direct factorizations; the interface provides access to both the matrix permutation vector and the separator trees used in reordering. Parallel coloring is an important capability for some preconditioners. Colors are assigned to matrix rows depending on the connectivity of rows through nonzero entries in the matrix. The interface provides both distance-one and distance-two coloring. See below for more details on coloring and ordering.
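The map-then-migrate step can be pictured concretely: an Epetra-style map is simply an assignment of global row IDs to owning processors, and migration moves each row's data from its old owner to its new one. The sketch below simulates all ranks inside one process; the names are illustrative and this is not Isorropia's C++ interface.

```python
def migrate(rows_by_rank, new_map):
    """Move rows to the owners prescribed by a new map.

    rows_by_rank: {rank: {global_row_id: row_data}} -- current layout.
    new_map:      {global_row_id: new_owner_rank}   -- the new map.
    Returns the new {rank: {global_row_id: row_data}} layout.
    """
    new_layout = {rank: {} for rank in rows_by_rank}
    for rank, rows in rows_by_rank.items():
        for gid, data in rows.items():
            new_layout[new_map[gid]][gid] = data
    return new_layout

# Two ranks, four matrix rows; the partitioner decided row 1 should
# move from rank 0 to rank 1.
old = {0: {0: "row0", 1: "row1"}, 1: {2: "row2", 3: "row3"}}
new = migrate(old, {0: 0, 1: 1, 2: 1, 3: 1})
```

In the real library the per-row data are sparse-row entries and the transfers are interprocessor messages, but the bookkeeping is the same: old map, new map, and a data movement driven by the difference between them.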

Zoltan: Dynamic Load Balancing, Partitioning, Ordering and Coloring
Point of Contact: K. Devine (SNL)
Zoltan Page: http://www.cs.sandia.gov/Zoltan
Status: Zoltan is currently available for download from the Zoltan home page. It was also released in Trilinos 9.0 in September 2008.

The Zoltan library includes a suite of partitioning and repartitioning algorithms for general applications. It can be used for distributing matrices, meshes, particles, agents, or any objects within a simulation. The partitioning and repartitioning algorithms include geometric methods (useful for particle- and mesh-based applications) and connectivity-based methods (such as graph and hypergraph partitioning). Three interfaces to Zoltan exist (in order of decreasing maturity): the native Zoltan interface, the Isorropia Epetra matrix interface, and the ITAPS mesh interface. The native Zoltan interface is data-structure neutral, so an application does not have to build or use specific data structures to use Zoltan. This design allows Zoltan to be used by a wide range of applications. Zoltan is widely used in the ASC community and is a key component of the SciDAC CSCAPES and ITAPS projects. Current research efforts include scalable partitioning strategies for multicore architectures, matrix ordering algorithms, and two-dimensional matrix partitioning.

Using essentially the same interfaces, Zoltan also enables parallel matrix ordering and coloring. Zoltan provides consistent interfaces to the graph ordering algorithms in PT-Scotch and ParMETIS; using the same interface, users can switch between ordering methods to compare their effectiveness. Zoltan also provides native parallel distance-one and distance-two graph coloring using the same graph-based interface. These algorithms are state-of-the-art distributed-memory implementations, as described in the JPDC article "A Framework for Scalable Parallel Greedy Coloring on Distributed Memory Computers."
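The greedy coloring at the heart of these methods is easy to state sequentially: visit vertices in some order and give each the smallest color not used by an already-colored neighbor (distance-two coloring instead forbids colors within two hops). Here is a sequential sketch of distance-one greedy coloring; the parallel framework in the cited article additionally partitions vertices across processors and resolves boundary conflicts in rounds, which this sketch omits.

```python
def greedy_color(adj):
    """Greedy distance-one graph coloring.

    adj: {vertex: iterable of neighboring vertices}.
    Returns {vertex: color} such that adjacent vertices never share a color.
    """
    color = {}
    for v in adj:
        # Smallest non-negative color unused by any colored neighbor.
        forbidden = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color

# A 4-cycle: greedy coloring finds a valid 2-coloring.
coloring = greedy_color({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]})
```

In the preconditioning context, vertices are matrix rows and edges are nonzeros; rows sharing a color are structurally independent, so for example all rows of one color can be probed or updated simultaneously.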
