Mat

PETSc matrices (Mat) provide sparse and dense matrix storage with efficient parallel operations. They are essential for discretizing PDEs and setting up linear/nonlinear systems.

Overview

PETSc matrices support:

  • Sparse formats: AIJ (CSR), BAIJ (block CSR), and more
  • Dense format: For small matrices or dense operations
  • Parallel distribution: Row-based distribution across MPI processes
  • Matrix-free operations: Via MatShell for custom operators

Creating Matrices

Sparse Matrices (AIJ/CSR Format)

# Create sparse matrix with estimated non-zeros per row
A = MatSeqAIJ(petsclib, num_rows, num_cols, nnz_per_row)

# From Julia SparseMatrixCSC
using SparseArrays, MPI
S = sprand(100, 100, 0.1)
A = MatCreateSeqAIJ(petsclib, MPI.COMM_SELF, S)

# With varying non-zeros per row
nnz = PetscInt[5, 3, 4, ...]  # One value per row
A = MatSeqAIJ(petsclib, num_rows, num_cols, nnz)

Dense Matrices

# Wrap a Julia matrix (no copy)
julia_mat = rand(10, 10)
A = MatSeqDense(petsclib, julia_mat)

From DM Objects

# Create matrix with sparsity pattern from DM
A = DMCreateMatrix(dm)

Matrix Shell (Matrix-Free)

# Create a shell matrix whose action is a custom mult function
A = MatShell(petsclib, mult_function, comm, local_rows, local_cols)

Setting Values

# Set individual element (0-based internally, 1-based in Julia)
A[i, j] = value

# Use setvalues! for efficient batch insertion
setvalues!(A, rows, cols, values, INSERT_VALUES)

# For stencil-based assembly (vectors of MatStencil row/column indices)
setvalues!(A, stencil_rows, stencil_cols, values, INSERT_VALUES)
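
A sketch of batch insertion with explicit base-0 index vectors (assumes PetscInt and PetscScalar are the scalar/integer types of the active petsclib):

rows = PetscInt[0, 1]                     # base-0 row indices
cols = PetscInt[0, 1]                     # base-0 column indices
vals = PetscScalar[4.0, -1.0, -1.0, 4.0]  # 2 × 2 block, laid out row by row
setvalues!(A, rows, cols, vals, INSERT_VALUES)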

Assembly

Matrices must be assembled after setting values:

# Set all values first
A[1, 1] = 2.0
A[1, 2] = -1.0
# ...

# Then assemble
assemble!(A)
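
Putting these pieces together, a minimal sketch that assembles a 1-D Laplacian (assumes petsclib is an initialized PETSc library handle):

n = 5
A = MatSeqAIJ(petsclib, n, n, 3)   # preallocate up to 3 non-zeros per row
for i in 1:n
    A[i, i] = 2.0                  # diagonal
    i > 1 && (A[i, i - 1] = -1.0)  # left neighbor
    i < n && (A[i, i + 1] = -1.0)  # right neighbor
end
assemble!(A)                       # make the matrix usable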

Common Operations

size(A)              # Get (rows, cols)
ownershiprange(A)    # Get rows owned by this process
setup!(A)            # Complete matrix setup
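
In parallel runs each process should only set the rows it owns; a sketch using the operations above (assumes A is a distributed matrix):

for i in ownershiprange(A)  # inclusive, base-1 range of locally owned rows
    A[i, i] = 1.0
end
assemble!(A)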

Functions

PETSc.MatPtr - Type
MatPtr(petsclib, mat::CMat)

Container type for a PETSc Mat that is just a raw pointer.

source
PETSc.MatShell - Type
MatShell(
    petsclib::PetscLib,
    obj::OType,
    comm::MPI.Comm,
    local_rows,
    local_cols,
    global_rows = LibPETSc.PETSC_DECIDE,
    global_cols = LibPETSc.PETSC_DECIDE,
)

Create a global_rows × global_cols PETSc shell matrix object wrapping obj, with local size local_rows × local_cols.

The obj is registered as the MATOP_MULT operation: if obj is a Function, the multiply action is obj(y, x); otherwise mul!(y, obj, x) is called.

If comm == MPI.COMM_SELF then the garbage collector can finalize the object; otherwise the user is responsible for calling destroy.
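
A minimal matrix-free sketch based on the signature above (it assumes the vectors handed to the multiply action support in-place broadcast; names are illustrative):

# Multiply action: y = 2x, written in the obj(y, x) form
scale2!(y, x) = (y .= 2 .* x)
A = MatShell(petsclib, scale2!, MPI.COMM_SELF, 10, 10)
# mul!(y, A, x) now invokes scale2!(y, x)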

source
LinearAlgebra.mul! - Method
mul!(y::PetscVec{PetscLib}, M::AbstractPetscMat{PetscLib}, x::PetscVec{PetscLib})

Computes y = M*x

source
PETSc.MatCreateSeqAIJ - Method
M::PetscMat = MatCreateSeqAIJ(petsclib, comm, S)

Creates a PetscMat object from a Julia SparseMatrixCSC S in sequential AIJ format.

source
PETSc.MatSeqAIJ - Method
mat = MatSeqAIJ(petsclib, num_rows, num_cols, nonzeros)

Create a PETSc serial sparse matrix of size num_rows × num_cols using the AIJ format (also known as compressed sparse row, or CSR), with the number of non-zeros per row given by nonzeros.

If nonzeros is an Integer, the same number of non-zeros is used for each row; if nonzeros is a Vector{PetscInt}, one value must be specified for each row.

Memory allocation is handled by PETSc and garbage collection can be used.
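
For example, a sketch with a per-row count (PetscInt taken from the active petsclib):

nnz = PetscInt[1, 2, 2, 2, 1]       # one value per row
A = MatSeqAIJ(petsclib, 5, 5, nnz)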

source
PETSc.MatSeqAIJWithArrays - Method
B = MatSeqAIJWithArrays(petsclib, comm, A::SparseMatrixCSC)

Create a PETSc SeqAIJ matrix from a Julia SparseMatrixCSC. Since Julia stores matrices in CSC format and PETSc AIJ uses CSR, the data is converted to CSR during construction.

Arguments

  • petsclib: PETSc library instance
  • comm: MPI communicator
  • A: Julia sparse matrix in CSC format

Returns

  • B: PETSc matrix in AIJ (CSR) format
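
A short usage sketch (assumes an initialized petsclib):

using SparseArrays, MPI
S = sparse([1, 2, 3], [1, 2, 3], [1.0, 2.0, 3.0])  # 3 × 3 diagonal matrix
B = MatSeqAIJWithArrays(petsclib, MPI.COMM_SELF, S)
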
source
PETSc.destroy - Method
destroy(m::AbstractPetscMat)

Destroy a Mat (matrix) object and release associated resources.

This function is typically called automatically via finalizers when the object is garbage collected, but can be called explicitly to free resources immediately.

source
PETSc.ownershiprange - Method
ownershiprange(mat::AbstractMat, [base_one = true])

The range of row indices owned by this processor, assuming that the mat is laid out with the first n1 rows on the first processor, next n2 rows on the second, etc. For certain parallel layouts this range may not be well defined.

If the optional argument base_one == true then base-1 indexing is used; otherwise base-0 indexing is used.

Note

Unlike the C function, the range returned is inclusive (idx_first:idx_last).
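
For example (a sketch):

rng1 = ownershiprange(A)           # base-1, inclusive
rng0 = ownershiprange(A, false)    # base-0, inclusive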

source
PETSc.setvalues! - Method
setvalues!(
    M::AbstractMat{PetscLib},
    row0idxs::Vector{PetscInt},
    col0idxs::Vector{PetscInt},
    rowvals::Array{PetscScalar},
    insertmode::InsertMode = INSERT_VALUES;
    num_rows = length(row0idxs),
    num_cols = length(col0idxs)
)

Set values of the matrix M with base-0 row and column indices row0idxs and col0idxs, inserting the values rowvals.

If the keyword argument num_rows or num_cols is specified, only the first num_rows * num_cols values of rowvals are used.
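
For example, a sketch using only the leading entries of rowvals (sizes are illustrative):

# Insert a 1 × 2 block: only the first 1 * 2 = 2 values of vals are used
setvalues!(A, rows, cols, vals, INSERT_VALUES; num_rows = 1, num_cols = 2)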

source
PETSc.setvalues! - Method
setvalues!(
    M::AbstractPetscMat{PetscLib},
    row0idxs::Vector{MatStencil},
    col0idxs::Vector{MatStencil},
    rowvals::Array{PetscScalar},
    insertmode::InsertMode = INSERT_VALUES;
    num_rows = length(row0idxs),
    num_cols = length(col0idxs)
)

Set values of the matrix M using the MatStencil row and column indices row0idxs and col0idxs, inserting the values rowvals.

If the keyword argument num_rows or num_cols is specified, only the first num_rows * num_cols values of rowvals are used.

source