# Topology

`MPI.Dims_create` — Function

`newdims = Dims_create(nnodes::Integer, dims)`

A convenience function for selecting a balanced Cartesian grid of a total of `nnodes` nodes, for example to use with `MPI.Cart_create`.

`dims` is an array or tuple of integers specifying the number of nodes in each dimension. The function returns an array `newdims` of the same length, such that `newdims[i] == dims[i]` if `dims[i]` is non-zero, `prod(newdims) == nnodes`, and the values of `newdims` are as close to each other as possible.

`nnodes` must be divisible by the product of the non-zero entries of `dims`.
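A minimal sketch of how this might be used (assuming MPI.jl is installed and `MPI.Init()` has been called; `Dims_create` is a local call, so a single process suffices — the grid shapes in the comments are typical results, not guaranteed output):

```julia
using MPI
MPI.Init()

# Let MPI choose both dimensions of a 2D grid for 12 nodes;
# zeros mean "free to choose".
dims = MPI.Dims_create(12, (0, 0))   # a balanced grid, e.g. [4, 3]

# Fix the first dimension to 2; only the second is chosen (12 / 2 = 6).
dims2 = MPI.Dims_create(12, (2, 0))
```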

`MPI.Cart_create` — Function

`comm_cart = Cart_create(comm::Comm, dims; periodic=map(_->false, dims), reorder=false)`

Create new MPI communicator with Cartesian topology information attached.

`dims` is an array or tuple of integers specifying the number of MPI processes in each coordinate direction, and `periodic` is an array or tuple of `Bool`s indicating the periodicity of each coordinate. `prod(dims)` must be less than or equal to the size of `comm`; if it is smaller, then some processes are returned a null communicator.

If `reorder == false`, then the rank of each process in the new group is identical to its rank in the old group; otherwise the function may reorder the processes.

See also `MPI.Dims_create`.
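A minimal usage sketch (assuming the script is launched under an MPI runner, e.g. `mpiexec -n 6 julia script.jl`):

```julia
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
nprocs = MPI.Comm_size(comm)

# Choose a balanced 2D process grid and attach it to a new communicator,
# periodic in the first dimension only.
dims = MPI.Dims_create(nprocs, (0, 0))
comm_cart = MPI.Cart_create(comm, dims; periodic=(true, false), reorder=false)
```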

`MPI.Cart_get` — Function

`dims, periods, coords = Cart_get(comm::Comm)`

Obtain information on the Cartesian topology of dimension `N` underlying the communicator `comm`. This is specified by two `Cint` arrays of `N` elements for the number of processes and periodicity properties along each Cartesian dimension. A third `Cint` array is returned, containing the Cartesian coordinates of the calling process.
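For example, the topology attached by `MPI.Cart_create` can be recovered later (a sketch, assuming an MPI launch as in the earlier examples):

```julia
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
dims = MPI.Dims_create(MPI.Comm_size(comm), (0, 0))
comm_cart = MPI.Cart_create(comm, dims)

# Recover the grid shape, periodicity flags, and this process's coordinates.
gdims, periods, coords = MPI.Cart_get(comm_cart)
```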

`MPI.Cart_coords` — Function

`coords = Cart_coords(comm::Comm, rank::Integer=Comm_rank(comm))`

Determine the coordinates of the process with rank `rank` in the Cartesian communicator `comm`. If no `rank` is provided, it returns the coordinates of the current process.

Returns an integer array of the 0-based coordinates. The inverse of `MPI.Cart_rank`.

`MPI.Cart_rank` — Function

`rank = Cart_rank(comm::Comm, coords)`

Determine the rank of a process in the communicator `comm` with Cartesian structure. The `coords` array specifies the 0-based Cartesian coordinates of the process. This is the inverse of `MPI.Cart_coords`.
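The two functions round-trip on any valid coordinate vector; a sketch (assumptions as in the earlier examples):

```julia
using MPI
MPI.Init()

comm_cart = MPI.Cart_create(MPI.COMM_WORLD,
                            MPI.Dims_create(MPI.Comm_size(MPI.COMM_WORLD), (0, 0)))

coords = MPI.Cart_coords(comm_cart)       # 0-based coordinates of this process
rank   = MPI.Cart_rank(comm_cart, coords) # maps back: rank == MPI.Comm_rank(comm_cart)
```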

`MPI.Cart_shift` — Function

`rank_source, rank_dest = Cart_shift(comm::Comm, direction::Integer, disp::Integer)`

Return the source and destination ranks associated with a shift along a given direction.
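This is typically used to find neighbors for a halo exchange. A sketch (assumptions as in the earlier examples; `direction` is taken to be 0-based, as in the C API):

```julia
using MPI
MPI.Init()

comm_cart = MPI.Cart_create(MPI.COMM_WORLD,
                            MPI.Dims_create(MPI.Comm_size(MPI.COMM_WORLD), (0, 0)))

# Neighbors one step away along the first dimension:
# rank_source would send to us with displacement +1, rank_dest is whom we send to.
rank_source, rank_dest = MPI.Cart_shift(comm_cart, 0, 1)
# On a non-periodic boundary the missing neighbor is MPI.PROC_NULL.
```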

`MPI.Cart_sub` — Function

`comm_sub = Cart_sub(comm::Comm, remain_dims)`

Create a lower-dimensional Cartesian communicator from an existing Cartesian topology.

`remain_dims` should be a boolean vector specifying the dimensions that should be kept in the generated subgrid.
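For instance, a 2D grid can be split into per-row communicators by keeping only the second dimension (a sketch; assumptions as in the earlier examples):

```julia
using MPI
MPI.Init()

comm_cart = MPI.Cart_create(MPI.COMM_WORLD,
                            MPI.Dims_create(MPI.Comm_size(MPI.COMM_WORLD), (0, 0)))

# Drop the first dimension, keep the second: each row of the process grid
# becomes its own 1D Cartesian communicator.
row_comm = MPI.Cart_sub(comm_cart, [false, true])
```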

`MPI.Cartdim_get` — Function

`ndims = Cartdim_get(comm::Comm)`

Return the number of dimensions of the Cartesian topology associated with the communicator `comm`.

`MPI.Dist_graph_create` — Function

`graph_comm = Dist_graph_create(comm::Comm, sources::Vector{Cint}, degrees::Vector{Cint}, destinations::Vector{Cint}; weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, reorder=false, infokws...)`

Create a new communicator from a given directed graph topology, described by incoming and outgoing edges on an existing communicator.

**Arguments**

- `comm::Comm`: The communicator on which the distributed graph topology should be induced.
- `sources::Vector{Cint}`: An array with the ranks for which this call will specify outgoing edges.
- `degrees::Vector{Cint}`: An array with the number of outgoing edges for each entry in the `sources` array.
- `destinations::Vector{Cint}`: An array whose length is the sum of the entries in the `degrees` array, describing the ranks towards which the edges point.
- `weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=MPI.UNWEIGHTED`: The edge weights of the specified edges.
- `reorder::Bool=false`: If set to `true`, the MPI implementation can reorder the source and destination indices.

**Example**

We can generate a ring graph `1 --> 2 --> ... --> N --> 1`, where `N` is the number of ranks in the communicator, as follows:

```
julia> comm = MPI.COMM_WORLD;
julia> rank = MPI.Comm_rank(comm);
julia> N = MPI.Comm_size(comm);
julia> sources = Cint[rank];
julia> degrees = Cint[1];
julia> destinations = Cint[mod(rank+1, N)];
julia> graph_comm = MPI.Dist_graph_create(comm, sources, degrees, destinations)
```

`MPI.Dist_graph_create_adjacent` — Function

`graph_comm = Dist_graph_create_adjacent(comm::Comm, sources::Vector{Cint}, destinations::Vector{Cint}; source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, reorder=false, infokws...)`

Create a new communicator from a given directed graph topology, described by local incoming and outgoing edges on an existing communicator.

**Arguments**

- `comm::Comm`: The communicator on which the distributed graph topology should be induced.
- `sources::Vector{Cint}`: The local, incoming edges on the rank of the calling process.
- `destinations::Vector{Cint}`: The local, outgoing edges on the rank of the calling process.
- `source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=MPI.UNWEIGHTED`: The edge weights of the local, incoming edges.
- `destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=MPI.UNWEIGHTED`: The edge weights of the local, outgoing edges.
- `reorder::Bool=false`: If set to `true`, the MPI implementation can reorder the source and destination indices.

**Example**

We can generate a ring graph `1 --> 2 --> ... --> N --> 1`, where `N` is the number of ranks in the communicator, as follows:

```
julia> comm = MPI.COMM_WORLD;
julia> rank = MPI.Comm_rank(comm);
julia> N = MPI.Comm_size(comm);
julia> sources = Cint[mod(rank-1, N)];
julia> destinations = Cint[mod(rank+1, N)];
julia> graph_comm = MPI.Dist_graph_create_adjacent(comm, sources, destinations);
```

`MPI.Dist_graph_neighbors_count` — Function

`indegree, outdegree, weighted = Dist_graph_neighbors_count(graph_comm::Comm)`

Return the number of incoming and outgoing edges for the calling process in a distributed graph topology, together with a flag indicating whether the distributed graph is weighted.

**Arguments**

- `graph_comm::Comm`: The communicator of the distributed graph topology.

**Example**

Let us assume the following graph `0 <--> 1 --> 2`, which has no weights on its edges. Then the process with rank 1 will obtain the following result from calling the function:

```
julia> Dist_graph_neighbors_count(graph_comm)
(1,2,false)
```

`MPI.Dist_graph_neighbors!` — Function

`Dist_graph_neighbors!(graph_comm::Comm, sources::Vector{Cint}, source_weights::Vector{Cint}, destinations::Vector{Cint}, destination_weights::Vector{Cint})`

Return the neighbors and edge weights of the calling process in a distributed graph topology.

**Arguments**

- `graph_comm::Comm`: The communicator of the distributed graph topology.
- `sources::Vector{Cint}`: A preallocated vector, which will be filled with the ranks of the processes whose edges point towards the calling process. The length is exactly the indegree returned by `MPI.Dist_graph_neighbors_count`.
- `source_weights::Vector{Cint}`: A preallocated vector, which will be filled with the weights associated with the edges pointing towards the calling process. The length is exactly the indegree returned by `MPI.Dist_graph_neighbors_count`.
- `destinations::Vector{Cint}`: A preallocated vector, which will be filled with the ranks of the processes towards which the edges of the calling process point. The length is exactly the outdegree returned by `MPI.Dist_graph_neighbors_count`.
- `destination_weights::Vector{Cint}`: A preallocated vector, which will be filled with the weights associated with the outgoing edges of the calling process. The length is exactly the outdegree returned by `MPI.Dist_graph_neighbors_count`.

**Example**

Let us assume the following graph `0 <-3-> 1 -4-> 2`. Then the process with rank 1 must preallocate a `sources` vector of length 1 and a `destinations` vector of length 2. The call will fill the vectors as follows:

```
julia> Dist_graph_neighbors!(graph_comm, sources, source_weights, destinations, destination_weights);
julia> sources
[0]
julia> source_weights
[3]
julia> destinations
[0,2]
julia> destination_weights
[3,4]
```

Note that the edge between ranks 0 and 1 can have a different weight depending on whether it is the incoming edge `(0,1)` or the outgoing one `(1,0)`.

`Dist_graph_neighbors!(graph_comm::Comm, sources::Vector{Cint}, destinations::Vector{Cint})`

Return the neighbors of the calling process in a distributed graph topology without edge weights.

**Arguments**

- `graph_comm::Comm`: The communicator of the distributed graph topology.
- `sources::Vector{Cint}`: A preallocated vector, which will be filled with the ranks of the processes whose edges point towards the calling process. The length is exactly the indegree returned by `MPI.Dist_graph_neighbors_count`.
- `destinations::Vector{Cint}`: A preallocated vector, which will be filled with the ranks of the processes towards which the edges of the calling process point. The length is exactly the outdegree returned by `MPI.Dist_graph_neighbors_count`.

**Example**

Let us assume the following graph `0 <--> 1 --> 2`. Then the process with rank 1 must preallocate a `sources` vector of length 1 and a `destinations` vector of length 2. The call will fill the vectors as follows:

```
julia> Dist_graph_neighbors!(graph_comm, sources, destinations);
julia> sources
[0]
julia> destinations
[0,2]
```
