Collective communication

Synchronization

MPI.Barrier - Function
Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.
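
For example, a minimal sketch assuming the usual MPI.Init/MPI.COMM_WORLD setup:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# No rank proceeds past the barrier until every rank has reached it.
println("rank $rank: before the barrier")
MPI.Barrier(comm)
println("rank $rank: after the barrier")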

MPI.Ibarrier - Function
Ibarrier(comm::Comm[, req::AbstractRequest = Request()])

Starts a non-blocking barrier synchronization on comm, returning immediately with a Request that completes once comm is synchronized.

If comm is an intracommunicator, then the request completes once all members of the group have called it.

If comm is an intercommunicator, then the request completes once all members of the other group have called it.
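
For example, a minimal sketch of the non-blocking pattern, assuming the request is completed with MPI.Wait:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

req = MPI.Ibarrier(comm)
acc = sum(abs2, rand(1000))   # local work overlapped with the barrier
MPI.Wait(req)                 # all ranks have now entered the barrier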


Broadcast

MPI.Bcast! - Function
Bcast!(buf, comm::Comm; root::Integer=0)

Broadcast the buffer buf from root to all processes in comm.
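
For example, a minimal sketch assuming the keyword form shown above:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0

# Only the root holds the data initially; afterwards every rank has a copy.
buf = MPI.Comm_rank(comm) == root ? collect(1.0:5.0) : zeros(5)
MPI.Bcast!(buf, comm; root=root)
@assert buf == collect(1.0:5.0)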

MPI.Bcast - Function
Bcast(obj, root::Integer, comm::Comm)

Broadcast the object obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. isbitstype(typeof(obj)) == true.

MPI.bcast - Function
bcast(obj, comm::Comm; root::Integer=0)

Broadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.
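
For example, a sketch broadcasting a Dict that exists only on the root (arbitrary serializable objects are supported):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0

params = MPI.Comm_rank(comm) == root ? Dict(:nsteps => 100, :dt => 0.1) : nothing
params = MPI.bcast(params, comm; root=root)
# Every rank now holds its own copy of the dictionary.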


Gather/Scatter

Gather

MPI.Gather! - Function
Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root process stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:

if root == MPI.Comm_rank(comm)
    MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), comm; root=root)
else
    MPI.Gather!(buf, nothing, comm; root=root)
end

recvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.
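
A basic (non in-place) sketch, assuming each rank contributes count elements:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

count = 3
sendbuf = fill(Float64(rank), count)          # same length on every rank
if rank == root
    recvbuf = zeros(Float64, count * nranks)
    MPI.Gather!(sendbuf, MPI.UBuffer(recvbuf, count), comm; root=root)
    # recvbuf now holds the contributions in rank order: 0,0,0, 1,1,1, ...
else
    MPI.Gather!(sendbuf, nothing, comm; root=root)
end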

See also

  • Gather for the allocating operation.
  • Gatherv! if the number of elements varies between processes.
  • Allgather! to send the result to all processes.

MPI.Gather - Function
Gather(sendbuf, comm::Comm; root=0)

Each process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.

MPI.gather - Function
gather(obj, comm::Comm; root::Integer=0)

Gather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root, it returns a vector of the objects, and nothing otherwise.
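
For example, a minimal sketch gathering a String from every rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

msgs = MPI.gather("hello from rank $rank", comm; root=0)
if rank == 0
    foreach(println, msgs)    # a vector of strings, one per rank, in rank order
end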

MPI.Gatherv! - Function
Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example

if root == MPI.Comm_rank(comm)
    Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), comm; root=root)
else
    Gatherv!(buf, nothing, comm; root=root)
end

recvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.
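
A basic (non in-place) sketch with varying counts; collecting the per-rank lengths with Gather first is just one way to obtain them and is an assumption of this example:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)

sendbuf = fill(rank, rank + 1)                     # rank i contributes i+1 elements
counts = MPI.Gather(length(sendbuf), comm; root=root)
if rank == root
    recvbuf = zeros(Int, sum(counts))
    MPI.Gatherv!(sendbuf, MPI.VBuffer(recvbuf, counts), comm; root=root)
    # recvbuf == [0, 1, 1, 2, 2, 2, ...] in rank order
else
    MPI.Gatherv!(sendbuf, nothing, comm; root=root)
end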

See also

  • Gather! if the number of elements is the same between processes.
  • Allgatherv! to send the result to all processes.

MPI.Allgather! - Function
Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)
Allgather!(sendrecvbuf::UBuffer, comm::Comm)

Each process sends the contents of sendbuf to the other processes, the result of which is stored in rank order into recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

recvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.

If only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.
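
For example, a minimal sketch with one element per rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

recvbuf = zeros(Int, nranks)
MPI.Allgather!([rank], MPI.UBuffer(recvbuf, 1), comm)
# On every rank: recvbuf == [0, 1, ..., nranks-1]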

See also

  • Allgather for the allocating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

MPI.Allgather - Function
Allgather(sendbuf, comm)

Each process sends the contents of sendbuf to the other processes; each process allocates the output buffer and stores the received data in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.

See also

  • Allgather! for the mutating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

MPI.Allgatherv! - Function
Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)
Allgatherv!(sendrecvbuf::VBuffer, comm::Comm)

Each process sends the contents of sendbuf to all other processes. Each process stores the received data in rank order in the VBuffer recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined.

If only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.
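
For example, a sketch where the per-rank lengths are first exchanged with Allgather (an assumption of this example, not a requirement of Allgatherv!):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

sendbuf = fill(rank, rank + 1)                  # rank i contributes i+1 elements
counts = MPI.Allgather(length(sendbuf), comm)   # every rank learns all the lengths
recvbuf = zeros(Int, sum(counts))
MPI.Allgatherv!(sendbuf, MPI.VBuffer(recvbuf, counts), comm)
# On every rank: recvbuf == [0, 1, 1, 2, 2, 2, ...]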

See also

  • Gatherv! to send the result to a single process

MPI.Neighbor_allgatherv! - Function
Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)

Perform an all-gather communication along the directed edges of the graph with variable sized data.

See also MPI.Allgatherv!.


Scatter

MPI.Scatter! - Function
Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm;
    root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 into the recvbuf buffer.

sendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

if root == MPI.Comm_rank(comm)
    MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, comm; root=root)
else
    MPI.Scatter!(nothing, buf, comm; root=root)
end
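
A basic (non in-place) sketch for comparison, assuming count elements per rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

count = 2
recvbuf = zeros(Int, count)
if rank == root
    sendbuf = collect(1:count * nranks)         # chunk j goes to rank j-1
    MPI.Scatter!(MPI.UBuffer(sendbuf, count), recvbuf, comm; root=root)
else
    MPI.Scatter!(nothing, recvbuf, comm; root=root)
end
# rank r now holds elements r*count+1 : (r+1)*count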

See also

  • Scatterv! if the number of elements varies between processes.

MPI.Scatter - Function
Scatter(sendbuf, T, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.
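
For example, a sketch in which each rank receives a single Int; passing nothing on non-root ranks follows the Scatter! conventions and is an assumption of this example:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

sendbuf = rank == root ? collect(1:nranks) : nothing
x = MPI.Scatter(sendbuf, Int, comm; root=root)
# x == rank + 1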

MPI.scatter - Function
scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)

Sends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. On the other ranks, it is ignored and can be nothing.

This method can handle arbitrary data.
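
For example, a sketch distributing one arbitrary object per rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

objs = rank == root ? ["task $i" for i in 1:nranks] : nothing
mine = MPI.scatter(objs, comm; root=root)
# mine == "task $(rank + 1)"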

MPI.Scatterv! - Function
Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j-1 into the recvbuf buffer.

sendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

if root == MPI.Comm_rank(comm)
    MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, comm; root=root)
else
    MPI.Scatterv!(nothing, buf, comm; root=root)
end
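
A basic (non in-place) sketch with varying counts, assuming every rank already knows how many elements it will receive:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

counts = [i + 1 for i in 0:nranks-1]            # rank i receives i+1 elements
recvbuf = zeros(Int, counts[rank + 1])
if rank == root
    sendbuf = collect(1:sum(counts))
    MPI.Scatterv!(MPI.VBuffer(sendbuf, counts), recvbuf, comm; root=root)
else
    MPI.Scatterv!(nothing, recvbuf, comm; root=root)
end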

See also

  • Scatter! if the number of elements are the same for all processes


All-to-all

MPI.Alltoall! - Function
Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
Alltoall!(sendrecvbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from the process of rank j-1 in the j-th chunk of the buffer recvbuf.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If only one buffer sendrecvbuf is used, then data is overwritten.
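
For example, a minimal sketch with one element per chunk:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Rank r sends the value r*nranks + (j-1) to rank j-1 (one element per chunk).
sendbuf = collect(rank * nranks .+ (0:nranks-1))
recvbuf = zeros(Int, nranks)
MPI.Alltoall!(MPI.UBuffer(sendbuf, 1), MPI.UBuffer(recvbuf, 1), comm)
# recvbuf[j] is the value that rank j-1 sent to this rank.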

MPI.Alltoall - Function
Alltoall(sendbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

MPI.Neighbor_alltoall! - Function
Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with fixed size messages.

See also MPI.Alltoall!.

MPI.Neighbor_alltoallv! - Function
Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with variable size messages.

See also MPI.Alltoallv!.


Reduce/Scan

MPI.Reduce! - Function
Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)
Reduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored, and can be nothing.

To perform the reduction in place, provide a single buffer sendrecvbuf.

See also

  • Reduce to handle allocation of the output buffer.
  • Allreduce!/Allreduce to send reduction to all ranks.
  • Op for details on reduction operators.

MPI.Reduce - Function
recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
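
For example, a minimal sketch summing one value per rank onto the root:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

total = MPI.Reduce(rank, +, comm; root=0)
# On rank 0, total is the sum over all ranks; on other ranks it is nothing.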

MPI.Allreduce! - Function
Allreduce!(sendbuf, recvbuf, op, comm::Comm)
Allreduce!(sendrecvbuf, op, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.

Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

See also

  • Allreduce, to handle allocation of the output buffer.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

MPI.Allreduce - Function
recvbuf = Allreduce(sendbuf, op, comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
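
For example, a minimal sketch computing global sums available on every rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

total = MPI.Allreduce(rank, +, comm)       # scalar: same value on every rank

v = fill(Float64(rank), 4)
vsum = MPI.Allreduce(v, +, comm)           # elementwise sum, output allocated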

See also

  • Allreduce! for mutating or in-place operations.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

MPI.Scan! - Function
Scan!(sendbuf, recvbuf, op, comm::Comm)
Scan!(sendrecvbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

If only a single buffer sendrecvbuf is provided, then operations will be performed in-place.

See also

  • Scan to handle allocation of the output buffer
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

MPI.Scan - Function
recvbuf = Scan(sendbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.
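
For example, a minimal sketch of an inclusive prefix sum over ranks:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

acc = MPI.Scan(rank + 1, +, comm)
# On rank i: acc == 1 + 2 + ... + (i + 1), the inclusive prefix sum.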

See also

  • Scan! for mutating or in-place operations
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

MPI.Exscan! - Function
Exscan!(sendbuf, recvbuf, op, comm::Comm)
Exscan!(sendrecvbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

If only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.

See also

  • Exscan to handle allocation of the output buffer
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

MPI.Exscan - Function
recvbuf = Exscan(sendbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
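
For example, a minimal sketch computing each rank's starting offset into a global array:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

nlocal = rank + 1                     # number of elements owned by this rank
offset = MPI.Exscan(nlocal, +, comm)
# On rank i > 0, offset is the sum of nlocal over ranks 0:i-1;
# on rank 0 the result is undefined and should be ignored.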

See also

  • Exscan! for mutating and in-place operations
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.
