# Collective communication

## Synchronization

**`MPI.Barrier`**

    Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

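As a minimal sketch (assuming MPI.jl is installed and the script is launched with `mpiexec -n 4 julia script.jl`), a barrier can be used to separate phases of a computation:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

println("rank $rank: before the barrier")
MPI.Barrier(comm)   # no process proceeds until all processes have arrived here
println("rank $rank: after the barrier")
```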
**`MPI.Ibarrier`**

    Ibarrier(comm::Comm)

Begins a nonblocking barrier on comm and returns a Request object. The barrier is complete once all participating processes have entered it and the request has been completed (e.g. with Wait).

If comm is an intracommunicator, the barrier involves all members of the group.

If comm is an intercommunicator, the barrier involves all members of the other group.

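A sketch of how the nonblocking barrier allows local work to overlap with synchronization (assuming the returned request is completed with `MPI.Wait`; older MPI.jl releases name this `MPI.Wait!`):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD

req = MPI.Ibarrier(comm)   # start the barrier without blocking
s = sum(1:1000)            # overlap some local computation with the synchronization
MPI.Wait(req)              # block until every process has entered the barrier
```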
**`MPI.bcast`**

    bcast(obj, root::Integer, comm::Comm)

Broadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.

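A minimal sketch using the documented positional signature, broadcasting an arbitrary (non-buffer) object such as a `Dict` (the dictionary contents here are made up for illustration):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
root = 0

# Only the root needs to provide the object; every rank receives a copy.
obj = MPI.Comm_rank(comm) == root ? Dict("step" => 1, "tol" => 1e-6) : nothing
obj = MPI.bcast(obj, root, comm)
```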

## Gather/Scatter

### Gather

**`MPI.Gather!`**

    Gather!(sendbuf, recvbuf::Union{UBuffer,Nothing}, root::Integer, comm::Comm)

Each process sends the contents of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:

    if root == MPI.Comm_rank(comm)
        MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), root, comm)
    else
        MPI.Gather!(buf, nothing, root, comm)
    end

recvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

**`MPI.Gather`**

    Gather(sendbuf, root, comm::Comm)

Each process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.

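A minimal sketch using the allocating form with a scalar, following the positional signature documented here: the root receives a vector containing every rank's contribution in rank order.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

# Each rank contributes its own rank number; on the root,
# vals is the vector [0, 1, ..., nprocs-1]; elsewhere it is nothing.
vals = MPI.Gather(rank, root, comm)
if rank == root
    println("gathered: ", vals)
end
```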
**`MPI.Gatherv!`**

    Gatherv!(sendbuf, recvbuf::Union{VBuffer,Nothing}, root, comm)

Each process sends the contents of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example:

    if root == MPI.Comm_rank(comm)
        Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), root, comm)
    else
        Gatherv!(buf, nothing, root, comm)
    end

recvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

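A sketch of the varying-count case (the point of the "v" variant): rank r contributes r+1 elements, and the root packs them in rank order into a VBuffer.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

# Rank r sends r+1 copies of its rank number.
sendbuf = fill(Float64(rank), rank + 1)
counts = [r + 1 for r in 0:nprocs-1]
recvbuf = rank == root ? MPI.VBuffer(zeros(Float64, sum(counts)), counts) : nothing
MPI.Gatherv!(sendbuf, recvbuf, root, comm)
```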
**`MPI.Allgather!`**

    Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)
    Allgather!(sendrecvbuf::UBuffer, comm::Comm)

Each process sends the contents of sendbuf to the other processes; the result is stored in rank order in the buffer recvbuf on every process.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

recvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.

If only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.

**`MPI.Allgather`**

    Allgather(sendbuf, comm)

Each process sends the contents of sendbuf to the other processes; each process allocates its own output buffer and stores the received data in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.

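A minimal sketch: like Gather, but every rank (not just the root) ends up with the full result.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Every rank receives the same vector [0, 1, ..., nprocs-1].
all_ranks = MPI.Allgather(rank, comm)
```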
**`MPI.Allgatherv!`**

    Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)
    Allgatherv!(sendrecvbuf::VBuffer, comm::Comm)

Each process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined.

If only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.

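A sketch of the varying-count all-gather: rank r contributes r+1 elements and every rank receives the full concatenation.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

# Rank r contributes r+1 copies of its rank number.
counts = [r + 1 for r in 0:nprocs-1]
sendbuf = fill(rank, rank + 1)
recvbuf = MPI.VBuffer(zeros(Int, sum(counts)), counts)
MPI.Allgatherv!(sendbuf, recvbuf, comm)
```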

### Scatter

**`MPI.Scatter!`**

    Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, root::Integer, comm::Comm)

Splits the buffer sendbuf on the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1, which receives it into the buffer recvbuf.

sendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

    if root == MPI.Comm_rank(comm)
        MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, root, comm)
    else
        MPI.Scatter!(nothing, buf, root, comm)
    end

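A minimal sketch of the common (non-IN_PLACE) case: the root splits a vector into equal chunks of 2 and each rank receives its own chunk.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

# The root holds 2*nprocs values, split into chunks of 2; rank r receives chunk r.
sendbuf = rank == root ? MPI.UBuffer(collect(1:2*nprocs), 2) : nothing
recvbuf = zeros(Int, 2)
MPI.Scatter!(sendbuf, recvbuf, root, comm)
```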
**`MPI.Scatterv!`**

    Scatterv!(sendbuf::Union{VBuffer,Nothing}, recvbuf, root, comm)

Splits the buffer sendbuf on the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1, which receives it into the buffer recvbuf.

sendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

    if root == MPI.Comm_rank(comm)
        MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, root, comm)
    else
        MPI.Scatterv!(nothing, buf, root, comm)
    end

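A sketch of the varying-count scatter: the root sends r+1 elements to rank r.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

# Chunk sizes 1, 2, ..., nprocs; rank r receives r+1 elements.
counts = [r + 1 for r in 0:nprocs-1]
sendbuf = rank == root ? MPI.VBuffer(collect(1:sum(counts)), counts) : nothing
recvbuf = zeros(Int, rank + 1)
MPI.Scatterv!(sendbuf, recvbuf, root, comm)
```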

### All-to-all

**`MPI.Alltoall!`**

    Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
    Alltoall!(sendrecvbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from the process of rank j-1 in the j-th chunk of the buffer recvbuf.

    rank    send buf                        recv buf
    ----    --------                        --------
     0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
     1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
     2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If only one buffer sendrecvbuf is provided, the received data overwrites the send data in place.

**`MPI.Alltoall`**

    Alltoall(sendbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.

    rank    send buf                        recv buf
    ----    --------                        --------
     0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
     1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
     2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

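A minimal sketch of the allocating form with one element per destination: entry j of the result came from rank j-1.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

# Each rank sends its own rank number to every rank (chunk size 1),
# so every rank receives the vector [0, 1, ..., nprocs-1].
sendbuf = MPI.UBuffer(fill(rank, nprocs), 1)
recvbuf = MPI.Alltoall(sendbuf, comm)
```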

## Reduce/Scan

**`MPI.Reduce!`**

    Reduce!(sendbuf, recvbuf, op, root::Integer, comm::Comm)
    Reduce!(sendrecvbuf, op, root::Integer, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored, and can be nothing.

To perform the reduction in place, provide a single buffer sendrecvbuf.

**`MPI.Reduce`**

    recvbuf = Reduce(sendbuf, op, root::Integer, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

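A minimal sketch using the documented positional signature, summing each rank's contribution on the root:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

# Only the root receives the reduced value; other ranks get nothing.
total = MPI.Reduce(rank, +, root, comm)
if rank == root
    println("sum of ranks = $total")
end
```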
**`MPI.Allreduce!`**

    Allreduce!(sendbuf, recvbuf, op, comm::Comm)
    Allreduce!(sendrecvbuf, op, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.

Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

**`MPI.Allreduce`**

    recvbuf = Allreduce(sendbuf, op, comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

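A minimal sketch: an elementwise sum across all ranks, where every rank receives the reduced vector.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD

# With nprocs ranks, every rank receives [nprocs, 2*nprocs, 3*nprocs].
result = MPI.Allreduce([1, 2, 3], +, comm)
```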
**`MPI.Scan!`**

    Scan!(sendbuf, recvbuf, op, comm::Comm)
    Scan!(sendrecvbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.

If only a single buffer sendrecvbuf is provided, then operations will be performed in-place.

**`MPI.Scan`**

    recvbuf = Scan(sendbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.

sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.

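A minimal sketch of an inclusive prefix sum over ranks:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Rank i receives 0 + 1 + ... + i, i.e. the inclusive prefix sum of the ranks.
prefix = MPI.Scan(rank, +, comm)
```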
**`MPI.Exscan!`**

    Exscan!(sendbuf, recvbuf, op, comm::Comm)
    Exscan!(sendrecvbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

If only a single sendrecvbuf is provided, then operations are performed in-place, and sendrecvbuf on rank 0 will remain unchanged.

**`MPI.Exscan`**

    recvbuf = Exscan(sendbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
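A minimal sketch of an exclusive prefix sum, including the common pattern of treating rank 0's (undefined) result as zero:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Rank i receives 0 + 1 + ... + (i-1); rank 0's result is undefined.
prefix = MPI.Exscan(rank, +, comm)
offset = rank == 0 ? 0 : prefix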