Collective communication
Synchronization
MPI.Barrier — Function
Barrier(comm::Comm)
Blocks until comm is synchronized.
If comm is an intracommunicator, then it blocks until all members of the group have called it.
If comm is an intercommunicator, then it blocks until all members of the other group have called it.
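For illustration, a minimal sketch of using Barrier (assuming the program is launched under mpiexec; the println calls are only for demonstration):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
# each rank does some local work first
println("rank $(MPI.Comm_rank(comm)) reached the barrier")
MPI.Barrier(comm)   # no rank proceeds until every rank has called Barrier
println("rank $(MPI.Comm_rank(comm)) passed the barrier")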
MPI.Ibarrier — Function
Ibarrier(comm::Comm[, req::AbstractRequest = Request()])
Starts a nonblocking barrier on comm, returning a request that completes once comm is synchronized.
If comm is an intracommunicator, the request completes once all members of the group have called it.
If comm is an intercommunicator, the request completes once all members of the other group have called it.
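A sketch of the nonblocking variant (assuming comm is an initialized communicator such as MPI.COMM_WORLD; the intermediate work is a placeholder):
req = MPI.Ibarrier(comm)
# ... unrelated local work can overlap with the barrier ...
MPI.Wait(req)   # returns once all ranks have entered the barrier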
Broadcast
MPI.Bcast! — Function
Bcast!(buf, comm::Comm; root::Integer=0)
Broadcast the buffer buf from root to all processes in comm.
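A small sketch of Bcast! (assuming comm = MPI.COMM_WORLD; the buffer contents are arbitrary):
buf = MPI.Comm_rank(comm) == 0 ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, comm; root=0)   # buf now holds [1.0, 2.0, 3.0, 4.0] on every rank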
MPI.Bcast — Function
Bcast(obj, root::Integer, comm::Comm)
Broadcast the object obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. isbitstype(typeof(obj)) == true.
MPI.bcast — Function
bcast(obj, comm::Comm; root::Integer=0)
Broadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.
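A small sketch of bcast with a non-isbits object (assuming comm = MPI.COMM_WORLD; the Dict contents are arbitrary):
params = MPI.Comm_rank(comm) == 0 ? Dict(:dt => 0.1, :steps => 100) : nothing
params = MPI.bcast(params, comm; root=0)   # every rank now holds the Dict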
Gather/Scatter
Gather
MPI.Gather! — Function
Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)
Each process sends the contents of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.
sendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.
On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:
if root == MPI.Comm_rank(comm)
    MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), comm; root=root)
else
    MPI.Gather!(buf, nothing, comm; root=root)
end
recvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.
See also
Gather for the allocating operation.
Gatherv! if the number of elements varies between processes.
Allgather! to send the result to all processes.
MPI.Gather — Function
Gather(sendbuf, comm::Comm; root=0)
Each process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores the elements in rank order.
sendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.
See also
Gather! for the mutating operation.
Gatherv! if the number of elements varies between processes.
Allgather!/Allgather to send the result to all processes.
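A small sketch of the allocating Gather with a scalar contribution per rank (assuming comm = MPI.COMM_WORLD):
rank  = MPI.Comm_rank(comm)
ranks = MPI.Gather(rank, comm; root=0)   # on root, a Vector with one entry per rank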
MPI.gather — Function
gather(obj, comm::Comm; root::Integer=0)
Gather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root it returns a vector of the objects; on all other ranks it returns nothing.
MPI.Gatherv! — Function
Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)
Each process sends the contents of the buffer sendbuf to the root process. The root stores the elements in rank order in the buffer recvbuf.
sendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.
On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example:
if root == MPI.Comm_rank(comm)
    Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), comm; root=root)
else
    Gatherv!(buf, nothing, comm; root=root)
end
recvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.
See also
Gather! if the number of elements is the same between processes.
Allgatherv! to send the result to all processes.
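A sketch of Gatherv! in which rank r contributes r + 1 values (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
senddata = fill(Float64(rank), rank + 1)            # variable-length contribution
counts   = [r + 1 for r in 0:nranks-1]              # lengths expected from each rank
recvbuf  = rank == 0 ? MPI.VBuffer(zeros(sum(counts)), counts) : nothing
MPI.Gatherv!(senddata, recvbuf, comm; root=0)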
MPI.Allgather! — Function
Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)
Allgather!(sendrecvbuf::UBuffer, comm::Comm)
Each process sends the contents of sendbuf to the other processes; the result is stored in rank order in recvbuf on every process.
sendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.
recvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.
If only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.
See also
Allgather for the allocating operation.
Allgatherv! if the number of elements varies between processes.
Gather! to send only to a single root process.
MPI.Allgather — Function
Allgather(sendbuf, comm)
Each process sends the contents of sendbuf to the other processes; each process allocates the output buffer and stores the results in rank order.
sendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.
See also
Allgather! for the mutating operation.
Allgatherv! if the number of elements varies between processes.
Gather! to send only to a single root process.
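A small sketch of Allgather with a scalar contribution per rank (assuming comm = MPI.COMM_WORLD):
rank  = MPI.Comm_rank(comm)
ranks = MPI.Allgather(rank, comm)   # every rank receives [0, 1, ..., nranks-1]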
MPI.Allgatherv! — Function
Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)
Allgatherv!(sendrecvbuf::VBuffer, comm::Comm)
Each process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.
sendbuf can be a Buffer object, or any object for which Buffer_send is defined.
If only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.
See also
Gatherv! to send the result to a single process.
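A sketch of Allgatherv! in which rank r contributes r + 1 values (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
senddata = fill(rank, rank + 1)
counts   = [r + 1 for r in 0:nranks-1]
recvbuf  = MPI.VBuffer(zeros(Int, sum(counts)), counts)
MPI.Allgatherv!(senddata, recvbuf, comm)   # every rank holds all contributions in rank order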
MPI.Neighbor_allgather! — Function
Neighbor_allgather!(sendbuf::Buffer, recvbuf::UBuffer, comm::Comm)
Perform an all-gather communication along the directed edges of the graph.
See also MPI.Allgather!.
MPI.Neighbor_allgatherv! — Function
Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)
Perform an all-gather communication along the directed edges of the graph with variable sized data.
See also MPI.Allgatherv!.
Scatter
MPI.Scatter! — Function
Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm; root::Integer=0)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 into the recvbuf buffer.
sendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.
recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:
if root == MPI.Comm_rank(comm)
    MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, comm; root=root)
else
    MPI.Scatter!(nothing, buf, comm; root=root)
end
See also
Scatterv! if the number of elements varies between processes.
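A sketch of the plain (non-IN_PLACE) use of Scatter!, distributing two values to each rank (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
sendbuf = rank == 0 ? MPI.UBuffer(collect(1:2*nranks), 2) : nothing
recvbuf = zeros(Int, 2)
MPI.Scatter!(sendbuf, recvbuf, comm; root=0)   # rank j-1 receives the j-th pair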
MPI.Scatter — Function
Scatter(sendbuf, T, comm::Comm; root::Integer=0)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.
MPI.scatter — Function
scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)
Sends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. On the other ranks, it is ignored and can be nothing.
This method can handle arbitrary data.
MPI.Scatterv! — Function
Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j-1 into the recvbuf buffer.
sendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.
recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:
if root == MPI.Comm_rank(comm)
    MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, comm; root=root)
else
    MPI.Scatterv!(nothing, buf, comm; root=root)
end
See also
Scatter! if the number of elements is the same for all processes.
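A sketch of Scatterv! in which rank r receives r + 1 values (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
counts  = [r + 1 for r in 0:nranks-1]
sendbuf = rank == 0 ? MPI.VBuffer(ones(sum(counts)), counts) : nothing
recvbuf = zeros(rank + 1)
MPI.Scatterv!(sendbuf, recvbuf, comm; root=0)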
All-to-all
MPI.Alltoall! — Function
Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
Alltoall!(sendrecvbuf::UBuffer, comm::Comm)
Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from the process of rank j-1 in the j-th chunk of the buffer recvbuf.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F     ---------->     c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν
If only one buffer sendrecvbuf is used, then data is overwritten.
See also
Alltoall for the allocating operation.
MPI.Alltoall — Function
Alltoall(sendbuf::UBuffer, comm::Comm)
Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process of rank j-1 in the j-th chunk.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F     ---------->     c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν
See also
Alltoall! for the mutating operation.
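A sketch of the allocating Alltoall, sending two values to every destination (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
senddata = fill(rank, 2 * nranks)                       # two values destined for each rank
recvdata = MPI.Alltoall(MPI.UBuffer(senddata, 2), comm)
# recvdata[2j-1:2j] holds the chunk sent by the process of rank j-1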
MPI.Alltoallv! — Function
Alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, comm::Comm)
Similar to Alltoall!, except with different size chunks per process.
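A sketch of Alltoallv! with per-destination counts (assuming comm = MPI.COMM_WORLD): here rank r sends d + 1 values to destination rank d, so it receives r + 1 values from every source.
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
sendcounts = [d + 1 for d in 0:nranks-1]                 # how much goes to each destination
recvcounts = fill(rank + 1, nranks)                      # how much arrives from each source
sendbuf = MPI.VBuffer(fill(Float64(rank), sum(sendcounts)), sendcounts)
recvbuf = MPI.VBuffer(zeros(sum(recvcounts)), recvcounts)
MPI.Alltoallv!(sendbuf, recvbuf, comm)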
MPI.Neighbor_alltoall! — Function
Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
Perform an all-to-all communication along the directed edges of the graph with fixed size messages.
See also MPI.Alltoall!.
MPI.Neighbor_alltoallv! — Function
Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)
Perform an all-to-all communication along the directed edges of the graph with variable size messages.
See also MPI.Alltoallv!.
Reduce/Scan
MPI.Reduce! — Function
Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)
Reduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)
Performs elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.
On non-root processes recvbuf is ignored, and can be nothing.
To perform the reduction in place, provide a single buffer sendrecvbuf.
See also
Reduce to handle allocation of the output buffer.
Allreduce!/Allreduce to send the reduction to all ranks.
Op for details on reduction operators.
MPI.Reduce — Function
recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
See also
Reduce! for mutating and in-place operations.
Allreduce!/Allreduce to send the reduction to all ranks.
Op for details on reduction operators.
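A small sketch of the allocating Reduce (assuming comm = MPI.COMM_WORLD):
rank  = MPI.Comm_rank(comm)
total = MPI.Reduce(rank, +, comm; root=0)   # sum of all ranks on root, nothing elsewhere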
MPI.Allreduce! — Function
Allreduce!(sendbuf, recvbuf, op, comm::Comm)
Allreduce!(sendrecvbuf, op, comm::Comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.
Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.
If only one sendrecvbuf buffer is provided, then the operation is performed in-place.
See also
Allreduce to handle allocation of the output buffer.
Reduce!/Reduce to send the reduction to a single rank.
Op for details on reduction operators.
MPI.Allreduce — Function
recvbuf = Allreduce(sendbuf, op, comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
See also
Allreduce! for mutating or in-place operations.
Reduce!/Reduce to send the reduction to a single rank.
Op for details on reduction operators.
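A small sketch of the allocating Allreduce (assuming comm = MPI.COMM_WORLD; the local data is an arbitrary placeholder):
local_data = fill(1.0, 5)                               # placeholder local contribution
global_sum = MPI.Allreduce(sum(local_data), +, comm)    # identical on every rank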
MPI.Scan! — Function
Scan!(sendbuf, recvbuf, op, comm::Comm)
Scan!(sendrecvbuf, op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.
If only a single buffer sendrecvbuf is provided, then operations will be performed in-place.
See also
Scan to handle allocation of the output buffer.
Exscan!/Exscan for exclusive scan.
Op for details on reduction operators.
MPI.Scan — Function
recvbuf = Scan(sendbuf, op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.
sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.
See also
Scan! for mutating or in-place operations.
Exscan!/Exscan for exclusive scan.
Op for details on reduction operators.
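A small sketch of the allocating Scan (assuming comm = MPI.COMM_WORLD):
rank   = MPI.Comm_rank(comm)
prefix = MPI.Scan(rank + 1, +, comm)   # rank i receives 1 + 2 + ... + (i + 1)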
MPI.Exscan! — Function
Exscan!(sendbuf, recvbuf, op, comm::Comm)
Exscan!(sendrecvbuf, op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
If only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.
See also
Exscan to handle allocation of the output buffer.
Scan!/Scan for inclusive scan.
Op for details on reduction operators.
MPI.Exscan — Function
recvbuf = Exscan(sendbuf, op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
See also
Exscan! for mutating and in-place operations.
Scan!/Scan for inclusive scan.
Op for details on reduction operators.
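A sketch of a common Exscan use, computing each rank's offset into a global array (assuming comm = MPI.COMM_WORLD; nlocal is a placeholder chunk size):
rank   = MPI.Comm_rank(comm)
nlocal = 10 + rank                       # placeholder: variable local chunk size
offset = MPI.Exscan(nlocal, +, comm)     # undefined on rank 0
offset = rank == 0 ? 0 : offset          # treat rank 0's offset as zero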