Collective communication
Synchronization
MPI.Barrier — Function.
Broadcast
MPI.Bcast! — Function.
Gather/Scatter
Gather
MPI.Allgather! — Function.
Allgather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], comm::Comm)
Allgather!(sendrecvbuf, count::Integer, comm::Comm)
Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order into recvbuf.
If only one buffer sendrecvbuf is provided, then on each process the data to be sent is assumed to already occupy the region of sendrecvbuf where that process would receive its own contribution.
See also
Allgather for the allocating operation
Allgatherv!/Allgatherv if the number of elements varies between processes
Gather! to send only to a single root process
External links
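A minimal usage sketch following the Allgather! signature documented above (assuming an MPI.jl version providing this positional-count form, and that MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
sendbuf = fill(rank, 2)                  # each rank contributes two elements
recvbuf = Array{Int}(undef, 2 * nranks)  # all contributions, in rank order
MPI.Allgather!(sendbuf, recvbuf, 2, comm)
# on every rank: recvbuf == [0, 0, 1, 1, ..., nranks-1, nranks-1]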
MPI.Allgather — Function.
Allgather(sendbuf[, count=length(sendbuf)], comm)
Each process sends the first count elements of sendbuf to the other processes; each process allocates an output buffer and stores the results in rank order.
See also
Allgather! for the mutating operation
Allgatherv!/Allgatherv if the number of elements varies between processes
Gather! to send only to a single root process
External links
MPI.Allgatherv! — Function.
Allgatherv!(sendbuf, recvbuf, counts, comm)
Allgatherv!(sendrecvbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.
If only one buffer sendrecvbuf is provided, then for each process the data to be sent is taken from the interval of sendrecvbuf where it would store its own data.
See also
Allgatherv for the allocating operation
Gatherv!/Gatherv to send the result to a single process
External links
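A minimal sketch following the Allgatherv! signature above, in which rank r contributes r+1 elements (assuming MPI has been initialized and the same counts vector is passed on every rank):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
counts = collect(1:nranks)                # rank r sends r+1 elements
sendbuf = fill(rank, counts[rank + 1])    # ranks are 0-based, Julia arrays 1-based
recvbuf = Array{Int}(undef, sum(counts))  # concatenated contributions, in rank order
MPI.Allgatherv!(sendbuf, recvbuf, counts, comm)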
MPI.Allgatherv — Function.
Allgatherv(sendbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.
See also
Allgatherv! for the mutating operation
Gatherv!/Gatherv to send the result to a single process
External links
MPI.Gather! — Function.
Gather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], root::Integer, comm::Comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.
sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:
if root == MPI.Comm_rank(comm)
    Gather!(nothing, buf, count, root, comm)
else
    Gather!(buf, nothing, count, root, comm)
end
recvbuf on the root process should be a buffer of length count*Comm_size(comm), and on non-root processes it is ignored and can be nothing.
count should be the same for all processes.
See also
Gather for the allocating operation
Gatherv! if the number of elements varies between processes
Allgather! to send the result to all processes
External links
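A minimal sketch following the Gather! signature above; recvbuf is only required (and only filled) on the root process (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0
sendbuf = [rank, rank]                                            # two elements per rank
recvbuf = rank == root ? Array{Int}(undef, 2 * nranks) : nothing  # only root receives
MPI.Gather!(sendbuf, recvbuf, 2, root, comm)
# on root: recvbuf == [0, 0, 1, 1, ..., nranks-1, nranks-1]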
MPI.Gather — Function.
Gather(sendbuf[, count=length(sendbuf)], root, comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.
See also
Gather! for the mutating operation
Gatherv!/Gatherv if the number of elements varies between processes
Allgather!/Allgather to send the result to all processes
External links
MPI.Gatherv! — Function.
Gatherv!(sendbuf, recvbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores the elements in rank order in the buffer recvbuf.
sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gatherv). For example:
if root == MPI.Comm_rank(comm)
    Gatherv!(nothing, buf, counts, root, comm)
else
    Gatherv!(buf, nothing, counts, root, comm)
end
See also
Gatherv for the allocating operation
Gather! if the number of elements is the same for all processes
Allgatherv!/Allgatherv to send the result to all processes
External links
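A minimal sketch following the Gatherv! signature above, in which rank r contributes r+1 elements and only the root receives (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0
counts = collect(1:nranks)              # rank r sends r+1 elements
sendbuf = fill(rank, counts[rank + 1])
recvbuf = rank == root ? Array{Int}(undef, sum(counts)) : nothing
MPI.Gatherv!(sendbuf, recvbuf, counts, root, comm)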
MPI.Gatherv — Function.
Gatherv(sendbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.
See also
Gatherv! for the mutating operation
Gather!/Gather if the number of elements is the same for all processes
Allgatherv!/Allgatherv to send the result to all processes
External links
Scatter
MPI.Scatter! — Function.
Scatter!(sendbuf, recvbuf[, count=length(recvbuf)], root::Integer, comm::Comm)
Splits the buffer sendbuf on the root process into Comm_size(comm) chunks of length count, sending the j-th chunk to the process of rank j, which stores it into its recvbuf buffer.
sendbuf on the root process should be a buffer of length count*Comm_size(comm), and on non-root processes it is ignored and can be nothing.
recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatter). For example:
if root == MPI.Comm_rank(comm)
    Scatter!(buf, nothing, count, root, comm)
else
    Scatter!(nothing, buf, count, root, comm)
end
count should be the same for all processes.
See also
External links
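A minimal sketch following the Scatter! signature above; sendbuf is only required on the root process (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0
sendbuf = rank == root ? collect(1:2*nranks) : nothing  # only root provides the data
recvbuf = Array{Int}(undef, 2)                          # each rank receives its chunk
MPI.Scatter!(sendbuf, recvbuf, 2, root, comm)
# rank r receives [2r + 1, 2r + 2]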
MPI.Scatter — Function.
Scatter(sendbuf, count, root, comm)
Splits the buffer sendbuf on the root process into Comm_size(comm) chunks of length count, sending the j-th chunk to the process of rank j, which allocates its output buffer.
See also
External links
MPI.Scatterv! — Function.
Scatterv!(sendbuf, recvbuf, counts, root, comm)
Splits the buffer sendbuf on the root process into Comm_size(comm) chunks of length counts[j], sending the j-th chunk to the process of rank j, which stores it into its recvbuf buffer; recvbuf must be of length at least counts[j].
recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatterv). For example:
if root == MPI.Comm_rank(comm)
    Scatterv!(buf, nothing, counts, root, comm)
else
    Scatterv!(nothing, buf, counts, root, comm)
end
See also
External links
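A minimal sketch following the Scatterv! signature above, sending r+1 elements to rank r (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0
counts = collect(1:nranks)                                 # rank r receives r+1 elements
sendbuf = rank == root ? collect(1:sum(counts)) : nothing  # only root provides the data
recvbuf = Array{Int}(undef, counts[rank + 1])
MPI.Scatterv!(sendbuf, recvbuf, counts, root, comm)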
MPI.Scatterv — Function.
Scatterv(sendbuf, counts, root, comm)
Splits the buffer sendbuf on the root process into Comm_size(comm) chunks of length counts[j], sending the j-th chunk to the process of rank j, which allocates its output buffer.
See also
External links
All-to-all
MPI.Alltoall! — Function.
Alltoall!(sendbuf, recvbuf, count::Integer, comm::Comm)
Alltoall!(sendrecvbuf, count::Integer, comm::Comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F    ------------>    c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν
If only one buffer sendrecvbuf is provided, then the data is overwritten in place.
See also
Alltoall for the allocating operation
External links
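A minimal sketch following the Alltoall! signature above with count = 1: each rank sends its j-th element to rank j and receives one element from every rank (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
sendbuf = fill(rank, nranks)         # one element destined for each rank
recvbuf = Array{Int}(undef, nranks)
MPI.Alltoall!(sendbuf, recvbuf, 1, comm)
# on every rank: recvbuf == [0, 1, ..., nranks-1]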
MPI.Alltoall — Function.
Alltoall(sendbuf, count::Integer, comm::Comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F    ------------>    c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν
See also
Alltoall! for the mutating operation
External links
MPI.Alltoallv! — Function.
MPI.Alltoallv — Function.
Alltoallv(sendbuf, recvbuf, scounts::Vector, rcounts::Vector, comm::Comm)
Similar to Alltoall, except with different size chunks per process.
See also
Alltoallv! for the mutating operation
External links
Reduce/Scan
MPI.Reduce! — Function.
Reduce!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], op, root::Integer, comm::Comm)
Reduce!(sendrecvbuf, op, root::Integer, comm::Comm)
Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.
On non-root processes recvbuf is ignored, and can be nothing.
To perform the reduction in place, provide a single buffer sendrecvbuf.
See also
Reduce to handle allocation of the output buffer
Allreduce!/Allreduce to send the reduction to all ranks
Op for details on reduction operators
External links
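A minimal sketch following the Reduce! signature above, summing one value per rank onto the root with the predefined MPI.SUM operator (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
root = 0
sendbuf = Float64[rank]                      # one element per rank
recvbuf = rank == root ? zeros(1) : nothing  # the result only lands on root
MPI.Reduce!(sendbuf, recvbuf, 1, MPI.SUM, root, comm)
# on root: recvbuf[1] == 0 + 1 + ... + (nranks - 1)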
MPI.Reduce — Function.
recvbuf = Reduce(sendbuf, op, root::Integer, comm::Comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
See also
Reduce! for mutating and in-place operations
Allreduce!/Allreduce to send the reduction to all ranks
Op for details on reduction operators
External links
MPI.Allreduce! — Function.
Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)
Allreduce!(sendrecvbuf, op, comm)
Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf, storing the result in the recvbuf of all processes in the group.
Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.
If only one sendrecvbuf buffer is provided, then the operation is performed in-place.
See also
Allreduce to handle allocation of the output buffer
Reduce!/Reduce to send the reduction to a single rank
Op for details on reduction operators
External links
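A minimal sketch of the in-place single-buffer form of Allreduce! documented above (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = Float64[rank, 2 * rank]       # this rank's contribution
MPI.Allreduce!(buf, MPI.SUM, comm)  # in place: buf now holds the elementwise sum over all ranks
# every rank sees the same reduced values in buf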
MPI.Allreduce — Function.
recvbuf = Allreduce(sendbuf, op, comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
See also
Allreduce! for mutating or in-place operations
Reduce!/Reduce to send the reduction to a single rank
Op for details on reduction operators
External links
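A minimal sketch of the scalar form of Allreduce described above, where a scalar input yields a scalar result on every rank (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
total = MPI.Allreduce(rank, MPI.SUM, comm)  # scalar in, scalar out
# total == sum of all ranks, identical on every process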
MPI.Scan! — Function.
Scan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Scan!(buf[, count::Integer], op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.
If only a single buffer is provided, then operations will be performed in-place in buf.
See also
Scan to handle allocation of the output buffer
Exscan!/Exscan for exclusive scan
Op for details on reduction operators
External links
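A minimal sketch of the in-place single-buffer form of Scan! documented above (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = [rank + 1]               # rank r contributes r + 1
MPI.Scan!(buf, MPI.SUM, comm)  # in place: rank r now holds 1 + 2 + ... + (r + 1)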
MPI.Scan — Function.
recvbuf = Scan(sendbuf, op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.
sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.
See also
Scan! for mutating or in-place operations
Exscan!/Exscan for exclusive scan
Op for details on reduction operators
External links
MPI.Exscan! — Function.
Exscan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Exscan!(buf[, count::Integer], op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
If only a single buf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.
See also
Exscan to handle allocation of the output buffer
Scan!/Scan for inclusive scan
Op for details on reduction operators
External links
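A minimal sketch following the Exscan! signature above, computing each rank's starting offset from per-rank element counts, a common use of exclusive scan (assuming MPI has been initialized):
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nlocal = [rank + 1]               # number of items owned by this rank
offset = similar(nlocal)
MPI.Exscan!(nlocal, offset, MPI.SUM, comm)
# offset[1] on rank i == total items on ranks 0:i-1 (contents undefined on rank 0)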
MPI.Exscan — Function.
recvbuf = Exscan(sendbuf, op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
See also
Exscan! for mutating and in-place operations
Scan!/Scan for inclusive scan
Op for details on reduction operators
External links