Collective communication
Synchronization
MPI.Barrier
— Function
Barrier(comm::Comm)
Blocks until comm is synchronized.
If comm is an intracommunicator, then it blocks until all members of the group have called it.
If comm is an intercommunicator, then it blocks until all members of the other group have called it.
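For illustration, a minimal sketch of its use (assuming the script is launched under an MPI runner such as mpiexec):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
println("rank $(MPI.Comm_rank(comm)) reached the barrier")
MPI.Barrier(comm)   # no rank proceeds until every rank has arrived
println("rank $(MPI.Comm_rank(comm)) passed the barrier")
MPI.Finalize()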
Broadcast
MPI.Bcast!
— Function
Bcast!(buf[, count=length(buf)], root::Integer, comm::Comm)
Broadcast the first count elements of the buffer buf from root to all processes.
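For example, a minimal sketch (assuming the usual MPI.Init/MPI.COMM_WORLD setup) broadcasting an array from rank 0:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
# the root fills the buffer; other ranks allocate space of the same length
buf = MPI.Comm_rank(comm) == root ? Float64[1, 2, 3] : zeros(3)
MPI.Bcast!(buf, length(buf), root, comm)
# buf == [1.0, 2.0, 3.0] on every rank
MPI.Finalize()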
Gather/Scatter
Gather
MPI.Allgather!
— Function
Allgather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], comm::Comm)
Allgather!(sendrecvbuf, count::Integer, comm::Comm)
Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order into recvbuf.
If only one buffer sendrecvbuf is provided, then on each process the data to be sent is assumed to be in the interval where that process would receive its own contribution.
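For example, a sketch of the two-buffer form in which every rank contributes two elements:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
sendbuf = fill(rank, 2)              # this rank's contribution
recvbuf = zeros(Int, 2 * nprocs)     # room for 2 elements from every rank
MPI.Allgather!(sendbuf, recvbuf, 2, comm)
# on every rank: recvbuf == [0, 0, 1, 1, ..., nprocs-1, nprocs-1]
MPI.Finalize()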
See also
- Allgather for the allocating operation
- Allgatherv!/Allgatherv if the number of elements varies between processes
- Gather! to send only to a single root process
MPI.Allgather
— Function
Allgather(sendbuf[, count=length(sendbuf)], comm)
Each process sends the first count elements of sendbuf to the other processes, which allocate the output buffer and store the results in rank order.
See also
- Allgather! for the mutating operation
- Allgatherv!/Allgatherv if the number of elements varies between processes
- Gather! to send only to a single root process
MPI.Allgatherv!
— Function
Allgatherv!(sendbuf, recvbuf, counts, comm)
Allgatherv!(sendrecvbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.
If only one buffer sendrecvbuf is provided, then on each process the data to be sent is taken from the interval of recvbuf where that process would store its own data.
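For example, a sketch (assuming the counts are accepted as a Vector{Cint}) in which rank r contributes r+1 elements:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
counts = Cint.(1:nprocs)             # rank r sends r+1 elements
sendbuf = fill(rank, counts[rank+1])
recvbuf = zeros(Int, sum(counts))
MPI.Allgatherv!(sendbuf, recvbuf, counts, comm)
# on every rank, with 3 processes: recvbuf == [0, 1, 1, 2, 2, 2]
MPI.Finalize()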
See also
- Allgatherv for the allocating operation
- Gatherv!/Gatherv to send the result to a single process
MPI.Allgatherv
— Function
Allgatherv(sendbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.
See also
- Allgatherv! for the mutating operation
- Gatherv!/Gatherv to send the result to a single process
MPI.Gather!
— Function
Gather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], root::Integer, comm::Comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.
sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:
if root == MPI.Comm_rank(comm)
Gather!(nothing, buf, count, root, comm)
else
Gather!(buf, nothing, count, root, comm)
end
recvbuf on the root process should be a buffer of length count*Comm_size(comm); on non-root processes it is ignored and can be nothing.
count should be the same for all processes.
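For example, a sketch gathering one element from every rank onto rank 0:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0
sendbuf = [rank]
# only the root needs a receive buffer
recvbuf = rank == root ? zeros(Int, MPI.Comm_size(comm)) : nothing
MPI.Gather!(sendbuf, recvbuf, 1, root, comm)
# on the root: recvbuf == [0, 1, ..., nprocs-1]
MPI.Finalize()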
See also
- Gather for the allocating operation
- Gatherv! if the number of elements varies between processes
- Allgather! to send the result to all processes
MPI.Gather
— Function
Gather(sendbuf[, count=length(sendbuf)], root, comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores the elements in rank order.
See also
- Gather! for the mutating operation
- Gatherv!/Gatherv if the number of elements varies between processes
- Allgather!/Allgather to send the result to all processes
MPI.Gatherv!
— Function
Gatherv!(sendbuf, recvbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores the elements in rank order in the buffer recvbuf.
sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gatherv). For example:
if root == MPI.Comm_rank(comm)
Gatherv!(nothing, buf, counts, root, comm)
else
Gatherv!(buf, nothing, counts, root, comm)
end
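A sketch with varying counts (assuming counts are accepted as a Vector{Cint}), where rank r sends r+1 elements to rank 0:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0
counts = Cint.(1:nprocs)             # rank r sends r+1 elements
sendbuf = fill(rank, counts[rank+1])
recvbuf = rank == root ? zeros(Int, sum(counts)) : nothing
MPI.Gatherv!(sendbuf, recvbuf, counts, root, comm)
# on the root, with 3 processes: recvbuf == [0, 1, 1, 2, 2, 2]
MPI.Finalize()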
See also
- Gatherv for the allocating operation
- Gather! if the number of elements is the same on all processes
- Allgatherv!/Allgatherv to send the result to all processes
MPI.Gatherv
— Function
Gatherv(sendbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores the elements in rank order.
See also
- Gatherv! for the mutating operation
- Gather!/Gather if the number of elements is the same on all processes
- Allgatherv!/Allgatherv to send the result to all processes
Scatter
MPI.Scatter!
— Function
Scatter!(sendbuf, recvbuf[, count=length(recvbuf)], root::Integer, comm::Comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length count, sending the j-th chunk to the process of rank j, which stores it in the recvbuf buffer.
sendbuf on the root process should be a buffer of length count*Comm_size(comm); on non-root processes it is ignored and can be nothing.
recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatter). For example:
if root == MPI.Comm_rank(comm)
Scatter!(buf, nothing, count, root, comm)
else
Scatter!(nothing, buf, count, root, comm)
end
count should be the same for all processes.
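For example, a sketch scattering chunks of two elements from rank 0:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0
nprocs = MPI.Comm_size(comm)
# only the root provides data: one chunk of length 2 per rank
sendbuf = rank == root ? collect(1:2*nprocs) : nothing
recvbuf = zeros(Int, 2)
MPI.Scatter!(sendbuf, recvbuf, 2, root, comm)
# rank j receives [2j+1, 2j+2]
MPI.Finalize()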
See also
- Scatter for the allocating operation
- Scatterv! if the number of elements varies between processes
MPI.Scatter
— Function
Scatter(sendbuf, count, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length count and sends the j-th chunk to the process of rank j, allocating the output buffer.
See also
- Scatter! for the mutating operation
- Scatterv!/Scatterv if the number of elements varies between processes
MPI.Scatterv!
— Function
Scatterv!(sendbuf, recvbuf, counts, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j], sending the j-th chunk to the process of rank j, which stores it in the recvbuf buffer; recvbuf must be of length at least counts[j].
recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatterv). For example:
if root == MPI.Comm_rank(comm)
Scatterv!(buf, nothing, counts, root, comm)
else
Scatterv!(nothing, buf, counts, root, comm)
end
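For example, a sketch (assuming counts are accepted as a Vector{Cint}) where rank j receives j+1 elements:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0
counts = Cint.(1:nprocs)             # rank j receives j+1 elements
sendbuf = rank == root ? collect(1:sum(counts)) : nothing
recvbuf = zeros(Int, counts[rank+1])
MPI.Scatterv!(sendbuf, recvbuf, counts, root, comm)
# with 3 processes: rank 0 gets [1], rank 1 gets [2, 3], rank 2 gets [4, 5, 6]
MPI.Finalize()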
See also
- Scatterv for the allocating operation
- Scatter! if the number of elements is the same on all processes
MPI.Scatterv
— Function
Scatterv(sendbuf, counts, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which allocates the output buffer.
See also
- Scatterv! for the mutating operation
- Scatter!/Scatter if the number of elements is the same on all processes
All-to-all
MPI.Alltoall!
— Function
Alltoall!(sendbuf, recvbuf, count::Integer, comm::Comm)
Alltoall!(sendrecvbuf, count::Integer, comm::Comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.
rank    send buf                      recv buf
----    --------                      --------
 0      a,b,c,d,e,f     Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F   ------------>   c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                   e,f,E,F,η,ν
If only one buffer sendrecvbuf is provided, the data is overwritten in place.
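For example, a sketch in which every rank sends one element (its own rank number) to every other rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
sendbuf = fill(rank, nprocs)       # one element destined for each rank
recvbuf = zeros(Int, nprocs)
MPI.Alltoall!(sendbuf, recvbuf, 1, comm)
# on every rank: recvbuf == [0, 1, ..., nprocs-1]
MPI.Finalize()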
See also
- Alltoall for the allocating operation
MPI.Alltoall
— Function
Alltoall(sendbuf, count::Integer, comm::Comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.
rank    send buf                      recv buf
----    --------                      --------
 0      a,b,c,d,e,f     Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F   ------------>   c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                   e,f,E,F,η,ν
See also
- Alltoall! for the mutating operation
MPI.Alltoallv!
— Function
Alltoallv!(sendbuf, recvbuf, scounts::Vector, rcounts::Vector, comm::Comm)
Similar to Alltoall!, except with different-sized chunks per process.
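For example, a sketch (assuming the counts are accepted as Vector{Cint}) in which rank r sends r+1 elements to every destination:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
scounts = fill(Cint(rank + 1), nprocs)  # this rank sends rank+1 elements to each rank
rcounts = Cint.(1:nprocs)               # and receives j+1 elements from rank j
sendbuf = fill(rank, sum(scounts))
recvbuf = zeros(Int, sum(rcounts))
MPI.Alltoallv!(sendbuf, recvbuf, scounts, rcounts, comm)
# on every rank, with 3 processes: recvbuf == [0, 1, 1, 2, 2, 2]
MPI.Finalize()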
See also
- Alltoallv for the allocating operation
MPI.Alltoallv
— Function
Alltoallv(sendbuf, recvbuf, scounts::Vector, rcounts::Vector, comm::Comm)
Similar to Alltoall, except with different-sized chunks per process.
See also
- Alltoallv! for the mutating operation
Reduce/Scan
MPI.Reduce!
— Function
Reduce!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], op, root::Integer, comm::Comm)
Reduce!(sendrecvbuf, op, root::Integer, comm::Comm)
Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.
On non-root processes recvbuf is ignored, and can be nothing.
To perform the reduction in place, provide a single buffer sendrecvbuf.
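For example, a sketch summing a two-element buffer elementwise onto rank 0:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0
sendbuf = Float64[rank, 2 * rank]
recvbuf = rank == root ? zeros(2) : nothing   # ignored on non-root ranks
MPI.Reduce!(sendbuf, recvbuf, length(sendbuf), MPI.SUM, root, comm)
# on the root: recvbuf holds the elementwise sum over all ranks
MPI.Finalize()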
See also
- Reduce to handle allocation of the output buffer
- Allreduce!/Allreduce to send the reduction to all ranks
- Op for details on reduction operators
MPI.Reduce
— Function
recvbuf = Reduce(sendbuf, op, root::Integer, comm::Comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
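For example, a scalar reduction:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
# sum of all rank numbers, delivered to rank 0 only
total = MPI.Reduce(rank, MPI.SUM, 0, comm)
# total == 0 + 1 + ... + (nprocs-1) on rank 0; nothing on other ranks
MPI.Finalize()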
See also
- Reduce! for mutating and in-place operations
- Allreduce!/Allreduce to send the reduction to all ranks
- Op for details on reduction operators
MPI.Allreduce!
— Function
Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)
Allreduce!(sendrecvbuf, op, comm)
Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf, storing the result in the recvbuf of all processes in the group.
Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.
If only one buffer sendrecvbuf is provided, then the operation is performed in place.
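For example, a sketch in which every rank obtains both the sum of all ranks and the process count:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
sendbuf = Float64[rank, 1]
recvbuf = zeros(2)
MPI.Allreduce!(sendbuf, recvbuf, MPI.SUM, comm)
# on every rank: recvbuf == [sum of all ranks, number of processes]
MPI.Finalize()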
See also
- Allreduce to handle allocation of the output buffer
- Reduce!/Reduce to send the reduction to a single rank
- Op for details on reduction operators
MPI.Allreduce
— Function
recvbuf = Allreduce(sendbuf, op, comm)
Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.
sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.
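For example, a scalar all-reduction with MPI.MAX:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
localval = MPI.Comm_rank(comm) + 1
maxval = MPI.Allreduce(localval, MPI.MAX, comm)
# maxval == Comm_size(comm) on every rank
MPI.Finalize()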
See also
- Allreduce! for mutating or in-place operations
- Reduce!/Reduce to send the reduction to a single rank
- Op for details on reduction operators
MPI.Scan!
— Function
Scan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Scan!(buf[, count::Integer], op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.
If only a single buffer is provided, the operations are performed in place in buf.
See also
- Scan to handle allocation of the output buffer
- Exscan!/Exscan for an exclusive scan
- Op for details on reduction operators
MPI.Scan
— Function
recvbuf = Scan(sendbuf, op, comm::Comm)
Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i.
sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.
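For example, a scalar inclusive scan:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
# running sum over ranks 0:rank (inclusive)
partial = MPI.Scan(rank, MPI.SUM, comm)
# rank 0 -> 0, rank 1 -> 1, rank 2 -> 3, ...
MPI.Finalize()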
See also
- Scan! for mutating or in-place operations
- Exscan!/Exscan for an exclusive scan
- Op for details on reduction operators
MPI.Exscan!
— Function
Exscan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Exscan!(buf[, count::Integer], op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
If only a single buf is provided, the operations are performed in place, and buf on rank 0 will remain unchanged.
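A common use is computing per-rank offsets; a sketch of the in-place form:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = [rank + 1]                   # e.g. number of items this rank produces
MPI.Exscan!(buf, MPI.SUM, comm)
offset = rank == 0 ? 0 : buf[1]    # buf is left unchanged on rank 0
# rank 0 -> 0, rank 1 -> 1, rank 2 -> 3, ...
MPI.Finalize()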
See also
- Exscan to handle allocation of the output buffer
- Scan!/Scan for an inclusive scan
- Op for details on reduction operators
MPI.Exscan
— Function
recvbuf = Exscan(sendbuf, op, comm::Comm)
Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op over ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.
See also
- Exscan! for mutating and in-place operations
- Scan!/Scan for an inclusive scan
- Op for details on reduction operators