Collective communication

Synchronization

MPI.Barrier (Function)
Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

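For illustration, a minimal usage sketch (assuming the script is launched with mpiexec):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
# ... each rank does some independent work ...
MPI.Barrier(comm)   # no rank continues past this line until all ranks have reached it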

Broadcast

MPI.Bcast! (Function)
Bcast!(buf[, count=length(buf)], root::Integer, comm::Comm)

Broadcast the first count elements of the buffer buf from root to all processes.

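For illustration, a minimal sketch following the signature above (hypothetical Float64 data):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
# The root fills the buffer; the other ranks allocate space of the same length.
buf = MPI.Comm_rank(comm) == root ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, length(buf), root, comm)
# buf now contains [1.0, 2.0, 3.0, 4.0] on every rank.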

Gather/Scatter

Gather

MPI.Allgather! (Function)
Allgather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], comm::Comm)
Allgather!(sendrecvbuf, count::Integer, comm::Comm)

Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order in recvbuf.

If only one buffer sendrecvbuf is provided, then each process's send data is assumed to already occupy the region of the buffer where it would receive its own contribution.

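For illustration, a minimal sketch following the signature above (hypothetical data):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
sendbuf = fill(Float64(rank), 2)      # each rank contributes 2 elements
recvbuf = zeros(2 * nprocs)
MPI.Allgather!(sendbuf, recvbuf, 2, comm)
# On every rank, recvbuf == [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, ...]
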
MPI.Allgather (Function)
Allgather(sendbuf[, count=length(sendbuf)], comm)

Each process sends the first count elements of sendbuf to the other processes, which allocate the output buffer and store the results in rank order.

MPI.Allgatherv! (Function)
Allgatherv!(sendbuf, recvbuf, counts, comm)
Allgatherv!(sendrecvbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.

If only one buffer sendrecvbuf is provided, then for each process the data to be sent is taken from the interval of sendrecvbuf where it would store its own data.

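For illustration, a sketch with per-rank counts (hypothetical data; the counts are passed as a Cint vector to be conservative):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
counts = Cint[r + 1 for r in 0:nprocs-1]        # rank r contributes r+1 elements
sendbuf = fill(Float64(rank), counts[rank + 1])
recvbuf = zeros(sum(counts))
MPI.Allgatherv!(sendbuf, recvbuf, counts, comm)
# recvbuf holds rank 0's 1 element, then rank 1's 2 elements, and so on, on every rank.
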
MPI.Allgatherv (Function)
Allgatherv(sendbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.

MPI.Gather! (Function)
Gather!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], root::Integer, comm::Comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.

sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:

if root == MPI.Comm_rank(comm)
    Gather!(nothing, buf, count, root, comm)
else
    Gather!(buf, nothing, count, root, comm)
end

recvbuf on the root process should be a buffer of length count*Comm_size(comm), and on non-root processes it is ignored and can be nothing.

count should be the same for all processes.

See also

  • Gather for the allocating operation.
  • Gatherv! if the number of elements varies between processes.
  • Allgather! to send the result to all processes.

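For illustration, a plain (non in-place) sketch following the signature above:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
sendbuf = fill(Float64(rank), 3)
# Only the root needs a receive buffer, of length count * Comm_size(comm).
recvbuf = rank == root ? zeros(3 * MPI.Comm_size(comm)) : nothing
MPI.Gather!(sendbuf, recvbuf, 3, root, comm)
# On the root, recvbuf now holds every rank's 3 elements in rank order.
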
MPI.Gather (Function)
Gather(sendbuf[, count=length(sendbuf)], root, comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

MPI.Gatherv! (Function)
Gatherv!(sendbuf, recvbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.

sendbuf can be nothing on the root process, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gatherv). For example:

if root == MPI.Comm_rank(comm)
    Gatherv!(nothing, buf, counts, root, comm)
else
    Gatherv!(buf, nothing, counts, root, comm)
end

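For illustration, a sketch with varying counts (assuming, as for Gather!, that recvbuf is ignored on non-root ranks; counts passed as a Cint vector to be conservative):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
counts = Cint[r + 1 for r in 0:MPI.Comm_size(comm)-1]   # rank r sends r+1 elements
sendbuf = fill(Float64(rank), counts[rank + 1])
recvbuf = rank == root ? zeros(sum(counts)) : nothing   # assumption: ignored off the root
MPI.Gatherv!(sendbuf, recvbuf, counts, root, comm)
# On the root, recvbuf now holds each rank's contribution in rank order.
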
MPI.Gatherv (Function)
Gatherv(sendbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.


Scatter

MPI.Scatter! (Function)
Scatter!(sendbuf, recvbuf[, count=length(recvbuf)], root::Integer, comm::Comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length count and sends the j-th chunk to the process of rank j, which stores it in the buffer recvbuf.

sendbuf on the root process should be a buffer of length count*Comm_size(comm), and on non-root processes it is ignored and can be nothing.

recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatter). For example:

if root == MPI.Comm_rank(comm)
    Scatter!(buf, nothing, count, root, comm)
else
    Scatter!(nothing, buf, count, root, comm)        
end

count should be the same for all processes.

See also

  • Scatter to allocate the output buffer.
  • Scatterv! if the number of elements varies between processes.

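For illustration, a plain (non in-place) sketch following the signature above:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
count = 2
# Only the root needs the send buffer, of length count * Comm_size(comm).
sendbuf = rank == root ? collect(1.0:count*MPI.Comm_size(comm)) : nothing
recvbuf = zeros(count)
MPI.Scatter!(sendbuf, recvbuf, count, root, comm)
# Rank j now holds elements (j*count + 1):(j*count + count) of the root's sendbuf.
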
MPI.Scatter (Function)
Scatter(sendbuf, count, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j, allocating the output buffer.

See also

  • Scatter! for the mutating operation.
  • Scatterv! if the number of elements varies between processes.

MPI.Scatterv! (Function)
Scatterv!(sendbuf, recvbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which stores it in the buffer recvbuf; recvbuf must be of length at least counts[j].

recvbuf can be nothing on the root process, in which case it is unmodified (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Scatterv). For example:

if root == MPI.Comm_rank(comm)
    Scatterv!(buf, nothing, counts, root, comm)
else
    Scatterv!(nothing, buf, counts, root, comm)
end

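For illustration, a sketch with varying chunk lengths (assuming, as for Scatter!, that sendbuf is ignored on non-root ranks; counts passed as a Cint vector to be conservative):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
counts = Cint[r + 1 for r in 0:MPI.Comm_size(comm)-1]          # rank r receives r+1 elements
sendbuf = rank == root ? collect(1.0:sum(counts)) : nothing    # assumption: ignored off the root
recvbuf = zeros(counts[rank + 1])
MPI.Scatterv!(sendbuf, recvbuf, counts, root, comm)
# Rank r now holds its counts[r + 1]-element chunk of the root's sendbuf.
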
MPI.Scatterv (Function)
Scatterv(sendbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which allocates the output buffer.


All-to-all

MPI.Alltoall! (Function)
Alltoall!(sendbuf, recvbuf, count::Integer, comm::Comm)
Alltoall!(sendrecvbuf, count::Integer, comm::Comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If only one buffer sendrecvbuf is provided, the data is overwritten in place.

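For illustration, a sketch following the signature above (hypothetical data):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
count = 2
sendbuf = Float64[100rank + i for i in 1:count*nprocs]   # chunk j is destined for rank j-1
recvbuf = zeros(count * nprocs)
MPI.Alltoall!(sendbuf, recvbuf, count, comm)
# Chunk j of recvbuf now holds the chunk that rank j-1 addressed to this rank.
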
MPI.Alltoall (Function)
Alltoall(sendbuf, count::Integer, comm::Comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

MPI.Alltoallv! (Function)
Alltoallv!(sendbuf, recvbuf, scounts::Vector, rcounts::Vector, comm::Comm)

Similar to Alltoall!, except with different size chunks per process.

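For illustration, a sketch in which rank r sends r+1 elements to every process (hypothetical data; counts passed as Cint vectors to be conservative):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
scounts = fill(Cint(rank + 1), nprocs)      # this rank sends rank+1 elements to everyone
rcounts = Cint[r + 1 for r in 0:nprocs-1]   # and receives r+1 elements from rank r
sendbuf = fill(Float64(rank), sum(scounts))
recvbuf = zeros(sum(rcounts))
MPI.Alltoallv!(sendbuf, recvbuf, scounts, rcounts, comm)
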
MPI.Alltoallv (Function)
Alltoallv(sendbuf, recvbuf, scounts::Vector, rcounts::Vector, comm::Comm)

Similar to Alltoall, except with different size chunks per process.


Reduce/Scan

MPI.Reduce! (Function)
Reduce!(sendbuf, recvbuf[, count::Integer=length(sendbuf)], op, root::Integer, comm::Comm)
Reduce!(sendrecvbuf, op, root::Integer, comm::Comm)

Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored, and can be nothing.

To perform the reduction in place, provide a single buffer sendrecvbuf.

See also

  • Reduce to handle allocation of the output buffer.
  • Allreduce!/Allreduce to send reduction to all ranks.
  • Op for details on reduction operators.

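For illustration, a sketch of an elementwise sum onto the root (using the predefined MPI.SUM operator):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
sendbuf = Float64[rank, 2rank]
recvbuf = rank == root ? zeros(2) : nothing   # ignored on non-root ranks
MPI.Reduce!(sendbuf, recvbuf, length(sendbuf), MPI.SUM, root, comm)
# On the root, recvbuf[k] is the sum of sendbuf[k] over all ranks.
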
MPI.Reduce (Function)
recvbuf = Reduce(sendbuf, op, root::Integer, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

MPI.Allreduce! (Function)
Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)
Allreduce!(sendrecvbuf, op, comm)

Performs elementwise reduction using the operator op on the first count elements of the buffer sendbuf, storing the result in the recvbuf of all processes in the group.

Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

See also

  • Allreduce, to handle allocation of the output buffer.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

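For illustration, a sketch of a global elementwise sum visible on every rank:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
sendbuf = Float64[rank, 2rank]
recvbuf = similar(sendbuf)
MPI.Allreduce!(sendbuf, recvbuf, length(sendbuf), MPI.SUM, comm)
# Every rank's recvbuf now holds the elementwise sum over all ranks.
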
MPI.Allreduce (Function)
recvbuf = Allreduce(sendbuf, op, comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

See also

  • Allreduce! for mutating or in-place operations.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

MPI.Scan! (Function)
Scan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Scan!(buf[, count::Integer], op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

If only a single buffer is provided, then operations will be performed in-place in buf.

See also

  • Scan to handle allocation of the output buffer
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

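For illustration, a sketch of an in-place inclusive prefix sum:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = [Float64(rank + 1)]
MPI.Scan!(buf, 1, MPI.SUM, comm)
# On rank i, buf[1] is now 1 + 2 + ... + (i + 1), i.e. the sum over ranks 0:i.
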
MPI.Scan (Function)
recvbuf = Scan(sendbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.

See also

  • Scan! for mutating or in-place operations
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

MPI.Exscan! (Function)
Exscan!(sendbuf, recvbuf[, count::Integer], op, comm::Comm)
Exscan!(buf[, count::Integer], op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

If only a single buf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.

See also

  • Exscan to handle allocation of the output buffer
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

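For illustration, a sketch of an in-place exclusive prefix sum:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
buf = [Float64(rank + 1)]
MPI.Exscan!(buf, 1, MPI.SUM, comm)
# On rank i > 0, buf[1] is now the sum of the values from ranks 0:i-1; on rank 0 it is unchanged.
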
MPI.Exscan (Function)
recvbuf = Exscan(sendbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

See also

  • Exscan! for mutating and in-place operations
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.
