Function reference
The following functions are currently wrapped, with the convention: MPI_Fun => MPI.Fun
Constants like MPI_SUM are wrapped as MPI.SUM. Note also that arbitrary Julia functions f(x,y) can be passed as reduction operations to the MPI Allreduce and Reduce functions.
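For illustration, here is a minimal sketch of passing a custom Julia function as the reduction operator (the function name and values are arbitrary; the Allreduce signature used is the one documented below):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

# Any associative Julia function f(x, y) can act as the reduction operator;
# it is applied element-wise across the buffers contributed by the ranks.
maxabs(x, y) = abs(x) >= abs(y) ? x : y

val = Float64[MPI.Comm_rank(comm) - 1]
result = MPI.Allreduce(val, maxabs, comm)  # identical result on every rank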
Datatype functions
| Julia Function (assuming import MPI) | Fortran Function |
|---|---|
| MPI.Get_address | MPI_Get_address |
| MPI.mpitype | MPI_Type_create_struct / MPI_Type_commit |
mpitype is not strictly a wrapper for MPI_Type_create_struct and MPI_Type_commit; it also acts as an accessor for previously created types.
Missing docstring for MPI.Get_address. Check Documenter's build log for details.
Missing docstring for MPI.mpitype. Check Documenter's build log for details.
Collective communication
The non-allocating Julia functions map directly to the corresponding MPI operations, after asserting that the size of the output buffer is sufficient to store the result.
The allocating Julia functions allocate an output buffer and then call the non-allocating method.
All-to-all collective communications support in-place operations by passing MPI.IN_PLACE with the same syntax documented by MPI. One-to-all communications support them through the *_in_place! variants, which call the MPI functions with the appropriate arguments on the root and non-root processes.
MPI.Allgather! — Function
Allgather!(sendbuf, recvbuf, count, comm)
Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order into recvbuf.
If sendbuf==MPI.IN_PLACE, the input data is assumed to be in the area of recvbuf where the process would receive its own contribution.
Allgather!(buf, count, comm)
Equivalent to Allgather!(MPI.IN_PLACE, buf, count, comm).
MPI.Allgather — Function
Allgather(sendbuf[, count=length(sendbuf)], comm)
Each process sends the first count elements of sendbuf to the other processes, which allocate the output buffer and store the results in rank order.
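As an illustrative sketch of the allocating form (values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

send = Int[rank, rank]            # two elements contributed by each process
recv = MPI.Allgather(send, comm)  # length 2*Comm_size(comm), ordered by rank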
MPI.Allgatherv! — Function
Allgatherv!(sendbuf, recvbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.
If sendbuf==MPI.IN_PLACE, the data to be sent by each process is taken from the interval of recvbuf where it would store its own data.
MPI.Allgatherv — Function
Allgatherv(sendbuf, counts, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.
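A minimal sketch with per-rank counts (values are arbitrary; counts are given as Cint, matching the underlying MPI count type):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

counts = Cint.(1:nprocs)                   # rank r contributes counts[r + 1] = r + 1 elements
send = fill(rank, counts[rank + 1])
recv = MPI.Allgatherv(send, counts, comm)  # length sum(counts), grouped by rank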
MPI.Allreduce! — Function
Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)
Performs op reduction on the first count elements of the buffer sendbuf, storing the result in the recvbuf of all processes in the group.
All-reduce is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.
If sendbuf==MPI.IN_PLACE the data is read from recvbuf and then overwritten with the results.
To handle allocation of the output buffer, see Allreduce.
Allreduce!(buf, op, comm)
Performs op reduction in place on the buffer buf, overwriting it with the result on all processes in the group.
Equivalent to calling Allreduce!(MPI.IN_PLACE, buf, op, comm).
MPI.Allreduce — Function
Allreduce(sendbuf, op, comm)
Performs op reduction on the buffer sendbuf, allocating and returning the output buffer in all processes of the group.
To specify the output buffer or perform the operation in place, see Allreduce!.
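A minimal sketch of the in-place variant (values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

buf = Float64[MPI.Comm_rank(comm), 1.0]
MPI.Allreduce!(buf, MPI.SUM, comm)  # buf now holds the element-wise global sum on every rank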
MPI.Alltoall! — Function
Alltoall!(sendbuf, recvbuf, count, comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If sendbuf==MPI.IN_PLACE, data is sent from recvbuf and then overwritten.
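A minimal sketch of the non-allocating form, mirroring the diagram above (values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
nprocs = MPI.Comm_size(comm)

count = 2
send = fill(MPI.Comm_rank(comm), count * nprocs)  # one chunk of length count per destination
recv = similar(send)
MPI.Alltoall!(send, recv, count, comm)            # the j-th chunk of recv comes from rank j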
MPI.Alltoall — Function
Alltoall(sendbuf, count, comm)
Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.
rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f      Alltoall       a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

MPI.Alltoallv! — Function
Alltoallv!(sendbuf::T, recvbuf::T, scounts, rcounts, comm)
MPI.IN_PLACE is not supported for this operation.
Missing docstring for MPI.Alltoallv. Check Documenter's build log for details.
MPI.Barrier — Function
Barrier(comm::Comm)
Blocks until comm is synchronized.
If comm is an intracommunicator, then it blocks until all members of the group have called it.
If comm is an intercommunicator, then it blocks until all members of the other group have called it.
MPI.Bcast! — Function
Bcast!(buf[, count=length(buf)], root, comm::Comm)
Broadcast the first count elements of the buffer buf from root to all processes.
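A minimal sketch (the broadcast values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0

buf = MPI.Comm_rank(comm) == root ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, root, comm)  # every rank now holds [1.0, 2.0, 3.0, 4.0]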
Missing docstring for MPI.Exscan. Check Documenter's build log for details.
MPI.Gather! — Function
Gather!(sendbuf, recvbuf, count, root, comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.
count should be the same for all processes. If the number of elements varies between processes, use Gatherv! instead.
To perform the operation in place, refer to Gather_in_place!.
MPI.Gather — Function
Gather(sendbuf[, count=length(sendbuf)], root, comm)
Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.
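A minimal sketch of the allocating form (values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0

send = Int[MPI.Comm_rank(comm)]
recv = MPI.Gather(send, root, comm)  # on root: every rank's value, in rank order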
MPI.Gather_in_place! — Function
Gather_in_place!(buf, count, root, comm)
Each process sends the first count elements of the buffer buf to the root process. The root process stores the elements in rank order in the buffer buf, sending no data to itself.
This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Gather!(MPI.IN_PLACE, buf, count, root, comm)
else
    Gather!(buf, C_NULL, count, root, comm)
end

MPI.Gatherv! — Function
Gatherv!(sendbuf, recvbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.
To perform the operation in place, refer to Gatherv_in_place!.
MPI.Gatherv — Function
Gatherv(sendbuf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.
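A minimal sketch with per-rank counts (values are arbitrary; counts are given as Cint, matching the underlying MPI count type):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)

counts = Cint.(1:MPI.Comm_size(comm))         # rank r contributes counts[r + 1] = r + 1 elements
send = fill(rank, counts[rank + 1])
recv = MPI.Gatherv(send, counts, root, comm)  # on root: sum(counts) elements in rank order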
MPI.Gatherv_in_place! — Function
Gatherv_in_place!(buf, counts, root, comm)
Each process sends the first counts[rank] elements of the buffer buf to the root process. The root stores the received elements in rank order in the buffer buf, sending no data to itself.
This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Gatherv!(MPI.IN_PLACE, buf, counts, root, comm)
else
    Gatherv!(buf, C_NULL, counts, root, comm)
end

MPI.Reduce! — Function
Reduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, root, comm)
Performs op reduction on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.
On non-root processes recvbuf is ignored.
To perform the reduction in place, see Reduce_in_place!.
To handle allocation of the output buffer, see Reduce.
MPI.Reduce — Function
Reduce(sendbuf, count, op, root, comm)
Performs op reduction on the buffer sendbuf and stores the result in an output buffer allocated on the process of rank root. An empty array will be returned on all other processes.
To specify the output buffer, see Reduce!.
To perform the reduction in place, see Reduce_in_place!.
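A minimal sketch of the allocating form (values are arbitrary):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0

send = Float64[MPI.Comm_rank(comm), 1.0]
recv = MPI.Reduce(send, length(send), MPI.SUM, root, comm)  # element-wise sum, available on root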
MPI.Reduce_in_place! — Function
Reduce_in_place!(buf, count, op, root, comm)
Performs op reduction on the first count elements of the buffer buf and stores the result in buf on the root process of the group.
This is equivalent to calling

if root == MPI.Comm_rank(comm)
    Reduce!(MPI.IN_PLACE, buf, count, op, root, comm)
else
    Reduce!(buf, C_NULL, count, op, root, comm)
end

To handle allocation of the output buffer, see Reduce.
To specify a separate output buffer, see Reduce!.
Missing docstring for MPI.Scan. Check Documenter's build log for details.
MPI.Scatter! — Function
Scatter!(sendbuf, recvbuf, count, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least count.
count should be the same for all processes. If the number of elements varies between processes, use Scatterv! instead.
To perform the operation in place, see Scatter_in_place!.
To handle allocation of the output buffer, see Scatter.
MPI.Scatter — Function
Scatter(sendbuf, count, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j, allocating the output buffer.
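A minimal sketch of the allocating form (values are arbitrary; only the root's send buffer is actually used):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
nprocs = MPI.Comm_size(comm)

count = 2
send = collect(1:count * nprocs)             # significant on root only
recv = MPI.Scatter(send, count, root, comm)  # each rank receives its own chunk of length count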
MPI.Scatter_in_place! — Function
Scatter_in_place!(buf, count, root, comm)
Splits the buffer buf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j. No data is sent to the root process.
This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Scatter!(buf, MPI.IN_PLACE, count, root, comm)
else
    Scatter!(C_NULL, buf, count, root, comm)
end

To specify a separate output buffer, see Scatter!.
To handle allocation of the output buffer, see Scatter.
MPI.Scatterv! — Function
Scatterv!(sendbuf, recvbuf, counts, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least counts[j].
To perform the operation in place, refer to Scatterv_in_place!.
MPI.Scatterv — Function
Scatterv(sendbuf, counts, root, comm)
Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which allocates the output buffer.
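A minimal sketch with per-rank counts (values are arbitrary; only the root's send buffer is actually used, and counts are given as Cint):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
root = 0
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

counts = Cint.(1:nprocs)                       # rank r receives counts[r + 1] = r + 1 elements
send = collect(1:sum(counts))                  # significant on root only
recv = MPI.Scatterv(send, counts, root, comm)  # chunk of length counts[rank + 1] on each rank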
MPI.Scatterv_in_place! — Function
Scatterv_in_place!(buf, counts, root, comm)
Splits the buffer buf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into the buf buffer, which must be of length at least counts[j]. The root process sends nothing to itself.
This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Scatterv!(buf, MPI.IN_PLACE, counts, root, comm)
else
    Scatterv!(C_NULL, buf, counts, root, comm)
end

One-sided communication
MPI.Win_create — Function
MPI.Win_create(base::Array, comm::Comm; infokws...)
Create a window over the array base, returning a Win object used by these processes to perform RMA operations.
This is a collective call over comm.
infokws are info keys providing optimization hints.
MPI.free should be called on the Win object once operations have been completed.
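A minimal sketch of the window life cycle described above; the RMA calls themselves (e.g. MPI.Put/MPI.Get inside the appropriate synchronization epochs) are omitted:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

base = zeros(Float64, 10)         # local memory exposed to the other processes
win = MPI.Win_create(base, comm)  # collective over comm

# ... RMA operations on win go here ...

MPI.free(win)                     # release the window once operations have completed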
MPI.Win_create_dynamic — Function
MPI.Win_create_dynamic(comm::Comm; infokws...)
Create a dynamic window, returning a Win object used by these processes to perform RMA operations.
This is a collective call over comm.
infokws are info keys providing optimization hints.
MPI.free should be called on the Win object once operations have been completed.
MPI.Win_allocate_shared — Function
(win, ptr) = MPI.Win_allocate_shared(T, len, comm::Comm; infokws...)
Create and allocate a shared-memory window for objects of type T and length len, returning a Win and a Ptr{T} object used by these processes to perform RMA operations.
This is a collective call over comm.
infokws are info keys providing optimization hints.
MPI.free should be called on the Win object once operations have been completed.
Missing docstring for MPI.Win_shared_query. Check Documenter's build log for details.
Missing docstring for MPI.Win_attach. Check Documenter's build log for details.
Missing docstring for MPI.Win_detach. Check Documenter's build log for details.
Missing docstring for MPI.Win_fence. Check Documenter's build log for details.
Missing docstring for MPI.Win_flush. Check Documenter's build log for details.
Missing docstring for MPI.Win_free. Check Documenter's build log for details.
Missing docstring for MPI.Win_sync. Check Documenter's build log for details.
Missing docstring for MPI.Win_lock. Check Documenter's build log for details.
Missing docstring for MPI.Win_unlock. Check Documenter's build log for details.
Missing docstring for MPI.Get. Check Documenter's build log for details.
Missing docstring for MPI.Put. Check Documenter's build log for details.
Missing docstring for MPI.Fetch_and_op. Check Documenter's build log for details.
Missing docstring for MPI.Accumulate. Check Documenter's build log for details.
Missing docstring for MPI.Get_accumulate. Check Documenter's build log for details.
Info objects
MPI.Info — Type
Info <: AbstractDict{Symbol,String}
MPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.
Usage
These will typically be hidden from user-facing APIs by splatting keywords, e.g.
function f(args...; kwargs...)
    info = Info(kwargs...)
    # pass `info` object to `ccall`
end

For manual usage, Info objects act like Julia Dict objects:
info = Info(init=true) # keyword argument is required
info[key] = value
x = info[key]
delete!(info, key)

If init=false is used in the constructor (the default), a "null" Info object will be returned: no keys can be added to such an object.
MPI.infoval — Function
infoval(x)
Convert Julia object x to a string representation for storing in an Info object.
The MPI specification allows passing strings, Boolean values, integers, and lists.
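As an illustrative sketch of the conversion (the exact output strings shown in the comments are assumptions):

MPI.infoval(true)      # expected to give "true"
MPI.infoval(3)         # expected to give "3"
MPI.infoval("shared")  # strings are expected to pass through unchanged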