Function reference
The following functions are currently wrapped, with the convention MPI_Fun => MPI.Fun. Constants like MPI_SUM are wrapped as MPI.SUM. Note also that arbitrary Julia functions f(x,y) can be passed as reduction operations to the MPI Allreduce and Reduce functions.
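For example, a minimal sketch of passing a user-defined (associative) two-argument function in place of a built-in operation such as MPI.SUM:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Element-wise reduction with a custom Julia function instead of MPI.SUM.
result = MPI.Allreduce([Float64(rank)], (x, y) -> max(x, y), comm)
println("rank $rank: largest rank = $(result[1])")

MPI.Finalize()
```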
Datatype functions
Julia Function (assuming import MPI) | Fortran Function
---|---
MPI.Get_address | MPI_Get_address
MPI.mpitype | MPI_Type_create_struct / MPI_Type_commit
mpitype is not strictly a wrapper for MPI_Type_create_struct and MPI_Type_commit; it is also an accessor for previously created types.
Missing docstring for MPI.Get_address. Check Documenter's build log for details.
Missing docstring for MPI.mpitype. Check Documenter's build log for details.
Point-to-point communication
Missing docstring for MPI.Cancel!. Check Documenter's build log for details.
Missing docstring for MPI.Get_count. Check Documenter's build log for details.
Missing docstring for MPI.Iprobe. Check Documenter's build log for details.
MPI.Irecv! — Function.

Irecv!(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype, src::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking receive of up to count elements of type datatype into buf from MPI rank src of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking receive.

Irecv!(buf::MPIBuffertype{T}, count::Integer, src::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking receive of up to count elements into buf from MPI rank src of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking receive.

Irecv!(buf::Array{T}, src::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking receive into buf from MPI rank src of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking receive.
MPI.Isend — Function.

Isend(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of count elements of type datatype from buf to MPI rank dest of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking send.

Isend(buf::MPIBuffertype{T}, count::Integer, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of count elements of buf to MPI rank dest of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking send.

Isend(buf::Array{T}, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of buf to MPI rank dest of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking send.

Isend(obj::T, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of obj to MPI rank dest of communicator comm, using the message tag tag. Returns the communication Request for the nonblocking send.
Missing docstring for MPI.Probe. Check Documenter's build log for details.
MPI.Recv! — Function.

Recv!(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype, src::Integer, tag::Integer, comm::Comm) where T

Completes a blocking receive of up to count elements of type datatype into buf from MPI rank src of communicator comm, using the message tag tag. Returns the Status of the receive.

Recv!(buf::MPIBuffertype{T}, count::Integer, src::Integer, tag::Integer, comm::Comm) where T

Completes a blocking receive of up to count elements into buf from MPI rank src of communicator comm, using the message tag tag. Returns the Status of the receive.

Recv!(buf::Array{T}, src::Integer, tag::Integer, comm::Comm) where T

Completes a blocking receive into buf from MPI rank src of communicator comm, using the message tag tag. Returns the Status of the receive.
MPI.Send — Function.

Send(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of count elements of type datatype from buf to MPI rank dest of communicator comm, using the message tag tag.

Send(buf::MPIBuffertype{T}, count::Integer, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of count elements of buf to MPI rank dest of communicator comm, using the message tag tag.

Send(buf::AbstractArray{T}, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of buf to MPI rank dest of communicator comm, using the message tag tag.

Send(obj::T, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of obj to MPI rank dest of communicator comm, using the message tag tag.
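A minimal sketch of blocking point-to-point communication between rank 0 and rank 1 (run with at least two processes, e.g. mpiexec -n 2 julia script.jl):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
tag  = 0

if rank == 0
    MPI.Send(collect(1.0:4.0), 1, tag, comm)   # blocking send of a 4-element array to rank 1
elseif rank == 1
    buf = Array{Float64}(undef, 4)
    status = MPI.Recv!(buf, 0, tag, comm)      # blocking receive; returns a Status
    println("rank 1 received $buf")
end

MPI.Finalize()
```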
Missing docstring for MPI.Test!. Check Documenter's build log for details.
Missing docstring for MPI.Testall!. Check Documenter's build log for details.
Missing docstring for MPI.Testany!. Check Documenter's build log for details.
Missing docstring for MPI.Testsome!. Check Documenter's build log for details.
MPI.Wait! — Function.

Wait!(req::Request)

Wait on the request req to be complete. Returns the Status of the request.
MPI.Waitall! — Function.

Waitall!(reqs::Vector{Request})

Wait on all the requests in the array reqs to be complete. Returns an array of the statuses of all the requests.
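A minimal sketch of the nonblocking counterpart of the Send/Recv! example above: post Irecv! and Isend, then wait on both requests (assumes an even number of processes so ranks can be paired):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
tag  = 0

# Pair up neighbouring ranks: 0 <-> 1, 2 <-> 3, ...
other   = iseven(rank) ? rank + 1 : rank - 1
recvbuf = Array{Float64}(undef, 4)

rreq = MPI.Irecv!(recvbuf, other, tag, comm)               # post the receive first
sreq = MPI.Isend(fill(Float64(rank), 4), other, tag, comm)
MPI.Waitall!([rreq, sreq])                                 # block until both requests complete

MPI.Finalize()
```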
MPI.Waitany! — Function.

Waitany!(reqs::Vector{Request})

Wait on any of the requests in the array reqs to be complete. Returns the index of the completed request and its Status as a tuple.
Missing docstring for MPI.Waitsome!. Check Documenter's build log for details.
Collective communication
The non-allocating Julia functions map directly to the corresponding MPI operations, after asserting that the size of the output buffer is sufficient to store the result.
The allocating Julia functions allocate an output buffer and then call the non-allocating method.
All-to-all collective communications support in-place operation by passing MPI.IN_PLACE with the same syntax documented by MPI. One-to-all communications support it through the *_in_place! variants, which call the MPI functions with the appropriate arguments on the root and non-root processes.
MPI.Allgather! — Function.

Allgather!(sendbuf, recvbuf, count, comm)

Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order into recvbuf.

If sendbuf==MPI.IN_PLACE, the input data is assumed to be in the area of recvbuf where the process would receive its own contribution.

Allgather!(buf, count, comm)

Equivalent to Allgather!(MPI.IN_PLACE, buf, count, comm).
MPI.Allgather — Function.

Allgather(sendbuf[, count=length(sendbuf)], comm)

Each process sends the first count elements of sendbuf to the other processes, which allocate the output buffer and store the results in rank order.
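A minimal sketch of the allocating variant, with one element contributed per rank:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

gathered = MPI.Allgather([rank], comm)   # count defaults to length(sendbuf) == 1
# every rank now holds [0, 1, ..., Comm_size(comm)-1]

MPI.Finalize()
```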
MPI.Allgatherv! — Function.

Allgatherv!(sendbuf, recvbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.

If sendbuf==MPI.IN_PLACE, the data to be sent is taken from the interval of recvbuf where the process would store its own data.
MPI.Allgatherv — Function.

Allgatherv(sendbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.
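A minimal sketch with variable contributions (rank r contributes r+1 elements); passing counts as a Cint vector is a conservative assumption here:

```julia
using MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

counts  = Cint[r + 1 for r in 0:nprocs-1]         # rank r contributes r+1 elements
sendbuf = fill(Float64(rank), rank + 1)
recvbuf = MPI.Allgatherv(sendbuf, counts, comm)   # length(recvbuf) == sum(counts) on every rank

MPI.Finalize()
```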
MPI.Allreduce! — Function.

Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)

Performs op reduction on the first count elements of the buffer sendbuf, storing the result in recvbuf on all processes in the group.

All-reduce is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If sendbuf==MPI.IN_PLACE, the data is read from recvbuf and then overwritten with the result.

To handle allocation of the output buffer, see Allreduce.

Allreduce!(buf, op, comm)

Performs op reduction in place on the buffer buf, overwriting it with the result on all processes in the group. Equivalent to calling Allreduce!(MPI.IN_PLACE, buf, op, comm).
MPI.Allreduce — Function.

Allreduce(sendbuf, op, comm)

Performs op reduction on the buffer sendbuf, allocating and returning the output buffer on all processes in the group.

To specify the output buffer or perform the operation in place, see Allreduce!.
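A minimal sketch summing one value per rank, showing both the allocating and the in-place forms:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

total = MPI.Allreduce([rank], MPI.SUM, comm)   # allocating form: a new result buffer on every rank

buf = [Float64(rank)]
MPI.Allreduce!(buf, MPI.SUM, comm)             # in-place form: buf is overwritten with the sum

MPI.Finalize()
```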
MPI.Alltoall! — Function.

Alltoall!(sendbuf, recvbuf, count, comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.
rank send buf recv buf
---- -------- --------
0 a,b,c,d,e,f Alltoall a,b,A,B,α,β
1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ
2 α,β,γ,ψ,η,ν e,f,E,F,η,ν
If sendbuf==MPI.IN_PLACE, data is sent from the recvbuf and then overwritten.
MPI.Alltoall — Function.

Alltoall(sendbuf, count, comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.
rank send buf recv buf
---- -------- --------
0 a,b,c,d,e,f Alltoall a,b,A,B,α,β
1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ
2 α,β,γ,ψ,η,ν e,f,E,F,η,ν
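A minimal sketch mirroring the table above (each rank sends count elements to every other rank; the concrete buffer contents are illustrative):

```julia
using MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
count  = 2

sendbuf = collect(rank * 100 .+ (1:count * nprocs))   # count elements destined for each rank
recvbuf = MPI.Alltoall(sendbuf, count, comm)          # chunk j of recvbuf came from rank j

MPI.Finalize()
```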
MPI.Alltoallv! — Function.

Alltoallv!(sendbuf::T, recvbuf::T, scounts, rcounts, comm)

MPI.IN_PLACE is not supported for this operation.
Missing docstring for MPI.Alltoallv. Check Documenter's build log for details.
MPI.Barrier — Function.

Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.
MPI.Bcast! — Function.

Bcast!(buf[, count=length(buf)], root, comm::Comm)

Broadcast the first count elements of the buffer buf from root to all processes.
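A minimal sketch: rank 0 fills the buffer and every other rank receives a copy.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

buf = rank == 0 ? collect(1.0:5.0) : zeros(5)
MPI.Bcast!(buf, 0, comm)    # count defaults to length(buf)
# buf is now [1.0, 2.0, 3.0, 4.0, 5.0] on every rank

MPI.Finalize()
```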
Missing docstring for MPI.Exscan. Check Documenter's build log for details.
MPI.Gather! — Function.

Gather!(sendbuf, recvbuf, count, root, comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.

count should be the same for all processes. If the number of elements varies between processes, use Gatherv! instead.

To perform the operation in place, refer to Gather_in_place!.
MPI.Gather — Function.

Gather(sendbuf[, count=length(sendbuf)], root, comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores the elements in rank order.
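A minimal sketch of the allocating variant with one value per rank:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

gathered = MPI.Gather([rank], root, comm)   # result is only meaningful on the root, in rank order

MPI.Finalize()
```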
MPI.Gather_in_place! — Function.

Gather_in_place!(buf, count, root, comm)

Each process sends the first count elements of the buffer buf to the root process. The root process stores the elements in rank order in the buffer buf, sending no data to itself.
This is functionally equivalent to calling
if root == MPI.Comm_rank(comm)
Gather!(MPI.IN_PLACE, buf, count, root, comm)
else
Gather!(buf, C_NULL, count, root, comm)
end
MPI.Gatherv! — Function.

Gatherv!(sendbuf, recvbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores the elements in rank order in the buffer recvbuf.

To perform the operation in place, refer to Gatherv_in_place!.
MPI.Gatherv — Function.

Gatherv(sendbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores the elements in rank order.
MPI.Gatherv_in_place! — Function.

Gatherv_in_place!(buf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer buf to the root process. The root process stores the elements in rank order in the buffer buf, sending no data to itself.
This is functionally equivalent to calling
if root == MPI.Comm_rank(comm)
Gatherv!(MPI.IN_PLACE, buf, counts, root, comm)
else
Gatherv!(buf, C_NULL, counts, root, comm)
end
MPI.Reduce! — Function.

Reduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, root, comm)

Performs op reduction on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored.

To perform the reduction in place, see Reduce_in_place!.

To handle allocation of the output buffer, see Reduce.
MPI.Reduce — Function.

Reduce(sendbuf, count, op, root, comm)

Performs op reduction on the buffer sendbuf and stores the result in an output buffer allocated on the process of rank root. An empty array will be returned on all other processes.

To specify the output buffer, see Reduce!.

To perform the reduction in place, see Reduce_in_place!.
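A minimal sketch summing one value per rank onto the root, using the documented (sendbuf, count, op, root, comm) form:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

sendbuf = [Float64(rank)]
result = MPI.Reduce(sendbuf, length(sendbuf), MPI.SUM, root, comm)
# `result` holds the sum of all ranks on the root; an empty array elsewhere

MPI.Finalize()
```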
MPI.Reduce_in_place! — Function.

Reduce_in_place!(buf, count, op, root, comm)

Performs op reduction on the first count elements of the buffer buf and stores the result in buf on the root process of the group.

This is equivalent to calling
if root == MPI.Comm_rank(comm)
Reduce!(MPI.IN_PLACE, buf, count, root, comm)
else
Reduce!(buf, C_NULL, count, root, comm)
end
To handle allocation of the output buffer, see Reduce.

To specify a separate output buffer, see Reduce!.
Missing docstring for MPI.Scan. Check Documenter's build log for details.
MPI.Scatter! — Function.

Scatter!(sendbuf, recvbuf, count, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least count.

count should be the same for all processes. If the number of elements varies between processes, use Scatterv! instead.

To perform the operation in place, see Scatter_in_place!.

To handle allocation of the output buffer, see Scatter.
MPI.Scatter — Function.

Scatter(sendbuf, count, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j, allocating the output buffer.
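A minimal sketch of the allocating variant; passing an empty array as sendbuf on non-root ranks is an assumption here (MPI ignores the send buffer everywhere but the root):

```julia
using MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root   = 0
count  = 2

sendbuf = rank == root ? collect(1:count * nprocs) : Int[]   # only the root's sendbuf is read
chunk   = MPI.Scatter(sendbuf, count, root, comm)            # rank j receives the j-th chunk

MPI.Finalize()
```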
MPI.Scatter_in_place! — Function.

Scatter_in_place!(buf, count, root, comm)

Splits the buffer buf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j. No data is sent to the root process.
This is functionally equivalent to calling
if root == MPI.Comm_rank(comm)
Scatter!(buf, MPI.IN_PLACE, count, root, comm)
else
Scatter!(C_NULL, buf, count, root, comm)
end
To specify a separate output buffer, see Scatter!.

To handle allocation of the output buffer, see Scatter.
MPI.Scatterv! — Function.

Scatterv!(sendbuf, recvbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least counts[rank].

To perform the operation in place, refer to Scatterv_in_place!.
MPI.Scatterv — Function.

Scatterv(sendbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which allocates the output buffer.
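A minimal sketch with variable-sized chunks (rank r receives r+1 elements); the Cint counts vector and the empty non-root sendbuf are assumptions made for illustration:

```julia
using MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root   = 0

counts  = Cint[r + 1 for r in 0:nprocs-1]                       # rank r receives r+1 elements
sendbuf = rank == root ? collect(1.0:sum(counts)) : Float64[]   # only the root's sendbuf is read
chunk   = MPI.Scatterv(sendbuf, counts, root, comm)             # length(chunk) == rank + 1

MPI.Finalize()
```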
MPI.Scatterv_in_place! — Function.

Scatterv_in_place!(buf, counts, root, comm)

Splits the buffer buf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into the buf buffer, which must be of length at least counts[rank]. The root process sends nothing to itself.
This is functionally equivalent to calling
if root == MPI.Comm_rank(comm)
Scatterv!(buf, MPI.IN_PLACE, counts, root, comm)
else
Scatterv!(C_NULL, buf, counts, root, comm)
end
One-sided communication
MPI.Win_create — Function.

MPI.Win_create(base::Array, comm::Comm; infokws...)

Create a window over the array base, returning a Win object used by these processes to perform RMA operations. This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.
MPI.Win_create_dynamic — Function.

MPI.Win_create_dynamic(comm::Comm; infokws...)

Create a dynamic window, returning a Win object used by these processes to perform RMA operations. This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.
MPI.Win_allocate_shared — Function.

(win, ptr) = MPI.Win_allocate_shared(T, len, comm::Comm; infokws...)

Create and allocate a shared memory window for objects of type T of length len, returning a Win and a Ptr{T} used by these processes to perform RMA operations. This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.
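A minimal sketch of the shared-window lifecycle: allocate, view the local segment as a Julia array, synchronize, and free. It assumes all ranks of comm live on the same node (a requirement of MPI shared memory) and uses only a Barrier for synchronization, which is deliberately simplistic:

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
len  = 10

win, ptr  = MPI.Win_allocate_shared(Float64, len, comm)
local_seg = unsafe_wrap(Array, ptr, len)    # this rank's segment of the shared window

local_seg .= MPI.Comm_rank(comm)            # each rank fills its own segment
MPI.Barrier(comm)                           # crude synchronization before further access

MPI.free(win)                               # release the window once done
MPI.Finalize()
```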
Missing docstring for MPI.Win_shared_query. Check Documenter's build log for details.
Missing docstring for MPI.Win_attach. Check Documenter's build log for details.
Missing docstring for MPI.Win_detach. Check Documenter's build log for details.
Missing docstring for MPI.Win_fence. Check Documenter's build log for details.
Missing docstring for MPI.Win_flush. Check Documenter's build log for details.
Missing docstring for MPI.Win_free. Check Documenter's build log for details.
Missing docstring for MPI.Win_sync. Check Documenter's build log for details.
Missing docstring for MPI.Win_lock. Check Documenter's build log for details.
Missing docstring for MPI.Win_unlock. Check Documenter's build log for details.
Missing docstring for MPI.Get. Check Documenter's build log for details.
Missing docstring for MPI.Put. Check Documenter's build log for details.
Missing docstring for MPI.Fetch_and_op. Check Documenter's build log for details.
Missing docstring for MPI.Accumulate. Check Documenter's build log for details.
Missing docstring for MPI.Get_accumulate. Check Documenter's build log for details.
Info objects
MPI.Info — Type.

Info <: AbstractDict{Symbol,String}

MPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.

Usage

These will typically be hidden from user-facing APIs by splatting keywords, e.g.
function f(args...; kwargs...)
info = Info(kwargs...)
# pass `info` object to `ccall`
end
For manual usage, Info objects act like Julia Dict objects:
info = Info(init=true) # keyword argument is required
info[key] = value
x = info[key]
delete!(info, key)
If init=false is used in the constructor (the default), a "null" Info object will be returned: no keys can be added to such an object.
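A minimal sketch of the Dict-like usage described above (the key name is illustrative only):

```julia
using MPI

MPI.Init()

info = MPI.Info(init=true)
info[:access_style] = "read_once"   # set a hint
@show info[:access_style]           # read it back
delete!(info, :access_style)        # remove it

MPI.Finalize()
```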
MPI.infoval — Function.

infoval(x)

Convert a Julia object x to a string representation for storing in an Info object.

The MPI specification allows passing strings, Boolean values, integers, and lists.