Function reference

The following functions are currently wrapped, with the convention: MPI_Fun => MPI.Fun

Constants like MPI_SUM are wrapped as MPI.SUM. Note also that arbitrary Julia functions f(x,y) can be passed as reduction operations to the MPI Allreduce and Reduce functions.
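
As a quick illustration, the sketch below reduces a one-element buffer once with the wrapped constant MPI.SUM and once with an anonymous Julia function. The surrounding Init/Finalize/COMM_WORLD boilerplate is standard MPI.jl usage and is repeated in the later examples.

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

sendbuf = Float64[rank + 1]

# Built-in reduction operation, wrapped as MPI.SUM
sums = MPI.Allreduce(sendbuf, MPI.SUM, comm)

# Arbitrary Julia function f(x, y) used as the reduction operation
prods = MPI.Allreduce(sendbuf, (x, y) -> x * y, comm)

rank == 0 && println("sum = ", sums[1], ", product = ", prods[1])

MPI.Finalize()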

Datatype functions

| Julia Function (assuming import MPI) | Fortran Function |
| --- | --- |
| MPI.Get_address | MPI_Get_address |
| MPI.mpitype | MPI_Type_create_struct / MPI_Type_commit |

Note

mpitype is not strictly a wrapper for MPI_Type_create_struct and MPI_Type_commit; it also acts as an accessor for previously created types.

Point-to-point communication

| Julia Function (assuming import MPI) | Fortran Function |
| --- | --- |
| MPI.Cancel! | MPI_Cancel |
| MPI.Get_count | MPI_Get_count |
| MPI.Iprobe | MPI_Iprobe |
| MPI.Irecv! | MPI_Irecv |
| MPI.Isend | MPI_Isend |
| MPI.Probe | MPI_Probe |
| MPI.Recv! | MPI_Recv |
| MPI.Send | MPI_Send |
| MPI.Test! | MPI_Test |
| MPI.Testall! | MPI_Testall |
| MPI.Testany! | MPI_Testany |
| MPI.Testsome! | MPI_Testsome |
| MPI.Wait! | MPI_Wait |
| MPI.Waitall! | MPI_Waitall |
| MPI.Waitany! | MPI_Waitany |
| MPI.Waitsome! | MPI_Waitsome |

MPI.Irecv! - Function.
Irecv!(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype,
       src::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking receive of up to count elements of type datatype into buf from MPI rank src of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking receive.

source
Irecv!(buf::MPIBuffertype{T}, count::Integer, src::Integer, tag::Integer,
       comm::Comm) where T

Starts a nonblocking receive of up to count elements into buf from MPI rank src of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking receive.

source
Irecv!(buf::Array{T}, src::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking receive into buf from MPI rank src of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking receive.

source
MPI.Isend - Function.
Isend(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype,
      dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of count elements of type datatype from buf to MPI rank dest of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking send.

source
Isend(buf::MPIBuffertype{T}, count::Integer, dest::Integer, tag::Integer,
      comm::Comm) where T

Starts a nonblocking send of count elements of buf to MPI rank dest of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking send.

source
Isend(buf::Array{T}, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of buf to MPI rank dest of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking send.

source
Isend(obj::T, dest::Integer, tag::Integer, comm::Comm) where T

Starts a nonblocking send of obj to MPI rank dest of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking send.

source
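
Taken together, Irecv!, Isend and the wait functions below support the usual nonblocking pattern. The following sketch passes a small buffer around a ring using the Array-based methods documented above and Waitall! (documented below):

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

# Each rank sends to its right neighbour and receives from its left neighbour.
dest = mod(rank + 1, nprocs)
src  = mod(rank - 1, nprocs)
tag  = 0

sendbuf = fill(Float64(rank), 4)
recvbuf = zeros(Float64, 4)

rreq = MPI.Irecv!(recvbuf, src, tag, comm)   # start the nonblocking receive first
sreq = MPI.Isend(sendbuf, dest, tag, comm)   # start the nonblocking send

stats = MPI.Waitall!([rreq, sreq])           # wait on both Requests; returns their Statuses

println("rank ", rank, " received a buffer from rank ", src)

MPI.Finalize()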

MPI.Recv! - Function.
Recv!(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype,
      src::Integer, tag::Integer, comm::Comm) where T

Completes a blocking receive of up to count elements of type datatype into buf from MPI rank src of communicator comm with the message tag tag.

Returns the Status of the receive.

source
Recv!(buf::MPIBuffertype{T}, count::Integer, src::Integer, tag::Integer,
      comm::Comm) where T

Completes a blocking receive of up to count elements into buf from MPI rank src of communicator comm with the message tag tag.

Returns the Status of the receive.

source
Recv!(buf::Array{T}, src::Integer, tag::Integer, comm::Comm) where T

Completes a blocking receive into buf from MPI rank src of communicator comm with the message tag tag.

Returns the Status of the receive.

source
MPI.Send - Function.
Send(buf::MPIBuffertype{T}, count::Integer, datatype::Datatype,
     dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of count elements of type datatype from buf to MPI rank dest of communicator comm with the message tag tag.

source
Send(buf::MPIBuffertype{T}, count::Integer, dest::Integer, tag::Integer,
     comm::Comm) where T

Completes a blocking send of count elements of buf to MPI rank dest of communicator comm with the message tag tag.

source
Send(buf::AbstractArray{T}, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of buf to MPI rank dest of communicator comm with the message tag tag.

source
Send(obj::T, dest::Integer, tag::Integer, comm::Comm) where T

Completes a blocking send of obj to MPI rank dest of communicator comm with the message tag tag.

source
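
A minimal blocking counterpart of the nonblocking sketch above, using the Array-based Send and Recv! methods (requires at least two ranks):

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

tag = 17
if rank == 0
    MPI.Send(collect(1.0:4.0), 1, tag, comm)       # blocking send to rank 1
elseif rank == 1
    buf = zeros(Float64, 4)
    status = MPI.Recv!(buf, 0, tag, comm)          # blocking receive; returns a Status
    println("rank 1 received ", buf)
end

MPI.Finalize()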

MPI.Wait! - Function.
Wait!(req::Request)

Waits for the request req to complete. Returns the Status of the request.

source
MPI.Waitall! - Function.
Waitall!(reqs::Vector{Request})

Waits for all the requests in the array reqs to complete. Returns an array of the Statuses of all the requests.

source
MPI.Waitany! - Function.
Waitany!(reqs::Vector{Request})

Waits for any one of the requests in the array reqs to complete. Returns the index of the completed request and its Status as a tuple.

source

Collective communication

| Non-Allocating Julia Function | Allocating Julia Function | Fortran Function | Supports MPI_IN_PLACE |
| --- | --- | --- | --- |
| MPI.Allgather! | MPI.Allgather | MPI_Allgather | MPI.IN_PLACE |
| MPI.Allgatherv! | MPI.Allgatherv | MPI_Allgatherv | MPI.IN_PLACE |
| MPI.Allreduce! | MPI.Allreduce | MPI_Allreduce | MPI.IN_PLACE |
| MPI.Alltoall! | MPI.Alltoall | MPI_Alltoall | MPI.IN_PLACE |
| MPI.Alltoallv! | MPI.Alltoallv | MPI_Alltoallv | |
| MPI.Barrier | | MPI_Barrier | |
| MPI.Bcast! | MPI.Bcast! | MPI_Bcast | |
| MPI.Exscan | | MPI_Exscan | |
| MPI.Gather! | MPI.Gather | MPI_Gather | Gather_in_place! |
| MPI.Gatherv! | MPI.Gatherv | MPI_Gatherv | Gatherv_in_place! |
| MPI.Reduce! | MPI.Reduce | MPI_Reduce | Reduce_in_place! |
| MPI.Scan | MPI.Scan | MPI_Scan | missing |
| MPI.Scatter! | MPI.Scatter | MPI_Scatter | Scatter_in_place! |
| MPI.Scatterv! | MPI.Scatterv | MPI_Scatterv | Scatterv_in_place! |

The non-allocating Julia functions map directly to the corresponding MPI operations, after asserting that the size of the output buffer is sufficient to store the result.

The allocating Julia functions allocate an output buffer and then call the non-allocating method.

All-to-all collective communications support in-place operation by passing MPI.IN_PLACE with the same syntax documented by MPI. One-to-all communications support it through the corresponding *_in_place! function, which calls the MPI function with the correct arguments on the root and non-root processes.
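
For example, the three conventions look as follows for a gather onto rank 0; this is only a sketch, built from the Gather, Gather! and Gather_in_place! signatures documented below:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

sendbuf = fill(rank + 1, 2)

# Allocating: the output buffer is allocated for you (meaningful on root).
gathered = MPI.Gather(sendbuf, root, comm)

# Non-allocating: supply an output buffer large enough for the result.
recvbuf = zeros(Int, 2 * nprocs)
MPI.Gather!(sendbuf, recvbuf, 2, root, comm)

# In place: the root reads and writes a single buffer and sends no data to itself;
# its own contribution must already sit in the first chunk of buf.
if rank == root
    buf = vcat(fill(root + 1, 2), zeros(Int, 2 * (nprocs - 1)))
else
    buf = fill(rank + 1, 2)
end
MPI.Gather_in_place!(buf, 2, root, comm)

MPI.Finalize()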

MPI.Allgather! - Function.
Allgather!(sendbuf, recvbuf, count, comm)

Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order into recvbuf.

If sendbuf==MPI.IN_PLACE, the input data is assumed to be in the area of recvbuf where the process would receive its own contribution.

source
Allgather!(buf, count, comm)

Equivalent to Allgather!(MPI.IN_PLACE, buf, count, comm).

source
MPI.Allgather - Function.
Allgather(sendbuf[, count=length(sendbuf)], comm)

Each process sends the first count elements of sendbuf to the other processes, which store the results in rank order in a newly allocated output buffer.

source
MPI.Allgatherv! - Function.
Allgatherv!(sendbuf, recvbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process stores the received data in rank order in the buffer recvbuf.

If sendbuf==MPI.IN_PLACE, the data to be sent by each process is taken from the interval of recvbuf where it would store its own data.

source
MPI.Allgatherv - Function.
Allgatherv(sendbuf, counts, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to all other processes. Each process allocates an output buffer and stores the received data in rank order.

source
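
For instance, a sketch where rank r contributes r + 1 elements; the counts vector (given here as Cint, an assumption about the expected element type) must be known on every rank:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

counts  = Cint[r + 1 for r in 0:nprocs-1]   # how many elements each rank sends
sendbuf = fill(Float64(rank), rank + 1)

# Allocating variant: the output buffer of length sum(counts) is allocated for you.
recvbuf = MPI.Allgatherv(sendbuf, counts, comm)

println("rank ", rank, " gathered ", length(recvbuf), " elements")

MPI.Finalize()
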
MPI.Allreduce! - Function.
Allreduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, comm)

Performs op reduction on the first count elements of the buffer sendbuf, storing the result in recvbuf on all processes in the group.

All-reduce is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If sendbuf==MPI.IN_PLACE the data is read from recvbuf and then overwritten with the results.

To handle allocation of the output buffer, see Allreduce.

source
Allreduce!(buf, op, comm)

Performs op reduction in place on the buffer buf, overwriting it with the results on all processes in the group.

Equivalent to calling Allreduce!(MPI.IN_PLACE, buf, op, comm)

source
MPI.Allreduce - Function.
Allreduce(sendbuf, op, comm)

Performs op reduction on the buffer sendbuf, allocating and returning the output buffer on all processes in the group.

To specify the output buffer or perform the operation in place, see Allreduce!.

source
MPI.Alltoall! - Function.
Alltoall!(sendbuf, recvbuf, count, comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process stores the data received from the j-th process in the j-th chunk of the buffer recvbuf.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If sendbuf==MPI.IN_PLACE, data is sent from the recvbuf and then overwritten.

source
MPI.Alltoall - Function.
Alltoall(sendbuf, count, comm)

Every process divides the buffer sendbuf into Comm_size(comm) chunks of length count, sending the j-th chunk to the j-th process. Every process allocates the output buffer and stores the data received from the j-th process in the j-th chunk.

rank    send buf                        recv buf
----    --------                        --------
 0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
 1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν
source
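
A small sketch of the allocating variant; chunk j of sendbuf goes to rank j, and chunk j of the returned buffer came from rank j, as in the diagram above:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

count = 2   # elements sent to (and received from) every process

sendbuf = [100 * rank + i for i in 1:count * nprocs]
recvbuf = MPI.Alltoall(sendbuf, count, comm)

println("rank ", rank, " sent ", sendbuf, " and received ", recvbuf)

MPI.Finalize()
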
MPI.Alltoallv! - Function.
Alltoallv!(sendbuf::T, recvbuf::T, scounts, rcounts, comm)

MPI.IN_PLACE is not supported for this operation.

source

MPI.Barrier - Function.
Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

source
MPI.Bcast! - Function.
Bcast!(buf[, count=length(buf)], root, comm::Comm)

Broadcast the first count elements of the buffer buf from root to all processes.

source
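
A minimal sketch: the buffer must already exist with the right length on every rank, and only the root's contents are meaningful before the call:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

buf = rank == root ? collect(1.0:5.0) : zeros(Float64, 5)
MPI.Bcast!(buf, root, comm)        # count defaults to length(buf)

println("rank ", rank, " has ", buf, " after Bcast!")

MPI.Finalize()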

MPI.Gather! - Function.
Gather!(sendbuf, recvbuf, count, root, comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.

count should be the same for all processes. If the number of elements varies between processes, use Gatherv! instead.

To perform the gather in place, see Gather_in_place!.

source
MPI.Gather - Function.
Gather(sendbuf[, count=length(sendbuf)], root, comm)

Each process sends the first count elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

source
MPI.Gather_in_place! - Function.
Gather_in_place!(buf, count, root, comm)

Each process sends the first count elements of the buffer buf to the root process. The root process stores the elements in rank order in the buffer buf, sending no data to itself.

This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Gather!(MPI.IN_PLACE, buf, count, root, comm)
else
    Gather!(buf, C_NULL, count, root, comm)
end
source
MPI.Gatherv! - Function.
Gatherv!(sendbuf, recvbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root stores the elements in rank order in the buffer recvbuf.

To perform the gather in place, see Gatherv_in_place!.

source
MPI.Gatherv - Function.
Gatherv(sendbuf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

source
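
A sketch of the allocating variant with variable-sized contributions (counts given as Cint, an assumption about the expected element type):

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

counts  = Cint[r + 1 for r in 0:nprocs-1]   # rank r sends r + 1 elements
sendbuf = fill(rank, rank + 1)

recvbuf = MPI.Gatherv(sendbuf, counts, root, comm)   # allocated on root

rank == root && println("root gathered ", recvbuf)

MPI.Finalize()
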
MPI.Gatherv_in_place! - Function.
Gatherv_in_place!(buf, counts, root, comm)

Each process sends the first counts[rank] elements of the buffer buf to the root process. The root process stores the elements in rank order in the buffer buf, sending no data to itself.

This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Gatherv!(MPI.IN_PLACE, buf, counts, root, comm)
else
    Gatherv!(buf, C_NULL, counts, root, comm)
end
source
MPI.Reduce! - Function.
Reduce!(sendbuf, recvbuf[, count=length(sendbuf)], op, root, comm)

Performs op reduction on the first count elements of the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored.

To perform the reduction in place, see Reduce_in_place!.

To handle allocation of the output buffer, see Reduce.

source
MPI.Reduce - Function.
Reduce(sendbuf, count, op, root, comm)

Performs op reduction on the buffer sendbuf and stores the result in an output buffer allocated on the process of rank root. An empty array will be returned on all other processes.

To specify the output buffer, see Reduce!.

To perform the reduction in place, see Reduce_in_place!.

source
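
For example, an element-wise sum onto rank 0 using the allocating variant (a sketch following the signature above):

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

sendbuf = Float64[rank + 1, 2 * (rank + 1)]

# The result buffer is allocated on root; other ranks get an empty array back.
result = MPI.Reduce(sendbuf, length(sendbuf), MPI.SUM, root, comm)

rank == root && println("reduced result: ", result)

MPI.Finalize()
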
MPI.Reduce_in_place! - Function.
Reduce_in_place!(buf, count, op, root, comm)

Performs op reduction on the first count elements of the buffer buf and stores the result in buf on the root process of the group.

This is equivalent to calling

if root == MPI.Comm_rank(comm)
    Reduce!(MPI.IN_PLACE, buf, count, op, root, comm)
else
    Reduce!(buf, C_NULL, count, op, root, comm)
end

To handle allocation of the output buffer, see Reduce.

To specify a separate output buffer, see Reduce!.

source

MPI.Scatter! - Function.
Scatter!(sendbuf, recvbuf, count, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least count.

count should be the same for all processes. If the number of elements varies between processes, use Scatterv! instead.

To perform the operation in place, see Scatter_in_place!.

To handle allocation of the output buffer, see Scatter.

source
MPI.Scatter - Function.
Scatter(sendbuf, count, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j, allocating the output buffer.

source
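
A sketch of the allocating variant; only the root's sendbuf contents are significant, and the other ranks simply pass a buffer of matching element type here:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

count = 3
sendbuf = rank == root ? collect(1:count * nprocs) : zeros(Int, count * nprocs)

mychunk = MPI.Scatter(sendbuf, count, root, comm)   # receive buffer is allocated for you

println("rank ", rank, " received ", mychunk)

MPI.Finalize()
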
MPI.Scatter_in_place! - Function.
Scatter_in_place!(buf, count, root, comm)

Splits the buffer buf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j. No data is sent to the root process.

This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Scatter!(buf, MPI.IN_PLACE, count, root, comm)
else
    Scatter!(C_NULL, buf, count, root, comm)
end

To specify a separate output buffer, see Scatter!.

To handle allocation of the output buffer, see Scatter.

source
MPI.Scatterv! - Function.
Scatterv!(sendbuf, recvbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into the recvbuf buffer, which must be of length at least counts[j] on that process.

To perform the operation in place, see Scatterv_in_place!.

source
MPI.Scatterv - Function.
Scatterv(sendbuf, counts, root, comm)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j, which allocates the output buffer.

source
MPI.Scatterv_in_place! - Function.
Scatterv_in_place!(buf, counts, root, comm)

Splits the buffer buf in the root process into Comm_size(comm) chunks of length counts[j] and sends the j-th chunk to the process of rank j into that process's buf, which must be of length at least counts[j]. The root process sends nothing to itself.

This is functionally equivalent to calling

if root == MPI.Comm_rank(comm)
    Scatterv!(buf, MPI.IN_PLACE, counts, root, comm)
else
    Scatterv!(C_NULL, buf, counts, root, comm)
end
source
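
A sketch of the allocating variant with variable chunk sizes (counts given as Cint, an assumption about the expected element type); only the root's sendbuf contents matter:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)
root = 0

counts = Cint[r + 1 for r in 0:nprocs-1]    # rank r receives r + 1 elements
total  = sum(counts)

sendbuf = rank == root ? collect(1.0:total) : zeros(Float64, total)

mychunk = MPI.Scatterv(sendbuf, counts, root, comm)

println("rank ", rank, " received ", mychunk)

MPI.Finalize()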

One-sided communication

| Julia Function (assuming import MPI) | Fortran Function |
| --- | --- |
| MPI.Win_create | MPI_Win_create |
| MPI.Win_create_dynamic | MPI_Win_create_dynamic |
| MPI.Win_allocate_shared | MPI_Win_allocate_shared |
| MPI.Win_shared_query | MPI_Win_shared_query |
| MPI.Win_attach | MPI_Win_attach |
| MPI.Win_detach | MPI_Win_detach |
| MPI.Win_fence | MPI_Win_fence |
| MPI.Win_flush | MPI_Win_flush |
| MPI.Win_free | MPI_Win_free |
| MPI.Win_sync | MPI_Win_sync |
| MPI.Win_lock | MPI_Win_lock |
| MPI.Win_unlock | MPI_Win_unlock |
| MPI.Get | MPI_Get |
| MPI.Put | MPI_Put |
| MPI.Fetch_and_op | MPI_Fetch_and_op |
| MPI.Accumulate | MPI_Accumulate |
| MPI.Get_accumulate | MPI_Get_accumulate |

MPI.Win_create - Function.
MPI.Win_create(base::Array, comm::Comm; infokws...)

Create a window over the array base, returning a Win object used by these processes to perform RMA operations.

This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.

source
MPI.Win_create_dynamic - Function.
MPI.Win_create_dynamic(comm::Comm; infokws...)

Create a dynamic window, returning a Win object used by these processes to perform RMA operations.

This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.

source
MPI.Win_allocate_shared - Function.
(win, ptr) = MPI.Win_allocate_shared(T, len, comm::Comm; infokws...)

Create and allocate a shared memory window for objects of type T of length len, returning a Win and a Ptr{T} object used by these processes to perform RMA operations.

This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.

source
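
The wrappers above can be combined into a simple fence-synchronized epoch. The sketch below is illustrative only: the Win_fence(assert, win) and Get(origin, target_rank, win) method signatures used here are assumptions not documented on this page, so check the MPI.jl source for the methods available in your version.

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)

# Each rank exposes a small array through a window (collective, see Win_create above).
base = fill(Float64(rank), 4)
win = MPI.Win_create(base, comm)

# Assumed signatures: Win_fence(assert, win) and Get(origin, target_rank, win).
MPI.Win_fence(0, win)                 # open the access/exposure epoch
origin = zeros(Float64, 4)
target = mod(rank + 1, nprocs)
MPI.Get(origin, target, win)          # fetch the neighbour's exposed data
MPI.Win_fence(0, win)                 # close the epoch; origin is now valid

println("rank ", rank, " read ", origin, " from rank ", target)

MPI.free(win)                         # as noted in the Win_create docstring
MPI.Finalize()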

Info objects

MPI.Info - Type.
Info <: AbstractDict{Symbol,String}

MPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.

Usage

These will typically be hidden from user-facing APIs by splatting keywords, e.g.

function f(args...; kwargs...)
    info = Info(kwargs...)
    # pass `info` object to `ccall`
end

For manual usage, Info objects act like Julia Dict objects:

info = Info(init=true) # keyword argument is required
info[key] = value
x = info[key]
delete!(info, key)

If init=false is used in the constructor (the default), a "null" Info object will be returned: no keys can be added to such an object.

source
MPI.infoval - Function.
infoval(x)

Convert Julia object x to a string representation for storing in an Info object.

The MPI specification allows passing strings, Boolean values, integers, and lists.

source