

Implementation of multicasts

The MULTICAST statement (see Multicasting) enables any set of tasks to transmit data to any other set of tasks. The c_mpi backend implements MULTICAST using the simpler MPI_Bcast() when there is a single initiator (determined dynamically at run time) and MPI_Alltoallv() otherwise. In all cases, c_mpi creates an MPI communicator that contains exactly the tasks involved in the multicast (as senders and/or receivers).
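As a rough illustration of the communicator-construction step (this is not the code c_mpi actually emits, and the helper name is invented for this example), such a communicator could be derived from MPI_COMM_WORLD with MPI_Group_incl() and MPI_Comm_create():

/* Illustrative sketch (not the generated code): build an MPI
 * communicator containing exactly the tasks that participate in a
 * multicast.  Tasks outside the group receive MPI_COMM_NULL. */
#include <mpi.h>

MPI_Comm multicast_comm(const int *ranks, int nranks)
{
  MPI_Group world_group, mcast_group;
  MPI_Comm mcast_comm;

  MPI_Comm_group(MPI_COMM_WORLD, &world_group);
  MPI_Group_incl(world_group, nranks, ranks, &mcast_group);
  MPI_Comm_create(MPI_COMM_WORLD, mcast_group, &mcast_comm);  /* collective */
  MPI_Group_free(&mcast_group);
  MPI_Group_free(&world_group);
  return mcast_comm;
}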

MPI requires that all tasks involved in an MPI_Bcast() belong to the same MPI communicator. Hence, in the coNCePTuaL statement

TASK 4 MULTICASTS A 64 KILOBYTE MESSAGE TO TASKS r SUCH THAT
r IS IN {1, 3, 5, 7, 9}

the generated communicator contains tasks 1, 3, 4, 5, 7, and 9, even though task 4 transmits but does not receive any data.
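The sketch below continues this example, reusing the hypothetical multicast_comm() helper from the previous sketch; it is an illustration of the single-initiator case, not the backend's actual output. Because MPI_Bcast() takes the root's rank within the communicator it is given, world rank 4 must first be translated into its rank in the six-task communicator.

/* Illustrative sketch of the single-initiator example above.
 * MPI_Bcast() requires the root's rank *within* the multicast
 * communicator, so world rank 4 is translated through
 * MPI_Group_translate_ranks() first. */
#include <mpi.h>
#include <stdlib.h>

MPI_Comm multicast_comm(const int *ranks, int nranks);  /* from the sketch above */

void bcast_example(void)
{
  int members[] = {1, 3, 4, 5, 7, 9};   /* sender plus receivers */
  MPI_Comm comm = multicast_comm(members, 6);
  if (comm == MPI_COMM_NULL)
    return;                             /* this task is not involved */

  /* Find world rank 4's rank within the new communicator. */
  MPI_Group world_group, comm_group;
  int world_root = 4, root;
  MPI_Comm_group(MPI_COMM_WORLD, &world_group);
  MPI_Comm_group(comm, &comm_group);
  MPI_Group_translate_ranks(world_group, 1, &world_root, comm_group, &root);
  MPI_Group_free(&comm_group);
  MPI_Group_free(&world_group);

  /* 64 KB payload; task 4 sends, the other five tasks receive. */
  char *buffer = malloc(64 * 1024);
  MPI_Bcast(buffer, 64 * 1024, MPI_BYTE, root, comm);
  free(buffer);
  MPI_Comm_free(&comm);
}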

Rather than calling MPI_Bcast() repeatedly, the c_mpi backend implements many-to-one and many-to-many multicasts such as

TASKS from_tasks SUCH THAT from_tasks < 8 MULTICAST 3 2-KILOBYTE
MESSAGES TO TASKS to_tasks SUCH THAT 3 DIVIDES to_tasks

with a single MPI_Alltoallv(). In this case, two message buffers are used: one for all outgoing data and one for all incoming data.
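A rough sketch of this two-buffer arrangement follows. The function name, the is_sender/is_receiver flags, and the assumption that every sender contributes the same msgsize bytes to every receiver are illustrative simplifications, not details of the generated code.

/* Illustrative sketch of a many-to-many multicast mapped onto a
 * single MPI_Alltoallv() call.  One buffer holds all outgoing data,
 * another holds all incoming data; per-peer counts and displacements
 * describe which slices go to or come from each rank in the
 * multicast communicator. */
#include <mpi.h>
#include <stdlib.h>

void multicast_alltoallv(MPI_Comm comm, const int *is_sender,
                         const int *is_receiver, int msgsize)
{
  int nprocs, me, p;
  MPI_Comm_size(comm, &nprocs);
  MPI_Comm_rank(comm, &me);

  int *sendcounts = calloc(nprocs, sizeof(int));
  int *recvcounts = calloc(nprocs, sizeof(int));
  int *sdispls    = calloc(nprocs, sizeof(int));
  int *rdispls    = calloc(nprocs, sizeof(int));

  /* Assume every sender contributes msgsize bytes to every receiver;
   * non-senders get zero send counts, non-receivers zero receive counts. */
  for (p = 0; p < nprocs; p++) {
    sendcounts[p] = (is_sender[me] && is_receiver[p]) ? msgsize : 0;
    recvcounts[p] = (is_sender[p] && is_receiver[me]) ? msgsize : 0;
    sdispls[p] = p == 0 ? 0 : sdispls[p-1] + sendcounts[p-1];
    rdispls[p] = p == 0 ? 0 : rdispls[p-1] + recvcounts[p-1];
  }

  /* One outgoing buffer and one incoming buffer for the whole call. */
  int sendtotal = sdispls[nprocs-1] + sendcounts[nprocs-1];
  int recvtotal = rdispls[nprocs-1] + recvcounts[nprocs-1];
  char *sendbuf = malloc(sendtotal > 0 ? sendtotal : 1);
  char *recvbuf = malloc(recvtotal > 0 ? recvtotal : 1);

  MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_BYTE,
                recvbuf, recvcounts, rdispls, MPI_BYTE, comm);

  free(sendbuf);  free(recvbuf);
  free(sendcounts);  free(recvcounts);
  free(sdispls);  free(rdispls);
}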
