
Handle Debugging: Request queries

Marc-Andre edited this page Apr 8, 2025 · 11 revisions

Request queries

Use cases

What to solve: MPI_Request is an opaque handle whose representation differs between implementations, e.g. a pointer typedef or a plain integer:

typedef ompi_request_.._t * MPI_Request;  // e.g. Open MPI: pointer handle
typedef int MPI_Request;                  // e.g. MPICH: integer handle
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request);


MPI_Request requests[2];
MPI_Irecv(..., requests);
MPI_File_iread(..., requests+1);

...
MPI_Waitall(2, requests);

Use case: the debugger is stopped in MPI_Wait; what is the request?

These functions are analogous to the mpid_comm_* functions, but for MPI_Request.

int mpid_request_query(mpid_process_handle_t *process,
                       langHandle request,  // The MPI handle (see mpid_type_query)
                       int language,        // mpid_TYPE_LANG_C or mpid_TYPE_LANG_FORTRAN
                       mpid_request_handle_t **handle);

Discussion about information on partitioned p2p communication

MPI_Psend_init(buffer, partitions, COUNT, MPI_DOUBLE, dest, tag,
                MPI_COMM_WORLD, MPI_INFO_NULL, &request);
// for(){ // multiple iterations
MPI_Start(&request);
for(i = 0; i < partitions-1; ++i)
{
     MPI_Pready(i, request);
}
MPI_Test(&request, &flag, MPI_STATUS_IGNORE); // flag will always be 0: the last partition is not yet ready
MPI_Pready(partitions-1, request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
// }
MPI_Request_free(&request);

Use case: If waiting on a partitioned communication request blocks: Are all partitions marked ready?

Can we query more details about the state of partitions? -> In general the implementation does not need to maintain a list of active partitions. Could implement Pready with a single counter and send out all data when everything is ready.

Can we query the number of active partitions? -> probably?

Can we query whether none, some or all partitions are active? -> hopefully :)

Basic query function:

mpid_request_query_basic(mpid_request_handle_t *handle,
                         enum mpid_request_info_bitmap_t *request_bitflags);
// TODO: More general information? codeptr?

with:

enum mpid_request_info_bitmap_t {
//
  MPID_REQUEST_INFO_ACTIVE = 0x1,
  MPID_REQUEST_INFO_COMPLETED = 0x2,
  MPID_REQUEST_INFO_PERSISTENT = 0x4,
  MPID_REQUEST_INFO_PARTITIONED = 0x8,

  MPID_REQUEST_INFO_NULL,
// p2p communication
  MPID_REQUEST_INFO_IRECV,
  MPID_REQUEST_INFO_ISEND, ...

// collective communication
  MPID_REQUEST_INFO_IBARRIER, ...
 
// I/O
  MPID_REQUEST_INFO_FILE_IREAD, ...
 
// RMA
  MPID_REQUEST_INFO_RPUT, ...
 
// Communicator creation
  MPID_REQUEST_INFO_COMM_IDUP, ...
 
// Generalized requests
  MPID_REQUEST_INFO_GREQUEST_START, ...

// Identify implementation dummy handle
  MPID_REQUEST_DUMMY
};

P2P query function:

mpid_request_query_p2p_info(mpid_request_handle_t *handle,
                            mpid_address_t *buf,
                            int64 *count,
                            mpid_datatype_handle_t *datatype,
                            int64 *peer,
                            int64 *tag,
                            mpid_comm_handle_t *comm);

Collective query function:

mpid_request_query_collective_size_info(mpid_request_handle_t *handle,
        int64 *num_sendcounts,
        int64 *num_sdispls,
        int64 *num_sendtypes,
        int64 *num_recvcounts,
        int64 *num_rdispls,
        int64 *num_recvtypes);

mpid_request_query_collective_info(mpid_request_handle_t *handle,
        mpid_address_t  *sendbuf, 
        int64 max_sendcounts,
        int64 max_sdispls,
        int64 max_sendtypes,
        int64 max_recvcounts,
        int64 max_rdispls,
        int64 max_recvtypes,
        mpid_address_t  *sendcounts[],                     // len: 1 or size(comm)
        mpid_address_t  *sdispls[],                        // len: 1 or size(comm) 
        mpid_datatype_handle_t * sendtypes[], // len: 1 or size(comm)
        mpid_address_t  *recvbuf,
        mpid_address_t  *recvcounts[],                     // len: 1 or size(comm)
        mpid_address_t  *rdispls[],                        // len: 1 or size(comm)
        mpid_datatype_handle_t * recvtypes[], // len: 1 or size(comm)
        mpid_op_handle_t * op, 
        int root,  
        mpid_comm_handle_t * comm);

File query function:

mpid_request_query_file_info(mpid_request_handle_t *handle,
                             mpid_file_handle_t * file, 
                             mpid_address_t  *offset, 
                             mpid_address_t  *buf, 
                             mpid_datatype_handle_t * datatype);

RMA query function:

MPI source Example:

MPI_RPUT(origin_addr, origin_count, origin_datatype, target_rank, target_disp,
target_count, target_datatype, win, request);
MPI_RGET(origin_addr, origin_count, origin_datatype, target_rank, target_disp,
target_count, target_datatype, win, request)
MPI_RACCUMULATE(origin_addr, origin_count, origin_datatype, target_rank, target_disp,
target_count, target_datatype, op, win, request)
MPI_RGET_ACCUMULATE(origin_addr, origin_count, origin_datatype, result_addr,
result_count, result_datatype, target_rank, target_disp, target_count,
target_datatype, op, win, request)
mpid_request_query_rma_info(mpid_request_handle_t *handle,
                            mpid_address_t  *origin_addr,
                            int64 *origin_count,
                            mpid_datatype_handle_t * origin_datatype,
                            mpid_address_t  *result_addr,
                            int64 *result_count,
                            mpid_datatype_handle_t * result_datatype,
                            int64 *target_rank,
                            mpid_address_t  *target_displ,
                            int64 *target_count,
                            mpid_datatype_handle_t * target_datatype,
                            mpid_op_handle_t * op,
                            mpid_win_handle_t * win);
Communicator query function:

MPI source Example:

MPI_Comm_idup(comm, info, &newcomm, &request);
MPI_Comm_free(&comm);
MPI_Wait(&request, MPI_STATUS_IGNORE);

Question to the forum: Is the newcomm variable initialized after MPI_Comm_idup returns? Can the application copy the handle after this function?

mpid_request_query_comm_info(mpid_request_handle_t *handle,
                          mpid_comm_handle_t * newcomm);

22/01/20 discussion: The newcomm handle would be marked as "incompletely created". Only limited information can be expected from such a handle. The goal of this API is to reuse the functionality already defined for comm handles.

Generalized requests query function:

MPI source Example:

See Example 13.1 in MPI-4.0

MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &request);
// do some work, then:
MPI_Grequest_complete(request);
mpid_request_query_grequest_info(mpid_request_handle_t *handle,
                                 mpid_address_t  *query_fn,
                                 mpid_address_t  *free_fn,
                                 mpid_address_t  *cancel_fn,
                                 mpid_address_t  *extra_state);

Free function:

Free a handle returned by the mpid_request_query() function.

int mpid_request_handle_free(mpid_request_handle_t *handle);

Session query function:

Query a handle returned by mpid_request_query() and, if found and valid, return the session this request was derived from.

int mpid_request_query_session(mpid_request_handle_t *handle,
                               mpid_session_handle_t **request_session);
