OpenMPI Reduce using MINLOC

I'm currently working on some MPI code for a graph-theory problem in which a number of nodes can each contain an answer and the length of that answer. To get everything back to the master node, I'm doing an MPI_Gather for the answers, and I'm attempting to do an MPI_Reduce with the MPI_MINLOC operation to figure out who had the shortest solution. Right now my datatype that stores the length and the node ID is defined as (per numerous examples such as http://www.open-mpi.org/doc/v1.4/man3/MPI_Reduce.3.php):

struct minType
{
    float len;
    int index;
};

On every node I initialize the local copy of this struct in the following manner:

int commRank;
MPI_Comm_rank (MPI_COMM_WORLD, &commRank);
minType solutionLen;
solutionLen.len = 1e37;
solutionLen.index = commRank;

At the end of execution I have an MPI_Gather call that successfully pulls down all of the solutions (I've printed them out from memory to verify them), and then the call:

MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);

My understanding is that the arguments are supposed to be:

  • The data source
  • The target for the result (only significant on the designated root node)
  • The number of items sent by each node
  • The datatype (MPI_FLOAT_INT appears to be defined per the link above)
  • The operation (MPI_MINLOC appears to be defined as well)
  • The root node's ID in the given communicator
  • The communicator to wait on.
When my code makes it to the reduce operation, I get this error:

    [compute-2-19.local:9754] *** An error occurred in MPI_Reduce
    [compute-2-19.local:9754] *** on communicator MPI_COMM_WORLD
    [compute-2-19.local:9754] *** MPI_ERR_ARG: invalid argument of some other kind
    [compute-2-19.local:9754] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
    --------------------------------------------------------------------------
    mpirun has exited due to process rank 0 with PID 9754 on
    node compute-2-19.local exiting improperly. There are two reasons this could occur:
    
    1. this process did not call "init" before exiting, but others in
    the job did. This can cause a job to hang indefinitely while it waits
    for all processes to call "init". By rule, if one process calls "init",
    then ALL processes must call "init" prior to termination.
    
    2. this process called "init", but exited without calling "finalize".
    By rule, all processes that call "init" MUST call "finalize" prior to
    exiting or it will be considered an "abnormal termination"
    
    This may have caused other processes in the application to be
    terminated by signals sent by mpirun (as reported here).
    --------------------------------------------------------------------------
    

I'll admit to being thoroughly stumped by this. In case it matters, I'm compiling with OpenMPI 1.5.3 (built with gcc 4.4) on a Rocks cluster based on CentOS 5.5.


I don't think you're allowed to use the same buffer for both input and output (the first two arguments). The man page says:

    When the communicator is an intracommunicator, you can perform a reduce operation in place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process sendbuf. In this case, the input data is taken at the root from the receive buffer, where it will be replaced by the output data.
