CUDA: sharing data between multiple devices?
In the CUDA C Programming Guide, it is said that:
... by design, a host thread can execute device code on only one device at any given time. As a consequence, multiple host threads are required to execute device code on multiple devices. Also, any CUDA resources created through the runtime in one host thread cannot be used by the runtime from another host thread...
What I want to do is have two GPUs share data in mapped host memory,
but the manual seems to say this is not possible.
Is there any solution for this?
Maybe think about using something like MPI along with CUDA? There is a sketch of that pattern after the links below.
http://forums.nvidia.com/index.php?showtopic=30741
http://www.ncsa.illinois.edu/UserInfo/Training/Workshops/CUDA/presentations/tutorial-CUDA.html
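A minimal sketch of the MPI approach, assuming a two-rank job with one GPU per rank; the buffer size, rank-to-device mapping, and use of MPI_Sendrecv are illustrative assumptions, and error checking is omitted:

    /* One MPI rank per GPU; device data is staged through host buffers
       and exchanged with MPI. Assumes exactly two ranks. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int N = 1 << 20;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(rank);                      /* one GPU per rank */

        float *d_buf;
        float *h_send = (float *)malloc(N * sizeof(float));
        float *h_recv = (float *)malloc(N * sizeof(float));
        cudaMalloc(&d_buf, N * sizeof(float));

        /* ... launch kernels that produce data in d_buf ... */

        /* stage through the host, then swap with the other rank */
        cudaMemcpy(h_send, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        int peer = 1 - rank;                      /* the other GPU's rank */
        MPI_Sendrecv(h_send, N, MPI_FLOAT, peer, 0,
                     h_recv, N, MPI_FLOAT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_recv, N * sizeof(float), cudaMemcpyHostToDevice);

        cudaFree(d_buf);
        free(h_send);
        free(h_recv);
        MPI_Finalize();
        return 0;
    }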
When you allocate the host memory, allocate it with cudaHostAlloc() and pass the cudaHostAllocPortable flag. This allows the memory to be accessed by multiple CUDA contexts.
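A minimal sketch of that approach, assuming one host thread per GPU (as the guide quoted above requires); NUM_GPUS, worker(), and the buffer size are illustrative assumptions, and error checking is omitted:

    /* The buffer is allocated once with cudaHostAllocPortable, then each
       host thread drives its own GPU and copies from the same allocation. */
    #include <cuda_runtime.h>
    #include <pthread.h>

    #define NUM_GPUS 2
    static const size_t N = 1 << 20;
    static float *h_shared;                 /* one buffer, all contexts */

    static void *worker(void *arg)
    {
        int dev = (int)(size_t)arg;
        cudaSetDevice(dev);                 /* this thread's device/context */

        float *d_buf;
        cudaMalloc(&d_buf, N * sizeof(float));

        /* because h_shared is portable, this pinned-memory copy is legal
           from every context, not just the allocating one */
        cudaMemcpy(d_buf, h_shared, N * sizeof(float), cudaMemcpyHostToDevice);

        /* ... launch kernels on d_buf, copy results back to a distinct
           region of h_shared ... */

        cudaFree(d_buf);
        return NULL;
    }

    int main(void)
    {
        cudaHostAlloc((void **)&h_shared, N * sizeof(float),
                      cudaHostAllocPortable);

        pthread_t t[NUM_GPUS];
        for (int i = 0; i < NUM_GPUS; ++i)
            pthread_create(&t[i], NULL, worker, (void *)(size_t)i);
        for (int i = 0; i < NUM_GPUS; ++i)
            pthread_join(t[i], NULL);

        cudaFreeHost(h_shared);
        return 0;
    }

For the mapped-memory case the question asks about, you would additionally pass cudaHostAllocMapped, call cudaSetDeviceFlags(cudaDeviceMapHost) in each thread before its context is created, and obtain a per-device pointer with cudaHostGetDevicePointer().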
The solution is to manage this shared data manually, even with SLI. Cards do not really share memory in SLI mode; shared data must be copied from one to the other over the bus. A sketch of such a manual copy follows the link below.
http://forums.nvidia.com/index.php?showtopic=30740
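A minimal sketch of that manual copy, assuming a CUDA 4.0+ runtime where one host thread may switch devices with cudaSetDevice(); the buffer names and size are illustrative assumptions, and error checking is omitted:

    /* Move data GPU 0 -> host -> GPU 1 explicitly over the bus. */
    #include <cuda_runtime.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t bytes = (1 << 20) * sizeof(float);
        float *d0, *d1;
        float *h_stage = (float *)malloc(bytes);  /* host staging buffer */

        cudaSetDevice(0);
        cudaMalloc(&d0, bytes);
        /* ... produce data in d0 on device 0 ... */

        cudaSetDevice(1);
        cudaMalloc(&d1, bytes);

        /* explicit two-hop copy: device 0 -> host -> device 1 */
        cudaSetDevice(0);
        cudaMemcpy(h_stage, d0, bytes, cudaMemcpyDeviceToHost);
        cudaSetDevice(1);
        cudaMemcpy(d1, h_stage, bytes, cudaMemcpyHostToDevice);

        /* on CUDA 4.0+ hardware that supports it,
           cudaMemcpyPeer(d1, 1, d0, 0, bytes) avoids the staging step */

        cudaFree(d1);
        cudaSetDevice(0);
        cudaFree(d0);
        free(h_stage);
        return 0;
    }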