About CUDA's architecture (SM, SP)

I am just starting out with CUDA programming. CUDA's architecture seems to involve the concepts of SP and SM. I ran the deviceQuery.cpp sample to check the SP and SM counts of my development environment, but I could not tell which output item corresponds to the SP and which to the SM.

I believe the line "(14) Multiprocessors, (8) CUDA Cores/MP" gives the SM and SP counts. Is the following understanding correct?

SM = Multiprocessors = 14
SP = CUDA Cores/MP = 8
CUDA Cores = 14 * 8 = 112

For reference, the output of deviceQuery.cpp was as follows.

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTS 240"
CUDA Driver Version / Runtime Version 5.5 / 5.5
CUDA Capability Major/Minor version number: 1.1
Total amount of global memory: 1024 MBytes (1073741824 bytes)
(14) Multiprocessors, ( 8) CUDA Cores/MP: 112 CUDA Cores
GPU Clock rate: 1620 MHz (1.62 GHz)
Memory Clock rate: 1100 Mhz
Memory Bus Width: 256-bit
Maximum Texture Dimension Size (x,y,z) 1D=(8192), 2D=(65536, 32768), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(8192), 512 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(8192, 8192), 512 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 32
Maximum number of threads per multiprocessor: 768
Maximum number of threads per block: 512
Max dimension size of a thread block (x,y,z): (512, 512, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 1)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): No
Device PCI Bus ID / PCI location ID: 9 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >


According to this, you are correct:

SM = Streaming Multiprocessor

SP = Streaming Processor = CUDA Core

Total SP/CUDA Cores = number of SM * number of SP/CUDA Cores per SM
