CUDA - Parallel Reduction over one axis




I am fairly new to CUDA programming and I am trying to write a CUDA kernel for parallel reduction over a single dimension of a 3-dimensional tensor, which is fed into the kernel as a row-major flattened float array.



In other words, I am trying to reimplement numpy.sum for the single-axis cases axis=0, axis=1, and axis=2.
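For reference, this is what I mean for the axis=2 case, written as a plain host-side C++ sketch (the function and variable names are just for illustration):

// out[i0][i1] = sum over i2 of in[i0][i1][i2],
// where 'in' is stored row-major as a flat array of dim0*dim1*dim2 floats.
void reference_sum_axis2(const float* in, float* out, int dim0, int dim1, int dim2)
{
    for (int i0 = 0; i0 < dim0; i0++)
        for (int i1 = 0; i1 < dim1; i1++) {
            float acc = 0.0f;
            for (int i2 = 0; i2 < dim2; i2++)
                acc += in[i0*dim1*dim2 + i1*dim2 + i2];
            out[i0*dim1 + i1] = acc;
        }
}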



I have successfully implemented "reduce over axis 0" and "reduce over axis 1", but performance issues with "reduce over axis 2" made me post a question here to ask for advice.



The kernel is launched with a 1-D grid and 1-D block configuration, and it maps each thread to one element of the reduced output tensor. So it should be something like this:

[image: mapping of threads to elements of the reduced output tensor]



Here is my kernel:


__global__ void kernel_reduce_sum_3d_try02(
        float* g_idata,
        float* g_odata,
        int dim0,
        int dim1,
        int dim2,
        int overaxis0,
        int overaxis1,
        int overaxis2)
{
    if (overaxis0 == 0 && overaxis1 == 0 && overaxis2 == 1) {
        // static shared memory
        __shared__ float smem_store[BLOCK_SIZE];

        // set thread ID
        unsigned int tid = threadIdx.x;

        // global index
        unsigned int idx = (blockIdx.x * blockDim.x + threadIdx.x);

        // unrolling
        float tmpSum0 = 0;
        unsigned int i = 0;
        unsigned int src_index;
        unsigned int _limit = (unsigned int)(dim0 * dim1 * dim2);

        // indices over output
        int thrd_d0 = (idx) / (dim1*1);
        int thrd_d1 = (idx - thrd_d0*dim1);

        // only for debugging
        printf("tid: %03d \tidx: %03d\td0: %02d\td1: %02d\n", tid, idx, thrd_d0, thrd_d1);

        for (i = 0; i < dim2; i++) {
            src_index = thrd_d0*dim1*dim2 + thrd_d1*dim2 + i;
            if (idx < 15) printf("idx: %d : src_index: %d\n", idx, src_index);
            if (src_index < _limit)
                tmpSum0 += g_idata[src_index];
        }

        if (src_index + 0 < _limit) smem_store[tid + 0] = tmpSum0;
        __syncthreads();

        unsigned int oindx = (unsigned int)(thrd_d0*dim1 + thrd_d1);
        if (src_index + 0 <= _limit) g_odata[oindx + 0] = smem_store[tid + 0];
    }
}
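For completeness, the launch looks roughly like this (a sketch only; d_idata and d_odata stand for the device pointers, and BLOCK_SIZE is the same compile-time constant used for the shared memory array):

int out_elems = dim0 * dim1;                           // one thread per output element
int grid = (out_elems + BLOCK_SIZE - 1) / BLOCK_SIZE;  // 1-D grid of 1-D blocks
kernel_reduce_sum_3d_try02<<<grid, BLOCK_SIZE>>>(d_idata, d_odata,
                                                 dim0, dim1, dim2,
                                                 0, 0, 1);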



Question 1. Is there a better way to assign threads and blocks to compute the output tensor?



Question 2. In my kernel, how can I increase occupancy? (It is currently around 50%.)



Question 3. How should I use shared memory for global memory read operations? (For a large dim2, each block would have to allocate a huge shared memory buffer, which is bad for concurrent warp execution and occupancy.)




1 Answer



The top two optimization objectives for any CUDA programmer are:

1. Expose enough parallelism (roughly speaking, launch enough threads to keep the GPU busy).
2. Make efficient use of the memory subsystem.



For global memory, objective 2 means that we should strive for coalesced access when reading or writing to global memory. One example of coalesced access is when adjacent threads in a warp are reading adjacent locations in memory.
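As a generic illustration (a fragment only, not a complete kernel; data and stride are placeholder names), the two access patterns look like this:

// Coalesced: adjacent threads in a warp touch adjacent addresses.
float a = data[blockIdx.x * blockDim.x + threadIdx.x];

// Strided: adjacent threads touch addresses 'stride' elements apart.
// (The kernel in the question effectively does this, with stride = dim2.)
float b = data[(blockIdx.x * blockDim.x + threadIdx.x) * stride];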



Does your kernel do this? It does not. Here's a trivial test case:


$ cat t1.cu
#include <stdio.h>
#define BLOCK_SIZE 256

__global__ void kernel_reduce_sum_3d_try02(
        float* g_idata,
        float* g_odata,
        int dim0,
        int dim1,
        int dim2,
        int overaxis0,
        int overaxis1,
        int overaxis2)
{
    if (overaxis0 == 0 && overaxis1 == 0 && overaxis2 == 1) {
        // static shared memory
        __shared__ float smem_store[BLOCK_SIZE];

        // set thread ID
        unsigned int tid = threadIdx.x;

        // global index
        unsigned int idx = (blockIdx.x * blockDim.x + threadIdx.x);

        // unrolling
        float tmpSum0 = 0;
        unsigned int i = 0;
        unsigned int src_index;
        unsigned int _limit = (unsigned int)(dim0 * dim1 * dim2);

        // indices over output
        int thrd_d0 = (idx) / (dim1*1);
        int thrd_d1 = (idx - thrd_d0*dim1);

        // only for debugging
        //printf("tid: %03d \tidx: %03d\td0: %02d\td1: %02d\n", tid, idx, thrd_d0, thrd_d1);

        for (i = 0; i < dim2; i++) {
            src_index = thrd_d0*dim1*dim2 + thrd_d1*dim2 + i;
            //if (idx < 15) printf("idx: %d : src_index: %d\n", idx, src_index);
            if (src_index < _limit) {
                tmpSum0 += g_idata[src_index];
                if ((blockIdx.x == 0) && (i == 0) && (threadIdx.x < 32))
                    printf("thread: %d, src_index: %u\n", threadIdx.x, src_index);
            }

            if (src_index + 0 < _limit) smem_store[tid + 0] = tmpSum0;
            __syncthreads();

            unsigned int oindx = (unsigned int)(thrd_d0*dim1 + thrd_d1);
            if (src_index + 0 <= _limit) g_odata[oindx + 0] = smem_store[tid + 0];
        }
    }
}

int main(){

    size_t dim = 256;
    size_t sz = dim*dim*dim*sizeof(float);
    float *g_idata, *g_odata;
    cudaMalloc(&g_idata, sz);
    cudaMalloc(&g_odata, sz);
    kernel_reduce_sum_3d_try02<<<dim/BLOCK_SIZE, BLOCK_SIZE>>>(
        g_idata,
        g_odata,
        dim,
        dim,
        dim,
        0,
        0,
        1);
    cudaDeviceSynchronize();
}
$ nvcc -o t1 t1.cu
$ cuda-memcheck ./t1
========= CUDA-MEMCHECK
thread: 0, src_index: 0
thread: 1, src_index: 256
thread: 2, src_index: 512
thread: 3, src_index: 768
thread: 4, src_index: 1024
thread: 5, src_index: 1280
thread: 6, src_index: 1536
thread: 7, src_index: 1792
thread: 8, src_index: 2048
thread: 9, src_index: 2304
thread: 10, src_index: 2560
thread: 11, src_index: 2816
thread: 12, src_index: 3072
thread: 13, src_index: 3328
thread: 14, src_index: 3584
thread: 15, src_index: 3840
thread: 16, src_index: 4096
thread: 17, src_index: 4352
thread: 18, src_index: 4608
thread: 19, src_index: 4864
thread: 20, src_index: 5120
thread: 21, src_index: 5376
thread: 22, src_index: 5632
thread: 23, src_index: 5888
thread: 24, src_index: 6144
thread: 25, src_index: 6400
thread: 26, src_index: 6656
thread: 27, src_index: 6912
thread: 28, src_index: 7168
thread: 29, src_index: 7424
thread: 30, src_index: 7680
thread: 31, src_index: 7936
========= ERROR SUMMARY: 0 errors
$



We see that on one particular access, the threads in a warp are reading from locations that are separated by an index distance of 256 (ideally we would like this "distance" to be 1 as we go from thread to thread in a warp, for "coalesced" access to global memory).



So is this possible? It is possible for each of the 3 sum directions, although there will need to be somewhat different methodologies/realizations applied in each case. We'll come back to this.



We also want to expose enough parallelism, which can roughly translate into "be able to launch kernels with ~10,000 or more threads". The behavior here will vary by GPU architecture and other factors, but this is a reasonable starting point for general understanding. A kernel with 256 threads total (for example) would not be enough to saturate any current CUDA GPU.



Given that this sum function will work on 3-dimensional data, let's start by considering a 3-dimensional array of ~256 elements per dimension. If we chose to create just 256 threads and have them work their way through the array, that would be too few threads to satisfy objective 1. So let's think about realizations that can handle 256x256 (or more) threads.



Case 1: summing in the X-direction



I will assume C-style storage, so the X direction is the direction of linear storage in memory: as we move from element to element along a row, the memory index increases by 1. Summing along these "rows" therefore requires that adjacent threads read adjacent elements belonging to the same sum. To let adjacent threads do this and then cooperate to form the sum, we use a classical parallel reduction method. We launch one block per row sum, i.e. a number of blocks equal to the Y-dimension of the array (256 in our example) times the Z-dimension (also 256), with 256 threads per block; each block of 256 threads cooperates to compute one row sum. That gives 65536 blocks of 256 threads, or 65536*256 total threads, meeting our first objective. This will be the only case that uses or needs shared memory.



Case 2: summing in the Y-direction



We could refer to this as "summing columns in the Y direction". In order to satisfy our second objective, we require that each thread in a warp read an adjacent element, but adjacent elements in memory now belong to separate Y-column sums. An efficient realization in this case, if we can keep enough threads busy, is to compute a separate sum in each thread: each thread traverses down a Y-column, keeping a running sum in a loop. A single Z-sheet (a slice in the X-Y plane) would only require 256 threads in our example case, but we can increase the number of threads by working on all 256 sheets at the same time, giving 256x256 = 65536 threads and meeting our first objective.



Case 3: summing in the Z-direction (your example case in your question)



This case is just a small variation on Case 2. Once again, to satisfy objective 2, we want adjacent threads in a warp to read adjacent locations in memory, and once again those locations belong to separate sums. Now, however, we want the threads to traverse columns in the Z-direction rather than the Y-direction, so the indexing will be slightly different, but overall the realization looks quite similar to Case 2. We again employ a grid of 256x256 threads, where each thread keeps a separate running sum, but now each running sum starts in the first "sheet" (an X-Y slice) and iterates through successive sheets in the Z-direction, summing one "Z-column" per thread.



Here's an example demonstrating all 3 cases:


$ cat t2.cu
#include <stdio.h>
#include <assert.h>

// modifiable
typedef float mytype;   // consider possibility of overflow e.g. for char types
const size_t dimX = 701;
const size_t dimY = 699;
const size_t dimZ = 698;

// not modifiable
const int nTPB = 256;

template <typename T>
__global__ void kernel_reduce_sum_3d(
        const T * __restrict__ g_idata,
        T * __restrict__ g_odata,
        const size_t dim0,
        const size_t dim1,
        const size_t dim2,
        const bool overaxis0,
        const bool overaxis1,
        const bool overaxis2)
{
    __shared__ T sm[nTPB];
    if (overaxis0 && !overaxis1 && !overaxis2){
        // Case 1 - sums in X-direction
        // each threadblock is responsible for a separate row sum
        size_t bidx = blockIdx.x;
        size_t tidx = threadIdx.x;
        sm[threadIdx.x] = 0;
        while (tidx < dim0){
            sm[threadIdx.x] += g_idata[bidx*dimX+tidx];
            tidx += blockDim.x;}  // block-stride loop
        __syncthreads();
        // parallel reduction
        for (int i = blockDim.x>>1; i > 0; i>>=1){
            if (threadIdx.x < i) sm[threadIdx.x] += sm[threadIdx.x + i];
            __syncthreads();}
        if (!threadIdx.x) g_odata[bidx] = sm[0];
    }
    else if (!overaxis0 && overaxis1 && !overaxis2){
        // Case 2 - sums in Y-direction
        // each thread is responsible for a separate Y-column sum
        size_t idx = threadIdx.x+blockDim.x*blockIdx.x;
        if (idx < (dim0*dim2)){
            size_t tidx = idx%dim0 + (idx/dim0)*(dim0*dim1);
            T tsum = 0;
            for (size_t i = 0; i < dim1; i++){
                tsum += g_idata[tidx];
                tidx += dim0;}
            g_odata[idx] = tsum;
        }
    }
    else if (!overaxis0 && !overaxis1 && overaxis2){
        // Case 3 - sums in Z-direction
        // each thread is responsible for a separate Z-column sum
        size_t idx = threadIdx.x+blockDim.x*blockIdx.x;
        if (idx < (dim0*dim1)){
            size_t tidx = idx;
            T tsum = 0;
            for (size_t i = 0; i < dim2; i++){
                tsum += g_idata[tidx];
                tidx += dim0*dim1;}
            g_odata[idx] = tsum;
        }
    }
    else assert(0); // not implemented!
}

template <typename T>
bool validate(T *data, size_t dim, T val){
    for (size_t i = 0; i < dim; i++)
        if (data[i] != val) {printf("mismatch at %lu, was: %f, should be: %f\n", i, (float)(data[i]), (float)val); return false;}
    return true;
}


int main(){

    // setup input array of all 1
    const size_t sz = dimX*dimY*dimZ*sizeof(mytype);
    mytype *d_in, *d_out, *h_in, *h_out;
    size_t rsz;
    h_in = new mytype[dimX*dimY*dimZ];
    assert(h_in != NULL);
    cudaError_t err = cudaMalloc(&d_in, sz);
    if (err != cudaSuccess) {printf("cudaMalloc1 error: %s\n", cudaGetErrorString(err)); return -1;}
    for (size_t i = 0; i < dimX*dimY*dimZ; i++) h_in[i] = 1;
    err = cudaMemcpy(d_in, h_in, sz, cudaMemcpyHostToDevice);
    if (err != cudaSuccess) {printf("cudaMemcpy1 error: %s\n", cudaGetErrorString(err)); return -1;}
    // Test X-direction sums
    printf("testing X-direction sums\n");
    rsz = dimY*dimZ*sizeof(mytype);
    h_out = new mytype[dimY*dimZ];
    err = cudaMalloc(&d_out, rsz);
    if (err != cudaSuccess) {printf("cudaMalloc2 error: %s\n", cudaGetErrorString(err)); return -1;}
    err = cudaMemset(d_out, 0, rsz);
    if (err != cudaSuccess) {printf("cudaMemset1 error: %s\n", cudaGetErrorString(err)); return -1;}
    kernel_reduce_sum_3d<<<dimY*dimZ, nTPB>>>(d_in, d_out, dimX, dimY, dimZ, true, false, false);
    err = cudaMemcpy(h_out, d_out, rsz, cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) {printf("result1 error: %s\n", cudaGetErrorString(err)); return -1;}
    if (!validate(h_out, dimY*dimZ, (mytype)dimX)) return -1;
    cudaFree(d_out);
    delete[] h_out;
    // Test Y-direction sums
    printf("testing Y-direction sums\n");
    rsz = dimX*dimZ*sizeof(mytype);
    h_out = new mytype[dimX*dimZ];
    err = cudaMalloc(&d_out, rsz);
    if (err != cudaSuccess) {printf("cudaMalloc3 error: %s\n", cudaGetErrorString(err)); return -1;}
    err = cudaMemset(d_out, 0, rsz);
    if (err != cudaSuccess) {printf("cudaMemset2 error: %s\n", cudaGetErrorString(err)); return -1;}
    kernel_reduce_sum_3d<<<((dimX*dimZ)+(nTPB-1))/nTPB, nTPB>>>(d_in, d_out, dimX, dimY, dimZ, false, true, false);
    err = cudaMemcpy(h_out, d_out, rsz, cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) {printf("result2 error: %s\n", cudaGetErrorString(err)); return -1;}
    if (!validate(h_out, dimX*dimZ, (mytype)dimY)) return -1;
    cudaFree(d_out);
    delete[] h_out;
    // Test Z-direction sums
    printf("testing Z-direction sums\n");
    rsz = dimX*dimY*sizeof(mytype);
    h_out = new mytype[dimX*dimY];
    err = cudaMalloc(&d_out, rsz);
    if (err != cudaSuccess) {printf("cudaMalloc4 error: %s\n", cudaGetErrorString(err)); return -1;}
    err = cudaMemset(d_out, 0, rsz);
    if (err != cudaSuccess) {printf("cudaMemset3 error: %s\n", cudaGetErrorString(err)); return -1;}
    kernel_reduce_sum_3d<<<((dimX*dimY)+(nTPB-1))/nTPB, nTPB>>>(d_in, d_out, dimX, dimY, dimZ, false, false, true);
    err = cudaMemcpy(h_out, d_out, rsz, cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) {printf("result3 error: %s\n", cudaGetErrorString(err)); return -1;}
    if (!validate(h_out, dimX*dimY, (mytype)dimZ)) return -1;
    cudaFree(d_out);
    delete[] h_out;
    cudaFree(d_in);
    err = cudaGetLastError();
    if (err != cudaSuccess) {printf("CUDA error: %s\n", cudaGetErrorString(err)); return -1;}
    printf("success!\n");
    delete[] h_in;
    return 0;
}
$ nvcc -lineinfo -o t2 t2.cu
$ cuda-memcheck ./t2
========= CUDA-MEMCHECK
testing X-direction sums
testing Y-direction sums
testing Z-direction sums
success!
========= ERROR SUMMARY: 0 errors
$



Addressing your questions, briefly:



Question 1. Is there a better way to assign threads and blocks to compute the output tensor?



As indicated, your choice of indexing is non-optimal, because it does not result in coalesced access to global memory. The alternate realization I provided will result in coalesced reads from global memory.



Question 2. In my kernel, how can I increase occupancy? (It is currently around 50%.)



The figure of merit for a memory-bound kernel like this one is not occupancy, but whether the kernel runtime is approximately equal to the time it takes to read and write the data. The kernel above should satisfy this. If you satisfy this condition for a memory-bound kernel, there is no further improvement possible, regardless of occupancy.
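As a rough sketch of that check for the Z-direction sum above (the 500 GB/s figure is only an assumed bandwidth; substitute the number for your GPU):

double bytes = (double)dimX*dimY*dimZ*sizeof(float)   // input read once
             + (double)dimX*dimY*sizeof(float);       // output written once
double bw    = 500e9;                                 // assumed device bandwidth, ~500 GB/s
double t_min = bytes / bw;                            // ~2.7 ms for 701x699x698 floats
// If the measured kernel time is close to t_min, the kernel is as fast as it can be.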



Question 3. How should I use shared memory for global memory read operations? (For a large dim2, each block would have to allocate a huge shared memory buffer, which is bad for concurrent warp execution and occupancy.)



Optimal realizations such as what I have shown do not require shared memory at all for case 2 and case 3 (your case), and in case 1, we can craft a kernel that only requires a small amount of shared memory per block; not enough to be an occupancy concern or cause any other problem.
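For reference, the case 1 footprint in the kernel above is just one element per thread:

__shared__ T sm[nTPB];   // nTPB = 256, so 256 * sizeof(float) = 1 KiB per block
// 1 KiB per block is a small fraction of the shared memory available on any
// current GPU, so it does not limit occupancy.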





Thank you for your thorough answer, it covers all my questions and more. – saleh jamali






