Originally Posted by MutantJohn
Interesting. This is a good thread for me.
This also explains why I've been going gaga over GPU programming lately, and in a really good way. OpenCL also seems incredibly easy to pick up once you know CUDA; it's literally (well, figuratively literally) the same thing with different names.
Okay, so let me make sure that I understand this correctly.
GPGPU coding is amazing if the following conditions hold:
1. Memory accesses are done in a linear fashion (i.e., a contiguous array is being read from or written to), so that each warp's accesses coalesce. If the array isn't accessed contiguously, the accesses should at least be separated by the same constant stride for the best performance you can get in that case (see the first sketch after this list).
2. The operations have minimal branching. This was already well-established thinking in high-performance computing long before GPUs, but it seems to be extra true for them, since threads of a warp that take different sides of a branch force the warp to execute both sides (see the second sketch below).
3. The operations largely work on single-precision floats. Not doubles, because double precision is much slower on most GPUs, right? And if accuracy is an issue, there's presumably an exact method somewhere (the last sketch below shows an easy way to end up in double precision by accident).
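Here's a minimal CUDA sketch of point 1; the kernel and buffer names are made up for illustration, not from anything in this thread. The first kernel reads and writes with unit stride, so a warp's accesses fall in consecutive addresses and coalesce into a few memory transactions; the second walks the array with a constant stride, which spreads the warp out and wastes most of each transaction.

Code:
#include <cuda_runtime.h>

// Coalesced: thread i touches element i, so a warp's accesses
// land in consecutive addresses and merge into few transactions.
__global__ void scale_coalesced(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * in[i];
}

// Strided: thread i touches element i * stride, so a warp's accesses
// are scattered and each memory transaction carries mostly unused bytes.
__global__ void scale_strided(float *out, const float *in, int n, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        out[i] = 2.0f * in[i];
}

int main()
{
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_coalesced<<<blocks, threads>>>(out, in, n);    // fast path
    scale_strided<<<blocks, threads>>>(out, in, n, 32);  // same math, many more transactions
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}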
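And a sketch of point 2, again with made-up names. In the first kernel, even and odd lanes of the same warp take different paths, so the warp executes both paths back to back with half its lanes masked off each time. The second computes both candidate values and selects one, which the compiler can lower to predicated instructions instead of a divergent branch (for a branch this small the compiler may already do that on its own, so treat it as illustrative).

Code:
// Divergent: lanes of one warp split on index parity, so the warp
// serializes: first the even path, then the odd path.
__global__ void divergent(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (i % 2 == 0)
            data[i] = data[i] * 2.0f;
        else
            data[i] = data[i] + 1.0f;
    }
}

// Branchless: every lane computes both values, then selects one.
// The select can compile to a predicated move rather than a branch.
__global__ void branchless(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float even = data[i] * 2.0f;
        float odd  = data[i] + 1.0f;
        data[i] = (i % 2 == 0) ? even : odd;
    }
}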
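On point 3: GPUs can handle doubles, they're just far slower than floats on most consumer cards, so the usual trap is doing double-precision math by accident. One classic way, shown below in a made-up kernel, is an unsuffixed literal: 0.5 is a double in C/C++, so it silently promotes the whole expression to FP64.

Code:
// An unsuffixed floating-point literal is a double, so the commented
// line promotes x[i] and runs the math in slow FP64 before truncating.
__global__ void axpy(float *y, const float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // y[i] = a * x[i] + 0.5;   // accidental double-precision math
        y[i] = a * x[i] + 0.5f;     // the 'f' suffix keeps it in FP32
    }
}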
Am I missing anything? I'm really starting to like GPGPU computing, and I want to continue with it because it's right up my alley and exactly what I need.