So, in the Rust thread Elysia mentioned APUs.
Now, to me it seems like an APU is just a CPU and a GPU on the same die. At least one implementation of an APU is a CPU/GPU hybrid, so let's just discuss that for the time being.
Do we expect this to change how we program very much? Wikipedia cites that one common configuration is a CPU paired with an OpenCL-compatible GPU.
I don't want to have to use OpenCL. I don't like the fact that my GPU kernels have to be written as strings and that any compilation bugs only show up at run-time. But I guess that's just something I need to get used to. I hear CUDA is friendlier for C programmers because it's really just a C API, in which case, go nVidia; it's pretty sexy.
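To make the "kernels as strings" complaint concrete, here's a rough sketch of the OpenCL host side (just the build step; error checking and the usual platform/device setup are elided, the kernel and function names are made up, and you'd need an OpenCL SDK and a device to actually build and run this):

```c
#include <CL/cl.h>
#include <stdio.h>

/* The kernel source is just a C string -- a typo in it won't be caught
   by your compiler, only by clBuildProgram at run-time. */
static const char *src =
    "__kernel void scale(__global float *v, float k) {\n"
    "    int i = get_global_id(0);\n"
    "    v[i] *= k;\n"
    "}\n";

static void build(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    if (clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) != CL_SUCCESS) {
        /* Compilation bugs surface here, long after your program started. */
        char log[4096];
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof log, log, NULL);
        fprintf(stderr, "kernel failed to compile at run-time:\n%s\n", log);
    }
}
```

By contrast, a CUDA kernel is ordinary code compiled by nvcc ahead of time, so the same typo would be a compile error instead of a run-time build-log entry.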
Also, nVidia has Project Denver, which is an ARM CPU with an nVidia GPU, so I imagine it'll be OpenCL-compatible through its CUDA stack like nVidia's GPUs are now.
But aside from these hardware specifics, should we expect to see much of a programming paradigm shift? Like, what differences should we see if there are any?
Writing separate parallel kernels isn't exactly a radical departure from what we have now, although it is interesting to think about launching an instance of a kernel for every point in, say, a particle simulation, where thread IDs are how we identify points in the array. But that's just one isolated example.