Well, I have some interesting results.
I timed both approaches in milliseconds. "Mid time" here covers the first part: assigning values to the bitmap/array. "Final time" covers the second part: checking the bitmap/array and assigning the final values to the bitmap. The bitmap conversion code was essentially what I posted in my earlier solution.
Texture size 1024x1024:
  Bitmap conversion mid time:     50.069981 ms
  Bitmap conversion final time:  105.990868 ms
  Array mid time:                  4.291205 ms
  Array final time:             2142.164307 ms

Texture size 2048x2048:
  Bitmap conversion mid time:    200.142319 ms
  Bitmap conversion final time:  418.508026 ms
  Array mid time:                 23.939423 ms
  Array final time:             8596.512695 ms

Texture size 4096x4096:
  Bitmap conversion mid time:     790.517334 ms
  Bitmap conversion final time: 1655.713867 ms
  Array mid time:                111.133240 ms
  Array final time:            34683.953125 ms
My conclusion is that accessing the values in the array is what takes most of the time. Since the bitmap only needs to access one row of pixels at a time, there are far fewer checks. For the array I tried both a raw float* and a std::vector<float>, and their times were fairly similar. The time spent on the conversion calculation itself is a very small part of these totals and negligible for this situation.
This means I save both memory and a whole lot of time. Pretty cool.
Goes to show you never know about these things.