Just a quick question, really... I know that since Vista, Microsoft has considered hardware acceleration of audio deprecated, presumably on the reasoning that a modern CPU can cope with mixing and effects while doing everything else at the same time.
To test this hypothesis, I recently tried to write a simple software audio mixer that can mix together up to 8 sounds of varying lengths. I tried doing this with ten 100ms buffers and found that it was laggy as hell.
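The per-sample work is nothing exotic. Here's a minimal sketch of the kind of inner loop I mean (the names and the exact clamping are illustrative, not lifted from my actual code):

```cpp
#include <cstddef>
#include <cstdint>

// Mix `numSources` streams of signed 16-bit PCM into `dest`, widening to
// 32 bits for the accumulation and clamping on the way back down.
void MixBuffers(const int16_t* const sources[], size_t numSources,
                int16_t* dest, size_t numSamples)
{
    for (size_t i = 0; i < numSamples; ++i)
    {
        int32_t acc = 0;
        for (size_t s = 0; s < numSources; ++s)
            acc += sources[s][i];          // widen so 8 streams can't overflow
        if (acc > 32767)  acc = 32767;     // saturate rather than wrap
        if (acc < -32768) acc = -32768;
        dest[i] = (int16_t)acc;
    }
}
```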
As in most designs of this kind, the mixer runs in its own thread and waits on two event objects: one signalled by the waveOut device when it finishes with a buffer, and one signalled by the creating thread when it wants to play another sound.
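In outline, the thread looks something like this (simplified, error handling and the actual mixing omitted; the variable names are mine):

```cpp
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static HANDLE g_hBufferDone;   // passed to waveOutOpen() with CALLBACK_EVENT
static HANDLE g_hPlayRequest;  // signalled by the creating thread

DWORD WINAPI MixerThread(LPVOID)
{
    HANDLE events[2] = { g_hBufferDone, g_hPlayRequest };
    for (;;)
    {
        switch (WaitForMultipleObjects(2, events, FALSE, INFINITE))
        {
        case WAIT_OBJECT_0:       // a waveOut buffer has finished playing
            // find the WAVEHDR whose WHDR_DONE flag is set, refill it by
            // mixing the active sounds, then requeue it with waveOutWrite()
            break;
        case WAIT_OBJECT_0 + 1:   // the creating thread queued a new sound
            // add the requested sound to the active list
            break;
        }
    }
}
```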
Initial testing seems to indicate that the mechanism works fine; the buffers just aren't long enough to compensate for the time it takes to mix each individual sample. But 100ms? At, say, 44.1kHz stereo that's only about 8,800 samples per buffer, and even eight streams' worth of adds ought to take a modern CPU microseconds, not milliseconds.
I've seen an official Microsoft sample that demonstrates a software mixer DLL; the core of it is written in assembly language.
I know that for operations like this you can take advantage of SIMD instructions such as MMX or SSE to process blocks of samples more efficiently than the code a C/C++ compiler might generate on its own, but is waveOut really still the high-latency interface of legend, even on a modern computer?
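Just to be concrete about the SIMD point: with SSE2 you can mix eight 16-bit samples per instruction and get saturation for free, along these lines (an illustrative sketch, not my production code; note it saturates per source rather than accumulating in 32 bits like the loop above):

```cpp
#include <emmintrin.h>   // SSE2 intrinsics
#include <cstddef>
#include <cstdint>

// Accumulate one source into the output with the saturating 16-bit add,
// eight samples at a time. Assumes 16-byte-aligned buffers and a sample
// count that is a multiple of 8.
void MixIntoSSE2(int16_t* dest, const int16_t* src, size_t numSamples)
{
    for (size_t i = 0; i < numSamples; i += 8)
    {
        __m128i d = _mm_load_si128(reinterpret_cast<const __m128i*>(dest + i));
        __m128i s = _mm_load_si128(reinterpret_cast<const __m128i*>(src + i));
        d = _mm_adds_epi16(d, s);   // per-lane add with saturation, no wrap
        _mm_store_si128(reinterpret_cast<__m128i*>(dest + i), d);
    }
}
```

Even without hand-written SIMD like that, the scalar loop should be nowhere near slow enough to explain 100ms of buffering.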
Yes, I know I could use DirectSound, but look at how fast GDI goes these days. Why is the standard audio interface still so dodgy?