Based on the specs, I would agree with you. I don't know that you have to use a byte array for the original data really; masking would seem to be the way to go. (Although a byte array for the output would work.)
I think my confusion came from wanting to encode 12-bit samples. Right now I can only assume they're encoded just like 16-bit samples but with the highest legal sample value being 4095. I guess I'll have to try and see. :-) I don't remember exactly how I solved it last time, but I think masking was used to get the value of the last byte holding the 2 least significant bits.
If it happens to be encoded into two bytes, I think I could use something like this, where 00001111 11111111 becomes 01111111 01111100 (most significant byte shown first):
Code:
UInt16 sample = 0xfff;              /* 12-bit sample: 00001111 11111111 */
UInt8 outputBuf[2];
sample <<= 3;                       /* left-justify: 01111111 11111000 */
memcpy(&outputBuf[0], &sample, 2);  /* NB: assumes little-endian byte order */
outputBuf[0] >>= 1;                 /* clear the MSB of the low byte */
Subsonics, based on the MIDI spec you posted, it looks like you can't just shift 16 bits. Instead you need to split the data into groups of 7 bits, because the spec calls for the high bit to be zero in each byte: only 7 bits of each byte carry data. So it's not just shifting the data left but also spreading it out as you cross byte boundaries.
I'm not sure what you mean now. In the post above I have done that: the MSB is padded with zero in both bytes. First I shift three bits to the left, which leaves me with one leading zero; then I copy the short across two bytes in the output buffer and make a right shift on the last byte. If you're referring to the original post, that was only one part of solving this that I couldn't figure out how to do. But I don't think I'll need it now, as I'm pretty sure the left-justifying mentioned in the doc is only relevant for the least significant byte.