I have a program that I'm optimizing. I decided to use a bit of inline asm to see if I'd notice any change in speed. I compiled the original version (before using asm) and then the new version as a separate file.
Testing them back and forth on the same exact tasks, I can't tell whether it's faster, slower, or the same. It seems exactly the same speed, but I don't know. I guess since they're so close it probably doesn't matter much which way I program it, but purely out of curiosity: which would normally be a bit faster (even if only by microseconds)? I have posted my code below. Which would be some number of ticks faster?
Code:
/* Original */
Bits[Input1] = (Bits[Input1] & 240) >> 4;  /* keep the high nibble (0xF0), shift it down */
// insert statements here...
LB = 192 - LB - 224;
BitFlag++;
Code:
/* This replaces the above entirely */
Bits[Input1] = ProcessBits(Bits[Input1]);
// insert statements here...
LB = ProcessLB();
static char ProcessBits(char Byte1)
{
    __asm
    {
        mov al, Byte1  // load the byte
        and al, 240    // keep the high nibble (0xF0)
        shr al, 4      // shift it down
    }
    /* no return statement: the result is left in AL, which is where MSVC
       expects a char return value (this compiles with warning C4035) */
}
static char ProcessLB()
{
    __asm
    {
        mov al, 192    // start from 192
        sub al, LB     // 192 - LB; LB must be a global byte variable
        sub al, 224    // 192 - LB - 224
        inc BitFlag    // the separate BitFlag++ is folded in here; BitFlag must be a global byte as well
    }
    /* result returned in AL, as above */
}