# Thread: Testing Application's speed

1. ## Testing Application's speed

Is there a way to test how long it takes for an application to do something?

For example, running two different sorting algorithms and testing which one takes less time to complete.

2. If you used quick-sort, a stop-watch will do.
If you used bubble-sort, try a sundial.

3. But sorting is not the only thing I'm testing.

4. Code:
```cpp
#include <iostream>
#include <fstream>
#include <iomanip>
#include <cstdio>
#include <windows.h>
using namespace std;

int main()
{
    fstream myfile;
    myfile.open("myfile.txt", ios::out);
    DWORD dwStart = GetTickCount();
    int count3 = 0;
    int count2 = 25000;
    const char spinner[] = "-/-\\|/-\\|/";  // frames for the progress spinner

    printf("Counting to 1,000,000:   \n\n");
    for (int count = 0; count != 1000000; count++)
    {
        count3++;
        // every 25,000 iterations, animate the spinner for ten frames
        if (count >= count2 && count < count2 + 10)
        {
            printf("\b%c", spinner[count - count2]);
            Sleep(25);
            if (count == count2 + 9)
                count2 += 25000;
        }
        myfile << setw(10) << count;
        if (count3 == 5)    // five numbers per line of output
        {
            myfile << "\n";
            count3 = 0;
        }
    }
    DWORD dwEnd = GetTickCount();
    printf("\bYour output file is 'myfile.txt'.");
    myfile << "\n";
    // subtract the approximate time spent sleeping in the spinner animation
    myfile << setw(4) << "The procedure was completed in " << ((dwEnd - dwStart) - 9000) << " ms.";
    printf("\nThe procedure was completed in %d ms.", (int)((dwEnd - dwStart) - 9000));
    myfile.close();
    cin.get();
    //remove("myfile.txt");
    return 0;
}
```
Code:
```cpp
#include <iostream>
#include <windows.h>

using namespace std;

int main()
{
    int value;
    DWORD start, end;
    start = GetTickCount();
    value = system("notepad.exe");
    end = GetTickCount();
    cout << "Process took " << end - start << "ms." << endl << endl;
    cout << "Notepad.exe returned " << value << endl << endl;
    cin.get();
    return 0;
}
```
**Notice**
Your program won't continue until the program it runs has
exited, either by you closing it or by it running its course.

P.S.
I wrote this a long time ago, back before I liked "cout" better than
"printf". This is just to give you an idea.

5. here's a nice portable way of doing it:
Code:
```cpp
#include <iostream>     // for visual output
#include <ctime>        // for the actual timer

int main()
{
    const char* spinner = "|/-\\";  // just for some visual output
    short int x = 0;                // to iterate through spinner
    clock_t start = clock();        // get the time before it starts

    for (short int i = 0; i < 32000; i++)   // this would be your algorithm
    {
        std::cout << '\b' << spinner[x] << std::flush;  // also slows the loop a bit
        x = (x >= 3 ? 0 : x + 1);   // so it doesn't iterate past the end
    }

    // output the end timing result - here you would use something like
    // clock_t end = clock(); if you wanted to save the value for a later
    // time.
    std::cout << "\nThat ran in " << clock() - start << " clock ticks" << std::endl;
    return 0;
}
```

6. That's good code, major, but
if I remember correctly the lowest
amount of time you can return is seconds.

GetTickCount returns milliseconds (+/- 10)

I think Linux has utime or something, and it's supposed to return
pretty low times with pretty good accuracy.

7. Originally Posted by ILoveVectors
That's good code, major, but
if I remember correctly the lowest
amount of time you can return is seconds.
Erm, if using CLOCKS_PER_SEC is working for you, all it takes is a simple multiplication to get milliseconds...

but when you're timing this like this, it really doesn't matter if you have a 'real' time or not, because you're timing things relatively...

unless I'm missing something here...

8. If you're timing something in-process, like the time for a loop to complete, then unless you're running it on a 2MHz computer, I think in most cases you want real milliseconds - not just a multiplication to reach approximate milliseconds - so I don't think time_t is the best choice.

Here is an example why.

Let's say we have a loop that counts to 1,000,000,000.

If we use time_t, the results could be like this (results displayed in ms):

| Comp   | time_t | GetTickCount |
|--------|--------|--------------|
| 1GHz   | 2000ms | 1997ms       |
| 1.2GHz | 2000ms | 1888ms       |
| 1.3GHz | 2000ms | 1765ms       |
| 1.5GHz | 2000ms | 1537ms       |

These aren't necessarily valid results from a test, but they point out that according to time_t, all of these processors run the loop in the same amount of time. To get a more accurate result, the test would have to run for a longer time with time_t to show any real difference. So if you're testing a lot of systems, it may be more efficient to use a timing procedure that can measure milliseconds or lower. That's my opinion and I'm sticking to it.

9. I see your point, but generally that really doesn't matter... all that matters is the relative speed of different algorithms... like Salem said, on large amounts of data, a quick-sort is faster than a bubble sort, no matter what processor/architecture you run it on.

10. Yeah, but he didn't want to know what was faster; he wanted
to know how to time it the best way.

But yeah, I just like arguing. I'd argue with a tree if it would argue back.
