Hi guys, just a quick question. How do you usually do functional testing, i.e. check that a function works as expected? What are the steps? Thanks.
How would you tell if a car works?
Well, I guess I would try driving it around. If I manage to get somewhere and home again, it obviously works fine, right?
@g4j31a5: Come up with various inputs (i.e. the function parameters) and do your best to decide what the function should output for these, without actually using the function. Then feed the inputs to the function and check if the output is what you think it should be. Each set of inputs, expected output, and actual output is a "test case". The mechanism you use to feed these to the function and record the results is a "test harness".

Depending on the nature of the function, it may be easy to come up with a list of hundreds of test cases; if not, try to use at least 5-10. These should mostly be valid input (and cover the range of what "valid" input would be), but 20% or so should be invalid inputs, so you can test whether the function handles erroneous data nicely -- e.g. it doesn't just segfault, it reports an error, and the error is correct.

Note that "report an error" does not mean a printf/cout statement. It is better to use the return value, or to throw an exception, etc. This way, the caller can decide how to deal with the problem.
Testing of that sort is more akin to getting your car tuned and checked in a shop. You may want to keep your test cases around so that if you change something inside the function, you can easily double-check that you didn't break anything.
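For concreteness, here is a minimal sketch of that kind of table-driven harness. The half() function is just a made-up unit under test, not anything from this thread:

Code:
#include <iostream>
#include <vector>

// Hypothetical unit under test: succeeds only for even input, and
// reports failure through the return value instead of printing.
bool half(int in, int& out)
{
    if (in % 2 != 0)
        return false;   // report the error to the caller
    out = in / 2;
    return true;
}

struct TestCase
{
    int  input;
    bool expect_ok;     // should the call succeed?
    int  expected;      // expected output when it does
};

int main()
{
    // Mix of valid and invalid inputs; expected results decided by hand.
    std::vector<TestCase> cases = {
        {  4, true,   2 },
        {  0, true,   0 },
        { -8, true,  -4 },
        {  7, false,  0 },  // invalid input: must be rejected, not crash
    };

    int failures = 0;
    for (const TestCase& tc : cases)
    {
        int actual = 0;
        bool ok = half(tc.input, actual);
        if (ok != tc.expect_ok || (ok && actual != tc.expected))
        {
            std::cout << "FAIL: input " << tc.input << '\n';
            ++failures;
        }
    }
    std::cout << failures << " failure(s)\n";
    return failures == 0 ? 0 : 1;
}

The point is that the expected results are worked out by hand before the function runs, and invalid inputs are handled through the return value rather than a printf inside the function.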
It may depend on what the function does. For most functions, it's fairly obvious what the results should be. For some, it's not.
For my geometry utilities, I used pristine test cases (geometries easily figured by hand) to compare results, then moved on to concave but still easily calculated geometries, such as pipes. My 3D modeler can calculate the area of any number of primitives, so for the pipe I would find the area of an end cap and multiply it by the length. Simple to compare, but it wasn't so simple to fix.
Then there is error reporting: situations that I know should not happen, or that can happen and would screw up the findings. I dump numbers to a file for viewing later on, to see what the case was.
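As an illustration of that kind of hand-checkable reference value, here is a small sketch. The formula below is just the standard surface area of a hollow cylinder, not necessarily how the modeler above computes area:

Code:
#include <cmath>
#include <cstdio>

/* Hand-checkable reference: total surface area of a pipe (hollow
 * cylinder) with outer radius R, inner radius r, and length L. */
double pipe_surface_area(double R, double r, double L)
{
    const double pi = std::acos(-1.0);
    return 2.0 * pi * (R + r) * L          /* outer and inner walls */
         + 2.0 * pi * (R * R - r * r);     /* two annular end caps  */
}

int main()
{
    /* Compare the modeler's output against this easily verified number. */
    std::printf("expected pipe area: %f\n", pipe_surface_area(2.0, 1.5, 10.0));
    return 0;
}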
Heh, I had the impression that "functional testing" refers to testing of functionality (whether the program functions as designed), whereas testing of individual functions (as units of code) is "unit testing".
Maybe I'm reading too much, or too little, into what brewbuck said, but I think he has the right of it.
For most code you'll never prove that it is correct... only that it seems to behave correctly.
Soma
This reminds me of the quote attributed to Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
True, but testing is not about proving correctness; it is about checking to see if something seems to behave correctly.
Also worth noting: proving something correct does not actually mean it will then seem to behave correctly, if you have misconceived the application of what you have proven correct.
I don't test what I write as much as I should (because I enjoy debugging later so much), but I did read somewhere that "studies have shown" the number of bugs found over the next 6 months in, say, 10 poorly tested units will be much higher than in 10 similar but well-tested units, even if the exact same people wrote all the code -- regardless of whether anything is "proven correct" in the process.
*shrug*

"True, but testing is not about proving correctness; it is about checking to see if something seems to behave correctly."
I view this as being the difference between "Correct" and "Accurate".
Granted, they are my definitions, but that really is almost exactly how I use "Correct" and "Accurate".
And just in case it comes up: "Consistent" means "If it fails, for pity's sake make it fail the same way every time."
"Also worth noting: proving something correct does not actually mean it will then seem to behave correctly, if you have misconceived the application of what you have proven correct."

I really don't think your mind could conceive of a truer statement. ^_~
Soma
True. Generally speaking, testing can only prove a lack of correctness; it cannot prove correctness. An error detected in testing is positive proof of the existence of an error. However, the converse is not true: the absence of errors in testing can (and, in practice, usually does) just mean the test cases were incomplete and did not exercise the faulty code.
Proving completeness of a set of test cases is challenging, particularly for non-trivial code.
I would have to agree with Grumpy.
On a related note, does anyone know of a better way to test an assembly-language function I wrote? It is designed to be a "left-trim string" function: it takes a string and an offset, trims that many characters from the left of the string, then copies the result into an output string.
For example, if the offset was 3 and the inputs were
"asdf asdf asdf"
"af4tr3cf sf gffff"
it would produce:
"f asdf asdf"
"tr3cf sf gffff"
I did the following tests, and they all worked:
- Zero offset (just copied the string)
- Null string (made output = null)
- One line (trimmed normally)
- Offset longer than line length (works, any lines shorter than offset were removed during trimming)
- Offset equal to line length (leaves lines, works correctly)
- And a couple more tests of standard strings
Does anyone know of any more "complete" tests I could do?
@memcpy
I don't see where "lines" come into this, it's a string trimming/copy function. What is your "one line" test? What are your "couple more tests of standard strings"?
Most of the tests you did are corner/edge cases. Those usually check bizarre stuff that might break your code outright, i.e. how it performs under abnormal usage, like a caller passing in NULL pointers. The only other possible corner case I can think of would be an empty string, "". It probably falls under the zero offset or offset longer than line length cases, but it is the type of input that is commonly missed in designing and/or testing the algo, and the kind of input that could easily break an algo.
Also, you want to test some non-corner/edge cases. Perhaps these are the one line and standard strings tests, but I'm not sure. They should be run to verify that your code works as expected under normal operation, such as the examples you gave.
Lastly, since it's assembly, how are you treating the offset parameter? If it's signed, you need to check what happens if you pass in negative values; or you could redefine it to take an unsigned number and use unsigned operations when computing the offset.
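Putting that advice together, a table-driven harness might look like the sketch below. The ltrim() signature is only a guess -- adjust it to match the real routine, and link the assembly version in place of the C++ stand-in here:

Code:
#include <cstring>
#include <iostream>

// Stand-in for the assembly routine: copies src into dst with the
// first `offset` characters removed. The unsigned offset means a
// "negative" value wraps to something huge and yields an empty result.
void ltrim(const char* src, char* dst, unsigned offset)
{
    unsigned len = std::strlen(src);
    if (offset >= len) { dst[0] = '\0'; return; }
    std::strcpy(dst, src + offset);
}

struct Case { const char* in; unsigned off; const char* want; };

int main()
{
    // Normal cases from the thread plus the corner cases raised above.
    const Case cases[] = {
        { "asdf asdf asdf",    3, "f asdf asdf" },
        { "af4tr3cf sf gffff", 3, "tr3cf sf gffff" },
        { "",                  0, "" },   // empty string, zero offset
        { "",                  3, "" },   // empty string, offset past end
        { "abc",               3, "" },   // offset equal to length
        { "abc",              99, "" },   // offset far past end
    };

    for (const Case& c : cases)
    {
        char got[64];
        ltrim(c.in, got, c.off);
        if (std::strcmp(got, c.want) != 0)
            std::cout << "FAIL: in \"" << c.in << "\" off " << c.off
                      << " got \"" << got << "\"\n";
    }
    return 0;
}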
Wow, thanks for all the replies. And yeah, my bad: laserlight was right, it should've been called unit testing instead of functional testing. So in a way, testing should consist of these:
1. Test cases for inputs.
2. Predetermined expected values to compare with the output, calculated manually by the tester.
3. Error reporting.
Did I miss anything else?
I actually got a few tests which required me to make a program that checks that a particular function works as intended. For example:
Write a program to check that std::ldexp(x, exp) == x * 2^exp across the ldexp() input domain.
Considering that question, what I should do is this:
1. Write a list of test cases (e.g. 1: x = 1, exp = 2; 2: x = -25, exp = -3; 3: x = 11, exp = 0; 4: x = -25, exp = 0; etc...)
2. Do an "assert(std::ldexp(x, exp) == (x * (2^exp)));" in a loop for each of the test cases's values.
Is this right? Also, another question. How do you make a list of good test cases?
Thanks.
Since x is a floating point number, you may want to test against values 0 < x < 1 as well. In fact, you may want to expand that to -1 < x < 1.
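A sweep over such fractional values might look like this sketch (stepping by sixteenths so every x is exactly representable in binary floating point and the loop accumulates no rounding error):

Code:
#include <cassert>
#include <cmath>

int main()
{
    /* Sweep fractional x across (-1, 1) in steps of 1/16. */
    for (double x = -0.9375; x < 1.0; x += 0.0625)
        for (int e = -8; e <= 8; ++e)
            assert(std::ldexp(x, e) == x * std::exp2(e));

    return 0;
}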