There is only one sure-fire way to eliminate bugs from a program, and it involves hard work: give a precise description of the requirements (the problem to be addressed, or the task the software is to achieve); understand what each code construct you use (whether a declaration/definition, an expression, or even a preprocessor directive) contributes to meeting those requirements; and define test cases that will fail unless the software meets the requirements, or, if the test cases pass, can be used to provide a body of defensible evidence about how well the requirements are met.
Instead, what we have is a lot of rules of thumb, often based on the false premise that a few small, simple tricks can eliminate bugs from a program. If you dig into the places where many of those rules originate (journal articles, teaching guides, coding standards), you find their primary goal is to be easy to use, easy to teach (it is easier to get a novice to add an "= 0" than to get them to think about how a variable contributes to the code working as required), and, in many cases, easy to enforce automatically (e.g. by a software tool, such as a compiler, that checks coding standards). The secondary goal (at best) of these rules is that they can, in some hopefully realistic scenarios, be shown to reduce the incidence of bugs in software.
In practice, some authors do express a hope that, by encouraging programmers to follow their chosen rules of thumb or coding guidelines, they will encourage programmers to think more carefully about their code, and thereby reduce bug counts. In most cases those authors hope in vain: most programmers learn to follow the rules of thumb without understanding them, as a habit rather than something they think about, and therefore eventually run into the inconvenient pitfalls.
printf("\nEnter the number of pennies:\n");
printf("\nEnter the number of nickels:\n");
printf("\nEnter the number of dimes:\n");
printf("\nEnter the number of quarters:\n");
printf("\nYou have $%.2f dollars\n",dollars);
> Thank you very much!
Yes, you got a spoon-fed answer from some overly enthusiastic newbie poster.
The real question is, have you learnt anything from this exercise (except perhaps that there's a sucker somewhere willing to answer your question)?
Will you feel as confident with next week's assignment (which will be harder)?
Nonetheless, it is still arbitrary, and it is probably a mistake to use i without further assignment if scanf fails. The correct approach to handling such an input error is not to check whether i still has a value of 0 after the scanf call, since 0 is itself valid input; rather, it is to check the return value of scanf.
Originally Posted by c99tutorial:
> But hey go ahead and be hostile.
Thus I don't think Salem had you in mind when he talks about an "overly enthusiastic newbie poster".
I liked your post, Tomwa - LTA's also. You gave plenty of info, and LTA's code was so clear that there was no need to explain anything further.
The forum does have a problem: if the OP posts no code, we quickly become a "please give me the codez" gathering spot for lazy students.
Clearly, that was NOT the case here. The OP's code was "close", imo.
Well done, both of you.