Why do some C programs not run even though there are no compile-time errors?
Such as?
Quote:
Why do some C programs not run even though there are no compile-time errors?
Usually because you made a run-time error and not a compile-time error.
For more information, we need to see code!
You mean like this one:
Because the compiler does not check things like this. Perhaps it should; not doing so is a performance trade-off in the compiler.
Code:
#include <stdio.h>
#include <string.h>

int main(void) {
    char *overflowme;  /* uninitialized pointer: points nowhere valid */
    /* undefined behaviour: strcpy writes through a wild pointer */
    strcpy(overflowme, "this will overwrite some memory");
    printf("%s", overflowme);
    return 0;
}
On the other hand, programming need not be quite so easy as throwing darts blindfolded (which really is easy if you don't care what you hit). You have to know what you are doing. To know, you have to learn, partially through a process of trial and error. It's not nearly as painful as learning to ride a bike, and you already did that (hopefully).
Lesson: Don't expect the compiler to fix everything for you, and be happy when it does.
A compiler doesn't check if you're doing meaningful things. It only checks if you're doing correct things.
I don't think many compilers will complain at this:
Even though it is clearly undefined behaviour.
Code:
#include <stdio.h>

int main(void)
{
    /* undefined behaviour: dereferencing a null pointer */
    printf("The mystery is solved: *NULL is %d\n", *(int *)NULL);
    return 0;
}
Compilers can't check logic errors, and they don't care whether there are even any output statements. That should be obvious.
They can, and some do.
The compiler just checks that you use correct syntax; it doesn't check that your code is coherent.
The warbling of the instantiated was blue.
Perfectly good syntax, with no meaning whatsoever.
No, some do. Most compilers do not, but some, like MSVC, can check for problems like this. It's not good to dismiss that out of hand. But it's certainly not good to always rely on such a feature.
And if you do this:
GCC -O3 will simply discard the loop and the variable a as if they didn't exist, because they are meaningless in the context of the program.
Code:
int mostlypointless(int x) {
    int a = 12, i;
    /* dead code: a is never read after the loop, so -O3 removes it */
    for (i = 0; i < 10; i++)
        a = a * i + a;
    return x + 1;
}
The only way you'd notice this (I guess) is if you use massive meaningless loops for profiling: compiled with gcc -O3, they simply won't execute.
Newer compilers for other languages are 'attempting' to do this and it comes off to me as 100% annoying. Maybe it's because I don't like my hand being held while I code.
It's extremely helpful to offer such code analysis, because it can find problems in your code.
Naturally, it is also slower, so there should always be an option to turn it off. Otherwise the compiler is flawed.
Or perhaps you don't like the compiler removing certain code in optimizations?
>> They can, and some do.
To a very limited extent, but for the most part they can't.
When it comes to incorrect manipulation of certain things, 'const' does a good job of protecting against this - and if for some reason the programmer doesn't want to use it, that's his problem.
Const doesn't help very much at all. It's mostly for convenience and efficiency.
Hmm... I disagree. That's definitely not the task of a compiler. Abachler said it best.
That type of code analysis can and should be done by specialized tools only. MSVC offers those tools in its Team System releases. Tools like Code Metrics, C/C++ Code Analysis Tools, Application Verifier, Line Level Sampling and even Code Profiler (and there's even more).
Note that by "compiler", I mean the actual task of code compilation. In this context I don't agree such tasks should be implemented. In fact I suspect most of them couldn't, since better diagnostics can only be achieved after compilation and during program execution with access to debugging symbols, which is what most of the tools above do. On the other hand, if we consider the tasks of a compiler (lexical analysis, pre-processing, parsing and code generation with/without optimization) we can see there is really no place for such analysis as the excellent example provided by Mk27 demonstrates.
Again, go back to abachler statement. There's an underlying truth to it that basically says it's not the task of the compiler to make sure your code works as you want it to.
Actually, since the compiler is well understood in what the code does, where better to perform the static analysis?
It's the perfect place for such things. And so I do believe the compiler should do analysis. Heck, in Visual Studio, the compiler does the analyzing, doesn't it?
Run-time tools cannot be tied to the compiler, of course. But they're another subject.
But, but... what I'm trying to say is that the best place to do most of this analysis is at runtime, because it's almost certainly much cheaper. This is almost certainly true for compiled languages. Three problems:
- I cannot fathom the amount of hideous code necessary for a compiler to be able to check code logic during compilation. From the point of view of the developers of such tools, how can that be beneficial if the same task can be performed much more easily at runtime, where most of these problems are more easily diagnosed? Do I have evidence of this? Not really. What do I know. My recent foray into creating a rudimentary scripting language of my own was followed by total failure... 30 minutes later. But I suspect I'm right. :)
- The amount of extra processor work needed makes this task impractical in the context of program compilation. For large projects, where this type of analysis is actually very important, a project rebuild could take an inordinate amount of time. Why delay compilation when that analysis can be performed outside the context of compilation, before or after, and in a much quicker fashion?
- I'm not sure how a compiler should report code analysis issues. It's one thing to report obvious problems with your code, but obvious problems are rarely the domain of code analysis. If a user is willingly taking advantage of processor features, or otherwise knowingly cheating in order to gain performance benefits, they will not appreciate their compiler throwing in a warning. On a large project, warnings could amount to a big number, and having to sort through all of them to check which are meaningful isn't fun. Code analysis done by the compiler would have to include a large number of new options to allow for this, adding to the already huge army of options for straight compilation. On the other hand, don't you think those tricks/cheats are better analysed at runtime, where they will actually be in effect?
Err... no?
Quote:
Heck, in Visual Studio, the compiler does the analyzing, doesn't it?
Not that I know of. You have minimal features on this regard. Nothing on the realm of Code Analysis.
I'm afraid you don't seem to understand static analysis.
I can list some benefits of it:
- Alerts the developer to problems at an early stage. In companies, dev builds are usually sent off to testers afterwards to test for errors, and the longer it takes to find a problem, the more costly it is.
- Static analysis can detect potential problems in the code that you might not catch at runtime. How about a buffer overrun in some code that is rarely seen or exercised at runtime? What if it's code that only runs under certain conditions? The rarer the conditions, the harder these problems are to find; the program might have to run for quite some time before the problem ever occurs. Static analysis can detect these before they show up as runtime errors.
- Static analysis can also detect code problems (not bugs). Such as poor hierarchies, intertwined classes, etc.
Furthermore, runtime analysis is also often expensive.
Now, it is a good idea to be able to select which things the compiler reports. This is true, and this is why Microsoft has added rule sets to VS10 to allow companies to ignore certain kinds of warnings. They can even export them and file them as bugs to be fixed later, and the compiler won't report the problems.
And a developer need not run code analysis all the time. I would say it's usually a good idea to run it once you've completed a feature or so, especially before checking in in a team.
And yes, it's in the compiler (/analyze switch). Although, it might not do everything. It might be some separate process that actually does the real analyzing. I can't say... I must investigate, I think.
Oh, I do. But what I don't understand is why you aren't agreeing with me :p
If I'm putting emphasis on runtime analysis, it is because a lot of code analysis is actually done at runtime. However, if you read carefully enough, you'll see I'm not ignoring static analysis. I am, however, objecting to it being done at compilation time, using arguments that my bloated ego had thought would put an end to this discussion.
Well, I tried :)
I put forth my argument why static analysis is a good thing™, but you seem to put too much into runtime analysis. Both are good.
Well, both are good, no?
The debate however is about merging this functionality with the compiler. That's where we differ significantly. I see no advantages whatsoever.
In the context of an Integrated Development Environment (and this includes gcc on Linux, if you have organized your own set of development tools, even if you program on eddie) there is in fact no reason. Tools can be designed to sit alongside other tools and thus minimize any performance impact. A compiler that tries to be a lint is a better compiler. But it's also a slower compiler, a harder-to-use compiler, and a harder-to-maintain compiler. Which probably makes it a useless compiler... or a compiler that does redundant work, considering the same lint tool could sit on its own outside the context of code compilation while still delivering all that nice lint functionality.
So that's the argument against static analysis being moved into compilation. Naturally, I can understand some of that analysis moving into compilation. That's the case with a subset of buffer overrun analysis. MSVC, for instance, does this well with /RTC1 and /GS. But these offer minimal functionality, and /GS is not about code analysis but is purely a detection mechanism meant to operate at runtime.
The argument about runtime analysis is all too obvious, of course. No need to discuss that. But I feel this is exactly where much of the real workload of code analysis resides, hence why I insist on it so much. The power of a profiler running in tandem with other runtime tools is immense, especially if you keep on the side nice reports on code metrics and other such annoyances that you produced before compilation. This is particularly important because, especially for large and complex projects or projects being developed by a single person, code analysis is less about perfect code and more about finding the less-than-acceptable code and fixing only that. For most test cases, this is not information you get from static analysis. Meanwhile, moving the static stuff into the compiler slows down the whole process.
But as I have stated, a compiler need not perform static analysis, though it might have it built in. This is the reasoning behind compiler switches.
Harder to maintain, perhaps. But then again, if properly separated, it offers advantages because the compiler already knows a lot about the code since it has to generate the machine code. The analysis code could simply borrow that information from the compiler code.
And if the invoker chooses not to do any analysis, the compiler does not need to run slower because no checks are performed. I see no reason why this would not work.
Separate tools for checks are not necessarily good, unless they share the compiler's code base, since otherwise there would basically be two compilers to maintain and update.