Thread: c programs

  1. #16
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
Actually, since the compiler already understands exactly what the code does, where better to perform static analysis?
It's the perfect place for such things, so I do believe the compiler should do analysis. Heck, in Visual Studio, isn't it the compiler that does the analyzing?

    Run-time tools cannot be tied to the compiler, of course. But they're another subject.
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  2. #17
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Elysia View Post
Actually, since the compiler already understands exactly what the code does, where better to perform static analysis?
But, but... what I'm trying to say is that the best place to perform most of these analyses is at runtime, because it's almost certainly much cheaper there. That holds especially for compiled languages. Three problems:

- I cannot fathom the amount of hideous code necessary for a compiler to be able to check code logic during compilation. From the point of view of the developers of such tools, how can that be beneficial if the same task can be performed much more easily at runtime, where most of these problems are more easily diagnosed? Do I have evidence of this? Not really. What do I know; my recent foray into creating a rudimentary scripting language of my own ended in total failure... 30 minutes later. But I suspect I'm right.

- The amount of extra processor work makes this impractical in the context of program compilation. For large projects, where this type of analysis actually matters most, a project rebuild could take an inordinate amount of time. Why delay compilation when that analysis can be performed outside it, before or after, and in a much quicker fashion?

- I'm not sure how a compiler should report code analysis issues. It's one thing to report obvious problems with your code, but obvious problems are rarely the domain of code analysis. If users are willingly taking advantage of processor features, or otherwise knowingly cheating to gain performance, they will not appreciate their compiler throwing in a warning. On a large project the warnings could add up to a big number, and having to sort through all of them to decide which are meaningful isn't fun. Code analysis done by the compiler would need a large number of new options to make up for this, adding to the already huge army of options for straight compilation. On the other hand, don't you think those tricks/cheats are better analysed at runtime, where they are actually in effect?

    Heck, in Visual Studio, the compiler does the analyzing, doesn't it?
Err... no?
Not that I know of. You have minimal features in this regard; nothing in the realm of code analysis.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  3. #18
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
I'm afraid you don't seem to understand static analysis.
    I can list some benefits of it:
- It alerts the developer to problems at an early stage. In companies, dev builds are usually sent off to testers afterwards, and the longer it takes to find a problem, the more costly it is.
- Static analysis can detect potential problems that you might never hit at runtime. How about a buffer overrun in code that is rarely executed? What if it's code that only runs under certain conditions? The rarer the conditions, the harder these problems are to find; the program might have to run for a long time before the bug ever triggers. Static analysis can detect these before they surface as runtime errors.
- Static analysis can also detect code problems (not bugs), such as poor hierarchies, intertwined classes, etc.

Furthermore, runtime analysis is often expensive too.
Now, it is a good idea to let the user select what the compiler reports. That is true, and it is why Microsoft added rule sets to VS10, letting companies ignore certain kinds of warnings. They can even export them and file them as bugs to be fixed later, and the compiler won't keep reporting those problems.
And a developer need not run code analysis all the time. I would say it's usually a good idea to run it once you've completed a feature or so, especially before checking in on a team project.

And yes, it's in the compiler (the /analyze switch). Although it might not do everything; it might be some separate process that actually does the real analyzing. I can't say... I'll have to investigate.

  4. #19
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by Elysia View Post
    I'm afraid I think you don't seem to understand static analysis.
    Oh, I do. But what I don't understand is why you aren't agreeing with me

If I'm putting emphasis on runtime analysis, it is because a lot of code analysis is actually done at runtime. However, if you read carefully enough, you'll see I'm not ignoring static analysis. I'm objecting to it being done at compilation time, using arguments that my bloated ego had thought would put an end to this discussion.

  5. #20
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Well, I tried
I put forth my argument for why static analysis is a good thing™, but you seem to put too much stock in runtime analysis. Both are good.

  6. #21
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Well, both are good, no?

    The debate however is about merging this functionality with the compiler. That's where we differ significantly. I see no advantages whatsoever.

In the context of an integrated development environment (and this includes gcc on Linux, if you have organized your own set of development tools, even if you program on eddie) there is in fact no reason for it. Tools can be designed to sit alongside other tools and thus minimize any performance impact. A compiler that tries to be a lint may be a better compiler, but it's also a slower compiler, a harder to use compiler, and a harder to maintain compiler. Which probably makes it a useless compiler... or a compiler that does redundant work, considering the same lint tool could sit on its own outside the context of code compilation while still delivering all that nice lint functionality.

So that's the argument against static analysis being moved into compilation. Naturally, I can understand some of that analysis moving there; that's the case with a subset of buffer overrun analysis. MSVC, for instance, does this well with /RTC1 and /GS. But these offer minimal functionality, and /GS is not about code analysis at all; it's purely a detection mechanism meant to operate at runtime.
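As a rough illustration of what /GS-style detection does (a toy sketch of the idea only, not how MSVC implements it; `guarded_buf`, `COOKIE_VALUE` and `guarded_copy` are invented names): a known "cookie" value is placed after a buffer, and if a write corrupts the cookie, an overrun is reported. The real mechanism uses a secret per-process value checked in the function epilogue.

```c
#include <stdint.h>
#include <string.h>

/* Toy sketch of the /GS idea: place a cookie after the buffer and
   check it after writing.  A corrupted cookie means the write ran
   past the buffer.  (Invented names; real /GS uses a secret,
   per-process cookie checked automatically in the epilogue.) */
struct guarded_buf {
    char data[8];          /* first member, so (char *)g points at data */
    uint32_t cookie;       /* sentinel placed right after the buffer    */
};

#define COOKIE_VALUE 0xDEADBEEFu

/* Returns 1 if the copy stayed inside data[], 0 if an overrun was
   detected.  Copying through (char *)g keeps the write well defined
   even when it spills into the cookie. */
static int guarded_copy(struct guarded_buf *g, const void *src, size_t len)
{
    g->cookie = COOKIE_VALUE;
    if (len > sizeof *g)
        len = sizeof *g;   /* clamp so we never leave the struct */
    memcpy((char *)g, src, len);
    return g->cookie == COOKIE_VALUE;
}
```

Note this catches the overrun only when the overflowing code actually runs, which is exactly the distinction drawn above: it is a runtime detection mechanism, not analysis of the source.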

The argument for runtime analysis is all too obvious, of course; no need to discuss that. But I feel this is exactly where much of the real workload of code analysis resides, hence why I insist on it so much. The power of a profiler running in tandem with other runtime tools is immense, especially if you keep on the side nice reports on code metrics and other such annoyances gathered before compilation. This matters because, especially for large and complex projects or projects developed by a single person, code analysis is less about perfect code and more about finding the less-than-acceptable code and fixing only that. For most test cases, that is not information you get from static analysis. Meanwhile, moving static checks into the compiler slows down the whole process.
    Last edited by Mario F.; 07-26-2009 at 10:03 AM.

  7. #22
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
But as I have stated, a compiler need not always perform static analysis, even if it has it built in; that is the reasoning behind compiler switches.
Harder to maintain, perhaps. But then again, if properly separated, it offers advantages, because the compiler already knows a lot about the code, since it has to generate the machine code anyway. The analysis code could simply borrow that information from the compiler.
And if the invoker chooses not to run any analysis, the compiler need not run slower, because no checks are performed. I see no reason why this would not work.
Separate checking tools are not necessarily good unless they share the compiler's code base; otherwise there would basically be two compilers to maintain and update.
