Thread: separating files for classes and their implementation.

  1. #16
    Kernel hacker
    Join Date: Jul 2007
    Location: Farncombe, Surrey, England
    Posts: 15,677
    Quote Originally Posted by Raigne View Post
    but honestly, is the (fractional) performance gain of inlining worth it on modern hardware?
    Depends on what you are doing in the function versus the overhead of the call. A tiny function (say, adding two numbers together) can easily incur 10x the overhead of passing arguments, making the call and cleaning up the stack afterwards, compared with the single add instruction it can be replaced with.
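
    As a minimal sketch (my illustration, not part of the original post), a trivial function like this is a typical inlining candidate; at -O2 most compilers will replace the call with the add itself, so the call, argument-passing and stack cleanup overhead disappears entirely:
    Code:
        // add.h - a tiny function that is a good inlining candidate
        inline int add(int a, int b)
        {
            return a + b;   // a single add once inlined
        }

        // caller.cpp
        #include "add.h"

        int sum_to(int n)
        {
            int total = 0;
            for (int i = 0; i < n; ++i)
                total = add(total, i);   // with inlining: no call, no stack traffic
            return total;
        }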

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  2. #17
    C++ Witch laserlight
    Join Date: Oct 2003
    Location: Singapore
    Posts: 28,413
    I believe that, from the perspective of the C++ Standard, there is no difference between #include "xyz.h" and #include "xyz.cpp" if they both contain the same thing. In practice, an IDE might generate the makefile (or other build script) such that "xyz.cpp" is also compiled on its own even when it should not be, possibly leading to redefinition errors.
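
    For illustration (my sketch, not laserlight's), if the build compiles xyz.cpp as its own translation unit while main.cpp also #includes it, the same function body ends up in two object files and the linker complains:
    Code:
        // xyz.cpp
        int answer()
        {
            return 42;
        }

        // main.cpp
        #include "xyz.cpp"   // textually pulls in the definition of answer()

        int main()
        {
            return answer();
        }

        // If the build script also compiles xyz.cpp on its own:
        //   g++ main.cpp xyz.cpp
        // the linker sees answer() twice -> "multiple definition of `answer()'"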

    SQLite's amalgamation is an example of an optimisation where the final source code is generated from the various header files and source files such that it becomes one big source file. This might be a better way than trying to develop by including (non-header) source files all over the place.
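
    As a rough sketch of the idea (the file names here are made up), an amalgamation or "unity" build hands the compiler one file that pulls in everything else, so the whole program becomes a single translation unit and can be optimised across what used to be file boundaries:
    Code:
        // amalgamation.cpp - generated (or maintained) as the only file
        // that is actually compiled; everything becomes one translation unit.
        #include "parser.cpp"      // hypothetical source files
        #include "evaluator.cpp"
        #include "main.cpp"

        // Build just this one file:
        //   g++ -O2 amalgamation.cpp -o program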
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  3. #18
    C++まいる!Cをこわせ!
    Join Date: Oct 2007
    Location: Inside my computer
    Posts: 24,654
    Quote Originally Posted by cyberfish View Post
    GCC can't (or am I missing something?). In the UNIX world they are done by two entirely different programs (compiling and linking, by gcc and ld).
    I am not sure how GCC handles this, or whether it supports it at all. I believe there was some experimental code for it, but nothing in the mainline compiler.

    It would be truly amazing of the Microsoft compiler if it can do that.
    But it can.

    But is that to say every time a cpp file is changed, the whole project needs to be recompiled? Since that's the only way cross-file inlining can be done?
    I cannot say for sure, but a Release build is usually a fairly long process in which many files get re-compiled anyway, and in a Debug build you do not use these optimizations.
    I think the cross-module optimization is done at the link stage, though, so perhaps only the linking step needs to be redone (see the sketch below).
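
    For reference (my addition, not from the original post): in Visual C++ this is the /GL compiler switch combined with /LTCG on the linker, which is what lets code generation and inlining happen across object files:
    Code:
        // a.cpp
        int twice(int x) { return x * 2; }   // defined in one translation unit...

        // b.cpp
        int twice(int x);
        int main() { return twice(21); }     // ...called from another

        // With whole-program optimization the call in b.cpp can still be
        // inlined even though the definition lives in a.cpp:
        //
        //   cl /O2 /GL /c a.cpp b.cpp
        //   link /LTCG a.obj b.obj /OUT:program.exe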

    If that is the case... then what's the difference between that and including cpp files? (and keeping dummy header files for human reference, or include all headers before all cpp's?)
    For one thing, it is considered bad practice to include source files. Not that it is really such a bad thing if used like this, but still.
    Secondly, the entire code base gets re-compiled every time, even if nothing has changed in most of those source files.
    Thirdly, I would expect complications, such as clashes between file-scope names with internal linkage (see the sketch below), and probably more besides.
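
    A hedged example of that third point (mine, not Elysia's): two source files may each define a file-scope static with the same name, which is fine as separate translation units but becomes a redefinition error once both are #included into one file:
    Code:
        // log_a.cpp
        static int call_count = 0;   // internal linkage: private to this file
        void log_a() { ++call_count; }

        // log_b.cpp
        static int call_count = 0;   // fine as a separate translation unit
        void log_b() { ++call_count; }

        // all.cpp
        #include "log_a.cpp"
        #include "log_b.cpp"         // error: redefinition of 'call_count'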

    I thought one of the main advantages of using headers is that the project can be incrementally compiled.
    Not sure what you are hinting at?
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  4. #19
    Kernel hacker
    Join Date: Jul 2007
    Location: Farncombe, Surrey, England
    Posts: 15,677
    I think we should not call it "linking" when we are talking about "inlining from all of the source code", because what really happens is that the compiler does the work in two or three steps. The first step involves reading and "understanding" the source code. The second step involves generating the actual binary code. In the case of "whole program optimization", the compile step only spends a little time parsing the code and producing an intermediate form that can later be used to generate the final binary. But certainly, some of the work in the actual code generation step will be quite "hard work" for the processor, compared to just linking together already compiled object files. For a total build from scratch, though, I'd expect there's not much difference. And as Elysia says, most development is done in debug builds, where very little time is spent on optimization.
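
    To make that two-step split concrete (my illustration, using modern GCC's -flto, which postdates this discussion): the "compile" step mostly parses and stores an intermediate representation in the object files, and the heavy cross-module optimization and code generation happen when those objects are brought together at the end.
    Code:
        # Step 1: parse each file and store intermediate representation in the .o files
        g++ -O2 -flto -c parser.cpp -o parser.o
        g++ -O2 -flto -c main.cpp   -o main.o

        # Step 2: the "link" actually runs the optimizer and code generator over
        # all of the stored IR, so cross-file inlining happens at this point
        g++ -O2 -flto parser.o main.o -o program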

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
