I believe that, from the perspective of the C++ Standard, there is no difference between #include "xyz.h" and #include "xyz.cpp" if they both contain the same thing. In practice, an IDE might generate the makefile (or other build script) so that "xyz.cpp" is also compiled on its own even when it should not be, possibly leading to multiple-definition errors at link time.
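To make that concrete, here is a minimal sketch (the file and function names are invented): if the build also compiles "xyz.cpp" as its own translation unit, the function ends up defined in two object files and the link step fails.
Code:
// xyz.cpp -- a source file that defines a function, not just declares it
int twice(int x) { return x * 2; }

// main.cpp
#include "xyz.cpp"   // pulls the definition of twice() into this translation unit
int main() { return twice(21); }

// If the build script also compiles xyz.cpp separately, both object files
// define twice(), and the linker reports a "multiple definition" /
// "already defined" style error.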
SQLite's amalgamation is an example of an optimisation where the final source code is generated from the various header files and source files such that it becomes one big source file. This might be a better way than trying to develop by including (non-header) source files all over the place.
But it can.
Quote:
It would be truly amazing of the Microsoft compiler if it could do that.
I cannot say for sure, but compiling a Release build is usually a pretty long process in which many files are re-compiled, while in a Debug build you do not use optimizations.
Quote:
But is that to say every time a cpp file is changed, the whole project needs to be recompiled? Since that's the only way cross-file inlining can be done?
But I think the optimization is done at the linker stage, so perhaps only the linking stage needs to be redone.
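To make the cross-file inlining point concrete, here is a hedged sketch (file and function names invented): with the usual header/source split, main.cpp only sees a declaration, so inlining across the two files has to wait until link time, which is what whole-program optimization (for example /GL plus /LTCG with MSVC, or -flto with GCC/Clang) provides.
Code:
// util.cpp -- the definition lives in its own translation unit
int add_one(int x) { return x + 1; }

// main.cpp -- only a declaration is visible here
int add_one(int x);

int main()
{
    // A traditional compiler cannot inline add_one() into main(); with
    // whole-program optimization the inlining decision is deferred to the
    // link step, which works from an intermediate form of both files.
    return add_one(41);
}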
For one thing, it is considered bad practice to include source files. Not that it is really such a bad thing if used like this, but anyway.
Quote:
If that is the case... then what's the difference between that and including cpp files? (and keeping dummy header files for human reference, or including all headers before all the cpp files?)
Secondly, the entire code base is completely re-compiled every time, even if nothing has changed in those source files.
Thirdly, I guess there will be complications, such as clashes between global variables with internal linkage, and probably much more.
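As an illustration of the internal-linkage point (names invented): two source files can each have their own file-local static variable, but once both are #included into one big file they share a single translation unit and the second definition collides with the first.
Code:
// a.cpp contains:  static int cached = 0;  int get_a() { return cached; }
// b.cpp contains:  static int cached = 1;  int get_b() { return cached; }

// everything.cpp -- the "include the source files" approach
#include "a.cpp"
#include "b.cpp"   // error: redefinition of 'cached' -- both file-local
                   // statics now live in the same translation unit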
Not sure what you are hinting at?
Quote:
I thought one of the main advantages of using headers is that the project can be incrementally compiled.
I don't think we should call it "linking" when we are really talking about "inlining from all of the source code", because what actually happens is that the compiler does the work in two or three steps. The first step is reading and "understanding" the source code; the second is generating the actual binary code. With "whole program optimization", you only spend a little time parsing the code and producing an intermediate form that can later be used to generate the final binary. Some parts of the code-generation step will certainly be quite a bit of "hard work" for the processor compared with just linking together already compiled object files, but for a total build from scratch I would expect there is not much difference. And as Elysia says, most development is done in debug builds, where very little time is spent on optimization.
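For the incremental-compilation point in the quote above, a rough sketch (names invented): with the usual header/source split, editing only the body in xyz.cpp means only xyz.cpp has to be recompiled and the program re-linked, while main.cpp's object file is reused; if main.cpp #included xyz.cpp directly, the same edit would force main.cpp to be recompiled as well.
Code:
// xyz.h -- the stable interface other files include
#ifndef XYZ_H
#define XYZ_H
int compute(int x);
#endif

// xyz.cpp -- changing this body only dirties xyz.cpp; main.o can be reused
#include "xyz.h"
int compute(int x) { return x * 2; }

// main.cpp
#include "xyz.h"
int main() { return compute(21); }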