example...
Code:
#include <iostream>
using namespace std;

void text();  // declared before use so main can call it

int main() { text(); }
void text() { cout << "Example.\n"; }
Code:
#include <string>
#include <iostream>
using namespace std;

string text() {
    return "Example\n";
}

int main(int argc, char* argv[]) {
    cout << text();
    return 0;
}
Doesn't returning std::string return a copy of the data? That doesn't sound very performant...
What's wrong with returning a pointer or reference?
...until you realize that the standard allows the implementation to eliminate the copy operation on return values (copy elision / RVO), and most, if not all, implementations do this.
Returning a pointer or reference to a local object is undefined behavior, and allocating the object on the heap instead puts the responsibility for freeing it on the caller, requires additional documentation, etc.
What it means is that the compiler transforms this:

Code:
#include <iostream>
#include <string>

std::string text() {
    return "Example\n";
}

int main(int argc, char* argv[]) {
    std::cout << text();
    return 0;
}

...into essentially this:

Code:
#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    std::string text("Example\n");
    std::cout << text;
    return 0;
}

Essentially, it constructs the string holding "Example\n" directly in main, even though it acts like a local variable in text. It constructs it in place. No copies. No moves. Like emplace.
As for pointers and references - unless you really need the speed, why bother with C strings?
The things other people have said apply, but I would also like to add that worrying about this is premature optimization.
Rob Pike's 5 Rules of Programming.
As a software developer, simple and well-structured code is what you should always strive for. If you have to choose between two or more ways to do something, opt for the one that is more readable and, when applicable, the one that puts the least burden on the person using your library or whatnot.
What you should focus on first is choosing the best design, data structures, and algorithms to solve the problem. Once you have done that and start coding, focus on the things I said above: readability and ease of use.
Then, when your project is closer to completion, measure and optimize only what needs to be optimized.
@Elysia : That sounds a lot like inlining, which isn't guaranteed by all compilers, right? Even with the inline keyword, it's ultimately up to the compiler to decide whether the function will actually be inlined.
@Jimmy : I agree with you, but if your string is 2,000 bytes, wouldn't it be obvious that returning 8 bytes would be faster than returning the whole struct?
It's like returning sizeof(user_defined_structure) bytes vs sizeof(pointer_to_user_defined_structure) bytes.
Last edited by MutantJohn; 01-15-2015 at 06:12 PM.
Exactly, it is up to the compiler so you shouldn't bother until you know it is a problem.
But what I'm saying is that in most real world programs that difference will not matter, and if it does, then you will find it when you measure.
I have worked on projects with people who resorted to a lot of premature optimization, changing code other people checked in with micro-optimizations. It often led to buggy code, because people like that don't take the time to figure out how the code they "fix" works; they just have a template that they apply.
They realize the error of their ways when measurements later show that the application is actually spending most of its time waiting for input from the user.
The example you use would probably have exactly zero effect in the real world for the majority of software.
The only time it might be worth thinking about optimization while writing code is when you're on an embedded system with little memory and have to optimize for memory size. But probably not even then.
Last edited by Jimmy; 01-15-2015 at 06:30 PM. Reason: Runaway end quote removed.
Hmm... It depends on how many times you're going to return the string and how large it is. For a one-time assignment you're probably right, but consider a billion assignments. Consider a string that's 1 MB in size. Consider both. I know this is all hypothetical, but I'd imagine the overhead adds up over time.
Unless Elysia is right and a move will be performed, but I'm not sure that's guaranteed. Putting the -O3 flag up and praying isn't the same thing as learning how to pass by reference and getting guaranteed performance.
Now we're back in the premature optimization argument.
Unless you can prove it's a bottleneck, don't bother.
It's guaranteed. And again, you don't have to hope and pray. If your software runs fast, then there's nothing to discuss. If it has performance problems, then you profile and fix the critical path.
Again, if it is a problem, you will find out when you measure. Trust me, it is the right way to do it. Always write code that is as simple and as easy to understand as you can, then tune the things you have to tune after you have measured. Complexity leads to more defects and therefore more security issues. And bugs cost money.
Fancy micro-optimizations will in general probably make your code run slower instead of faster, because they hinder the compiler from making better optimizations.
Since when is passing by reference a complex, unreadable micro-optimization? O_o