Thread: Questions regarding operator overloading (multiply)

  1. #1
    Guest
    Guest

    Questions regarding operator overloading (multiply)

    I'm trying to wrap my head around how to make use of operator overloading, as I have a bunch of nested loops and object interactions that are becoming hard to keep track of.

    I have followed myriad online tutorials on this, but they don't clear up my confusion.

    Let's take a highly simplified example:
    Code:
    class Foo
    {
        public:
            Foo();
            
        private:
            int x;
    };
    If I had two objects of class Foo and wanted to update x within the first object, by multiplying it with x in the second object, what are my syntactical options, and how would I go about writing the overloaded operator? From what I understand, not all operators are equal (neutral) under the C++ standard and compilers make certain presumptions about their usage.
    Code:
    int main()
    {
        Foo f1, f2;
    
        // examples
        f1 * f2;
        f1 *= f2;
        f1 = f1 * f2;
    }
    Which of these are legitimate, if any?

    When I follow tutorials, the operator function usually returns an object of the class it operates on, and not necessarily by reference. This seems expensive to me; can someone clarify how this works?
    Code:
    class Foo
    {
        public:
            Foo();
            int getX() const;
            void setX(const int new_x);
            Foo operator*(const Foo& f2);
            
        private:
            int x;
    };
    
    // consider x initialized, I'll leave that out
    
    int Foo::getX() const
    {
        return x;
    }
    
    void Foo::setX(const int new_x)
    {
        x = new_x;
    }
    
    Foo Foo::operator*(const Foo& f2)
    {
        Foo f_tmp;
        f_tmp.setX(this->getX() * f2.getX());
        return f_tmp;
    }
    Looking at the second block of code (the main() function) - does f1 get "replaced" by f_tmp as a means of updating the value of x?

    Is the reason for returning an object to allow for the chaining (f1 = f2 * f3 * f4) of operations? If so, could I modify an object by reference, knowing the multiplication operator will only ever be used on two objects of this class? Could I call the member functions directly and avoid the creation of a temporary (i.e. f_tmp) object? What would the return type be? void?

    Sorry for all the questions, but I hope this gives you an idea of where my uncertainty/misconceptions lie.

  2. #2
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Quote Originally Posted by Guest View Post
    If I had two objects of class Foo and wanted to update x within the first object, by multiplying it with x in the second object, what are my syntactical options, and how would I go about writing the overloaded operator?
    You are thinking of it in the wrong way. Operator overloading is about defining semantics for an object as a whole. That is, you define how to multiply two Foo objects. The compiler obviously does not know how, so you must tell it.
    How that is done is an implementation detail; in this case, you would just update the variable x.

    From what I understand, not all operators are equal (neutral) under the C++ standard and compilers make certain presumptions about their usage.

    Which of these are legitimate, if any?
    All are legitimate. They just do different things.

    f1 * f2;
    Multiplies f1 and f2 and returns the result. Note that f1 and f2 are not modified, so you can redo that multiplication any number of times and get the same result.

    f1 *= f2;
    Multiplies f1 by f2 and stores the result in f1. Shorthand for f1 = f1 * f2.

    f1 = f1 * f2;
    Same as above.

    When I follow tutorials, the operator function usually has the return type of the class it modifies, and that not necessarily by reference. This seems expensive to me, can someone clarify how this works?
    It depends on what operator you are overloading.
    If you overload the * operator, you must return a new instance by value. The reason is that the class you perform the multiplication on must not be modified. Just as in mathematics, the operands of the multiplication are immutable. They don't change. Therefore, you have to make a new object which you initialize to the state of the object you are going to multiply (the left hand side of the multiplication), perform the multiplication, then return it.
    Speaking about how expensive it is... it depends on how complex your object is. In most cases, it's super cheap. With the addition of move semantics in C++11, it's often not worth thinking about. But then again, it is more important to preserve the semantics than to chase efficiency. If you want efficiency, you can simply avoid the operators that return temporaries and use the compound forms (like operator *=) instead.
    Operator *= will modify the left-hand side, so there is no need to construct a temporary. Because we are modifying "ourself" (this), we can simply return a reference. That's more efficient than creating a copy and returning it (but that works, too).
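    To make that concrete, here is a minimal sketch of the conventional pairing, modeled on the Foo from your post (the int constructor is only added for the example, and the exact signatures follow common convention rather than being the only legal ones):
    Code:
    class Foo
    {
        public:
            Foo(int x = 0) : x(x) {}
            int getX() const { return x; }

            // Compound assignment: modifies *this and returns a reference,
            // so no temporary Foo is created.
            Foo& operator*=(const Foo& rhs)
            {
                x *= rhs.getX();
                return *this;
            }

        private:
            int x;
    };

    // Plain multiplication: the operands stay untouched; a new Foo is built
    // from a copy of the left-hand side and returned by value.
    Foo operator*(Foo lhs, const Foo& rhs)
    {
        lhs *= rhs;  // reuse the compound form
        return lhs;
    }

    int main()
    {
        Foo f1(3), f2(4);
        f1 *= f2;           // f1.getX() == 12, no temporary created
        Foo f3 = f1 * f2;   // f3.getX() == 48; f1 and f2 are unchanged
    }
    Note how operator * is written in terms of operator *=, so the mutating logic lives in one place.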

    Looking at the second block of code (the main() function) - does f1 get "replaced" by f_tmp as a means of updating the value of x?
    If you have some expression f1 = f2 * f3, then the compiler is going to evaluate f2 * f3 first, which results in some temporary f4, which is then assigned to f1. In pseudo code:

    f4 = f2 * f3;
    f1 = f4;

    As for the rest of the questions... again, don't change the semantics. There is a typical set of conventions concerning operator overloading that you should follow. Everyone expects the operators to behave that way, and not following the conventions will only serve to confuse.

    Generally, operator *= is more efficient than operator * because no temporary needs to be created. If you need the speedup, consider using it instead of trying to change the semantics of the operators.

  3. #3
    Guest
    Guest
    Thank you, your explanations clarified things considerably for me.

    Another hypothetical case, if you don't mind. (this is a quick writeup, feel free to ignore syntax errors).
    Code:
    #include <vector>
    
    class Foo
    {
        public:
            Foo();
            int getX() const;
            void setX(const int new_x);
            void multiplySetX(const int x1, const int x2);
            friend Foo operator*(const Foo& f1, const Foo& f2);
            
        private:
            int x;
            std::vector<double> vx;
    };
    
    void Foo::multiplySetX(const int x1, const int x2)
    {
        x = x1 * x2;
    }
    
    Foo operator*(const Foo& f1, const Foo& f2)
    {
        Foo f;
        f.multiplySetX(f1.getX(), f2.getX());
        return f;
    }
    Let's assume that vector vx is very long, i.e. 50MB of data.

    If we now do the following,
    Code:
    int main()
    {
        Foo f1, f2, f3;
        f3 = f1 * f2 * f3;
    }
    I imagine the large vector would have to be copied during each call to the multiply function, even if it is not directly involved in the computation. Is there a way around this, assuming that order of multiplication is relevant, as can be the case in e.g. matrix multiplication?

  4. #4
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Yes, the vector would have to be copied every time you used a multiplication, unless you come up with a way for the objects to "share" state.
    To avoid expensive copies, you can modify your code to

    Foo f1, f2, f3;
    f3 *= f1;
    f3 *= f2;

    Well, actually, the important thing is that the end result is the same. You can actually return some temporary that simply propagates the result, and nothing else.
    Let me show an example in a few minutes.
    Last edited by Elysia; 04-14-2013 at 04:47 PM.

  5. #5
    Guest
    Guest
    Ok, thank you Elysia!

  6. #6
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Example:
    Code:
    #include <vector>
    #include <iostream>
    #include <ctime>
    
    class Foo
    {
    public:
    	struct MultiplicationPropagator
    	{
    		int x;
    	};
    
    	Foo(int x): vx(1E2), x(x) {}
    	int getX() const { return x; }
    	void setX(const int new_x) { x = new_x; }
    	void multiplySetX(const int x1, const int x2);
    	friend MultiplicationPropagator operator*(const Foo& f1, const Foo& f2);
    	friend MultiplicationPropagator operator*(const MultiplicationPropagator& f1, const MultiplicationPropagator& f2);
    	friend MultiplicationPropagator operator*(const MultiplicationPropagator& f1, const Foo& f2);
    	Foo& operator = (const MultiplicationPropagator& rhs);
    
    private:
    	int x;
    	std::vector<double> vx;
    };
    
    void Foo::multiplySetX(const int x1, const int x2)
    {
    	x = x1 * x2;
    }
    
    Foo::MultiplicationPropagator operator*(const Foo& f1, const Foo& f2)
    {
    	Foo::MultiplicationPropagator f = { f1.getX() * f2.getX() };
    	return f;
    }
    
    Foo::MultiplicationPropagator operator*(const Foo::MultiplicationPropagator& f1, const Foo& f2)
    {
    	Foo::MultiplicationPropagator f = { f1.x * f2.getX() };
    	return f;
    }
    
    Foo::MultiplicationPropagator operator*(const Foo::MultiplicationPropagator& f1, const Foo::MultiplicationPropagator& f2)
    {
    	Foo::MultiplicationPropagator f = { f1.x * f2.x };
    	return f;
    }
    
    Foo& Foo::operator = (const Foo::MultiplicationPropagator& rhs)
    {
    	x = rhs.x;
    	return *this;
    }
    
    int main()
    {
    	auto time1 = std::clock();
    
    	for (int i = 0; i < 1E4; i++)
    	{
    		Foo f1(10), f2(20), f3(30);
    		f3 = f1 * f2 * f3;
    		if (i == 0)
    			std::cout << "The result is: " << f3.getX() << std::endl;
    	}
    
    	auto time2 = std::clock();
    
    	std::cout << "Took " << (time2 - time1) / (CLOCKS_PER_SEC / 1000) << " ms.\n";
    }
    This cuts the time by roughly a factor of 10 on my system.
    Note that I used C++11 features, so be sure to enable C++11 on your compiler if you want to run the code.

    Of course, you have to ask yourself if it's worth the time it takes to code this. You also have to be careful to not let MultiplicationPropagator be used outside its context since you are throwing away state from your class.

  7. #7
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    O_o

    It should be noted that if the code in post #6 appropriately answers what you asked in post #3, then the `std::vector<double>' member `vx' is probably not part of the state that the mathematical operations normally propagate.

    That's an incredibly important consideration for code reuse and proper design.

    Before I continue, this is for cases where several, or most or all, functions do not utilize specific members of an object. (Here I am using "functions" to include any method, operator, or normal function that operates on the types being considered, and "utilize" to mean that a bit of data from the class members needs to be read, and possibly copied into or derived from, the variables on which such functions operate.)

    There is absolutely nothing wrong with the concept Elysia forwards. It is useful to have in your toolbox, and may even be appropriate here. (I simply don't know what you are ultimately going to do.) It is best used, though, when a part of a class isn't necessary for conveying the outward state of the class for only a few specific operations. (Even then, it is generally better to write such classes so that they provide an interface which retrieves the "visible state", so that functions and such need not be written as friends.) When many such operations exist, the alternative presented here is usually better.

    With the alternative approach, you write such operations in terms of the "visible state" and, by overloading a constructor, never impart the unnecessary state into the copies. (This state would be like `MultiplicationPropagator' from the code Elysia posted.) It depends on the class, but you can either default to copying only the necessary state or provide a specific overload. You can even use the existing concept as forwarded by Elysia to implement the constructor. This alternative approach allows you to write functions so that they result directly in the appropriate type instead of the type associated with the "visible state".

    The code looks, and pretty much is, more complicated. So, why would you go this route? I'll explain.

    There are some overloads that the concept provided by Elysia needs that we don't. Specifically, we don't need the extra overloads related to the "visible state" for the multiplication operation. (That said, he could have reduced that by overloading a constructor and allowing implicit conversion. The implicit constructor has its own problems which is why I assume he didn't go that route.) This, it turns out, is crucial for operations that can be chained.

    Why? Because the type returned by those operators is not the "true" type; it is a separate type that participates in overload resolution differently.

    You can see this in my example.

    Code:
    // Assume that `fX' and `fY' are both instances of `Foo'.
    (fX * fY).doSomethingAwesome(); // This will work. It works because the result of the multiplication operation is a `Foo'.
    Why is this important? The operators are just normal functions with a special signature. This means that, with the code Elysia posted, either the strategy I noted (the implicit constructor) or extra overloads for the "visible state" type would be necessary for every operator. By letting the relevant constructors do most of the work instead, you need fewer operator overloads, and none of those operators need to be friends or involve the "visible state" type in their interface, which leaves you free to change it.

    And all of that is just for cases where the type can be coerced or referenced as a constant. (This would be your `const Foo &'.) When overloads are involved that don't match that criterion, things start getting messier without this alternative approach, because the result will need to be explicitly stored in a variable of the appropriate type. Which, granted, isn't evil or anything; it can just get messy if you have several such cases in the same scope.

    [Edit]
    Be fairly warned, I changed this as I was going from the example Elysia posted. It is missing a few bits and may have a few errors besides.
    [/Edit]

    Soma

    Code:
    struct Foo
    {
        // ...
        struct CrucialState
        {
            int x;
        };
        // ...
        Foo
        (
            const Foo & fOther
        ):
            vx(fOther.vx)
          , x(fOther.x)
        {
            // ...
        }
        Foo
        (
            int fX
        ):
            vx()
          , x(fX)
        {
            // ...
        }
        // ...
        explicit Foo
        (
            const CrucialState & fOther
        ):
            vx()
          , x(fOther.x)
        {
            // ...
        }
        // ...
        Foo & operator *=
        (
            const Foo & fRHS
        )
        {
            // ...
            x *= fRHS.x;
            return(*this);
        }
        // ...
        void doSomethingAwesome() const
        {
            // Something Awesome!
        }
        // ...
        CrucialState getState() const
        {
            CrucialState sResult = {x};
            return(sResult);
        }
        // ...
        std::vector<double> vx;
        int x;
        // ...
    };
    // ...
    Foo operator *
    (
        const Foo & fLHS
      , const Foo & fRHS
    )
    {
        Foo sResult(fLHS.getState()); // We don't copy the `vx' member.
        sResult *= fRHS;
        return(sResult); // The `vx' member is copy-constructed normally,
                         // but the `vector<???>' is essentially empty
                         // so the operation is effectively free.
    }
    // ...

  8. #8
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    You should also note that statements like a = b * c; do not necessarily incur the overhead of creating and copying from a temporary. RVO and NRVO allow the compiler to transform such code so that the write into the variable that the multiplication operator is about to return becomes a write directly into the variable 'a' that the function's result is being assigned to. It's not just theory either; modern compilers frequently do this for you.
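    For example, the shape that makes (N)RVO most likely is a single named local that is returned on every path; a rough sketch with a trimmed-down, illustrative Foo (not the class from the thread):
    Code:
    struct Foo
    {
        int x;
        Foo& operator*=(const Foo& rhs) { x *= rhs.x; return *this; }
    };

    // NRVO-friendly shape: one named local, returned on every path.
    Foo operator*(const Foo& lhs, const Foo& rhs)
    {
        Foo result(lhs);  // may be constructed directly in the caller's destination
        result *= rhs;
        return result;    // the copy/move of 'result' is usually elided
    }

    int main()
    {
        Foo b{3}, c{4};
        Foo a = b * c;    // with (N)RVO there is no extra temporary to copy from
    }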

    Even without that, on a C++11-compliant compiler, you can also implement an additional assignment operator and constructor that take an rvalue reference and implement destructive copy semantics (move semantics), so that the overhead of creating temporaries is minimised.
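    A rough sketch of what that could look like for a Foo holding the large vector (assuming C++11; the member names x and vx follow the earlier posts, the rest is illustrative):
    Code:
    #include <utility>
    #include <vector>

    class Foo
    {
    public:
        Foo() = default;

        // Copying still copies the (potentially huge) vector.
        Foo(const Foo&) = default;
        Foo& operator=(const Foo&) = default;

        // Move constructor: steals the vector's buffer from a temporary
        // instead of copying its contents.
        Foo(Foo&& other) noexcept
            : x(other.x), vx(std::move(other.vx)) {}

        // Move assignment: same idea when assigning from a temporary,
        // e.g. the result of f1 * f2.
        Foo& operator=(Foo&& other) noexcept
        {
            x = other.x;
            vx = std::move(other.vx);
            return *this;
        }

    private:
        int x = 0;
        std::vector<double> vx;
    };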

    The last trick I'll mention, which is quite difficult to do and increasingly unnecessary, is to use template metaprogramming to build up expression trees and evaluate them in an optimal manner at compile time, creating zero unnecessary temporaries.

    It seems highly doubtful that a 10x speedup was profiled on a release build, which is what you should always be profiling.
    Bottom line is, don't worry about the creation of temporaries until you've found it to be a problem. Get it working correctly, and maybe later consider techniques for improving the efficiency, if it turns out to be needed.

  9. #9
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    It seems highly doubtful that a 10x speedup was profiled on a release build, which is what you should always be profiling.
    It seems perfectly likely to me.

    Even the one temporary, assuming "RVO" here, would be more expensive than creating none simply because the normal constructor was designed to be expensive.

    The question would be: how often does it come up that a class has partial value state which can't be written so as to create such expensive state only when necessary?

    I'm still thinking pretty much never.

    The last and final trick I'll mention, that is quite difficult to do and is increasingly unnecessary, is to use template-meta-programming to build up expression trees and evaluate them in an optimal manner at compile-time, creating zero unnecessary temporaries.
    That would be a bad fit here according to post #3.

    The question that started the discussion here relates to an expensive member variable that represents state unnecessarily for certain operations.

    Using expression templates would fit the case where the expensive member is crucial to such operations.

    That out of the way, you are completely wrong about "increasingly unnecessary", but then, here it is absolutely unnecessary, which leads me to believe that you don't know where expression templates shine. Even "rvalue references" do not reduce the utility of expression templates when combined with chaining evaluators.

    [Edit]
    Of course, this is only where they shine with respect to optimizing code without sacrificing any readability.

    That's actually a rather small part of expression templates.

    And yes, I do realize iMalc was not actually referencing such stuff as embeddable languages when he said "increasingly unnecessary". This edit is for those who don't know this.
    [/Edit]

    Consider the following code with respect to the intent of saving copies and allocations.

    Code:
    VEC f1,f2,f3,f4;
    // ...
    f1 = f2 * (f3 - f4); // f1[?] = f2[?] * (f3[?] - f4[?]);
    Here expression templates are simply unnecessary. Yes, they are obviously unnecessary with C++11 because of "rvalue references", but they aren't needed even just for that end, saving copies and allocations, as both of the examples Elysia and I posted show. (And of course, there are still more possibilities.)

    That goal only transforms the code to eliminate the overhead of the relevant temporaries.

    Code:
    VEC f1,f2,f3,f4;
    // ...
    f1 = f3;
    f1 -= f4;
    f1 *= f2;
    // ...
    Now, that alone is a fine goal, but it really is, as you said, "increasingly unnecessary" to implement such facilities by hand only for that purpose.

    That isn't where expression templates stop.

    The above code exhibits multiple transforms that are as large as the relevant object. (Here being the length of the implied array.)

    You basically get something like the following where each element of the target is mutated multiple times.

    Code:
    VEC f1,f2,f3,f4;
    // ...
    for(int cB(0), cE(f1.size()); cB < cE; ++cB)
    {
        f1[cB] = f3[cB];
    }
    // ...
    for(int cB(0), cE(f1.size()); cB < cE; ++cB)
    {
        f1[cB] -= f4[cB];
    }
    // ...
    for(int cB(0), cE(f1.size()); cB < cE; ++cB)
    {
        f1[cB] *= f2[cB];
    }
    // ...
    The code below is likely to be quite a bit faster for multiple reasons. (If the array is large enough you'll get fewer cache misses if nothing else came into play.)

    Code:
    VEC f1,f2,f3,f4;
    // ...
    for(int cB(0), cE(f1.size()); cB < cE; ++cB)
    {
        f1[cB] = f2[cB] * (f3[cB] - f4[cB]);
    }
    // ...
    However, while many modern compilers do implement at least "RVO", they don't really make that transformation. (If the loops had been written directly with fixed values, the transformation is somewhat likely at higher optimization levels with some compilers. However, the loops are hidden behind methods where the compiler is already trying to work out "RVO" and "NRVO" optimizations, as well as whatever other optimizations may be appropriate within those methods. Of course, making them `inline' could very likely help, but even then the transformation is far more unlikely, because far more must be proven about the code for the compiler to reason that the transformation is always valid.)

    This transform is exactly what expression templates can buy you without even considering what optimizations the compiler supports.

    Again, here it isn't necessary, but then, we still aren't at the end of expression templates.

    The single loop code from above might very well look more like what follows where more meta-programming may live.

    Code:
    VEC f1,f2,f3,f4;
    // ...
    for(int cB(0), cE(f1.size()); cB < cE; ++cB)
    {
        eval<???expression>(f1[cB],eval<???expression>(f2[cB], eval<???expression>(f3[cB], f4[cB])));
    }
    // ...
    Here some real magic (*) may happen. Why?

    This could result, as part of expansion, in a completely different expression tree with far more complexity... which can also be evaluated as part of evaluating the original expression. ^_^

    Soma

    (*) #3: Any sufficiently advanced technology is indistinguishable from magic.
    Last edited by phantomotap; 04-15-2013 at 02:59 AM.

  10. #10
    Registered User
    Join Date
    Oct 2006
    Posts
    3,445
    Quote Originally Posted by phantomotap View Post
    ...so that functions and such need not be written as friends.
    Friendship could be avoided altogether if the class provided a static multiply() function that has the same parameters and return type as the non-member operator. It could then keep its internal state private, and still provide the external functionality that is required.
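    For example, a minimal sketch of that suggestion (this Foo is heavily simplified; the int constructor and member x are just for illustration):
    Code:
    class Foo
    {
    public:
        Foo(int x = 0) : x(x) {}
        int getX() const { return x; }

        // Static member: it can read private state, so the non-member
        // operator below does not need to be a friend.
        static Foo multiply(const Foo& lhs, const Foo& rhs)
        {
            return Foo(lhs.x * rhs.x);
        }

    private:
        int x;
    };

    // Non-member, non-friend operator that forwards to the static function.
    Foo operator*(const Foo& lhs, const Foo& rhs)
    {
        return Foo::multiply(lhs, rhs);
    }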

  11. #11
    Master Apprentice phantomotap's Avatar
    Join Date
    Jan 2008
    Posts
    5,108
    Friendship could be avoided altogether if the class provided a static multiply() function that has the same parameters and return type as the non-member operator. It could then keep its internal state private, and still provide the external functionality that is required.
    O_o

    Well, it is true that friendship could be avoided altogether. What you quoted says as much.

    However, the rest of that isn't really relevant in the context of operator overloads unless you are also considering forwarding.(*)

    Even then, the named functions would need to use some method (any of the four proposed would be fine) to eliminate the "expensive" state; you wind up with more named overloads (naturally, "CRTP" can eliminate that aspect). Just having a named function "do the deed" is no different in this respect than having a named method retrieve the private state.

    If you aren't considering forwarding, the named functions can have whatever signature we like, allowing us to have explicit chaining of a single named temporary, which satisfies most of the problem without resorting to manually coding moves or reduced complexity. That would get ugly quickly, so I don't think that's what you intend.

    In any case, I prefer that the multiplication operator, and similar, be written as non-member, non-friend functions relying on the assignment forms to access and mutate private state--as in my example. You generally need those forms anyway so that stuff may happen where it needs to happen, and the magic related to dealing with "expensive" state can happen in the regular form. You don't need to deal with the named function, so you have less work to do if you plan on providing both forms of the operator overloads.

    Soma

    (*) For those not following along: an operator overload must either be a non-static member function or a non-member function.

  12. #12
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Sorry, I missed the important statement about assuming that the vector contains 50MB of data; that changes a lot.

    Template metaprogramming used to be about the only way to avoid temporaries, but nowadays it's not; that's all I meant. High-performance math libraries will certainly still use it.
