I've written software with significant amounts of overloading, and lately I regret that policy. I would say this:
Only overload operators if it's the natural, expected thing to do and doesn't have any side effects.
So if you make a new `RomanNumeral` class, it makes sense to overload addition and subtraction, etc. But don't overload unless it's natural: it makes no sense to define addition and subtraction for a `Car` or a `Vehicle` object.
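A minimal sketch of what "natural" overloading might look like (this `RomanNumeral` class is hypothetical — it just wraps an integer value, with the roman-string formatting omitted):

```python
class RomanNumeral:
    """Hypothetical numeral wrapper; formatting as roman digits omitted."""

    def __init__(self, value):
        self.value = value

    # Addition and subtraction are the natural, expected operations here,
    # and they have no side effects: each returns a *new* object.
    def __add__(self, other):
        return RomanNumeral(self.value + other.value)

    def __sub__(self, other):
        return RomanNumeral(self.value - other.value)

    def __repr__(self):
        return f"RomanNumeral({self.value})"


total = RomanNumeral(4) + RomanNumeral(3)  # a new RomanNumeral with value 7
```

Returning a fresh object from `__add__` (rather than mutating `self`) is what keeps `+` side-effect-free.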
Another rule of thumb: don't overload `==`. It makes it very hard (though not impossible) to actually test whether two objects are the same. I made this mistake and paid for it for a long time.
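To illustrate the trap with a hypothetical `Quantity` class (assuming Python 3 semantics): once `__eq__` compares by value, `==` no longer tells you whether two names refer to the same object, and defining `__eq__` also silently removes the default hash.

```python
class Quantity:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        # Compare by value, against other Quantities or plain numbers.
        if isinstance(other, Quantity):
            return self.value == other.value
        return self.value == other


a = Quantity(500)
b = Quantity(500)

a == b  # True -- but a and b are two distinct objects
a is b  # False -- identity is now the only way to test "sameness"

# Side effect: defining __eq__ without __hash__ makes instances unhashable
# in Python 3, so they can no longer go in sets or be dict keys.
# {a}  # would raise TypeError: unhashable type: 'Quantity'
```

The lost hashability is exactly the kind of side effect that's easy to not notice until much later.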
As for when to overload `+=`, `++`, etc., I'd actually say: only overload additional operators if you have a lot of demand for that functionality. It's easier to have one way to do something than five. Sure, it means sometimes you'll have to write `x = x + 1` instead of `x += 1`, but more code is OK if it's clearer.
In general, like with many 'fancy' features, it's easy to think that you want something when you don't really, implement a bunch of stuff, not notice the side effects, and then figure it out later. Err on the conservative side.
EDIT: I wanted to add an explanatory note about overloading `==`, because various commenters seem to misunderstand this, and it's caught me out. Yes, `is` exists, but it's a different operation. Say I have an object `x`, which is either from my custom class or an integer, and I want to see if `x` is the number 500. But if you set `x = 500` and later test `x is 500`, you will get `False`, because Python only caches small integers (roughly -5 through 256); with `50`, it would return `True`. And you can't just use `is` instead of `==`, because you might want `x == 500` to return `True` when `x` is an instance of your class. Confusing? Definitely. But this is the kind of detail you need to understand to successfully overload operators.