
Sometimes I write if-statements with conditions like this:

if(a-b>0){
}

but in fact, I can move 'b' from the left-hand side to the right-hand side:

if(a>b){
}

Similar cases like 'a-b!=0' and 'a-b<=0' (but not a case like 'a>0') could be rewritten the same way. My motivation for doing this is:

  1. to have a shorter statement

  2. to avoid the magic number '0'

Is it a good practice as a coding habit?

ocomfd

3 Answers


Semantically, your two examples mean two different things. Your first example checks to see if the result of a subtraction is a positive number. Your second example checks to see if one number is greater than the other.

While they are mathematically equivalent, they are not semantically equivalent. Your choice should be the one which best represents the semantics of your program. In other words, whichever one is more readable in your specific context.
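For instance (a hypothetical illustration; the names revenue and expenses are invented, and C++-style code is assumed):

double revenue = 120000.0;
double expenses = 95000.0;

// Reads as "the profit is positive" - the subtraction is the point:
if (revenue - expenses > 0) { /* pay a dividend */ }

// Reads as "revenue exceeds expenses" - the comparison is the point:
if (revenue > expenses) { /* pay a dividend */ }

Both conditions take the same branch; which one you write should depend on which sentence you mean.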

Robert Harvey

It depends a lot (on the semantics of - and of >).

First, you did not mention the types of a and b. But let's assume for simplicity that they both have the same integral type (e.g. C int or long).

Then, a - b might be erroneous (think of weird cases like overflow) or undefined behavior (think of pointer arithmetic; in some cases computing the difference of unrelated pointers is UB). In some programming languages and with some types, a - b compared with 0 could be defined while a > b is not. In C++, a - b, a > b, and x > 0 can even be three different user-defined operators of some user-specified class (imagine some bignum library), with different behavior and performance.
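A minimal sketch of the overflow point (assuming C++ and a 32-bit two's-complement int):

#include <climits>

int a = INT_MIN;   // the most negative int
int b = 1;

// a > b is perfectly well defined: it is false.
bool ok = (a > b);

// a - b would overflow below INT_MIN, which is undefined behaviour
// in C and C++, so a - b > 0 cannot be relied on to mean anything.
// bool bad = (a - b > 0);   // left commented out on purpose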

Also, a and b could have some unusual types (perhaps matrices), and comparing them could make sense even when computing their difference is unreliable, time-consuming, etc.

Read about the as-if rule, and the precise specification of your particular programming language (and its semantics).

Finally, in many cases a - b > 0 is more readable than a > b (e.g. when a and b are time instants, their difference is a duration). So it really depends.
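A sketch of that time-instant case, using std::chrono (the function name time_remaining is invented for the example):

#include <chrono>
using namespace std::chrono;

// "Is there still time left before the deadline?"
// The difference of two time points is a duration, so the
// subtraction form says exactly what the condition means:
bool time_remaining(steady_clock::time_point deadline,
                    steady_clock::time_point now) {
    return deadline - now > steady_clock::duration::zero();
}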

But replacing a - b > 0 with a > b (or vice versa, when useful), for integers, is a micro-optimization that most optimizing compilers will do better than you. So don't bother!

Is it a good practice as a coding habit?

Not in 2018, unless it improves readability. You should care a lot about making your code readable.

BTW, IMHO 0 is usually not a magic number. It is quite special (in some rare cases it could be a magic number, but choosing 0 as a magic value is poor taste and error-prone).


It's simple: If you want to know whether a is greater than b, then you write a > b. If you want to know whether the result of subtracting b from a is greater than 0, then you write a - b > 0.

Be aware of subtle differences. a > b with integer or floating-point types will give you the mathematically correct result. When you calculate a - b, you can get overflow / underflow and undefined behaviour with signed integers. With unsigned integers, any result is ≥ 0, so a - b > 0 will be true except when a == b. For floating-point numbers, infinity minus infinity gives NaN, which is neither greater than 0, nor less than 0, nor equal to 0, so this can come as a surprise.
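A small demonstration of those two surprises (assuming C++; the printed values are 0 for false and 1 for true):

#include <cstdio>
#include <limits>

int main() {
    unsigned a = 2, b = 5;
    // a - b wraps around to a huge unsigned value, so the two forms disagree:
    std::printf("%d %d\n", a > b, a - b > 0);   // prints: 0 1

    double x = std::numeric_limits<double>::infinity();
    double y = x;
    // x - y is NaN; NaN is never "greater than 0", so both print 0 here,
    // and the NaN silently falls into the "not greater" branch:
    std::printf("%d %d\n", x > y, x - y > 0);   // prints: 0 0
    return 0;
}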

And in languages with runtime overflow checks, a - b > 0 can crash where a > b never would.

But frankly, I have never, ever replaced a - b > 0 with a > b. And that's because it would never occur to me to write a - b > 0 in the first place.

gnasher729