I want to give a somewhat practical example.
While this example is constructed to be simple, it reflects applications that I encounter on an everyday basis.
Suppose we want to minimise the function f(x) = exp(1/x) − 1/g(100·x), where g(x) = −(1+x)^(−x), all for positive x. This function looks like this (and nothing mathematically interesting happens beyond the plotted interval):

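If you want to reproduce such a plot yourself, a quick sketch could look like the following; the plotted interval and the logarithmic axis are somewhat arbitrary choices on my part:

    import numpy as np
    import matplotlib.pyplot as plt

    def g(x):
        return -(1 + x)**(-x)

    def f(x):
        return np.exp(1/x) - 1/g(100*x)

    x = np.linspace(0.02, 1.4, 1000)   # beyond x ≈ 1.5, g(100·x) underflows
    plt.plot(x, f(x))
    plt.yscale("log")                  # f spans many orders of magnitude
    plt.xlabel("x")
    plt.ylabel("f(x)")
    plt.show()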
Below is a small Python program that implements those functions and applies a numerical minimiser to f. The only thing that can potentially be negative zero is g(x), so in a second round we can simulate a world that only has a regular (positive) zero:
import numpy as np
from scipy.optimize import minimize_scalar

def is_negative_zero(x):
    # the string representation preserves the sign of zero
    return str(x) == "-0.0"

print("−0?\tx\tf(x)")
for world_has_negative_zero in [True, False]:
    def f(x):
        return np.exp(1/x) - 1/g(100*x)

    def g(x):
        result = -(1+x)**(-x)
        if is_negative_zero(result) and not world_has_negative_zero:
            # replaces negative zeros with positive ones
            return np.float64(0.0)  # to avoid ZeroDivisionError
        else:
            return result

    result = minimize_scalar(f)
    print(f"{world_has_negative_zero}\t{result.x:.3f}\t{result.fun:.0f}")
This returns:
−0? x f(x)
True 0.069 3525796
False 5.236 -inf
As you can see, we obtain the correct result only with a negative zero.
Now, what happened here?
g(x) is negative for all positive x, but for x > 150 it comes so close to zero that it exceeds the largest (i.e., least negative) representable floating-point number.
For such x, a numerical underflow occurs, and g(x) is best represented as negative zero.
This happens in the first round, and consequently f(x) becomes positive infinity: the division 1/(−0.0) yields negative infinity, which is then subtracted.
This is a broad approximation, but it is correct insofar as the value of f(x) is indeed very large, which is all we (and minimize_scalar) need to know for our purposes:
Since we did not give minimize_scalar any hints on where to find the minimum, it performs a trial-and-error procedure during which it evaluates f at points where g's argument (100·x) exceeds 150.
Since the result is infinity, it (correctly) assumes that no minimum can be found there and searches elsewhere, eventually finding the minimum.
In the second round, we make g return positive zero for large arguments, so f(x) becomes negative infinity, which is far from the truth.
minimize_scalar encounters such a case during its initial probing, and since negative infinity trumps the correct minimum, this becomes the end result.
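To see this mechanism in isolation, here is a small snippet; the value 200 is just one arbitrary argument beyond the underflow threshold:

    import numpy as np

    x = np.float64(200.0)
    tiny = -(1 + x)**(-x)              # true value ≈ -1e-461, underflows to -0.0
    print(tiny)                        # -0.0
    with np.errstate(divide="ignore"): # silence the divide-by-zero RuntimeWarning
        print(1/tiny)                  # -inf: the sign of zero is preserved
        print(1/np.float64(0.0))       # inf: what we get if the sign is dropped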
In many applications involving numerical solvers and similar routines, over- and underflows are common for bad guesses.
Since the floating-point standard correctly handles them (and produces NaNs when it can’t), routines like minimize_scalar can do so as well.
Thus the programmers using all of these tools do not need to handle such cases separately; instead, they are handled without further ado by the innermost layers of the CPU.
This not only speeds up certain computations considerably, but also keeps code simple, avoids programming mistakes, and prevents bugs.
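As a small, generic illustration of this graceful handling (not tied to the example above):

    import numpy as np

    with np.errstate(over="ignore", invalid="ignore"):
        big = np.exp(np.float64(1000.0))  # overflows to inf instead of raising
        print(big)                        # inf
        print(big - big)                  # nan: inf - inf has no meaningful value
        print(np.float64(1.0) / big)      # 0.0: dividing by infinity simply gives zero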