It's not until you study math a little bit that you realize just how awful floating-point arithmetic is. I mean, so it's a little inexact. So what? But the following are examples of statements that are absolutely true for integers, rationals, and reals, but not for floating-point numbers (even setting aside floating-point infinities and “not-a-number”s):
a - (a - b) = b
If b > 0, then a + b > a.
The associative law: (a + b) + c = a + (b + c)
The distributive law: a × (b + c) = (a × b) + (a × c)
Cancellation: If a × b = a × c and a ≠ 0, then b = c.
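Don't take my word for it. Here is each of these laws failing in IEEE 754 double precision, sketched in Python; the constants are just convenient counterexamples I picked, nothing special about them:

```python
# Each identity above, failing for ordinary Python floats (IEEE 754 doubles).

a, b = 1e16, 1.0
print(a - (a - b) == b)   # False: a - b rounds back to a, so the left side is 0.0
print(a + b > a)          # False: a + b rounds back down to a

# Associativity: grouping changes the answer (1.0 one way, 0.0 the other).
print((1e16 + -1e16) + 1.0 == 1e16 + (-1e16 + 1.0))   # False

# Distributivity: 30.000000000000004 on the left, 30.0 on the right.
print(100 * (0.1 + 0.2) == 100 * 0.1 + 100 * 0.2)     # False

# Cancellation: two distinct values of b and c whose products with a
# round to the very same double.
a, b, c = 1.5, 1.5 + 2 * 2**-52, 1.5 + 3 * 2**-52
print(a * b == a * c, b == c)                          # True False
```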
In short, if you've ever done any reasoning about floating-point numbers, you were probably wrong.
It's easy to come up with formulas that floating-point arithmetic gets seriously wrong, especially ones that subtract nearly equal quantities: the difference lands very close to zero, and most of its digits are noise. (The effect is called catastrophic cancellation.)
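One standard instance (my example, not a special case): computing eˣ − 1 for tiny x as exp(x) - 1 subtracts two nearly equal numbers, so most of the answer is rounding noise; the standard library's math.expm1 computes the same quantity without the subtraction:

```python
import math

x = 1e-12
naive = math.exp(x) - 1    # exp(x) rounds to a double barely above 1.0,
                           # and the subtraction exposes the rounding error
better = math.expm1(x)     # evaluates e**x - 1 directly, nearly full precision

# True value is x + x**2/2 + ..., so the relative error of a correct
# answer should be near x/2 = 5e-13.
print(abs(naive / x - 1))   # about 9e-5: only four or five good digits
print(abs(better / x - 1))  # about 5e-13, i.e. essentially exact
```

The same trap, and the same kind of library fix, shows up all over: log(1 + x) vs math.log1p(x), summing many terms vs math.fsum, and so on.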
The inexactness is practically impossible to track in a useful way, so you never know just how bad the result is. Typically you just assume it's almost right until you find out it's wrong. The only surprising thing is that it works so often.