While unit testing during the week, I came across a pretty nasty code artefact that had come about through a refactoring, where someone had decided to replace Double with BigDecimal in a class for handling monetary amounts. (Because it’s better for rounding, apparently. I like floating-point numbers, but apparently I’m in the minority!)

There was a unit test checking that an arithmetic computation which should produce zero did, in fact, produce zero. With floating-point numbers this of course worked — thanks to the sensible implementation of FP arithmetic, and the fact that the relevant Java classes were pretty well designed.
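For illustration, a minimal sketch of why the floating-point version of such a test passes: subtracting any finite double from itself yields exactly 0.0.

```java
public class FpZeroTest {
    public static void main(String[] args) {
        double amount = 3.99;
        // For any finite double, x - x is exactly 0.0,
        // so a plain == comparison against zero succeeds.
        System.out.println(amount - amount == 0.0); // true
    }
}
```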

In comparison, BigDecimal feels a bit like a management-pleasing hack. A simplified way of imagining it is a class containing an unscaled integer value (in base 10) and an int called the scale, which tells you where in that value the decimal point should go. BigDecimal’s equals() then compares both of these fields; only if both match are two values considered equal. This means that

3.99 - 3.99 != 0

because the scale of the left-hand side will be 2, while the scale of the right-hand side will be 0. The method I should use to test equality is BigDecimal.compareTo(), which returns 0 if two values are numerically equal. Obviously this makes my unit tests look ugly, but there are much bigger problems than that.
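A short sketch of the mismatch described above: the subtraction produces 0.00 (scale 2), which equals() refuses to match against BigDecimal.ZERO (scale 0), while compareTo() happily does.

```java
import java.math.BigDecimal;

public class EqualsVsCompareTo {
    public static void main(String[] args) {
        // 3.99 - 3.99 gives unscaled value 0 with scale 2, i.e. "0.00"
        BigDecimal result = new BigDecimal("3.99").subtract(new BigDecimal("3.99"));

        // equals() compares unscaled value AND scale: 0.00 != 0
        System.out.println(result.equals(BigDecimal.ZERO));         // false

        // compareTo() compares numerical value only
        System.out.println(result.compareTo(BigDecimal.ZERO) == 0); // true
    }
}
```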

What about this?

```java
BigDecimal x = new BigDecimal("1");
BigDecimal y = new BigDecimal("1.00");
BigDecimal z = new BigDecimal("1.0000000000");

Set<BigDecimal> a = new HashSet<>();
Set<BigDecimal> b = new TreeSet<>();
```

(Note the String constructor — it preserves the scale as written, giving scales of 0, 2 and 10 respectively. The double constructor would collapse all three to the exact value 1 with scale 0.)

Can you see where this is going? Set a is going to accept three different values of 1, despite the fact that sets are designed to contain unique elements. Whereas Set b is going to run into trouble, because BigDecimal’s definition of compareTo is not consistent with its definition of equals.
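A runnable sketch of the two sets diverging (using the String constructor so the three values of 1 really do carry different scales):

```java
import java.math.BigDecimal;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class BigDecimalSets {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("1");            // scale 0
        BigDecimal y = new BigDecimal("1.00");         // scale 2
        BigDecimal z = new BigDecimal("1.0000000000"); // scale 10

        Set<BigDecimal> a = new HashSet<>(); // membership via equals()/hashCode()
        Set<BigDecimal> b = new TreeSet<>(); // membership via compareTo()

        a.add(x); a.add(y); a.add(z);
        b.add(x); b.add(y); b.add(z);

        System.out.println(a.size()); // 3 -- three "distinct" ones
        System.out.println(b.size()); // 1 -- compareTo treats them as one
    }
}
```

The two Set implementations disagree about how many elements they hold, which is exactly the inconsistency the Set contract warns about when compareTo is not consistent with equals.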

Honestly… give me floating-point maths any day!
