It’s not that you’re actually getting extra precision – it’s that the float didn’t accurately represent the number you were aiming for originally. The double is representing the original float accurately; toString is showing the “extra” data which was already present.
For example (and these numbers aren’t right, I’m just making things up) suppose you had:

    float f = 0.1F;
    double d = f;

Then the value of f might be exactly 0.100000234523.
d will have exactly the same value, but when you convert it to a string it will “trust” that it’s accurate to a higher precision, so won’t round off as early, and you’ll see the “extra digits” which were already there, but hidden from you.
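With the real values, this is easy to see: the nearest float to 0.1 is exactly 0.100000001490116119384765625, and widening it to a double doesn’t change the value at all – only how toString rounds it. A quick sketch (class name is just for illustration):

```java
public class FloatWidening {
    public static void main(String[] args) {
        float f = 0.1F; // exactly 0.100000001490116119384765625
        double d = f;   // widening conversion: exactly the same value, stored as a double

        // Float.toString only prints enough digits to round-trip a float...
        System.out.println(f); // 0.1
        // ...but Double.toString trusts double precision, exposing digits
        // which were already present in f all along:
        System.out.println(d); // 0.10000000149011612
    }
}
```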
When you convert to a string and back, you’re ending up with a double value which is closer to the string value than the original float was – but that’s only good if you really believe that the string value is what you really wanted.
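A sketch of that round trip (class name hypothetical; this assumes you go via Float.toString and Double.parseDouble):

```java
public class RoundTrip {
    public static void main(String[] args) {
        float f = 0.1F;

        // Via the string: parse the decimal text "0.1" as a double,
        // giving the double closest to the *string* value
        double viaString = Double.parseDouble(Float.toString(f));

        // Direct widening: keeps the exact binary value f always had
        double direct = f;

        System.out.println(viaString); // 0.1
        System.out.println(direct);    // 0.10000000149011612
    }
}
```

The two doubles differ – which one is “right” depends entirely on whether the decimal string or the original float is what you actually meant.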
Are you sure that float/double are the appropriate types to use here instead of BigDecimal? If you’re trying to use numbers which have precise decimal values (e.g. money), then BigDecimal is a more appropriate type IMO.
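For instance (a minimal sketch – note the String constructor: new BigDecimal(0.1) would faithfully capture the *inexact* binary value instead):

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Binary floating point can't represent 0.1 or 0.2 exactly:
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        // BigDecimal built from strings keeps the decimal values exact:
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));  // 0.3
    }
}
```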