Hi, I think there are two separate concerns colliding here:
1. StrD()
Remember, whether you write 1.4 or 1.400, both literals produce the exact same floating-point value, so StrD() has no way of knowing how many decimal places you originally intended.
Code:
a.d = 1.4
b.d = 1.400

; identical 64-bit patterns - the source literals are already gone:
Debug Bin(PeekQ(@a), #PB_Quad)
Debug Bin(PeekQ(@b), #PB_Quad)

; ...so StrD() necessarily prints both the same way:
Debug StrD(a)
Debug StrD(b)
StrD() is doing what it claims: removing trailing zeroes, so StrD(1.0) --> "1"
That being said... I agree it should always show AT LEAST one decimal place (unless you explicitly specify zero places), i.e. StrD(1.0) --> "1.0",
because in PB and other languages the ".0" signals a float/double, not an integer!
But I don't expect the behavior of a basic PB function that has existed for 20 years to change now; until then a small wrapper (sketched below) gets you there.
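A minimal sketch of such a wrapper (the name StrDF is my own invention, not a built-in):
Code:
; Hypothetical helper: format a double, guaranteeing at least one decimal place
Procedure.s StrDF(Value.d)
  Protected Result.s = StrD(Value)
  If FindString(Result, ".") = 0 ; StrD() stripped everything after the point
    Result + ".0"
  EndIf
  ProcedureReturn Result
EndProcedure

Debug StrDF(1.4) ; "1.4"
Debug StrDF(1.0) ; "1.0"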
2. JSON
The JSON format does not specify whether a "number" is an integer or a float/double... see Wikipedia:
https://en.wikipedia.org/wiki/JSON
The format makes no distinction between integer and floating-point.
Numbers in JSON are agnostic with regard to their representation within programming languages. While this allows for numbers of arbitrary precision to be serialized, it may lead to portability issues. For example, since no differentiation is made between integer and floating-point values, some implementations may treat 42, 42.0, and 4.2E+1 as the same number, while others may not. The JSON standard makes no requirements regarding implementation details such as overflow, underflow, loss of precision, rounding, or signed zeros, but it does recommend to expect no more than IEEE 754 binary64 precision for "good interoperability".
That's just the way it is. And because the PB JSON lib does not give you access to the raw original text of the stored number, you can't tell whether it contained a decimal point. The simplest heuristic is to check whether
Int(MyDouble) = MyDouble
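For example, a minimal sketch with the regular JSON library (note how the heuristic misfires on 42.0, which is exactly the point):
Code:
; 42 and 42.0 parse to the identical double, so the check below
; can only guess whether the original text had a decimal point
If ParseJSON(0, "[42, 42.0, 4.2]")
  root = JSONValue(0)
  For i = 0 To JSONArraySize(root) - 1
    v.d = GetJSONDouble(GetJSONElement(root, i))
    If Int(v) = v
      Debug StrD(v) + " -> looks like an integer"
    Else
      Debug StrD(v) + " -> clearly a float/double"
    EndIf
  Next
EndIf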