Re: Ruby can't subtract ?
George Neuner <gneuner2 <at> comcast.net>
2009-11-01 02:10:30 GMT
On Sat, 31 Oct 2009 12:48:28 -0500, Christopher Dicely
<cmdicely <at> gmail.com> wrote:
>On Fri, Oct 30, 2009 at 11:40 PM, George Neuner <gneuner2 <at> comcast.net> wrote:
>> On Wed, 28 Oct 2009 14:30:21 -0500, Marnen Laibow-Koser
>> <marnen <at> marnen.org> wrote:
>>>Robert Klemme wrote:
>>>> On 28.10.2009 19:21, Matthew K. Williams wrote:
>>>>> As a rule of thumb, if you really care about the decimals, either use
>>>>> BigDecimal or integers (and keep track of where the decimal point should
>>>>> be -- this is common for $$$$). Unfortunately, this is not limited to
>>>>> Ruby, either -- C, Java, and a host of other languages are all subject
>>>>> to it.
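A quick Ruby sketch of the rule of thumb quoted above; the amounts are made up for illustration:

```ruby
require "bigdecimal"

# Plain Float: the binary rounding error is visible immediately.
puts 0.1 + 0.2                        # prints 0.30000000000000004

# Integer cents: track money as whole cents and the sum stays exact.
cents = 10 + 20                       # $0.10 + $0.20
puts format("%.2f", cents / 100.0)    # prints 0.30

# BigDecimal: exact decimal arithmetic, constructed from strings.
puts (BigDecimal("0.1") + BigDecimal("0.2")).to_s("F")  # prints 0.3
```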
>>>> Absolutely: this is a common issue in *all* programming languages that
>>>> are not symbolic math systems (like Mathematica), because they work not
>>>> with real numbers but only with rationals.
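For what it's worth, Ruby can do exact rational arithmetic with the built-in Rational class; it is irrational values that force approximation. A small illustrative sketch:

```ruby
# Exact: Rational keeps numerator and denominator as integers.
third = Rational(1, 3)
puts third + third + third      # prints 1/1, exactly one

# Approximate: Float rounds 1/3 to the nearest binary fraction.
# (Here the product happens to round back to exactly 1.0.)
puts 1.0 / 3 * 3
```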
>>>That is not the issue here -- after all, BigDecimal does precise
>>>arithmetic, but only with rational numbers. The issue is rather that
>>>IEEE 754 does an inadequate job of representing arbitrary rational
>>>numbers, and the small errors are accumulated and magnified in
>> The problem is that the 754 representation has finite precision.
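One way to see that finite precision (illustrative only) is to print more digits than Float's default shortest-round-trip form shows:

```ruby
# 0.1 has no finite binary expansion, so the stored double is
# only the nearest representable value; extra digits reveal it.
puts 0.1             # prints 0.1 (shortest round-trip form)
puts "%.20f" % 0.1   # prints 0.10000000000000000555
# 0.5 is a power of two, so it IS exact in binary:
puts "%.20f" % 0.5   # prints 0.50000000000000000000
```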
>Well, the problem isn't that. The problem is that IEEE 754 (1985)
>provides only binary floating-point representations, when many common
>problem domains deal almost exclusively with values that have finite
>(and short) exact representations in base 10, which may or may not