
Re: About Compiler option && IEEE 754

PostPosted: Thu Dec 30, 2010 3:36 am
by Robert Sample
How does the fraction 1 / 7 look when represented in IEEE 754-2008? From what I see, IEEE 754-2008 supports a maximum of 34 significant digits (in its decimal128 format), so any calculation requiring accurate representation past the 34th digit will not get the precise value under IEEE 754-2008. Expecting computer floating point operations to generate the same values as real-world calculations is not realistic.
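For illustration, here is a quick sketch in Python, whose decimal module follows the same General Decimal Arithmetic specification that fed into IEEE 754-2008; setting the context to 34 digits mimics the decimal128 format (this is an illustration, not z hardware output):

    from decimal import Decimal, getcontext

    getcontext().prec = 34           # decimal128 carries 34 significant digits
    print(Decimal(1) / Decimal(7))   # 0.1428571428571428571428571428571429

Everything past the 34th digit of the repeating expansion is rounded away, so any calculation that depends on those digits cannot be exact.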

Re: About Compiler option && IEEE 754

PostPosted: Thu Dec 30, 2010 5:35 am
by Akatsukami
Robert Sample wrote: How does the fraction 1 / 7 look when represented in IEEE 754-2008? From what I see, IEEE 754-2008 supports a maximum of 34 significant digits (in its decimal128 format), so any calculation requiring accurate representation past the 34th digit will not get the precise value under IEEE 754-2008. Expecting computer floating point operations to generate the same values as real-world calculations is not realistic.

The difference between the (most widely held figure for the) radius of the observable universe and the radius of a proton is about 41-42 orders of magnitude. I seriously question the need for more than 34 digits in the significand; indeed, before I wrote so much as a line of code for that application, I'd demand to see the physical process that provided data measurable to that accuracy.

The popular expansion of GIGO is "Garbage In, Garbage Out". Less popular but equally important to remember is the expansion "Garbage In, Gospel Out": the results are not accurate to however many decimal places we choose to print.

Re: About Compiler option && IEEE 754

PostPosted: Thu Dec 30, 2010 8:52 am
by steve-myers
IBM "hexadecimal" 32-bit floating point has always had accuracy problems doing arithmetic. IBM spent a huge amount of money around '67/'68 partially fixing System/360 floating point. I went from 704x/709x 36/72-bit floating point to System/360 32/64-bit floating point in the 1960s, and I remember being horrified by how awful it was. I spent many hours analyzing System/360 floating point and trying to think of ways it could have been done better; I eventually concluded they did about as well as they could. There were many, many good things about System/360, but I always thought its floating point was very much an afterthought.

The conversion issues that have been mentioned apply to all floating point systems, not just System/360 "hexadecimal" floating point, and for the same reasons. I have to agree with Akatsukami that Cobol is probably the worst language to use for floating point, though, in truth, I know very little about Cobol.

The reason we (not the royal we; all of us) have to be concerned about the binary representation of floating point is data interchange with other platforms. I personally think it's better to send the binary representation of something like 1234.5678 than the converted digits to be reconverted back to binary on the receiving system. Sending digits potentially loses precision twice: first when the value is converted to decimal digits before the data is sent, and again when the digits are converted back to binary on the receiving system.
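Here is a small Python sketch of the double loss (the value and the digit count are mine, purely for illustration):

    import struct

    # the value as it is actually stored in IEEE single precision
    x = struct.unpack('>f', struct.pack('>f', 1234.5678))[0]

    # interchange via a decimal string carrying too few digits
    y = struct.unpack('>f', struct.pack('>f', float('%.3f' % x)))[0]

    # interchange via the raw 4-byte bit pattern: bit-for-bit exact
    z = struct.unpack('>f', struct.pack('>f', x))[0]

    print(y == x, z == x)   # False True

With enough digits carried (nine significant digits for single precision, seventeen for double) the decimal round trip is also exact, but every hop has to carry them all.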

We mainframe people often lose sight of the fact that what really made IEEE floating point popular was Intel's decision to use it in the 8087 floating point co-processor for its 808x chips. My memory was shaky on the details: the 80286 had a separate 80287 co-processor and the 80386 a separate 80387; the floating point unit was not built into the processor itself until the 80486.

For what it's worth, I have mixed feelings about some aspects of IEEE floating point, particularly the fact that the exponent in 32-bit IEEE floating point is smaller than the exponent in 64-bit IEEE floating point. I think I know why they did it that way, but that does not mean I think it was "right." For all their faults, both 704x/709x and System/360 floating point used the same size exponent in their single precision and double precision formats, which greatly simplifies converting, say, 64-bit values to 32-bit values.
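The consequence of the mismatched exponent sizes is easy to demonstrate in Python (a sketch; the value is arbitrary): a number whose exponent is perfectly legal in 64-bit IEEE floating point may have no 32-bit counterpart at all, whereas on System/360 both lengths share the same 16**63 range, so narrowing only ever costs fraction digits:

    import struct

    try:
        struct.pack('>f', 1.0e200)    # fine as a double, impossible as a single
    except (OverflowError, struct.error):
        print('exponent overflow: 1.0e200 does not fit in 32-bit IEEE')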

Re: About Compiler option && IEEE 754

PostPosted: Thu Dec 30, 2010 11:05 am
by vinsonzhang
Thanks for all of your replies. They really help me a lot.

Maybe this discussion will go on for a long time. This issue really impacts my application.

I hope something changes in the future.

Thank you all.

Re: About Compiler option && IEEE 754

PostPosted: Fri Dec 31, 2010 12:52 am
by dick scherrer
Hello,

vinsonzhang wrote: I hope something changes in the future.
But this wish should not influence the code currently being developed. . .

Today's requirements must be implemented with today's capabilities. . . An acceptable resolution may not be implemented before the app needs to go live.

Re: About Compiler option && IEEE 754

PostPosted: Fri Dec 31, 2010 1:27 am
by enrico-sorichetti
vinsonzhang wrote: This issue really impacts my application.

If You have such big issues with the way COBOL handles floating point numbers, why do You stick to COBOL? Investigate a programming language with the features You need!
Simpler and more effective, isn't it?

Re: About Compiler option && IEEE 754

PostPosted: Fri Dec 31, 2010 1:52 am
by steve-myers
Akatsukami's discussion of the radius of a proton versus the radius of the observable universe is not quite dead on. A proton is just a cloud of quarks; its "radius" depends, at least in part, on the energy level of the proton. Likewise, the "radius" of the observable universe is not a truly fixed number, though it is harder to pin down than the radius of a proton.

Scientists tend to translate enormously large or enormously small numbers into something else. Astronomers use the "light year" to represent an enormous distance. But, if you think about it, a "year" is an enormously flaky number. What is a "year" in an absolute sense? It is the time it takes us to complete one orbit around our Sun. Yet we seem to have trouble measuring a "year": from time to time we have to add a "leap second" because we can now measure time more precisely than the Earth keeps it. Or take it from another perspective. In 1967 I celebrated the start of the year near Philadelphia, Pennsylvania; in 1968 I celebrated the start of the year near Chia Yi, in Taiwan, so my 1967 was somewhat shorter. Similarly, I celebrated the start of 1998 in Moscow, Russia, making my 1997 somewhat shorter than if I had celebrated near Philadelphia; by 1999 I was back near Philadelphia, so my 1998 was 8 hours longer. Of course, taken together, the two years came out the same.

Physics uses the "electron volt" to measure energy in sub-atomic reactions. By any human standard an electron volt is an extremely tiny amount of energy; the energy to light the display on my laptop for one second is the equivalent of billions of billions of electron volts.

Now 1.2345678E-10 is a much different value than 1.2345678E10. The difference, of course, is in the exponent. Yet 1.2345678e10 - 1.2345678e-10 is still 1.2345678e10: the subtrahend lies so far below the last digit the significand can hold that it disappears entirely.
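A one-line check in Python (which uses 64-bit IEEE doubles, good for roughly 16 decimal digits):

    a, b = 1.2345678e10, 1.2345678e-10
    print(a - b == a)   # True: b sits 20 orders of magnitude below a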

Eight digits is about all you can carry in 32-bit IEEE floating point: 1.2345678 generates 3F9E0651 and 1.2345679 generates 3F9E0652, but 1.23456781 generates 3F9E0651, as does 1.23456782.
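That granularity is easy to reproduce in Python, where struct.pack('>f', ...) yields the IEEE single precision bit pattern:

    import struct

    for x in (1.2345678, 1.2345679, 1.23456781, 1.23456782):
        print(x, struct.pack('>f', x).hex().upper())
    # 3F9E0651, 3F9E0652, 3F9E0651, 3F9E0651

The last two inputs differ from the first in the ninth digit and collapse onto the same bit pattern.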

Re: About Compiler option && IEEE 754

PostPosted: Mon Jan 03, 2011 6:36 pm
by steve-myers
vinsonzhang wrote: When I convert a floating point "0.001f" from IEEE format to IBM hex format and then convert it back to IEEE format, the value does not equal "0.001f". I think this is a severe problem in many applications. That's why I want to know how to control the storage format. Anyway, thank you for your reply.
You have "lost" precision converting decimal 0.001 to its near equivalent in hexadecimal floating point, as there is no exact representation of decimal 0.001 in any binary floating point. The round trip fails in the IEEE-to-hex direction: hexadecimal normalization can leave up to three leading zero bits in the fraction, so the 24-bit IEEE significand may keep as few as 21 bits after conversion, and nothing in the hex-to-IEEE step can restore what was truncated. I don't think you "lose" any more precision converting the number from hexadecimal floating point back to IEEE floating point, though, in truth, I don't know this to be a fact.
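For anyone who wants to watch it happen, here is a rough Python sketch of the round trip; the encoder truncates the fraction the way System/360 normalization did, and the function names are mine, not any library's:

    import math
    import struct

    def to_ibm_hex32(x):
        # encode a positive float as IBM single precision hex float bits
        e = math.floor(math.log(x, 16)) + 1   # seek 1/16 <= x / 16**e < 1
        m = x / 16.0 ** e
        while m >= 1.0:                       # guard against log() rounding
            m /= 16.0; e += 1
        while m < 0.0625:
            m *= 16.0; e -= 1
        return ((e + 64) << 24) | int(m * 0x1000000)   # fraction cut to 24 bits

    def from_ibm_hex32(bits):
        # decode IBM hex single bits back to a Python float
        return (bits & 0xFFFFFF) / float(0x1000000) * 16.0 ** (((bits >> 24) & 0x7F) - 64)

    ieee = struct.unpack('>f', struct.pack('>f', 0.001))[0]   # 0.001f as stored
    back = struct.unpack('>f', struct.pack('>f',
                         from_ibm_hex32(to_ibm_hex32(ieee))))[0]
    print(hex(to_ibm_hex32(ieee)), ieee == back)   # 0x3e418937 False

Because the normalized hex fraction of this value starts with a zero bit, a low-order bit of the 24-bit IEEE significand is truncated on the way in, and nothing on the way back can restore it.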

The newest Z series hardware now supports decimal floating point, but I don't know if there is any high level language support for it yet.