## Quick and Dirty Floating-Point to Decimal Conversion – Exploring Binary

In my article “Quick and Dirty Decimal to Floating-Point Conversion” I presented a small C program that uses double-precision floating-point arithmetic to convert decimal strings to binary floating-point numbers. The program converts some numbers incorrectly, despite using an algorithm that’s mathematically correct; its limited-precision calculations are to blame. I dubbed the program “quick and dirty” because it’s simple, and overall it converts reasonably accurately.

For this article, I took a similar approach to the conversion in the opposite direction — from binary floating-point to decimal string. I wrote a small C program that combines two mathematically correct algorithms: the classic “repeated division by ten” algorithm to convert integer values, and the classic “repeated multiplication by ten” algorithm to convert fractional values.

The program uses double-precision floating-point arithmetic, so like its quick and dirty decimal to floating-point counterpart, its conversions are not always correct — though reasonably accurate. I’ll present the program and analyze some example conversions, both correct and incorrect.

Every binary floating-point number has an exact decimal equivalent, which can be expressed as a decimal string of finite length. For example, the double-precision floating-point representation of 3.14159, which in binary scientific notation is 1.100100100001111110011111000000011011100001100110111 x 2^1, is equal to the decimal value 3.14158999999999988261834005243144929409027099609375.

For the purposes of this article, I’ll consider a floating-point to decimal conversion correct if it’s the exact decimal equivalent of the binary floating-point number. I won’t consider rounding to a smaller number of digits, except to note — in specific examples — how rounding to 17 significant digits affects round-trip conversions.

### Program to Convert a Floating-Point Number to a Decimal String

Here’s the “quick and dirty” C program I wrote to convert a floating-point number to a decimal string; it attempts to generate all the digits of the decimal number (which it would do if it used arbitrary-precision floating-point):

• It converts the integer part of a floating-point number by repeatedly dividing it by ten and appending the remainders to form the equivalent decimal string — in reverse. The process stops when the integer part becomes zero.

• It converts the fractional part of a floating-point number by repeatedly multiplying it by ten and stripping off the integer part to form the equivalent decimal string. The process stops when the fractional part becomes zero.

These are the same algorithms used in my PHP conversion routines dec2bin_i() and dec2bin_f() and in my C conversion program fp2bin.c, except using 10 as the divisor and multiplier instead of 2.

### The Code

The code is a condensed version of fp2bin.c. I could have copied fp2bin.c in its entirety, renamed it fp2dec.c, renamed some variables, and substituted 10 for occurrences of 2. But my goal for this article was to keep the program simple, to show the essential elements of the conversion.

• The number to be converted starts out as a hardcoded decimal literal, which in the program as shown is 3.14159; it is converted by the compiler to double-precision binary floating-point representation. The program converts that floating-point number back to a decimal string. (Whether the compiler does the conversion to floating-point correctly or not does not matter — all I’m testing is that the resulting floating-point value, whatever it is, converts to its decimal equivalent. But for what it’s worth, I did verify that the examples in this article were converted to floating-point correctly, by comparing them to the conversions done by David Gay’s strtod() function.)

• Pure integer values are formatted as “i.”, and pure fractional values are formatted as “.f”. I did that to keep the program simple, rather than handle the special cases (printing integers without a decimal point, printing fractions with a leading ‘0.’).

• The constants 311 and 1076 represent the maximum size for a converted integer value and fractional value, respectively. The maximum positive double-precision integer has 309 digits, and the minimum positive double-precision fraction has 1074 digits. Each constant also accounts for the decimal point and the string terminator.

To test conversions I packaged the program above in some additional C code, not shown. I used David Gay’s dtoa() function, and formatted its output so that the decimal number was displayed with all its digits and not in scientific notation. I generated random values to convert and then compared the quick and dirty conversion to dtoa()’s formatted output. I selected three example conversions for analysis below, one correct and two incorrect.

I also verified each of the three conversions by hand, by converting the initial decimal string to binary, rounding it by hand to 53 significant bits, and then converting it back to decimal.

### Example 1: A Correct Conversion

The double-precision floating-point number 0x1.921f9f01b866ep+1, converted by the compiler from the decimal literal 3.14159, is converted correctly by the quick and dirty program, to this decimal number:

3.14158999999999988261834005243144929409027099609375

The conversion succeeds because at no point during the computation does it require more than 53 significant bits. To show this, I did two things:

• I traced both the quick and dirty and the GMP-based conversions, using my function fp2bin(). I printed the binary value of the fractional part at each step, after it was multiplied by ten and before it had its integer part subtracted out. (I didn’t trace the conversion of the integer part ‘3’; this trivial conversion incurs no floating-point errors.)

• The fractional part shrinks by one bit at each step. There are 50 steps, equaling the number of fractional bits of the initial floating-point number (this includes leading zeros). One decimal digit is produced at each step, giving a conversion with a 50 digit fractional part. (This neat “lose one bit per step” behavior is only guaranteed to occur when there is no loss of floating-point precision.)

### Example 2: An Incorrect Conversion

The double-precision floating-point number 0x1.9eb851eb851ecp-1, converted by the compiler from the decimal literal 0.81, is converted incorrectly by the quick and dirty program:

Step 1 requires a 54 significant bit result, so in double-precision floating-point it will be rounded; here is the trace of the quick and dirty conversion, which shows the rounding:

• The quick and dirty conversion has three fewer decimal digits than the correct one. That’s because the rounding at bit 54 propagates to bit 51, effectively shortening the floating-point number by three significant bits (the grayed segments in the traces — in the first step — show where the rounding occurs).

• Hand-rounded to 17 digits, the correct conversion is 0.81000000000000005 and the quick and dirty conversion is 0.81000000000000014; that is a nine decimal ULP error. This error is big enough that the quick and dirty conversion does not round-trip, as a 17 significant digit value should. The quick and dirty conversion would convert back to floating-point (using a correctly rounding conversion routine) as 0x1.9eb851eb851edp-1, which is one binary ULP away from 0x1.9eb851eb851ecp-1.

### Example 3: Another Incorrect Conversion

The double-precision floating-point number 0x1.0000000000000p+57, converted by the compiler from the decimal literal 144115188075855877, is converted incorrectly by the quick and dirty program:

I traced both the quick and dirty and correct conversions. For these traces, I printed the running integer quotient and the remainder at each step. In the quick and dirty conversion, things go wrong because the quotient, at some point, needs more than 53 significant bits to be represented accurately. Let’s look at the correct conversion first:

Step 1 requires a 54 significant bit result, so in double-precision floating-point it will be rounded; here is the trace of the quick and dirty conversion:

• Hand-rounded to 17 digits, the correct conversion is 144115188075855870 and the quick and dirty conversion is 144115188075855880. That’s only a one decimal ULP error, and the rounded quick and dirty conversion round-trips back to floating-point correctly.

### Discussion

The two quick and dirty programs I’ve written — decimal to floating-point and floating-point to decimal — use double-precision floating-point to convert to and from double-precision floating-point. It seems like a reasonable approach: use IEEE 754 arithmetic to produce IEEE 754 numbers. However, double-precision floating-point isn’t up to the task; it takes higher-precision floating-point to give correct results for all conversions.

Dealing with higher-precision floating-point is a little tricky; for one, you need to figure out how much precision is enough. That’s why, in practice, different conversion algorithms are used, ones that work with high-precision integer arithmetic — at least for the cases that need more than double-precision arithmetic. David Gay’s conversion routines are a good example of this approach.

### Endnote

While writing this article I discovered a bug in David Gay’s strtod() function. On a certain class of inputs it would give wildly inaccurate results (see the change log for a description). The bug was fixed 11/5/10.