Splitting a Decimal Value

For whatever reason, it’s your desire to split a real number into its integer and fractional parts. Perhaps you’re angry with the value. Regardless, I can think of a few ways to perform this feat, but need not exercise a single brain cell in this effort as the modf() function performs the task automagically.

The modf() function is defined in the math.h header file. It doesn’t require linking in the math library in Linux. Here is the function’s man page format:

double modf(double x, double *iptr);

Argument x is the double value you want to cleave into its integer and fractional portions. Argument iptr is a pointer to a double that receives the integer portion of the result (expressed as a double value). The double value returned from the function is the fractional part. Yes, this function effectively returns two values, using a pointer argument to hold the second.

Here’s sample code:

2025_02_15-Lesson.c

#include <stdio.h>
#include <math.h>

int main()
{
    double p,i;

    p = modf(M_PI,&i);

    printf("%f is %f and %f\n",
            M_PI,
            i,
            p
          );

    return 0;
}

The modf() function examines the value M_PI, which is π as defined in the math.h header file. The integer portion of the value is stored in double variable i (passed as an address, &i), with the decimal portion returned in double variable p. A printf() statement outputs the original value and its two parts:

3.141593 is 3.000000 and 0.141593

As I wrote earlier, I could concoct a similar function myself, but why bother when it’s already in the library?

Two companion functions handle float and long double values: modff() and modfl(), respectively. The arguments follow the same pattern, though their data types are float and long double.

Other real number chop-chop functions include frexp(), frexpf(), and frexpl(). These aren't quite the same as their modf() counterparts: rather than splitting a value at the decimal, frexp() breaks it into a normalized fraction and an integer power of two. These functions may require linking in the math library in Linux; use -lm at the command prompt or activate the math library in your IDE.

5 thoughts on “Splitting a Decimal Value”

  1. I used `objdump -M intel -d ./modf-sample` to look at the assembly of the above:

      mov rax, [.LC0]
      movq xmm0, rax
               ⋮
      call printf@PLT
               ⋮
    .LC0:
      .long 1413754136 ; low 32-bit of M_PI
      .long 1074340347 ; high 32-bit of M_PI

    Turns out in case of constant values GCC doesnʼt even invoke __builtin_modf(), but just inserts pre-calculated values into the programʼs .rodata section. To force the compiler to actually invoke __builtin_modf(), a ‘volatile double value = M_PI;’ declaration can be used.

    Also I was a bit hesitant at first, as to whether or not I should try to take a look at the inner workings of modf()… but in the end my curiosity got the better of me. After a quick search I found a C implementation for modf() [double] as well as one for modff() [float] (and corresponding definitions in math_private.h) in the glibc git repository.

    In an attempt to understand the inner workings of this code, I modified the implementation for float to using the following union instead of macros:
    typedef union {
        float value;
        unsigned int as_u32;
        struct {
            unsigned int significand : 23;
            unsigned int biased_exponent : 8;
            unsigned int sign : 1;
        };
    } ieee754_single;

    With that I then arrived at the following code for modff():
    #define F32_MANTISSA_MASK 0x007FFFFFu
    #define F32_SIGN_MASK 0x80000000u

    static const float F32_ONE = 1.0f;

    float c_modff (float x, float *iptr) /* glibc\sysdeps\ieee754\flt-32\s_modff.c */
    { ieee754_single * restrict float32 = (ieee754_single *)&x;
      ieee754_single * restrict int_part = (ieee754_single *)iptr;

      int const exponent = float32->biased_exponent - 127;
      float fract_part;

      if (exponent < 23) /* i.e. there is a fractional part */
      {
        if (exponent < 0) /* |x| < 1 */
        { /* *iptr = ±0 */
          int_part->as_u32 = (float32->as_u32 & F32_SIGN_MASK);
          fract_part = float32->value;
        }
        else
        {
          /* abs(value) = (1.significand << exponent) ⇒
             (MANTISSA_WIDTH - exponent) bits after decimal point: */
          unsigned int fract_bits = F32_MANTISSA_MASK >> exponent;

          if ((float32->significand & fract_bits) == 0) /* x is integral? */
          {
            int_part->value = float32->value;
            float32->as_u32 &= F32_SIGN_MASK; /* return ±0 */
            fract_part = float32->value;
          }
          else /* … all other bits specify the integral value: */
          {
            int_part->as_u32 = float32->as_u32 & ~fract_bits;
            fract_part = float32->value - int_part->value;
          }
        }
      }
      else /* no fractional part: */
      {
        /* multiply by 1.0 to preserve NaN-edness: */
        int_part->value = float32->value * F32_ONE;

        if (exponent == 128 && float32->significand != 0)
          return (float32->value * F32_ONE); /* handle NaNs separately */

        float32->as_u32 &= F32_SIGN_MASK; /* return ±0 */
        fract_part = float32->value;
      }

      return (fract_part);
    }

    To make everything work for denormals and NaNs quite a few if conditions are necessary, but the “happy code path” is actually pretty straightforward. I have posted my code on GitHub (should anyone be interested): https://tinyurl.com/4njduuw7

  2. That’s amazing, and I would have never expected it. I mean, why not just use math to split off the mantissa? I suppose the answer requires a deeper understanding of BCD.

    Another question I would have about the function guts (and you need not waste time on this) is why the frexp() version requires the math library?

  3. I think modff() is implemented the way it is, because the IEEE-754 1985 floating-point standard—in section 3.2.1 Single, on page 4—describes several different encodings… how floating-point values with certain bit patterns are to be interpreted:

    1. if (exponent ⩵ 128 && significand ≠ 0) f32 = NaN [regardless of sign]
    2. if (exponent ⩵ 128 && significand ⩵ 0) f32 = -1ˢⁱᵍⁿ·∞ [infinity]
    3. if (-126 ≤ exponent ≤ 127) f32 = -1ˢⁱᵍⁿ·2ᵉˣᵖᵒⁿᵉⁿᵗ·1.significand
    4. if (exponent ⩵ -127 && significand ≠ 0) f32 = -1ˢⁱᵍⁿ·2⁻¹²⁶·0.significand [denormalized]
    5. if (exponent ⩵ -127 && significand ⩵ 0) f32 = -1ˢⁱᵍⁿ·0 [zero]

    Further differentiating these cases, Intelʼs Software Developerʼs Manual Volume 1: Basic Architecture lists 9 different patterns for floating-point values in total:

    s bias_exp significand [Intel SDM, Table 4-3]
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    ? ??…1…??? ??????????????????????? Normalized Finite
    ? 00000000 ??????????…1…?????????? Denormalized Finite
    0 00000000 00000000000000000000000 Positive Zero
    1 00000000 00000000000000000000000 Negative Zero
    0 11111111 00000000000000000000000 Positive Infinity
    1 11111111 00000000000000000000000 Negative Infinity
    ? 11111111 0?????????…1…?????????? SNaN (signaling NaN)
    ? 11111111 1?????????????????????? QNaN (quiet NaN)
    ? 11111111 10000000000000000000000 QNaN Indefinite (e.g. result of 0/0)

    To support all these cases, the sign, biased exponent, and significand fields need to be inspected anyhow… as such itʼs simply easiest to use bitwise operations to manipulate given values.

    Also, I guess the reason for modf() being usable without -lm (i.e. linking the math library) may have to do with the fact that GCC replaces such calls with compiler built-ins… that being said, on my system (Debian 12) I didnʼt have to use -lm to get frexp() to work either. Which is good, because GCCʼs documentation also lists them as built-ins (so my hypothesis is at least not invalidated by all of this).

  4. Again, this is awesome information. I’ve tried to look up how floating point data is stored. I find such details fascinating.

  5. Several universities offer publicly accessible copies of the IEEE Std 754-1985 PDF. Therein, all the gory details of (binary) floating-point formats are described in ~12 pages.

    While later versions of the standard come with the advantage of also describing decimal floating-point formats, I like the original standard best because it clearly was written with understandability in mind (while later standards seem to have been authored by more mathematically inclined people).
