I was surprised to discover that I hadn’t written about this topic before: converting a decimal value into a fraction. Of course, a really stupid solution exists, which I’ll show in a moment. But the goal is to reduce or simplify that stupid fraction and end up with a proper fraction instead of a decimal.
To convert a decimal into a fraction you convert the decimal portion into an integer ratio and then reduce the fraction.
For example, the value 0.256 can be expressed as 256/1000. This fraction is then reduced to 32/125. The value 0.25 becomes 25/100 and then 1/4.
So, the process seems rather simple: Multiply the decimal to create a larger, integer numerator over a power-of-ten denominator, then reduce the fraction.
To write the code, I borrowed from an Exercise presented four years ago on reducing fractions. Here’s what I came up with:
2026_03_28-Lesson.c
#include <stdio.h>
#include <stdlib.h>

int main()
{
    float decimal;
    int numerator,denominator,diff,larger,smaller;

    /* obtain and validate input */
    printf("Enter decimal value: ");
    scanf("%f",&decimal);
    if( decimal > 1.0 || decimal < 0.0 )
    {
        puts("Please input a value less than 1.0");
        puts("and greater than zero");
        return 1;
    }

    /* configure the numerator and denominator;
       use 100000 based on 'float' precision */
    denominator = 100000;
    numerator = decimal*denominator;

    /*
       Use Euclid's algorithm to find the greatest
       common divisor and reduce the fraction
    */
    /* calculate differences between the larger and smaller values */
    larger = numerator>denominator ? numerator : denominator;
    smaller = numerator<denominator ? numerator : denominator;
    diff = larger-smaller;
    /* keep calculating until the greatest common divisor is found */
    while( diff!=larger )
    {
        larger = smaller>diff ? smaller : diff;
        smaller = smaller==larger ? diff : smaller;
        diff = larger-smaller;
    }

    printf("%f is the fraction %d/%d\n",
        decimal,
        numerator/diff,
        denominator/diff
    );

    return 0;
}
The program prompts for a decimal value as input, stored in float variable decimal. A test confirms that the value is in the range of zero to 1.0, a positive decimal without an integer portion. (Though the program does allow 1.0 to be input.)
To reduce the fraction by employing Euclid’s algorithm, I first calculate the numerator and denominator values:
denominator = 100000;
numerator = decimal*denominator;
The rest of the code is lifted from the earlier Exercise’s solution. A printf() statement outputs the results.
Here are a few sample runs:
Enter decimal value: 0.875
0.875000 is the fraction 7/8
Enter decimal value: 0.4
0.400000 is the fraction 2/5
Enter decimal value: 0.212121
0.212121 is the fraction 5303/25000
Alas, the program fails to properly convert thirds:
Enter decimal value: 0.6666666
0.666667 is the fraction 33333/50000
I read this failure as meaning either that I’m not doing the conversion properly or that there’s some other trick I’m missing to catch certain exceptions. Or, I suppose, the problem could be that Euclid’s algorithm can’t reduce a truncated repeating decimal back to its true fraction. I dunno.
For the most part, my approach works. It solves a puzzle I hadn’t yet addressed in this blog. I’d be interested to know of any other approaches for converting decimal values into a rational representation.
But 0.6666666 or 0.666667 aren’t 2/3. I can think of two ways of dealing with this:
1: Have a second Boolean input to flag that the number should be regarded as recurring, then do something (not too sure what) to allow for that.
2: Assume if there are a certain number of repeating digits then the number is recurring.
(In LibreOffice Calc, if you format 0.6666666 as a fraction it displays as 2/3. As it’s FOSS, maybe you could look at the source code and see how it’s done :))
According to ChatGPT, the code makes special exceptions for repeating decimals.