Checking the CPU Clock

The clock() function has nothing to do with human time. Nope. It returns the processor time, a measure of how much CPU time your program has consumed. You can use this value to determine how long your programs take to run.

Of course, the time utility at the command prompt does the same thing without your needing to write any code. Just type:

time program

where program is the name of the program you want to time. The results look like this:

$ time ./a.out
[program output appears here]

real 0m0.004s
user 0m0.001s
sys 0m0.002s

The three values are: 4 milliseconds of real (wall clock) time for the entire process; 1 millisecond of user time, spent running the program's own code; and 2 milliseconds of system time, spent in the kernel on the program's behalf.

Within your code, you can use the clock() function to fetch these CPU time values. The value returned is a clock_t integer (typically an unsigned long) representing the CPU time, but this value must be interpreted properly.

2023_01_28-Lesson-a.c

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t start;

    start = clock();
    printf("Processor start time: %lu\n",start);

    return(0);
}

The time.h header is required; it declares the clock() function as well as the clock_t type.

The clock_t variable start holds the value returned from clock(), which is output by using the %lu placeholder.
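One caution: the C standard guarantees only that clock_t is an arithmetic type, so its exact size can vary between implementations. If you want the %lu placeholder to stay safe on a system where clock_t isn't an unsigned long, cast the value explicitly:

printf("Processor start time: %lu\n",(unsigned long)start);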

Here’s a sample run:

Processor start time: 2191

The value 2191 (or whatever you see on your system) is meaningless without context. The value is measured in clock ticks, the unit clock() uses to count processor time. The number of ticks per second varies between implementations, which is why the time.h header defines the constant CLOCKS_PER_SEC. On POSIX systems, CLOCKS_PER_SEC is one million, though that knowledge alone doesn't make the raw value any more useful; you still have to do the math.
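If you're curious what your system uses, this quick test (my own sketch, not one of the lesson files) prints the value:

#include <stdio.h>
#include <time.h>

int main()
{
    /* CLOCKS_PER_SEC may not be a plain int, so cast before printing */
    printf("Clock ticks per second: %ld\n",(long)CLOCKS_PER_SEC);

    return(0);
}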

To translate the clock() function's return value into seconds, divide it by the CLOCKS_PER_SEC constant. To get the best results, cast the clock_t value to a double. You must also update the code to replace the %lu conversion character with %f:

printf("Processor start time: %f\n",(double)start/CLOCKS_PER_SEC);

The full code is available on GitHub. Here is the updated output:

Processor start time: 0.002196

The point of this exercise is to time how long code, or a portion of code, executes. To do so, two calls must be made to clock(), one before all the action and another after. The following update to the code adds a for loop to ensure that time is consumed before the second reading is made:

2023_01_28-Lesson-c.c

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int r;
    clock_t start,finish;

    /* obtain start time */
    start = clock();
    printf("Processor start time: %f\n",(float)start/CLOCKS_PER_SEC);

    /* seed the randomizer */
    srand( (unsigned)time(NULL) );
    
    /* loop for a while */
    for(;;)
    {
        r = rand() % 1000;
        if( r==1 )
            break;
    }

    /* obtain finish time */
    finish = clock();
    printf("Processor end time: %f\n",(float)finish/CLOCKS_PER_SEC);

    return(0);
}

The for loop keeps repeating until a random value in the range of zero through 999 is equal to one. This loop can take a few milliseconds to complete, which makes the output values differ with each run:

Processor start time: 0.002194
Processor end time: 0.002220

To discover how much CPU time the program has consumed, the code must obtain the difference between the two values. The following printf() statement is added just before the return statement:

printf("Total program runtime: %f\n",
        ((float)finish-start)/CLOCKS_PER_SEC
      );
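Put together, the whole thing reads something like this sketch, with the double casts applied throughout:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int r;
    clock_t start,finish;

    /* obtain start time */
    start = clock();
    printf("Processor start time: %f\n",(double)start/CLOCKS_PER_SEC);

    /* seed the randomizer */
    srand( (unsigned)time(NULL) );

    /* loop until rand() yields a one */
    for(;;)
    {
        r = rand() % 1000;
        if( r==1 )
            break;
    }

    /* obtain finish time */
    finish = clock();
    printf("Processor end time: %f\n",(double)finish/CLOCKS_PER_SEC);

    /* report the difference between the two readings */
    printf("Total program runtime: %f\n",
            (double)(finish-start)/CLOCKS_PER_SEC
          );

    return(0);
}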

The full code is available on GitHub. The output now shows the difference:

Processor start time: 0.002229
Processor end time: 0.002258
Total program runtime: 0.000029

I’ve seen examples on the Interwebs where this result is multiplied by 1,000. That translates the result into milliseconds: the division by CLOCKS_PER_SEC yields seconds, and 1,000 milliseconds make a second.
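If milliseconds are what you want, the update is minor. This variation on the statement above is my own sketch:

/* dividing by CLOCKS_PER_SEC yields seconds; multiplying by 1,000 yields milliseconds */
printf("Total program runtime: %f ms\n",
        (double)(finish-start)/CLOCKS_PER_SEC*1000.0
      );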

If I run the program through the time utility, I see this output:

$ time ./a.out
Processor start time: 0.002200
Processor end time: 0.002236
Total program runtime: 0.000036

real    0m0.004s
user    0m0.001s
sys     0m0.002s

The time utility reports a total runtime of four milliseconds, quite a bit more than the program's own result of 0.000036 seconds. The gap makes sense: time measures the entire process, including startup and shutdown overhead, while the two clock() calls bracket only the loop.
