Thread: using clock()

  1. #1
    Registered User
    Join Date
    Nov 2006
    Posts
    176

    using clock()

    I'm trying to find the time in microseconds it takes to cycle a char through a pipe.
    Currently this is what I have, but the results are negative somehow.

    Code:
    clock_t cs_do_child(int input_pipe[], int output_pipe[])
    {
            char ch;        /* character we are cycling */
            clock_t done;   /* clock time after writing */
    
            close(input_pipe[1]);
            close(output_pipe[0]);
    
            if (read(input_pipe[0], &ch, 1) > 0)
            {
                    if (write(output_pipe[1], &ch, 1) == -1)
                    {
                            perror("Child write");
                            close(input_pipe[0]);
                            close(output_pipe[1]);
                            exit(1);
                    }
            }
    
            done = clock();
    
            close(input_pipe[0]);
            close(output_pipe[1]);
    
            return done;
    }
    
    clock_t cs_do_parent(int input_pipe[], int output_pipe[])
    {
            char ch = 'a';  /* character to cycle */
            clock_t start; /* clock time before write */
    
            close(input_pipe[1]);
            close(output_pipe[0]);
    
            start = clock();
    
            if (write(output_pipe[1], &ch, 1) == -1)
            {
                    perror("Parent write");
                    close(input_pipe[0]);
                    close(output_pipe[1]);
                    exit(1);
            }
    
            if (read(input_pipe[0], &ch, 1) <= 0)
            {
                    perror("Parent read");
                    close(input_pipe[0]);
                    close(output_pipe[1]);
                    exit(1);
            }
    
            close(input_pipe[0]);
            close(output_pipe[1]);
    
            return start;
    }

    void context_switch()
    {
            int child_to_parent[2];
            int parent_to_child[2];
            pid_t child_pid;
            clock_t start, finish;
            double benchmark;
    
            if (pipe(child_to_parent) == -1)
            {
                    perror("Create pipe: child_to_parent");
                    exit(1);
            }
    
            if (pipe(parent_to_child) == -1)
            {
                    perror("Create pipe: parent_to_child");
                    exit(1);
            }
    
            child_pid = fork();
    
            switch(child_pid) {
                    case -1: perror("fork"); exit(1);
                    case 0: finish = cs_do_child(parent_to_child, child_to_parent); exit(0);
                    default: start = cs_do_parent(child_to_parent, parent_to_child); break;
            }
    
            benchmark = ((double)(finish - start)) / CLOCKS_PER_SEC;
    
            printf("%f\n", benchmark);
    }
    Yes, dividing by CLOCKS_PER_SEC gives me seconds, not microseconds, but right now I'm just trying to get realistic output; I'll change it later.

    Output:
    -12.854472

  2. #2
    quzah (ATH0)
    Join Date
    Oct 2001
    Posts
    14,826
    You can't get clock measurements that fine without a third-party library of some sort.


    Quzah.
    Hope is the first step on the road to disappointment.

  3. #3
    SKeane (Registered User)
    Join Date
    Sep 2006
    Location
    England
    Posts
    234
    You are trying to subtract the value of a variable set in the child process from a variable set in the parent process. They can't see each other's data. The child gets a copy of the parent's environment; it doesn't see any subsequent changes.
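
    A minimal sketch (not from your code) of what that means: after fork(), parent and child each have their own copy of a variable, so a change in one is invisible to the other.
    Code:
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>
    
    int main(void)
    {
            int x = 0;
    
            if (fork() == 0)
            {
                    x = 42;         /* changes only the child's copy */
                    _exit(0);
            }
    
            wait(NULL);
            printf("parent still sees x = %d\n", x);   /* prints 0 */
            return 0;
    }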

  4. #4
    Salem (and the hat of int overfl)
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    > benchmark = ((double)(finish - start)) / CLOCKS_PER_SEC;
    As SKeane has already said, one of these variables remains uninitialised in the parent.

    You could try something like
    Code:
    {
       int status;
       wait(&status);   /* block until the child has terminated */
       finish = clock();
    }
    benchmark = ((double)(finish - start)) / CLOCKS_PER_SEC;
    But that would include all the time taken to start and terminate the process, as well as the time to send the character.

    Alternatively you could use a counter which isn't on a "per process" basis, and which is very accurate.
    Perform the calculation in the child after you've received the char.
    But even then, you've still got the child process startup time in there.
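
    For example (just a sketch, with error checking on the reads and writes left out), the whole round trip could be timed in the parent with a wall-clock counter such as gettimeofday():
    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    
    int main(void)
    {
            int to_child[2], to_parent[2];
            char ch = 'a';
            struct timeval t0, t1;
    
            if (pipe(to_child) == -1 || pipe(to_parent) == -1)
            {
                    perror("pipe");
                    exit(1);
            }
    
            switch (fork())
            {
                    case -1: perror("fork"); exit(1);
                    case 0:                 /* child: echo one char straight back */
                            read(to_child[0], &ch, 1);
                            write(to_parent[1], &ch, 1);
                            _exit(0);
                    default:                /* parent: time the round trip */
                            gettimeofday(&t0, NULL);
                            write(to_child[1], &ch, 1);
                            read(to_parent[0], &ch, 1);
                            gettimeofday(&t1, NULL);
                            wait(NULL);
                            printf("%ld microseconds\n",
                                   (long)((t1.tv_sec - t0.tv_sec) * 1000000L
                                          + (t1.tv_usec - t0.tv_usec)));
            }
            return 0;
    }
    That still includes the pipe and scheduling overhead in both directions, but it avoids the per-process clock() problem entirely.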
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  5. #5
    Registered User
    Join Date
    Nov 2006
    Posts
    176
    I see it now, thanks. I'm going to have to communicate finish back to the parent... I'm not sure I can wait on the status though; won't that just end up in deadlock, with the parent waiting on the status and the child waiting on the parent?

  6. #6
    Salem (and the hat of int overfl)
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    The parent will have to close its end of the pipe before trying to wait for the child to finish, or send it some kind of message to cause it to finish.
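
    Roughly this order in the parent (a sketch only, reusing status, finish and the pipe names from the earlier posts):
    Code:
    close(output_pipe[1]);  /* parent finished writing to the child */
    close(input_pipe[0]);   /* finished reading the echoed char */
    wait(&status);          /* reap the child once it has exited */
    finish = clock();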
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  7. #7
    Registered User
    Join Date
    Nov 2006
    Posts
    176
    Well, I thought I had it figured out, but I think I'm not using clock() right. I'm on SunOS, and the man page says clock() returns the number of microseconds that have passed since the first call to clock() in the executing process.

    Code:
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>     /* for sleep() */
    #include <sys/types.h>
    
    int main(int argc, char **argv)
    {
            clock_t the_clock;
    
            /* first call to clock in process */
            if (clock() == (clock_t)-1)
                    printf("Error\n");
    
            sleep(1);            /* sleep for 1,000,000 microseconds */
    
            /* clock() should now return approximately 1,000,000 */
            the_clock = clock();
    
            if (the_clock == (clock_t)-1)
                    printf("Error 2\n");
    
            printf("%ld\n", (long)the_clock);
    
            return 0;
    }
    By that definition I assumed the output of this test program would be about 1,000,000, but just like in the main program where I'm using clock(), the function almost always returns 0; in odd cases it has returned 10000 or 20000 (randomly, maybe 1 trial out of 30). Has anyone used clock() on SunOS before who might know what is going on?

  8. #8
    Registered User
    Join Date
    Nov 2006
    Posts
    176
    Never mind, that's wrong... it's the microseconds of CPU time, so sleep() would not add to that, since the process would just be blocked for that second. Maybe the original reply was correct and this is just too small a window to time.
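
    A quick way to see that (a separate toy program, not part of the benchmark): a busy loop makes clock() advance, while sleep() does not.
    Code:
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    
    int main(void)
    {
            volatile long i;
            clock_t before, after;
    
            before = clock();
            sleep(1);                               /* blocked: uses no CPU time */
            for (i = 0; i < 100000000L; i++)        /* busy: CPU time accumulates */
                    ;
            after = clock();
    
            printf("CPU time used: %f seconds\n",
                   (double)(after - before) / CLOCKS_PER_SEC);
            return 0;
    }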

  9. #9
    Registered User
    Join Date
    Nov 2006
    Posts
    176
    Well, if anyone else is looking to measure very small time intervals like I was, check out gethrtime() instead of clock(). It's working well now and giving me results very comparable to other test data I have.
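
    For anyone finding this later, a rough sketch of the gethrtime() idea (Solaris-specific; it returns nanoseconds from a monotonic counter that isn't tied to CPU time):
    Code:
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/time.h>       /* gethrtime() and hrtime_t on Solaris */
    
    int main(void)
    {
            hrtime_t start, finish;
    
            start = gethrtime();
            /* ... write the char to the child and read the echo back ... */
            finish = gethrtime();
    
            printf("%lld nanoseconds\n", (long long)(finish - start));
            return 0;
    }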
