Thread: Timeout with FILE* handle

  1. #1
    Registered User
    Join Date
    Aug 2002
    Posts
    351

    Timeout with FILE* handle

    With reference to the following thread:

    http://www.cprogramming.com/cboard/s...threadid=35764

    Select and poll work with integer file descriptors.

    Are there alternative timeout methods (excluding signal alarms) that can be applied to stream functions that deal with a FILE* handle?

    In addition, the pclose() hang seems to be a bug in popen() (not closing all fds), judging by general comments on the problem.

    What do you think, vVv?

    TIA, rotis23

  2. #2
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    I've solved the problem using select with popen, fread and pclose by converting the FILE* handle to a file descriptor.
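
    For reference, the shape of what I ended up with is roughly this (a simplified sketch - the function name is mine, error handling is minimal and the command is whatever you pass in):
    Code:
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Sketch: read from a popen()ed stream with a select() timeout by
       converting the FILE* to an integer descriptor with fileno(). */
    int read_with_timeout(const char *cmd, char *buf, size_t len, int seconds)
    {
        FILE *fp;
        fd_set readfds;
        struct timeval tv;
        int fd, rc;
        size_t n = 0;

        fp = popen(cmd, "r");
        if (fp == NULL)
            return -1;

        fd = fileno(fp);                     /* FILE* -> integer file descriptor */
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        tv.tv_sec  = seconds;
        tv.tv_usec = 0;

        rc = select(fd + 1, &readfds, NULL, NULL, &tv);
        if (rc > 0)                          /* data arrived before the timeout */
            n = fread(buf, 1, len - 1, fp);
        buf[n] = '\0';

        pclose(fp);                          /* can still block if the child hangs */
        return (rc > 0) ? (int)n : rc;       /* 0 = timed out, -1 = error */
    }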

    It seems that SIGCHLD causes the same problems as SIGALRM with pclose.

    thanks vVv.

  3. #3
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    OK, just one last implementation issue.

    If pclose is never called, 1500 bytes (according to dmalloc) are left unfreed by iopopen.c.

    I presume the way forward is to not use popen.

    I also need to kill the offending program called by popen, otherwise it remains 'hung' in the process list!
    Last edited by rotis23; 03-17-2003 at 09:38 AM.

  4. #4
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    >If you'd do fork( )/exec*( )/pipe( ) yourself, you had the PID and could do kill( pid, SIGINT );

    Good point. I think that has got to be the long-term solution to the problem.
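
    i.e. something along these lines, I guess (a rough sketch with error handling stripped out - "some_command" and the function name are just stand-ins):
    Code:
    #include <sys/types.h>
    #include <unistd.h>

    /* Sketch of doing the pipe()/fork()/exec*() by hand: the parent keeps the
       child's PID from fork(), so on a timeout it can kill() and reap it itself. */
    pid_t spawn_command(int *read_fd)
    {
        int pfd[2];
        pid_t pid;

        if (pipe(pfd) == -1)
            return -1;

        pid = fork();
        if (pid == 0) {                        /* child */
            close(pfd[0]);
            dup2(pfd[1], STDOUT_FILENO);       /* child's stdout -> write end of pipe */
            close(pfd[1]);
            execl("/bin/sh", "sh", "-c", "some_command", (char *)NULL);  /* placeholder command */
            _exit(1);                          /* only reached if the exec fails */
        }

        close(pfd[1]);                         /* parent keeps the read end */
        *read_fd = pfd[0];
        return pid;                            /* later: kill(pid, SIGINT); waitpid(pid, NULL, 0); */
    }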

  5. #5
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    Just a remark after testing the select method.

    if a timeout occurs, orphaned processes are left in the process list.

    Linux (Red Hat 7.2) seems to clean these up after a period of time - not sure how long or what the mechanism is - some research needed, I think.

  6. #6
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    vVv, apologies for resurrecting this somewhat distant thread.

    I've finally got around to implementing pipe/fork/exec with a select-based timeout. I've got some problems though.

    Defunct processes are being left in the process list until the parent has terminated. These processes are the ones called by execve (the ones in argv):
    Code:
    execve( "/bin/sh", argv, environ );   /* in the forked child */
    exit( 1 );                            /* only reached if execve fails */
    I think the child (which is the parent of the process called by execve) is exiting before the execve'd process terminates, thus leaving it defunct. Obviously, the main parent process still has the file descriptor with which to obtain the output even after ITS child has terminated.

    Now I presume that the child process needs to wait until the execve process has finished before terminating, like with a waitpid function call.

    Am I right? If so, what is the best way to achieve this?
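
    Just so I'm asking the right question - what I'm picturing is that whichever process did the fork() has to collect the exit status of the PID it got back, roughly like this (names made up):
    Code:
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Sketch: once the output has been read, the process that forked reaps
       its child; the "defunct" (zombie) entry disappears as soon as the
       exit status has been collected. */
    void reap_child(pid_t pid)
    {
        int status;

        waitpid(pid, &status, 0);   /* blocks until the child has exited,
                                       then removes its defunct entry */
    }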

    TIA, rotis23

  7. #7
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    Coolio. Thanks, that did the trick!!

    Just one more query.

    How safe is the execve call as opposed to the original popen call? Does execve open a shell as opposed to a pipe?
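
    To be concrete about what I'm asking - as I understand it, and I may well be wrong (the helper names and the script path below are just for illustration):
    Code:
    #include <stdio.h>
    #include <unistd.h>

    /* popen() always goes through a shell: the command string is handed to
       "/bin/sh -c", and what you get back is a pipe to that shell's stdout. */
    FILE *via_popen(const char *cmd)
    {
        return popen(cmd, "r");
    }

    /* exec*() on its own doesn't create a shell or a pipe - it just replaces
       the current process image. A shell is only involved if you exec one: */
    void via_exec_shell(const char *cmd)
    {
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
    }

    /* ...or the script can be exec'd directly, with no shell in between
       (the kernel runs it via its #! line). The path is a placeholder. */
    void via_exec_direct(void)
    {
        char *argv[] = { "run_bash", NULL };
        execv("/path/to/run_bash", argv);
    }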

  8. #8
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    OK, I'm just concerned about general safety of invoking shells in my code.

    BTW, thanks for your help on this whole topic vVv. I owe you a couple of beers.

    rotis23

  9. #9
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    > you should of course change the effective UID to the user's real UID before you execute the shell

    I'm using a daemon. I require the user to be root to start it, but then it setuids to its own user, so no setuid-root. But you know as well as I do that you only need a shell and a vulnerable system to do some interesting stuff.

    So, yes, this was my concern. I know that the exec family of functions is safer than system(), but the way I'm calling it (execve of /bin/sh) still invokes a shell. I need to read up on this really!!
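
    For the record, the privilege drop in the daemon looks roughly like this (just a sketch - "daemonuser" is a made-up account name):
    Code:
    #include <pwd.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Sketch: started as root, then switched to an unprivileged account
       before doing any real work. Drop the group first, then the user -
       once the UID is gone we can no longer change groups. */
    void drop_privileges(void)
    {
        struct passwd *pw = getpwnam("daemonuser");   /* placeholder name */

        if (pw == NULL)
            exit(1);
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
            exit(1);
    }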

    > But, if I ever come to England

    What do you mean... if?

  10. #10
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    vVv, still got problems.

    Take this scenario, based on the examples:

    1) The process called by execve hangs.

    2) In timed_fgets the select 'times out' and returns 0.

    3) try_wait is called; it kills the child that made the execve call and waits to clean it up.

    Problem: the process called by execve is not killed, but remains in the process list (now owned by init, I presume).

    This was my original problem. I need to find a way of killing the child AND all of its children.

    Any ideas?

  11. #11
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    OK, thanks.

    The code I'm using is based on your suggestions.

    Maybe a bit more background. The program is designed this way so that 'small bash scripts (or even small command line programs written in any language)' can be 'plugged into' the system and called by a controlling daemon.

    Now if these hang or screw up in any way, the daemon needs to time out (my original problem a while ago) and clean up any processes that are left - this includes grandchildren or great-grandchildren.

    So I'm using select and pipe/fork/exec to achieve this, similar to your examples.

    I'm using the execv function to call the plug-in scripts. Other than using pid files (which seems messy), what's the best way of passing the PID to the parent??

  12. #12
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    Thanks vVv, I like the idea.

    But it still doesn't give me the control I need over the process's descendants.

    The problem is (I think) that the kill() function (depending on the specified signal - in my case SIGKILL) only kills the process with the specified PID, leaving processes started by that process orphaned and owned by init.
    Code:
    kill(pid, SIGKILL);
    waitpid(pid, NULL, 0);
    Now I presume this would not happen if I called exit() within the process (PID), i.e. the child process is terminated fully.
    Code:
    execv("/bin/sh", argv);   // if this hangs here, it's terminated with kill() after a select timeout
    exit(1);                  // this never gets called
    It would solve my problems if the timeout caused an exit within the child process (with PID).

    Do you agree?

  13. #13
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    Shall we try and make this a 10-pager!!

  14. #14
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    Ahhh, I think I see - correct me if I'm wrong.

    The key point of this design is the use and control of the _exit(0) call.

    This exit will fully terminate the main child and all its descendants.

    Hangs of the process called by execv will never occur because it's been called from a separate thread.
    So, really I need:

    1) A thread to handle the execv and its potential hang

    2) A thread to handle the read and its potential timeout

    3) A thread to handle 1 & 2 that controls and cleans up with an "_exit"

    Can this be the solution????

  15. #15
    Registered User
    Join Date
    Aug 2002
    Posts
    351
    OK, I don't think my problem is clear. It's not with killing multiple children of the parent, but with killing children of the process called by execve.

    This is the flow of execution (with reference to your original pipe_open etc.):

    Three processes:
    parent - a pthread that does the following sequence many times and then exits.
    child - a forked process that calls execve and exits.
    run_bash - a bash script that calls some other stuff and returns data to stdout.

    1) pipe_open gets called and forks

    2) the child calls a bash script called run_bash using execve. run_bash calls some other stuff and hangs.

    3) the parent goes on to execute timed_fgets, which tries to read, but select times out.

    4) the parent calls the following code because select returns 0:
    Code:
    if (waitpid(pid, NULL, WNOHANG) == -1)   /* only kill when waitpid reports an error */
    {
      kill(pid, SIGKILL);
      waitpid(pid, NULL, 0);
    }
    5) Now on my system the child always terminates. But sometimes run_bash is left in the process list (owned by init).

    6) This remains in the process list even when the parent pthread exits with pthread_exit(NULL).

    I don't see how your code is any different to this. I suppose my code only calls the kill() function when waitpid returns an error. Will this kill() call behave exactly the same as _exit?

    Am I missing something in your solution now?

    I'm determined to get to the bottom of this now!!!
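
    The next thing I'm going to look at is process groups: if the child does a setpgid(0, 0) before the execve, then run_bash and anything it starts should share the child's group ID, and the timeout path can signal the whole group with a negative PID. Rough, untested sketch (function names are just for illustration):
    Code:
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* In the forked child, before the execve: put it in its own process
       group, so run_bash and its children all share pgid == child's pid. */
    void child_setup(void)
    {
        setpgid(0, 0);
    }

    /* In the parent, when the select() timeout fires: signal the whole
       group (negative pid), then reap the immediate child. */
    void kill_job(pid_t pid)
    {
        kill(-pid, SIGKILL);
        waitpid(pid, NULL, 0);
    }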
