View Full Version : Timeout with FILE* handle

03-17-2003, 06:01 AM
With reference to the following thread:


Select and poll work with integer file descriptors.

Are there alternative timeout methods (excluding signal alarms) that can be applied to stream functions that deal with a FILE* handle?

In addition, the pclose hang seems to be a bug in popen (not closing all fds), judging by general comments on the problem.

What do you think, vVv?

TIA, rotis23

03-17-2003, 08:27 AM
I've solved the problem using select with popen, fread and pclose by converting the FILE* handle to a file descriptor.

It seems that SIGCHLD causes the same problems as SIGALRM with pclose.

thanks vVv.

03-17-2003, 09:20 AM
OK, just one last implementation issue.

If pclose is never called, then (according to dmalloc) 1500 bytes allocated by iopopen.c are left unfreed.

I presume the way forward is to not use popen.

I also need to kill the offending program called by popen, otherwise it remains 'hung' in the process list!

03-17-2003, 10:18 AM
>If you'd do fork( )/exec*( )/pipe( ) yourself, you had the PID and could do kill( pid, SIGINT );

Good point. I think that has to be the long-term solution to the problem.

03-20-2003, 03:40 AM
Just a remark after testing the select method.

if a timeout occurs, orphaned processes are left in the process list.

Linux (Red Hat 7.2) seems to clean these up after a period of time. I'm not sure how long that takes or what mechanism does it; some research needed, I think.

05-23-2003, 07:11 AM
vVv, apologies for resurrecting this somewhat distant thread.

I've finally got around to implementing pipe/fork/exec with a select based timeout. I've got some problems though.

Defunct processes are being left in the process list until the parent has terminated. These processes are the ones called by execve (the ones in argv):

execve( "/bin/sh", argv, environ );
exit( 1 );

I think the child (which is the parent of the process called by execve) is exiting before the execve'd process terminates, thus leaving it defunct. Obviously, the main parent process still has the file descriptor with which to obtain the output even after its own child has terminated.

Now I presume that the child process needs to wait until the execve process has finished before terminating, like with a waitpid function call.

Am I right? If so, what is the best way to achieve this?

TIA, rotis23

05-23-2003, 07:41 AM
Coolio. Thanks, that did the trick!!
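For anyone following along, the trick was presumably a waitpid on the forked child once the parent has finished reading; a sketch of my understanding (the helper name reap_child is my own):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <errno.h>

/* After the parent has finished reading from the pipe and closed it,
 * reap the forked child so it does not linger as a zombie.
 * `pid` is assumed to be the value returned by fork(). */
void reap_child(pid_t pid)
{
    int status;

    /* Blocking wait: returns once the child has exited, removing its
     * <defunct> entry from the process table. */
    while (waitpid(pid, &status, 0) == -1 && errno == EINTR)
        ;                            /* retry if interrupted by a signal */
}
```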

Just one more query.

How safe is the execve call as opposed to the original popen call? Does execve open a shell as opposed to a pipe?

05-23-2003, 07:55 AM
OK, I'm just concerned about general safety of invoking shells in my code.

BTW, thanks for your help on this whole topic vVv. I owe you a couple of beers.


05-23-2003, 10:31 AM
> you should of course change the effective UID to the user's real UID before you execute the shell

I'm using a daemon. I require the user to be root to start it, but then I setuid to its own user, so no setuid-root. But you know as well as I do that you only need a shell and a vulnerable system to do some interesting stuff.

So, yes, this was my concern. I know the exec family of functions is safer than system(), but the way I'm using them still invokes a shell. I need to read up on this, really!

> But, if I ever come to England

what do you mean... if?

06-02-2003, 07:40 AM
vVv, still got problems.

take this scenario based on the examples:

1) The process called by execve hangs.

2) In timed_fgets the select 'times out' and returns an error.

3) try_wait is called, kills the child that makes the execve call and waits to clean up.

problem: the process called by execve is not killed, but remains in the process list (now owned by init I presume).

This was my original problem. I need to find a way of killing the child AND all of its children.

Any ideas?
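(One approach that would cover the grandchildren — my assumption, not something confirmed in the thread — is to make the forked child the leader of its own process group with setpgid(), and then signal the whole group by passing a negative PID to kill():)

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* In the child, immediately after fork() and before execve(), call:
 *     setpgid(0, 0);    -- child becomes leader of a new process group
 *
 * Then, on a select() timeout, the parent can signal the whole group:
 * a negative PID passed to kill() addresses every process in that
 * group, which includes anything run_bash has spawned. */
void kill_child_group(pid_t child)
{
    kill(-child, SIGKILL);           /* -pid means "entire process group" */
    waitpid(child, NULL, 0);         /* reap the direct child */
}
```

Calling setpgid from both the parent and the child is the usual idiom to avoid a race between fork and exec.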

06-02-2003, 08:35 AM
ok, thanks.

The code I'm using is based on your suggestions.

Maybe a bit more background. The program is designed this way so 'small bash scripts (or even small command line programs written in any language)' can be 'plugged into' the system and called by a controlling daemon.

Now if these hang or screw up in any way, the daemon needs to timeout (my original problem a while ago) and clean up any processes that are left - this includes grandchildren or great grandchildren.

So, I'm using select and pipe/exec/fork to achieve this similar to your examples.

I'm using the execv function to call the plug-in scripts. Other than using pid files (which seems messy), what's the best way of passing the PID to the parent?

06-04-2003, 04:58 AM
Thanks vVv, I like the idea.

But it still doesn't give me the control I need over the process's descendants.

The problem (I think) is that kill() only kills the process with the specified PID (depending on the specified signal - in my case SIGKILL), leaving the processes it spawned orphaned and owned by init.


Now I presume this would not happen if I called exit() within the process (PID), i.e. the child process would be terminated fully.

execv("/bin/sh", argv); // if this hangs here, it's terminated with kill after a select timeout
exit(1);                // this never gets called

It would solve my problems if the timeout caused an exit within the child process (with PID).

Do you agree?

06-04-2003, 04:59 AM
Shall we try and make this a 10-pager!!

06-04-2003, 07:44 AM
Ahhh, I think I see - correct me if I'm wrong.

The key point of this design is the use and control of the _exit(0) call.

This exit will fully terminate the main child and all its descendants.

Hangs of the process called by execv will never occur because they've been called by a separate thread.
So, really I need:

1) A thread to handle the execv and its potential hang

2) A thread to handle the read and its potential timeout

3) A thread to handle 1 & 2 that controls and cleans up with an "_exit"

Can this be the solution????

06-04-2003, 08:25 AM
OK, I don't think my problem is clear. It's not with killing multiple children of the parent, but with killing the children of the execve'd process.

This is the flow of execution (with reference to your original pipe_open etc.):

Three processes:
parent - a pthread that performs the following sequence many times and then exits.
child - a forked process that calls execve and exits.
run_bash - a bash script that calls some other stuff and returns data to stdout.

1) pipe_open gets called and forks

2) the child calls a bash script called run_bash using execve. It calls some other stuff and hangs.

3) the parent goes on to execute timed_fgets, which tries to read, but select causes a timeout.

4) the parent calls the following code because select returns 0:

if(waitpid(pid, NULL, WNOHANG) == -1)
5) Now on my system the child always terminates, but sometimes run_bash is left in the process list (owned by init).

6) This remains in the process list even when the parent pthread exits with pthread_exit(NULL).

I don't see how your code is any different from this. I suppose my code only calls the kill() function when waitpid returns an error. Will this kill() call behave exactly the same as _exit?

Am I missing something in your solution now?

I'm determined to get to the bottom of this now!!!

06-05-2003, 03:00 AM
Yeah (Red Hat 7.2). On Linux a pthread is an implementation of a LinuxThread, isn't it?

The problem is intermittent, but I know how to reproduce it now.

I'll see if I can reproduce with the code you posted. I'll post the results.

vVv, what's the effect of calling _exit before or directly after pthread_exit? I mean, pthread_exit just cleans up the thread's resources, and according to my man page, _exit sends a SIGCHLD signal to the parent and leaves any children to be inherited by the init process, etc.

06-05-2003, 05:53 AM
OK, just tested the last posted script. Same problem.

I changed the code to:

char *script1 = "#!/bin/sh\n"
"printf \"hello world\"";

After the code 'times out' and exits, I get the following still in my process list (ps -auxf):

root 9822 0.0 0.3 2232 1012 pts/5 S 12:37 0:00 /bin/sh my_bash_script.sh

The script is actually doing a snmpdf (disk space analysis using SNMP) on a remote Windows box with a ropey snmp daemon.

Now, is this a Linux issue? Is snmpdf doing something weird?

One hack is to kill the PID of my_bash_script.sh from the C daemon using pid files, but I want to get to the root of the problem.

06-05-2003, 08:31 AM
Well, if it's not my code (i.e. it's the system), then I need to hack:

1) Use pid files to explicitly kill the PID of my_bash_script.sh - I don't really want to have to manage these.

2) Run a cron job that stops the daemon, kills all processes owned by it, and starts it again - I know!

3) Use BSD.

I can't think of anything else!