The problem is that in the reader, you are calling getline(), which drains the pipe that echo was writing to, and then there is a basic race condition between the processes: if the bash script manages to add another line in its loop before the reader loop calls getline() again, then you are in luck. If not, there is no line to get, so getline() fails and the while loop exits.
You could solve this problem a few different ways. You could try backgrounding echo with & in the bash loop, but that still doesn't guarantee it will keep up, and if it does, it may create an undesirable number of forks.
The simplest would be to just feed the whole file at once into the pipe:
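For example, something like this (a sketch: 'aaa' stands in for your data file, PIPE stands in for your "$2", and the background cat plays the role of your C++ reader):

```shell
#!/bin/bash
# One-shot writer: open the pipe once and send the whole file,
# instead of re-opening it for every line.
PIPE="demo_pipe_$$"              # stands in for your "$2"
printf 'line1\nline2\n' > aaa    # demo data file

mkfifo "$PIPE"
cat "$PIPE" > out.txt &          # demo reader (your C++ program's role)
cat aaa > "$PIPE"                # the whole file in one open/write/close
wait
rm -f "$PIPE"
```

Because the writing side opens and closes the pipe exactly once, the reader never sees an empty pipe mid-stream.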
If there is a reason you need to loop through the file one line at a time, this is what I meant by concatenating the data first and then sending it:
Code:
data=""
while IFS= read -r line
do
    data="$data$line\n"
done < 'aaa'
echo -e "$data" > "$2"
The "\n" is appended because read chomps the trailing newline. The -e is important, otherwise echo will output a literal backslash-n and not a newline.
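A quick way to see the difference (this is bash's builtin echo; other shells' echo may behave differently):

```shell
#!/bin/bash
# Without -e, bash's builtin echo leaves the escape sequence alone;
# with -e, \n is turned into a real newline.
echo "a\nb"       # prints the literal text: a\nb
echo -e "a\nb"    # prints two lines: a then b
```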
Finally, if you absolutely have to feed a line at a time into the pipe, then you can do something like this:
Code:
while IFS= read -r line
do
    echo "$line" >> "$2"
done < 'aaa'
echo "***END***" >> "$2"
And the reader:
Code:
#include <cstring>
char line[LINESIZE]; // I'm assuming something like this
while (1) {
    child.getline(line, LINESIZE);
    if (!strcmp(line, "***END***")) break;
    cout << line << endl;
}
The only pitfall with this is if the bash script exits prematurely and never sends ***END***: from the reader's side, the error-checking calls (.good(), .eof(), .fail(), .bad()) may show no perceivable difference between that and simply waiting for more input from the pipe.
It also may not work if the 'child' fstream closes, or has the failbit or eofbit set, when getline() returns from an empty pipe. In that case you could try checking those flags and resetting them with .clear(). It may turn out that you have to re-open the stream each time.
Finally, it's also a busy loop, but probably not too bad a one.
As for adding small delays: don't use them as the sole means of synchronization, but it is fine to combine one with the more reliable techniques above if you think it will smooth things out, or to take the potential teeth out of a busy loop. You can get delays smaller than one second on linux using nanosleep():
SourceForge.net: POSIX timers - cpwiki
Beware the caveat about "granularity" there; don't bother with gaps of less than 10 ms (meaning your 500-line file will take at least 5 seconds).
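On the writer side you don't even need nanosleep(): GNU sleep accepts fractional seconds, so the line-by-line loop can be throttled directly in bash. A sketch, with throttled.txt standing in for your pipe "$2" and a demo 'aaa' file:

```shell
#!/bin/bash
# Throttled line-by-line writer with the ***END*** sentinel.
# OUT stands in for your pipe "$2"; 'aaa' for your data file.
printf 'first\nsecond\n' > aaa
OUT="throttled.txt"
: > "$OUT"

while IFS= read -r line
do
    echo "$line" >> "$OUT"
    sleep 0.01              # ~10 ms gap; GNU sleep takes fractional seconds
done < aaa
echo "***END***" >> "$OUT"
```

Note that fractional sleep is a GNU coreutils extension; on a strictly POSIX sleep you'd be stuck with whole seconds.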