I need an analogue of the subject, because system() uses fork(). This means that if your program uses x MB of memory and the system has y MB in total, and 2*x > y, then the system() call fails because fork() fails. I need another solution.
Your arithmetic is flawed.
The moment you call fork(), you don't instantly use 2x the memory. Pages are only duplicated when either the child or the parent writes to memory. Until then, pages of memory are shared.
So if your child immediately does an exec(), then the amount of work the OS does in creating the new process is minimised.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
Anyway, I now have trouble with a program that takes about 200 MB of memory on a computer with 512 MB of memory. So after loading, more than 80 MB is free (there is also system memory), and system("bla-bla-bla") fails because of insufficient space. I use the latest 2.6.35 kernel with overcommitting disabled.
So why aren't you using fork/execl directly, rather than adding a layer of indirection using system()?
system() being approximately fork + exec(shell) + fork + execl(cmd)
No swap space?
Is there something about your machine you're not telling us?
For example, making the mistake of assuming this is a regular desktop machine?
No, it's just one old node of a small cluster, diskless and without a video card. Swapping, of course, is disabled too.
P.S. Thank you for such a quick reply. Maybe execve would be the solution? At the beginning of the program (when it's not so big; most of the memory is allocated through malloc later), I would create an executable *.sh file, to which I would later write my command and then execute it via execve?
Yep, of course execve doesn't help me: it completely destroys the process before executing its own commands. After a careful reading of the manual, it looks like the only solution is an early-stage fork plus system() inside the child process.
Well that's why you fork() first!
> Anyway i have now trouble with program of about 200Mb in memory in computer of 512 mb memory
Why are you running a program with such a large foot-print on such a resource constrained system?
Does it really need to use that amount of memory?
It must be using some very large arrays, are the data types the most optimal for size?
What exactly would one of your .sh files look like?
Because if all it does is run another program, then it's a waste of effort.
yep!
It's just what I have... Anyway, that's beside the point.
Yep... It's just a cache of pairs of particles with some conditions. I've already packed 3 pointers into a 5-byte structure instead of the default 12... Bitwise tricks + '#pragma pack' rule!
It's a very simple script. Its main purpose is to pack already-created files with gzip.
No... Why? If everything we discussed above is true, then I'm satisfied enough. I mean, this external program has its own small address space. The main problem is really the fork of the "big" parent process... And I just hope that the other forks inside the system() call inside the child process will copy only the really small address space of that child.
And... please don't even ask me why I've chosen the "overcommitting disabled" memory control. I've just read The Linux kernel: Memory
and understood that the OOM killer is not for me...
I'm suggesting that if your shell script is nothing more than
Code:
#!/bin/sh
gzip file
then you can save yourself a whole process (or two) by doing
Code:
if ( fork() == 0 ) {
    char *args[] = { "/bin/gzip", "file", NULL };
    execv( args[0], args );
    _exit( 127 );  /* only reached if execv fails */
}
Like I said, modern fork() implementations use copy-on-write, so the only memory page that gets duplicated is the page at the top of the stack (where args ends up being stored).
After the execv(), the parent process is as it was; all the memory allocated to the 'clone' vanishes, and the gzip process uses whatever memory it was going to use.
What does your large footprint program do after running this child process?
A lot more work, or does it soon exit? More importantly, does it depend on the output of the spawned process?
In other words, would this work?
> And.... don't please even ask me why i've choosed "overcomitting disabled" system's memory control.
Code:
#!/bin/sh
./myLargeProgram
gzip file
Ah, but that's perhaps why fork() would never work for you.
If your process is already using >50% of physical memory, then you can never call fork() AT ALL.
The OS immediately sees that it would need more pages than it is allowed to manage, and just barfs. You and I know that it is never going to have to duplicate every single page of memory, but the OS doesn't.
With overcommit enabled, all the pages in the child process quietly vanish at the execv() call, and the overcommitted state is resolved.
The OOM killer is only a problem if you have a dynamic system creating processes all the time. Your description suggests a quiet system with "one large elephant in the room". I'm guessing that you know it will never allocate more memory than you have already.
If you still want fork-like behaviour, consider clone
It is fork() on drugs - it has many buttons, levers and dials to play with to fine tune control over what happens.
Code:
NAME
       clone, __clone2 - create a child process

SYNOPSIS
       #define _GNU_SOURCE
       #include <sched.h>

       int clone(int (*fn)(void *), void *child_stack,
                 int flags, void *arg, ...
                 /* pid_t *ptid, struct user_desc *tls, pid_t *ctid */ );
> It's very simple script. Main purpose - pack already created files with gzip.
Or you could just link in the zlib library and "roll your own" zip utility to compress a single file using inline code in the current process.
No need to fork() or exec() anything.
Now you completely understand me. The problem is not in the fork implementation but rather in memory management. Anyway, I've decided to do a lightweight system()-like call with an early-stage fork. An implementation with zlib would work for me in this case, but I want a general system()-script-like call. So the only question left for this thread is that "AT ALL": will even the lightweight version not work before the main mallocs? I think it will! I guess...
Besides, I've already almost implemented an (encapsulated!) version. Of course, there are three functions (an init before the main mallocs, a process call somewhere inside, and a free to release the child process) instead of one system() call. For the details of this implementation you are welcome in the adjacent thread, because I have new, interesting questions about it.