
GCC -j

This is a discussion on GCC -j within the Tech Board forums, part of the Community Boards category; What's with this flag? I'm talking with a friend that is showing me screenshots of Chromium being compiled with -j13 ...

  1. #1
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436

    GCC -j

    What's with this flag?

    I'm talking with a friend who is showing me screenshots of Chromium being compiled with -j13, with all cores on his machine at 100% (htop). However, I've been using GCC for years, with -j4 now, and I don't remember ever seeing all my cores at 100% during compilation.

    Is there some fundamental change between MinGW and GCC here?
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  2. #2
    Unregistered User Yarin's Avatar
    Join Date
    Jul 2007
    Posts
    1,605
    Huh. I can't even find -j in gcc or g++'s man page.

  3. #3
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    Sorry. I didn't make it clear. That's a Make option, not a GCC one.
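    For reference, -j goes on make's command line, not the compiler's. A minimal sketch (the makefile and the /tmp path here are just an illustration):
    Code:
    ```shell
    # Two independent targets: make is free to run both recipes at once.
    cat > /tmp/demo.mk <<'EOF'
    all: a b
    a: ; @echo building a
    b: ; @echo building b
    EOF

    # -j2 allows up to two jobs in parallel; -j"$(nproc)" is a common choice.
    make -f /tmp/demo.mk -j2
    ```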

  4. #4
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498
    Quote Originally Posted by Mario F. View Post
    What's with this flag?

    I'm talking with a friend who is showing me screenshots of Chromium being compiled with -j13, with all cores on his machine at 100% (htop). However, I've been using GCC for years, with -j4 now, and I don't remember ever seeing all my cores at 100% during compilation.

    Is there some fundamental change between MinGW and GCC here?
    Did you compile Chromium (with 4 jobs), or other code?
    It seems to depend on the number of jobs available to run at any given moment, which may not be many when a complicated dependency structure is involved.
    Maybe Chromium's makefile is generated in such a way that it is straightforward enough to allow numerous parallel tasks.
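    The dependency-structure point can be sketched with a toy makefile (hypothetical targets): a straight chain of prerequisites serializes everything no matter how large -j is.
    Code:
    ```shell
    # Each target depends on the previous one, so make can never run
    # more than one recipe at a time, even with -j4.
    cat > /tmp/chain.mk <<'EOF'
    all: c
    c: b ; @echo c
    b: a ; @echo b
    a: ; @echo a
    EOF

    make -f /tmp/chain.mk -j4   # still builds strictly a, then b, then c
    ```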
    Manasij Mukherjee | gcc-4.8.2 @Arch Linux
    Slow and Steady wins the race... if and only if :
    1.None of the other participants are fast and steady.
    2.The fast and unsteady suddenly falls asleep while running !



  5. #5
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    Well, I haven't tried Chromium yet, no. But I plan to tomorrow. However, I'm assured that 100% core usage is a common occurrence on just about any build on Linux with that flag. Whereas on MinGW, I don't remember ever seeing my cores utilized like that. Ever.

    I cannot test this on an actual Linux box because mine is in a VM. But I trust this guy. Besides, he showed me screenshots.

  6. #6
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498
    Quote Originally Posted by Mario F. View Post
    However, I'm assured that 100% core usage is a common occurrence on just about any build on Linux with that flag.
    I compiled the kernel with -j2 recently (on a dual core... planning to buy a six-core soon!).
    But instead of holding at 100%, the CPU usage graph looked somewhat like a DNA helix, oscillating between 100% and about 75%.
    I'll upload a screenshot the next time I do it.



  7. #7
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    Could it be something to do with -j<max_cpu+1> instead of -j<max_cpu>?

    What you see is what I usually get too. With a lot more time spent under 100%, right?

  8. #8
    Registered User manasij7479's Avatar
    Join Date
    Feb 2011
    Location
    Kolkata@India
    Posts
    2,498
    Quote Originally Posted by Mario F. View Post
    Could it be something to do with -j<max_cpu+1> instead of -j<max_cpu>?
    Reason?

    I'll try that, or -j without an argument, next time.
    Quote Originally Posted by man make
    ...
    If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
    ...
    With a lot more time spent under 100%, right?
    You could say so, but it seemed pretty symmetric.



  9. #9
    Registered User
    Join Date
    Nov 2010
    Location
    Long Beach, CA
    Posts
    5,459
    Let's pretend that your computer is basically doing nothing else but running make. If you have 4 cores and compile with -j4, you're getting 4 jobs. Note that all 4 jobs might not be using their respective cores at once, as one could be waiting for I/O to come back from the disk. That's not counted against your processor usage, so you will get less than 100% usage, despite having one job per core. Does your friend really have 12 cores? If he only has 8, then he's much more likely to have 8 jobs all using the CPU to actually compile, and 5 others waiting for disk I/O or their turn on a core.
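    The "waiting for I/O doesn't count against CPU usage" part is easy to see on Linux: a blocked process accrues almost no CPU time. A rough sketch, using fields 14 and 15 of /proc/<pid>/stat (the process's user and system time in clock ticks):
    Code:
    ```shell
    # Start a process that only sleeps (stands in for a job blocked on I/O).
    sleep 2 &
    pid=$!

    sleep 1   # let it exist for a second of wall-clock time

    # utime + stime stays at ~0 ticks for a sleeping process, which is
    # why htop shows such a job contributing roughly 0% CPU.
    awk '{print "CPU ticks used:", $14 + $15}' "/proc/$pid/stat"
    wait
    ```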

  10. #10
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    6 cores, hyperthreaded CPU.

    This must be something with MinGW.

  11. #11
    Registered User
    Join Date
    Nov 2010
    Location
    Long Beach, CA
    Posts
    5,459
    AFAIK, hyperthreading doesn't give two full "processors" per core, it just duplicates registers, MMU stuff, etc. It speeds up the time for context switching between threads/processes by having two complete contexts stored on the CPU instead of the others being out in cache/RAM. There is only one actual computing part of the processor (ALU) per core, so he can only crunch data on 6 threads/processes at once. Drop it down to -j6 or -j7 and see if he gets below 100%.
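    Whatever hyperthreading does internally, the Linux scheduler does expose each hyperthread as a separate CPU, which is what htop enumerates. A quick way to compare the logical and physical counts (assuming util-linux's lscpu is installed; the numbers below depend on your machine):
    Code:
    ```shell
    # Logical CPUs the scheduler (and htop) sees:
    nproc

    # Physical topology; on a 6-core HT chip this shows 2 threads per core.
    lscpu | grep -E '^(CPU\(s\)|Core\(s\) per socket|Thread\(s\) per core)' || true
    ```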

  12. #12
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    But shouldn't HTOP report the "virtual" load on each of the 12 cores (6 real + 6 virtual)?

  13. #13
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,239
    Quote Originally Posted by anduril462 View Post
    AFAIK, hyperthreading doesn't give two full "processors" per core, it just duplicates registers, MMU stuff, etc. It speeds up the time for context switching between threads/processes by having two complete contexts stored on the CPU instead of the others being out in cache/RAM. There is only one actual computing part of the processor (ALU) per core, so he can only crunch data on 6 threads/processes at once. Drop it down to -j6 or -j7 and see if he gets below 100%.
    Hyperthreading is way cooler than just fast context switching. Modern superscalar CPUs can have dozens of instructions in flight simultaneously under good conditions. Hyperthreading is a "hack" (though a pretty advanced one) that allows two register contexts to interleave in the instruction decoder to pump the pipelines fuller. For example if one instruction stream is receiving a 200 cycle cache miss, the other hyperthread could step in and execute its instructions during the cycles that would otherwise be wasted.

    It's actually analogous to the idea of launching, say, 5 threads on a 4-core system for a compile. You might have one thread per core, but when the threads occasionally stall on disk I/O those cores stop doing useful work. Assuming the OS scheduler isn't super slow, it's probably a win to launch double or even triple as many threads as cores.

    For Mario's specific situation I'd recommend running a whole-system profiler to figure out where the compiler threads are sleeping. It's probably inside of an easily identified system call, I suspect waiting for disk IO either reading files or from paging activity.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  14. #14
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,436
    Quote Originally Posted by brewbuck View Post
    For Mario's specific situation I'd recommend running a whole-system profiler to figure out where the compiler threads are sleeping. It's probably inside of an easily identified system call, I suspect waiting for disk IO either reading files or from paging activity.
    Ok. So a full load on all my cores is definitely something I should be seeing. Thanks. I'll look into it asap.

  15. #15
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Mario F. View Post
    Could it be something to do with -j<max_cpu+1> instead of -j<max_cpu>?

    What you see is what I usually get too. With a lot more time spent under 100%, right?
    I can observe that max_cpus + 1 will use 100% of all cores on Linux, at least when it is actually the compiler that is running.

    Quote Originally Posted by anduril462 View Post
    There is only one actual computing part of the processor (ALU) per core, so he can only crunch data on 6 threads/processes at once. Drop it down to -j6 or -j7 and see if he gets below 100%.
    I'd guess that you could then often have only half the real processors running, since the kernel treats all 12 as real and will assign tasks based on that. It will not restrict tasks to just the even-numbered processors or something.

    Seems to me that popular opinion is against enabling hyper-threading on multi-core systems, though.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge
