
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 241,710 of 243,242   
   Waldek Hebisch to Keith Thompson   
   Re: New and improved version of cdecl   
   31 Oct 25 00:27:36   
   
   From: antispam@fricas.org   
      
   Keith Thompson  wrote:   
   > antispam@fricas.org (Waldek Hebisch) writes:   
   > [...]   
   >> Assuming that you have enough RAM you should try at least using   
   >> 'make -j 3', that is allow make to use up to 3 jobs.  I wrote   
   >> at least, because AFAIK cheapest PC CPU-s of reasonable age   
   >> have at least 2 cores, so to fully utilize the machine you   
   >> need at least 2 jobs.  3 is better, because some jobs may wait   
   >> for I/O.   
   >   
   > I haven't been using make's "-j" option for most of my builds.   
   > I'm going to start doing so now (updating my wrapper script).   
   >   
   > I initially tried replacing "make" by "make -j", with no numeric   
   > argument.  The result was that my system nearly froze (the load   
   > average went up to nearly 200).  It even invoked the infamous OOM   
   > killer.  "make -j" tells make to use as many parallel processes   
   > as possible.   
   >   
   > "make -j $(nproc)" is much better.  The "nproc" command reports the   
   > number of available processing units.  Experiments with a fairly   
   > large build show that arguments to "-j" larger than $(nproc) do   
   > not speed things up (on a fairly old machine with nproc=4).  I had   
   > speculated that "make -j 5" might be worthwhile if some processes
   > were I/O-bound, but that doesn't appear to be the case.
      
   I frequently build my project on a few different machines.  My
   machines are typically generously equipped with RAM (compared
   to what the compiler needs).  Measuring several builds, '-j 3'
   gave me the fastest build on a 2-core machine (no hyperthreading),
   and '-j 7' gave the fastest build on an old 4-core machine with
   hyperthreading (so 'nproc' reported 8 cores).  In general,
   increasing the number of jobs increases total CPU time, but real
   time may go down because the extra jobs can use time where the
   CPU(s) would otherwise be idle.  At some number of jobs I get the
   best real time; beyond that, the overhead of running multiple
   jobs seems to dominate and real time goes back up.  If the number
   of jobs is too high I get a slowdown due to lack of real memory.
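   [One way to run such measurements -- a sketch that assumes the
   project's Makefile has a 'clean' target; the job counts tried
   are arbitrary:]

```shell
#!/bin/sh
# Hypothetical timing loop: do a clean rebuild at several job counts
# and report wall-clock seconds for each.  Adjust targets to taste.
for j in 1 2 3 4 6 8; do
    make clean >/dev/null 2>&1
    start=$(date +%s)
    make -j "$j" >/dev/null 2>&1
    end=$(date +%s)
    echo "-j $j: $((end - start))s"
done
```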
      
   On a 12-core machine (24 logical cores) I use '-j 20'.  Increasing
   the number of jobs gives a slightly faster build, but the
   difference is small, so I prefer to keep more cores available
   for interactive use.
      
   Of course, this is a matter of balancing tradeoffs; your builds
   may have different characteristics than mine.  I just wanted to
   say that _sometimes_ going beyond the number of cores is useful.
   IIUC, Bart wrote that he got a 3x speedup using '-j 3' on a
   two-core machine, which is an unusually good speedup.  IME,
   3 jobs on a 2-core machine is normally neutral or gives a small
   speedup.  OTOH, with hyperthreading, activating a logical core
   may slow down its twin, so using fewer jobs than logical cores
   may be better.
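   [A crude sketch of staying below the logical-core count: it simply
   halves `nproc` on the assumption of 2-way SMT, which may not match
   your machine's topology:]

```shell
#!/bin/sh
# Hypothetical: aim at physical rather than logical cores by halving
# nproc, assuming 2-way SMT/hyperthreading.  Crude but simple.
jobs=$(nproc)
if [ "$jobs" -gt 1 ]; then
    jobs=$(( jobs / 2 ))
fi
make -j "$jobs"
```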
      
   --   
                                 Waldek Hebisch   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca