Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c      Meh, in C you gotta define EVERYTHING      243,242 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 241,691 of 243,242   
   David Brown to Keith Thompson   
   Re: New and improved version of cdecl   
   30 Oct 25 12:50:33   
   
   From: david.brown@hesbynett.no   
      
   On 30/10/2025 05:24, Keith Thompson wrote:   
   > antispam@fricas.org (Waldek Hebisch) writes:   
   > [...]   
   >> Assuming that you have enough RAM you should try at least using   
   >> 'make -j 3', that is allow make to use up to 3 jobs.  I wrote   
   >> at least, because AFAIK the cheapest PC CPUs of reasonable age   
   >> have at least 2 cores, so to fully utilize the machine you   
   >> need at least 2 jobs.  3 is better, because some jobs may wait   
   >> for I/O.   
   >   
   > I haven't been using make's "-j" option for most of my builds.   
   > I'm going to start doing so now (updating my wrapper script).   
   >   
   > I initially tried replacing "make" by "make -j", with no numeric   
   > argument.  The result was that my system nearly froze (the load   
   > average went up to nearly 200).  It even invoked the infamous OOM   
   > killer.  "make -j" tells make to use as many parallel processes   
   > as possible.   
   >   
   > "make -j $(nproc)" is much better.  The "nproc" command reports the   
   > number of available processing units.  Experiments with a fairly   
   > large build show that arguments to "-j" larger than $(nproc) do   
   > not speed things up (on a fairly old machine with nproc=4).  I had   
   > speculated that "make -j 5" might be worthwhile if some processes   
   > were I/O-bound, but that doesn't appear to be the case.   
   >   
   > This applies to GNU make.  There are other "make" implementations   
   > which may or may not have a similar feature.   
   >   
      
   Sometimes "make -j" can be problematic, yes.  I don't know if newer   
   versions of GNU make have got better at avoiding being too enthusiastic   
   about starting jobs, but certainly if you have a project where a very   
   large number of compile tasks could be started in parallel, but you   
   don't have the RAM to handle them all, things can go badly wrong.  I've   
   seen that myself too on occasion.  (In the case of cdecl, there are not   
   that many parallel compiles for it to be a risk, at least not on my   
   machine.)   
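   
   [As an editorial aside, not from the thread: GNU make also has a load-
   average cap, -l, which can keep an over-eager "make -j" from burying
   the machine.  A minimal sketch, assuming GNU make and coreutils'
   nproc are installed; /tmp/demo.mk is a throwaway stand-in for a real
   project's Makefile:]
   
   ```shell
   # Throwaway Makefile with a few independent targets (stand-in for a
   # real build); .RECIPEPREFIX avoids needing literal tab characters.
   cat > /tmp/demo.mk <<'EOF'
   .RECIPEPREFIX := >
   all: a b c
   a b c:
   > @echo made $@
   EOF
   
   # Cap parallelism two ways: at most $(nproc) jobs at once, and no new
   # jobs started while the load average is at or above that number (-l).
   make -f /tmp/demo.mk -j "$(nproc)" -l "$(nproc)" all
   ```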
      
   Using "make -j $(nproc)" - or "make -j 4" or "make -j 8" if you know   
   your core count - is a safer starting point.  The ideal number for a   
   given build can vary quite a lot, however.  More parallel processes   
   use more RAM - fine up to a point, but it can mean less RAM left for   
   disk and file caching, and thus slower results overall.  And often   
   cores are not all created equal - with SMT, half your "cores" are not   
   real cores, and some processors mix fast cores with slow low-power   
   cores.  On my work machine with 4 real cores and 4 SMT threads,   
   "make -j 6" is usually optimal for bigger builds.  And then some   
   builds involve significant work other than compiling, and the ideal   
   balance for those tasks may be different.  Of course such fine-tuning   
   only really matters if you are doing the builds a lot.   
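   
   [Editorial sketch, not from the post: one way to find that sweet spot
   empirically is simply to time a clean build at a few -j values.  The
   synthetic Makefile below is a hypothetical stand-in - its sleeping
   targets mimic independent compile steps; substitute your own project's
   clean build to get meaningful numbers.  Assumes GNU make, GNU date,
   and a sleep that accepts fractional seconds:]
   
   ```shell
   # Synthetic Makefile: eight independent targets that each "work" for
   # 0.2 s (sleep stands in for a compile step).
   cat > /tmp/jtest.mk <<'EOF'
   .RECIPEPREFIX := >
   all: t1 t2 t3 t4 t5 t6 t7 t8
   t1 t2 t3 t4 t5 t6 t7 t8:
   > @sleep 0.2
   EOF
   
   # Time the "build" at a few job counts; higher -j finishes the
   # independent targets sooner, up to the machine's core count.
   for j in 1 2 4 8; do
       start=$(date +%s%N)                  # GNU date, nanoseconds
       make -f /tmp/jtest.mk -j "$j" all
       end=$(date +%s%N)
       echo "-j $j: $(( (end - start) / 1000000 )) ms"
   done
   ```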
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca