Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.lang.c    |    Meh, in C you gotta define EVERYTHING    |    243,242 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 241,692 of 243,242    |
|    bart to David Brown    |
|    Re: New and improved version of cdecl (1    |
|    30 Oct 25 12:07:48    |
From: bc@freeuk.com

On 30/10/2025 10:15, David Brown wrote:
> On 30/10/2025 01:36, bart wrote:

> Try "make -j" rather than "make" to build in parallel. That is not the
> default mode for make, because you don't lightly change the default
> behaviour of a program that millions use regularly and have used over
> many decades. Some build setups (especially very old ones) are not
> designed to work well with parallel building, so having the "safe"
> single-task build as the default for make is a good idea.
>
> I would also, of course, recommend Linux for these things. Or get a
> cheap second-hand machine and install Linux on that - you don't need
> anything fancy. As you enjoy comparative benchmarks, the ideal would be
> duplicate hardware with one system running Windows, the other Linux.
> (Dual boot is a PITA, and I am not suggesting you mess up your normal
> daily-use system.)
>
> Raspberry Pis are great for lots of things, but they are not fast for
> building software - most models have too little memory to support all
> the cores in big parallel builds, they can overheat when pushed too far,
> and their "disks" are very slow. If you have a Pi 5 with lots of RAM,
> and use a tmpfs filesystem for the build, it can be a good deal faster.
>
>>> (And my computer's CPU was about 30% busy doing other productive tasks,
>>> such as playing a game, while I was doing those builds.)
>>>
>>> So, you are exaggerating, mismeasuring or misusing your system to get
>>> build times that are well over an order of magnitude worse than
>>> expected. This follows your well-established practice.
>>
>> So, what exactly did I do wrong here (for A68G):
>>
>> root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output
>> real    1m32.205s
>> user    0m40.813s
>> sys     0m7.269s
>>
>> This 90 seconds is the actual time I had to hang about waiting. I'd be
>> interested in how I managed to manipulate those figures!
>
> Try "time make -j" as a simple step.

OK, "make -j" gave a real time of 30s, about three times faster. (Not
quite sure how that works, given that my machine has only two cores.)

However, I don't view "-j", and parallelisation, as a solution to slow
compilation. It is just a workaround, something you do when you've
exhausted other possibilities.

You have to get raw compilation fast enough first.

Suppose I had the task of transporting N people from A to B in my car,
but I can only take four at a time and have to get them there by a
certain time.

One way of helping out is to use "-j": get multiple drivers with their
own cars to transport them in parallel.

Imagine however that my car and all those others can only go at walking
pace: 3mph instead of 30mph. Then sure, you can recruit enough
volunteers to get the task done in the necessary time (putting aside the
practical details).

But can you see a fundamental problem that really ought to be fixed
first?

>> But I pick up things that nobody else seems to: this particular build
>> was unusually slow; why was that? Perhaps there's a bottleneck in the
>> process that needs to be fixed, or a bug, that would give benefits
>> when it does matter.
>
> Do you think there is a reason why /you/ get fixated on these things,
> and no one else in this group appears to be particularly bothered?
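As an aside, the quoted "time" figures themselves suggest one plausible
answer to the two-cores/3x puzzle. This is only a sketch using the
numbers quoted above; the interpretation (that the gap between real and
user+sys time is time spent blocked on I/O, so "-j" overlaps waiting
with compiling) is my assumption, not something the thread confirms:

```shell
# Back-of-envelope arithmetic from the quoted "time make" output.
# Assumption: the real-vs-CPU gap is waiting (I/O etc.), which "-j"
# can overlap with useful work - hence >2x speedup on 2 cores.
REAL_SERIAL=92                 # real 1m32s
CPU=48                         # user 40s + sys 7s, rounded
CORES=2
WAIT=$((REAL_SERIAL - CPU))    # time the serial build spent not computing
FLOOR=$((CPU / CORES))         # best-case wall time for the CPU work alone
echo "serial wait: ${WAIT}s, parallel CPU floor: ${FLOOR}s"
```

On those figures the serial build idled for roughly 44 of its 92
seconds, and the CPU work alone needs at least 24s on two cores - so a
30s parallel wall time is entirely consistent.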
> Usually when a person thinks that they are seeing something no one else
> sees, they are wrong.

Quite a few people have suggested that there is something amiss about my
1:32 and 0:49 timings. One has even said there is something wrong with
my machine.

You have even suggested I have manipulated the figures!

So was I right in sensing something was off, or not?

> And I fully understand that build times for large projects are
> important, especially during development.
>
> But I do not share your obsession that compile and build times are the
> critical factor or the defining feature for a compiler (or toolchain in
> general).

I find fast compile times useful for several reasons:

*I develop whole-program compilers* This means all sources have to be
compiled at the same time, as there is no independent compilation at the
module level. The advantage is that I don't need the complexity of
makefiles to help decide which dependent modules need recompiling.

*It can allow programs to be run directly from source* This is something
that is being explored via complex JIT approaches. But my AOT compiler
is fast enough that that is not necessary.

*It also allows programs to be interpreted* This is like run-from-source,
but the compilation is faster as it can stop at the IL. (E.g. sqlite3
compiles in 150ms instead of 250ms.)

*It can allow whole-program optimisation* This is not something I take
much advantage of yet. But it allows a simpler approach than either LTO
or somehow figuring out how to create a one-file amalgamation.

So it enables interesting new approaches. Imagine if you download the
CDECL bundle and then just run it without needing to configure anything,
or having to do 'make', or 'make -j'.
This is a demo which runs my C compiler instead of CDECL. The C
compiler source bundle is the file cc.ma (created using 'mm -ma cc'):

 c:\demo>dir
 30/10/2025 11:31    648,000 cc.ma
 26/09/2025 14:44         60 hello.c

Now I run my C compiler from source:

 c:\demo>mm -r cc hello
 Compiling cc.m to cc.(run)
 Compiling hello.c to hello.exe

Magic! Or, since 'cc' also shares the same backend as 'mm', it can also
run stuff from source (but is limited to single-file C programs):

 c:\demo>mm -r cc -r hello
 Compiling cc.m to cc.(run)
 Compiling hello.c to hello.(run)
 Hello, World!

Forget ./configure, forget make. Of course you can do the same sort of
thing conventionally - maybe there is a 'make -run' - but the difference
is that the above is instant.

> This is not a goal most compiler vendors have. When people are not
> particularly bothered about the speed of compilation for their files,
> the speed is good enough - people are more interested in other things.
> They are more interested in features like better checks, more helpful
> warnings or information, support for newer standards, better
> optimisation, and so on.

See the post from Richard Heathfield where he is pleasantly surprised
that he can get a 60x speedup in build time.

People like fast tools!

> Mainstream compiler vendors do care about speed - but not about the
> speed of the little C programs you write and compile. They put a huge
> amount of effort into the speed for situations where it matters, such as
> for building very large projects, or building big projects with advanced

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca