From: bc@freeuk.com   
      
   On 31/10/2025 22:01, Waldek Hebisch wrote:   
   > bart wrote:   
   >> On 30/10/2025 10:15, David Brown wrote:   
   >>> On 30/10/2025 01:36, bart wrote:   
   >>   
   >>>> So, what exactly did I do wrong here (for A68G):   
   >>>>   
   >>>> root@DESKTOP-11:/mnt/c/a68g/algol68g-3.10.5# time make >output   
   >>>> real 1m32.205s   
   >>>> user 0m40.813s   
   >>>> sys 0m7.269s   
   >>>>   
   >>>> This 90 seconds is the actual time I had to hang about waiting. I'd be   
   >>>> interested in how I managed to manipulate those figures!   
   >>>   
   >>> Try "time make -j" as a simple step.   
   >>   
   >>   
   >> OK, "make -j" gave a real time of 30s, about three times faster. (Not   
   >> quite sure how that works, given that my machine has only two cores.)   
   >>   
   >> However, I don't view "-j", and parallelisation, as a solution to slow   
   >> compilation. It is just a workaround, something you do when you've   
   >> exhausted other possibilities.   
   >>   
   >> You have to get raw compilation fast enough first.   
   >    
   >>   
   >> Quite a few people have suggested that there is something amiss about my   
   >> 1:32 and 0:49 timings. One has even said there is something wrong with   
   >> my machine.   
   >   
   > Yes, I wrote this. 90 seconds in itself could be OK; your machine
   > could just be slow. But the numbers you gave clearly show that
   > only about 50% of the time on _one_ core is used to do the build.
   > So something is slowing down your machine. And this is specific to
   > your setup, as other people running the build on Linux get better than
   > 90% CPU utilization. You apparently got offended by this statement.
   > If you are really interested in fast tools you should investigate
   > what is causing this.
   >   
   > Anyway, there could be a lot of different reasons for the slowdown.
   > The fact that you get a 3 times faster build using 'make -j' suggests
   > that some other program is competing for the CPU, and using more jobs
   > allows getting a higher share of it. If that affects only programs
   > running under WSL, then your numbers may or may not be relevant to
   > the WSL experience, but are incomparable to Linux timings. If slowdown
   > affects all programs on your machine, then you should be interested
   > in eliminating it, because it would also make your compiler faster.
   > But that is your machine; if you are not curious what happens, that
   > is OK.
      
      
   I'm really not interested in finding out the ins and outs of my Linux   
   system or messing about with it.   
      
   All I know is that I followed the instructions, and the build time for a
   particular project WAS 90 seconds elapsed, after all that configure stuff.
   It shouldn't be my job to fix any shortcomings.
      
   I wasn't that happy with using '-j' either. Yes, I got a faster time, but
   that looks to me like brushing things under the carpet. What is really
   going on? It's hard to tell because it's all so complicated.
      
   I had a go anyway. I logged the output of a full 'make'. The output   
   (sans some make-lines at each end) was 213 lines: 107 invocations of   
   gcc, and 106 uses of 'mv'.   
      
   I was able to use that output file as a script (and I didn't need   
   'clean' before each run).   
      
   It still took 92 seconds. After I got rid of the 'mv' lines, it was 85
   seconds. I then added some commands - 'echo n' before each compile, and
   'time' on each invocation - to track them individually.
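   Roughly, the replay looked like this (a sketch, not my exact commands;
   the build.log here is fabricated so the script runs anywhere, and on the
   real tree each compile would actually be run and timed):

```shell
#!/bin/sh
# Sketch of replaying a captured build log one command at a time.
# On the real tree, build.log would come from:  make > build.log
# Here a tiny log is fabricated so the script is self-contained.
cat > build.log <<'EOF'
gcc -O2 -c a.c -o a.o
mv a.o obj/a.o
gcc -O2 -c b.c -o b.o
EOF

# Drop the 'mv' lines, keeping only the compile commands.
grep -v '^mv ' build.log > compile.sh

# Announce and (on a real tree) time each invocation individually.
n=0
while IFS= read -r cmd; do
    n=$((n + 1))
    echo "compiling file $n"
    # time sh -c "$cmd"    # uncomment to run/time on the real tree
done < compile.sh
echo "total compiles: $n"
```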
      
   It looks like there are 106 files compiled, and the last use of gcc is
   for linking, which took 3.x seconds. Most compiles were 0.5-0.8 seconds,
   with a few taking 1-2 seconds, all elapsed 'real' time.
      
   In each case, the user time was a fraction of the real time. One that   
   caught my eye was file # 4: 0.450s real, 0.08s user.   
      
   I tried to extract the invocation and simplify it, but it was too   
   complicated. It looks like this (line breaks added):   
      
   gcc -DHAVE_CONFIG_H -I. -I./src/include -D_GNU_SOURCE   
    -DBINDIR='"/usr/local/bin"' -DINCLUDEDIR='"/usr/local/include"'   
    -g -O2 --std=c17 -Wall -Wshadow -Wunused-variable -Wunused-parameter   
    -Wno-long-long -MT ./src/a68g/a68g-a68g-conversion.o   
    -MD -MP -MF ./src/a68g/.deps/a68g-a68g-conversion.Tpo -c   
    -o ./src/a68g/a68g-a68g-conversion.o   
    `test -f './src/a68g/a68g-conversion.c' ||   
    echo './'`./src/a68g/a68g-conversion.c   
      
      
   I've no idea what this is up to. But here, I managed to compile that   
   file my way (I copied it to a place where the relevant headers were all   
   in one place):   
      
    gcc -O2 -c a68g-conversion.c   
      
   Now real time is 0.14 seconds (recall it was 0.45). User time is still   
   0.08s.   
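   (For the record: that backquoted 'test -f ... || echo' fragment seems to
   be a standard Automake idiom for out-of-tree "VPATH" builds - use the
   path as-is if the source is visible from the build directory, otherwise
   prefix it with the source directory. A minimal sketch of the pattern,
   with invented names; in an in-tree build the prefix degenerates to
   './':)

```shell
#!/bin/sh
# Automake-style source lookup, reduced to its essentials.
srcdir=./tree                        # stand-in for Automake's $(srcdir)
mkdir -p "$srcdir"
echo 'int main(void) { return 0; }' > "$srcdir/a68g-demo.c"

# If a68g-demo.c is not in the build directory, prefix with $srcdir.
path="`test -f 'a68g-demo.c' || echo "$srcdir/"`a68g-demo.c"
echo "would compile: $path"
```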
      
   So, what is all that crap that is making it 3 times slower? And do we   
   need all those -Wall checks, given that this is a working, debugged program?   
      
   I suggest a better approach would be to get rid of that rubbish and
   simplify it, rather than keeping it and calling in reinforcements in
   the form of extra cores, don't you think?
      
    > If slowdown   
    > affects all programs on your machine, then you should be interested   
    > in eliminating it, because it would also make your compiler faster.   
      
   That would be interesting. My already heavy 6-pass compiler can manage a   
   sustained 0.5Mlps on the same machine, /and/ under Windows. How much   
   faster can it be?   
      
   OK, I have a way to run my C compiler under Linux. It would be a
   cross-compiler for Windows, and wouldn't be able to generate EXEs (that
   needs access to actual Windows DLLs), but it can generate OBJ files.
      
   It's done via C transpilation, and I compared such versions on both   
   Windows and WSL:   
      
    c:\cx>tim cc -c sql   
    Compiling sql.c to sql.obj   
    Time: 0.187   
      
    root@DESKTOP-11:/mnt/c/cx# time ./cu -c sql.c   
    Compiling sql.c to sql.obj   
      
    real 0m0.316s   
    user 0m0.170s   
    sys 0m0.075s   
      
   The 'user' time looks about the same as what I get on Windows. I just   
   get a longer elapsed time on Linux!   
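   One quick check (a rough sketch using only standard tools; GNU date's
   %N is assumed, and the numbers will vary by machine) is to time a
   do-nothing process, since that roughly bounds the per-invocation launch
   overhead that 'real' includes but 'user' largely does not:

```shell
#!/bin/sh
# Time how long it takes just to launch and exit a trivial process.
# GNU date's %N gives nanoseconds; this is a rough measurement only.
start=$(date +%s%N)
/bin/true
end=$(date +%s%N)
echo "launch overhead: $(( (end - start) / 1000000 )) ms"
```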
      
   (Note: the 'tim' utility on Windows is written to exclude the shell   
   process start overheads, since I want actual compile-time. Normally my   
   compilers are invoked from an IDE program - not using 'system' - so that   
   overhead is not relevant.   
      
   If included, the Windows timing would be 0.21 seconds.)   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   