From: cr88192@gmail.com   
      
   On 8/28/2024 3:54 AM, Paul Edwards wrote:   
   > "BGB" wrote in message   
   > news:vamjih$3d4m7$1@dont-email.me...   
   >   
   >   
   >> It just seems like a lot of other people are getting lots of
   >> recognition, seem to be doing well financially, etc.
   >>   
   >> Meanwhile, I just sort of end up poking at stuff, and implementing   
   >> stuff, and it seems like regardless of what I do, no one gives a crap,   
   >> or like I am little better off than had I done nothing at all...   
   >   
   > Did you consider asking anyone at all if they were after   
   > something?   
   >   
      
   I mostly just did stuff, occasionally posting about it on Usenet,   
   occasionally on Twitter (now known as X...).   
      
      
   For my 3D engines, I posted videos on YouTube, with relatively little
   feedback; in the days of the first engine, it was mostly people
   complaining about "ugly graphics" and "looks like Minecraft" (which
   was sorta the thing).
      
   The 2nd engine looked even more like Minecraft, apart from taking
   minor influences from things like Undertale and Homestuck (though,
   generally, it was closer to Minecraft than Undertale, apart from the
   use of billboard sprites for things like NPCs).
      
      
   The 3rd engine had some particularly awful sprites, mostly because:   
   The 2nd engine sprites were generally fairly high res;   
   For the 3rd engine I just quickly drew some stuff and called it good;   
   But, the 3rd engine was more meant as a technical proof of concept than   
   an actual game.   
      
   Arguably, I could have tried to "lean into it", maybe do characters as   
   32x64 pixel art style (with nearest sampling), but didn't bother.   
      
   Terrain generation algorithms:   
    1st engine had used Perlin Noise.   
    2nd engine had just used X/Y/Z hashing functions and interpolation.   
    3rd engine, basically same as 2nd engine.   
      
   Hash functions are generally better behaved than Perlin Noise, though
   some care is needed, as a poor hash may lead to obvious repeating
   patterns.
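The hash-and-interpolate approach can be sketched roughly as follows (a hypothetical integer hash, not the one either engine actually used): hash the eight lattice corners around a point to values in [0,1), then trilinearly interpolate, giving Perlin-like "value noise" without gradient vectors:

```c
#include <stdint.h>

/* Hypothetical lattice-point hash (not the engine's actual one).
   Mixes the coordinates with large odd constants so nearby inputs give
   unrelated outputs; a poor mix here is exactly what causes visible
   repeating patterns in the terrain. */
static uint32_t hash3(int x, int y, int z)
{
    uint32_t h = (uint32_t)x * 0x8DA6B343u ^
                 (uint32_t)y * 0xD8163841u ^
                 (uint32_t)z * 0xCB1AB31Fu;
    h ^= h >> 13; h *= 0x5BD1E995u; h ^= h >> 15;
    return h;
}

/* Hash mapped to [0,1). */
static double hval(int x, int y, int z)
{
    return hash3(x, y, z) * (1.0 / 4294967296.0);
}

static double lerp(double a, double b, double t) { return a + (b - a) * t; }

/* Value noise: hash the 8 surrounding lattice corners and trilinearly
   interpolate. Assumes non-negative coordinates (the (int) casts
   truncate toward zero, which misbehaves for negative inputs). */
double noise3(double x, double y, double z)
{
    int xi = (int)x, yi = (int)y, zi = (int)z;
    double tx = x - xi, ty = y - yi, tz = z - zi;

    double c00 = lerp(hval(xi, yi,   zi  ), hval(xi+1, yi,   zi  ), tx);
    double c10 = lerp(hval(xi, yi+1, zi  ), hval(xi+1, yi+1, zi  ), tx);
    double c01 = lerp(hval(xi, yi,   zi+1), hval(xi+1, yi,   zi+1), tx);
    double c11 = lerp(hval(xi, yi+1, zi+1), hval(xi+1, yi+1, zi+1), tx);

    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}
```

As usual with terrain generation, several octaves of this at different frequencies would be summed; one octave alone looks blobby.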
      
      
      
   Eventually, I mostly gave up on gamedev, as I couldn't seem to come up   
   with anything that anyone seemed to care about, and my own motivation in   
   these areas had largely dried up (and most of the time, I ended up being   
   more motivated to fiddle with technical stuff, than to really do much in   
   artistic/creative directions; as "artistic creativity" seems to be an   
   area where I am significantly lacking).   
      
      
   >> Where, for my own ISA, I am using BGBCC.   
   >> BGBCC is ~ 250 kLOC, and mostly compiles C;   
   >   
   > We have struggled and struggled and struggled to try to get   
   > a public domain C90 compiler written in C90 to produce 386   
   > assembler.   
   >   
   > There have been a large number of talented people who tried   
   > to do this and fell flat on their face. I never even tried.   
   >   
   > The closest we have is SubC.   
   >   
   > Is this a market gap you are able and interested in filling?   
   >   
   > By either modifying BGBCC (and making public domain if   
   > it isn't already), or using your skills to put SubC over the line?   
   >   
      
   It is MIT licensed, but doesn't currently produce x86 or x86-64 (as I   
   mostly just used MSVC and GCC for PC based development).   
      
   Rather, the backends it currently has are:
     BJX2
     BJX1 and SH-4 (old)
     BSR1 (short lived)
     Another custom ISA, inspired by SuperH and MSP430.
   Very early versions targeted x86 and x86-64, but this backend was
   dropped long ago. I did briefly attempt a backend for 32-bit ARM (in
   a different fork), but it was not kept: performance of the generated
   code was quite terrible, and it didn't really seem worth the bother
   at the time.
      
   Much of the current backend was initially derived from an 'FRBC'   
   backend, which was an attempt to do a Dalvik style register IR.   
   The FRBC VM was dropped: while fast, it was very bulky in terms of
   code footprint (a combinatorial mess). But, at the time, it wasn't a
   big step to go from a register IR to an actual CPU ISA, and (for a
   sensibly designed ISA) it is possible to emulate things at speeds
   similar to what one could get with a comparable VM.
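A Dalvik-style register VM of this sort boils down to a dispatch loop over fixed-width instructions that name their registers directly (a hypothetical toy encoding here, not FRBC's actual one):

```c
#include <stdint.h>

/* Hypothetical 3-register bytecode, loosely Dalvik-style; not FRBC's
   actual encoding. One 32-bit word per op: [op:8][rd:8][rs:8][rt:8].
   For LDI, the low 16 bits are a signed immediate instead of rs/rt. */
enum { OP_LDI, OP_ADD, OP_MUL, OP_HALT };

#define ENC(op, rd, rs, rt) \
    (((uint32_t)(op) << 24) | ((uint32_t)(rd) << 16) | \
     ((uint32_t)(rs) << 8) | (uint32_t)(rt))

/* Runs until OP_HALT, then returns the named register. Toy code: the
   16-entry register file is not bounds-checked against the encoding. */
int32_t run(const uint32_t *code)
{
    int32_t reg[16] = {0};
    for (;;) {
        uint32_t w = *code++;
        uint8_t op = (uint8_t)(w >> 24), rd = (w >> 16) & 255;
        uint8_t rs = (w >> 8) & 255, rt = w & 255;
        switch (op) {
        case OP_LDI:  reg[rd] = (int16_t)(w & 0xFFFF); break;
        case OP_ADD:  reg[rd] = reg[rs] + reg[rt];     break;
        case OP_MUL:  reg[rd] = reg[rs] * reg[rt];     break;
        case OP_HALT: return reg[rd];
        }
    }
}
```

Because operands are registers rather than an implicit stack, each instruction does more work per dispatch, which is much of why a register IR interprets faster than a stack one; the cost is the combinatorial bulk in the opcode space that the post mentions.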
      
   My current emulator (for BJX2) is kinda slow, but this is mostly
   because it tries to be cycle-accurate; as long as it remains (on the
   PC side of things) faster than the CPU core on the target FPGA, this
   is good enough...
      
      
      
   AFAIK, whether declaring something as public domain is legally   
   recognized depends on jurisdiction. I think this is why CC0 exists.   
      
   Personally, I am not all that likely to bother going after anyone who
   breaks the terms of the MIT license, as it is pretty close to "do
   whatever"; similarly for 3-clause BSD.
      
   BGBCC is also written in a more C95 style, making significant use of
   // comments and "long long" and similar; more or less the C dialect
   that MSVC supported until around 2015 or so (when they started adding
   C99 stuff).
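A rough sketch of what that dialect looks like in practice (the function here is purely illustrative):

```c
/* "C95 plus a few extras": C90 core, plus '//' comments and
   'long long' (for which older MSVC also had '__int64'), but without
   C99 features like mixed declarations, designated initializers, or
   variable-length arrays. */

long long mulsq(long long x)
{
    long long y;        /* declarations still at block start, C90 style */
    y = x * x;
    return y;           // '//' comments, borrowed from C++/C99
}
```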
      
      
      
   I had at one point wanted to try to make a smaller / lighter weight C   
   compiler, but this effort mostly fizzled out (when it started to become   
   obvious that I wasn't going to be able to pull off a usable C compiler   
   in less LOC than the Doom engine, which was part of the original design   
   goal).   
      
   I had also wanted to go directly from ASTs to ASM, say:   
    Preproc -> Parser/AST -> ASM -> OBJ -> Binary   
   Vs:   
    Preproc -> Parser/AST -> RIL -> 3AC -> Machine Code -> Binary   
      
      
   But, likely the RIL and 3AC stages are in fact useful.
   And, it now seems like a stack-based IR (for intermediate storage) has   
   more advantages than either an SSA based IR (like in Clang/LLVM) or   
   traditional object files (like COFF or ELF). Well, except in terms of   
   performance and memory overhead (vs COFF or ELF), where in this case the   
   "linker" needs to do most of the heavy lifting (and needs to have enough   
   memory to deal with the entire program).   
      
   A traditional linker need only deal with compiled machine-code, so is   
   more a task of shuffling memory around and doing relocs; with the   
   compiler parts only needing to deal with a single translation unit.   
   Though, the main "highly memory intensive" part of the process tends to   
   be parsing and dealing with ASTs, which is gone by the time one is   
   dealing with a stack bytecode; but, there is still the memory cost of   
   translating the bytecode into 3AC to actually compile stuff. This   
   doesn't ask much by modern PC standards, but is asking a lot when RAM
   is measured in MB and one wants to be able to run stuff without an
   MMU (it is a downside if the compiler uses enough RAM that virtual
   memory becomes essentially mandatory just to run it).
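The usual trick for the stack-bytecode-to-3AC step is to simulate the operand stack at translate time, handing each pushed value a fresh virtual register (a hypothetical toy IR here, not RIL's actual opcodes):

```c
#include <stdio.h>

/* Hypothetical toy stack ops; not RIL's actual encoding. */
enum { S_PUSH, S_ADD, S_MUL, S_END };

typedef struct { int op; int imm; } StackOp;

/* Translate stack code to 3AC by tracking, per stack slot, which
   virtual register currently holds that value. Each result gets a
   fresh temp, so the output is already close to SSA form. Writes the
   listing into 'out' and returns the number of temps used. */
int to3ac(const StackOp *code, char *out)
{
    int stack[64];          /* virtual-register id per stack slot */
    int sp = 0, ntmp = 0;

    for (; code->op != S_END; code++) {
        switch (code->op) {
        case S_PUSH:
            stack[sp++] = ntmp;
            out += sprintf(out, "t%d = %d\n", ntmp++, code->imm);
            break;
        case S_ADD: case S_MUL: {
            int b = stack[--sp], a = stack[--sp];
            stack[sp++] = ntmp;
            out += sprintf(out, "t%d = t%d %c t%d\n", ntmp++, a,
                           code->op == S_ADD ? '+' : '*', b);
            break;
        }
        }
    }
    return ntmp;
}
```

For example, the stack program PUSH 2, PUSH 3, PUSH 4, MUL, ADD comes out as t0 = 2; t1 = 3; t2 = 4; t3 = t1 * t2; t4 = t0 + t3. The memory cost the post describes comes from these temps and their live ranges, which exist per function being compiled rather than per bytecode blob.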
      
      
   But, RIL's design still leaves some things to be desired. As-is, it   
   mostly exists as big linear blobs of bytecode, and the compiler needs to   
   deal with the whole thing at once. This mostly works for a compiler, but   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   