
Forums before death by AOL, social media and spammers... "We can't have nice things"

   sci.electronics.design      Electronic circuit design      143,326 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 142,968 of 143,326   
   Don Y to bitrex   
   Re: Call by reference protection (1/2)   
   20 Feb 26 23:42:44   
   
   From: blockedofcourse@foo.invalid   
      
   On 2/20/2026 10:55 PM, bitrex wrote:   
   > On 2/20/2026 6:09 PM, Don Y wrote:   
   >> On 2/20/2026 2:21 PM, bitrex wrote:   
   >>> On 2/20/2026 12:47 PM, Don Y wrote:   
   >>>> [Almost every piece of code in my system is a service or an agency.   
   >>>> As such, they all try to be N copies of the same algorithm running   
   >>>> on N different instances of objects of a particular type.  Easy   
   >>>> if you *design* for that case; tedious if you adopt /ad hoc/   
   >>>> methods!]   
   >>>   
   >>> Yeah, having mutable state in a multithreaded embedded environment without   
   >>> big-iron tools to manage mutable state across threads (like std::mutex and   
   >>> std::weak_ptr) kind of sucks!!   
   >>>   
   >>> Even for single-threaded embedded stuff I like treating C++ more like a   
   >>> functional language and passing non-const references to anything very   
   >>> rarely, those relationships are hard to reason about.   
   >>   
   >> Something has to "do work" -- i.e., make changes.   
   >>   
   >> E.g., the example of a single frame of video needing to be masked   
   >> can either be done by masking the original frame (thereby changing   
   >> it in the process) *or* by masking a copy of the original frame.   
   >>   
   >> It's up to the goals of the algorithm as to which approach to pursue;   
   >> if you don't need to preserve the original (unmasked) frame, then   
   >> creating a copy of it for the sole purpose of treating it as const   
   >> is wasteful.   
   >>   
   >> OTOH, creating a copy to ensure other actors' actions don't interfere   
   >> with your processing (and the validity of your actions) *has* value   
   >> (in that it leads to more predictable behavior).   
   >>   
   >> Nowadays, it's relatively easy to buy horsepower and other resources   
   >> so the question boils down to how you use them.   
   >>   
   >> [My first "from scratch" commercial product had 12KB of ROM and 256   
   >> bytes of RAM plus the I/Os (motor drivers, etc.).  The cost of just   
   >> the CPU board was well over $400 (when EPROM climbed to $50/2KB).   
   >> Spending $20 on a single node is a yawner...]   
   >>   
   >> Decomposing a design into clients, services and agencies lets it   
   >> dynamically map onto a variety of different hardware implementations   
   >> and freely trade performance, power, size, latency, etc. as needed.   
   >> E.g., each object instance could be backed by a single server   
   >> instance -- or, all object instances can be backed by a single   
   >> server instance -- or, any combination thereof.  Each server can   
   >> decide how much concurrency it wants to support (how many kernel   
   >> threads to consume) as well as how responsive it wants to be (how   
   >> much caching, preprocessing, etc. it uses to meet demands placed   
   >> on it).   
   >   
   > It sounds like you're describing some very hard realtime baremetal system where   
      
   It's actually *soft* as you KNOW you can never meet every deadline   
   (unless you intentionally derate the performance you intend to achieve;   
   you can't guarantee that you can shoot down every incoming MISSILE,   
   even with really deep pockets!  :> )   
      
   This is actually considerably harder than a "hard" real-time system   
   because you have to actively consider what to do WHEN you miss a deadline.   
   And, which tasks/jobs you might want to shed to free up resources to   
   improve your chances of meeting those (certain) deadlines in the future.   
      
   [E.g., stop protecting New England and concentrate your defenses on D.C.]   
      
   > you have the luxury of lots of memory to do very resource-intensive operations   
   > like full copies of video frames on the grounds of "predictability" (I think   
   > most embedded video processing on general-purpose CPUs would try very hard to   
   > avoid doing any full copies), but also can't afford the luxury of an MMU and/or   
   > an RTOS that supports some subset of POSIX, so you can use modern C++ features   
   > like smart pointers and mutexes. Maybe that would add too much overhead.   
      
   I avoid copies as much as possible.  I fiddle with the MMU to give   
   the appearance of a copy without actually having to move all of the bytes   
   from one process container to another.   
      
   OTOH, if the code "fails to cooperate", then I have to bear the cost of   
   making that "anonymous" duplicate to protect the code from itself.   
      
   This, eventually, translates into a resource cost penalty (I maintain   
   ledgers and resource quotas for each process/job) so a shitty developer   
   discovers that his "product" abends more frequently than other "products".   
      
   [It is incredibly tedious to consider how to keep "foreign" developers   
   from being piggish with resources.  One easy way is to elide their jobs   
   when resources are scarce -- let THEM answer the support calls from   
   their customers as to why THEIR product keeps crashing...]   
      
   > These are unusual requirements, to me anyway, I've done a decent amount of   
   > embedded programming over the years but IDK how much advice I can give here.   
   > For "big iron"-like tasks like multi-thread processing of large amounts of data   
   > having an MMU and an embedded OS makes life a lot easier.   
      
   I have adopted the MULTICS philosophy of "Computing as a Service"; expect   
   it to be available just as much as any other "utility" (e.g., hot swapping   
   hardware and software in live systems, running diagnostics alongside   
   regular applications, identifying software and hardware "problems" before   
   they manifest, etc.)   
      
   I have about a thousand cores and almost a TB of RAM in my alpha site.   
   One of the beta sites is planned as almost double that.   
      
   > For simple devices like 8 bitters which are more used as "process controllers"   
   > rather than to perform hardcore calculations like working on video, I find   
   > cooperative multitasking among state machines works pretty well; what mutable   
   > state there is is mostly stored in the machine states.   
      
   I let each job decide how it wants to represent its state in the event   
   that it is killed off and restarted at a later time.  Sort of like   
   checkpointing but letting the job (tasks) figure out what they need   
   to persist in order to accomplish this.   
      
   E.g., if you are transcoding video, then remembering where (time/frame   
   offset) in the input stream you were lets you return to that point when   
   you are restarted (it would be silly to start over from the beginning,   
   especially as you may AGAIN be killed off before finishing!)   
      
   This lets me avoid saving the entire process state -- do you really   
   care about the value of the PC when you were terminated?  Will it make   
   a *material* difference if your entire process state could be restored??   
   Or, is the "video offset" enough to achieve MOST of the value you need?   
      
   Because resources are finite and workload isn't, individual jobs need   
   to be aware of the resources they consume and HOW they use them.  As   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca