From: blockedofcourse@foo.invalid   
      
   On 2/21/2026 9:53 AM, Waldek Hebisch wrote:   
   > Don Y wrote:   
   >> On 2/20/2026 11:07 PM, bitrex wrote:   
   >>> Is this a high-frequency trading box? Are you building a salami-slicer   
   >>   
   >> No. More IoT but with all of the processing in the leafs [sic]   
   >> instead of an unscalable "central processor" trying to coordinate   
   >> the activities of motes.   
   >>   
   >> Why put a processor in a mote if all it's going to do is sense
   >> something or control an actuator -- based on decisions made by
   >> some other "smarter" entity? Once you have the CPU and connectivity,
   >> why not migrate the smarts out to the periphery? (Folks are slowly
   >> starting to realize this is inevitable.)
   >   
   > Well, the point is that a processor close to the hardware can:
   > - do real-time things
   > - reduce the bandwidth needed for communication
   > - reduce the need for wires
   >
   > Such a processor may be quite cheap (I can buy a reasonable MCU at
   > $0.20 per piece and modules with an MCU at $2 each), usually does
   > not need mass storage, and can have tiny RAM. Small MCUs may be
   > cheaper than specialized chips, so it makes sense to use them just
   > to unify hardware and lower the cost.
      
   So, the "product" is a chip wrapped up in pretty paper?   
      
   Of course not!   
      
   You need a power source and conditioning/protection circuitry   
   (for the processor, associated electronics AND anything required   
   by the field).   
      
   You need the field interface, a circuit board, connectors for the   
   field AND the "main/central CPU". And, a box to contain it all.   
   And, some means of interacting with "it" /in situ/ to determine   
   if it is misbehaving (and merits being uninstalled).   
      
   You need to pay to have this installed and the cable(s) run. If   
   new work, you're just paying for wire and time on a jobsite. If   
   old work, the installer is crawling through attics/basements,   
   removing (and later repairing and repainting) wall board, etc.   
   Along with whatever "protection" is needed to keep the signal
   path safe from tampering.
      
   You need to develop and test the software. And, have to ensure   
   an adversary can't just mimic those signals to defeat the device   
   (e.g., encrypted tunnel).   
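   A minimal sketch of that idea in Python (the key name and message
   framing here are hypothetical; a real device would provision a
   per-device key at install time and likely use a full encrypted
   tunnel rather than bare message authentication):

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared key, provisioned at install time.
SECRET_KEY = b"per-device-key-provisioned-at-install"

def sign_report(payload: dict) -> bytes:
    """Serialize a sensor report and append an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return body + b"." + tag.encode()

def verify_report(message: bytes) -> dict:
    """Check the tag before trusting the report; raise on forgery."""
    body, _, tag_hex = message.rpartition(b".")
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag_hex.decode()):
        raise ValueError("forged or corrupted report")
    return json.loads(body)

msg = sign_report({"sensor": "door", "state": "open"})
print(verify_report(msg))
```

   An adversary who can inject bytes on the wire but doesn't hold the
   key can't produce a tag that verifies, so simply mimicking the
   signals no longer defeats the device.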
      
   The difference between a $0.20 MCU and a $20 SoC is just noise   
   in calculations like that!   
      
   > OTOH more complicated algorithms may need a lot of data, large
   > persistent storage, and a lot of RAM. Still, it is likely that a
   > single CPU can do all the needed work. A single CPU makes many
   > things simpler. So unless there is a compelling need for more
   > processing, using relatively dumb peripheral nodes and a slightly
   > more powerful central node makes a lot of sense.
      
   So, how do you process video? Just *digitize* it at the leaf and   
   ship it off to the "central CPU"? How many of those feeds can   
   the CPU process concurrently (you can't ignore camera 1 while   
   you are processing camera 13)? You can't use some cheap/slow   
   interface because the pipe wouldn't be fat enough. So, add   
   magnetics and upgrade the MCU to support a NIC... and a network   
   stack... and a switch...   
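   The back-of-envelope arithmetic makes the point (illustrative
   numbers of my choosing, not anything from the thread): even one
   modest raw feed saturates a gigabit link.

```python
# Bandwidth needed to ship raw video to a "central CPU".
width, height = 1920, 1080   # pixels per frame
bytes_per_pixel = 2          # e.g., YUV 4:2:2
fps = 30

bits_per_feed = width * height * bytes_per_pixel * fps * 8
print(f"one raw feed: {bits_per_feed / 1e6:.0f} Mb/s")

link_bps = 1_000_000_000     # a single gigabit link
feeds = link_bps // bits_per_feed
print(f"raw feeds per gigabit link: {feeds}")
```

   So either the leaves compress/analyze locally, or the central box
   needs a fat pipe (and switch ports) per camera.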
      
   A central processor limits the total amount of "work" that can be   
   done. Crippled leaf processors mean they can't meaningfully *help*.   
   E.g., if I want to scan a recorded OTA broadcast to identify the
   "commercials" (ads) within, I can call on the leaf processor
   that handles the garage (door, etc.) to do that work, as it is
   likely not busy at the moment.
      
   Or, *ask* the processor that handles the weather station if it   
   has any spare resources that I could exploit.   
      
   With a single processor, every node that you add represents   
   more *work* for THAT processor. With powerful nodes, every node   
   brings additional *capabilities* to the problem.   
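   The "ask nodes for spare resources" idea can be sketched like this
   (the node names and load numbers are made up for illustration; a
   real system would measure load and negotiate over the network):

```python
from dataclasses import dataclass

@dataclass
class LeafNode:
    name: str
    load: float  # fraction of CPU currently busy, 0.0..1.0

    def spare_capacity(self) -> float:
        return 1.0 - self.load

def dispatch(job: str, nodes: list) -> LeafNode:
    """Send batch work to the node with the most spare capacity."""
    chosen = max(nodes, key=LeafNode.spare_capacity)
    print(f"dispatching {job!r} to {chosen.name}")
    return chosen

nodes = [
    LeafNode("garage-door", load=0.05),     # nearly idle
    LeafNode("weather-station", load=0.30),
    LeafNode("hvac", load=0.80),
]
winner = dispatch("scan recording for ads", nodes)
```

   Note the inversion: each node you add *raises* the total capacity
   the scheduler can draw on, instead of adding to one CPU's workload.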
      
   > Of course, in commercial settings people work on what they are
   > paid to do. IIUC developers of, say, Home Assistant get no
   > incentive to make it work on really low-cost hardware,
   > so you get requirements like 8 GB RAM and a 16 (or maybe 32) GB
   > filesystem for something that should comfortably run in 32 MB
   > RAM and a 500 MB filesystem. Actually, IIUC comparable functionality
   > was available in the past on much smaller machines than the
   > 32 MB RAM and 500 MB filesystem mentioned above; I am simply adding
   > a lot of slack, to allow higher-level coding and to reduce the need
   > for micro-optimization.
      
   I spent a career driving hardware costs to $0. I had one product that   
   supported 16Kx1 and 64Kx1 DRAMs pluggable in quantities of *1*.   
   I.e., so you could have 7 16Kb devices and 1 64Kb device in the same   
   DRAM bank with the software recognizing the differences in capacity   
   and treating the 64Kb bit position as "bit wide" while the first 16K   
   was treated as byte-wide. I.e., expand memory in 6KB (48Kb) increments   
   just by plugging different devices.   
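   The capacity arithmetic for that mixed bank works out like this
   (my reconstruction of the scheme described above, not the original
   firmware):

```python
# A byte-wide bank built from eight x1 DRAMs. Where all devices have
# bits, the bank is byte-wide; a deeper device's leftover bits are
# used "bit-wide" and packed into bytes by software.

BANK_WIDTH = 8  # eight x1 devices per byte-wide bank

def usable_bytes(device_depths: list) -> int:
    """Bytes addressable given the depth (in bits) of each x1 device."""
    assert len(device_depths) == BANK_WIDTH
    byte_wide = min(device_depths)          # all 8 bits present here
    extra_bits = sum(d - byte_wide for d in device_depths)
    return byte_wide + extra_bits // 8      # pack leftover bits

# Seven 16Kx1 parts plus one 64Kx1 part:
depths = [16 * 1024] * 7 + [64 * 1024]
print(usable_bytes(depths))  # 16KB byte-wide + 48Kb bit-wide = 22528
```

   Each 16Kx1 part swapped for a 64Kx1 part contributes another 48Kb
   of bit-wide storage, i.e., the 6KB increments mentioned above.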
      
   I've written custom floating point packages to reduce the size   
   of each float. Or, expedite certain classes of computations.   
   Because the hardware couldn't "afford" to "do things right".   
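   The originals aren't described here, but the space/precision
   trade-off is easy to show with IEEE half precision (which Python's
   struct module happens to support via the "e" format code):

```python
import struct

# Shrinking each float from 8 bytes to 2, at the cost of precision
# and range (half precision tops out at 65504).
values = [3.14159, -0.001, 65504.0]

for v in values:
    packed = struct.pack("<e", v)            # 2 bytes, not 8
    restored = struct.unpack("<e", packed)[0]
    print(f"{v:>12} -> {len(packed)} bytes -> {restored:.6g}")
```

   A custom package goes further: pick exactly the exponent/mantissa
   split the application needs, and implement only the operations
   (and rounding) the computation actually uses.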
      
   It's a false economy in almost every case! Even for "self-contained"   
   products that can be "installed" by setting them on a countertop and   
   plugging into the mains.   
      
   It completely ignores the externalities that come with products.   
      
   ANY bug costs the customer time/resources. He may not "bill" you for   
   it but it will affect your reputation and possible future sales.   
   You (and he) have incurred a cost by the manifestation of this "defect".   
   If he has to contact you to resolve that bug, it now costs your support   
   staff.   
      
   If it is a genuine bug, then you have to track it down and fix it   
   and push out an update -- possibly to all users and not just the   
   one who complained about it. I don't know many people who welcome
   the news that their "device" is now busied out while being updated.
   AND, they have to trust that the update doesn't change anything
   they didn't expect.
      
   [Don't you just LOVE your periodic MS and desktop app updates?]   
      
   Anything you can do to reduce the cost of development and (ahem)   
   "maintenance" decreases the TCO, even if not measurable or accounted   
   for on a BoM. (If your staff is busy supporting a prior product,   
   then it isn't available to work on NEW products).   
      
   Bigger processors tend to be more amenable to HLLs and better   
   development/diagnostic/debugging tools. You can build mechanisms   
   into the code that minimize latent bugs hiding in the codebase   
   (manifesting AFTER delivery dramatically increases the actual and   
   perceived cost of the product).   
      
   [Imagine a customer having to take that countertop device and   
   bring/ship it to you for "repair"/upgrade. What TOTAL cost   
   for that?]   
      
   Figure a system designer at $250K/yr -- roughly $5K per working
   week. If you trim a week off of his workload, you've saved $5K.
   Likewise for a programmer.
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   