
Forums before death by AOL, social media and spammers... "We can't have nice things"

   sci.chem      Chemistry and related sciences      55,615 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 55,148 of 55,615   
   Treon Verdery to All   
   If you had a million core CPU, then a 1t   
   07 Sep 22 09:39:12   
   
   From: treon3verdery@gmail.com   
      
   If the 1 THz clock were 90% (or even 7%) data reliable, then each group
   of 10,000 cores would be 99.9% likely to contain a core running an
   as-written copy of the program, and across all million cores it would
   be 99.999% likely. Some programs I perceive are flexible, like neural
   networks and deep learning AI, where 99.9% might sometimes be adequate,
   especially when more learning data could compensate for the 0.1%
   variation at the neural weights, so a 1 THz computer clock speed at a
   neural network could be possible; that is 250 times faster than a 2019
   personal computer or server, which makes neural network computing
   orders of magnitude more affordable than other kinds of computing.
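As a rough sanity check on those percentages: if each core's loaded copy is intact with some probability p, the chance that at least one core in a group of n holds an as-written copy is 1 - (1 - p)^n. The 7% figure and group sizes are the post's; the independence assumption is mine:

```python
# Probability that at least one of n cores holds an intact, as-written
# copy of the program, assuming each copy is independently intact with
# probability p (an assumption; the post's "90% (or 7%)" figures are
# read here as per-copy reliability).

def p_at_least_one_intact(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

print(p_at_least_one_intact(0.07, 100))     # ~0.9993, near the post's 99.9%
print(p_at_least_one_intact(0.07, 10_000))  # ~1.0
```

Even a very unreliable load is almost certain to leave at least one intact copy somewhere in a large enough group, which is what the scheme below exploits.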
      
   Each core could make a minimized byte hash of the program it has
   loaded and runs; when first sending program output, the hash is sent
   out once along with the program output, and that is compared to the
   hash of the actual program as written. When they are the same, that
   core is verified as running the program as written; that can then,
   while the ones doing it right are doing it right, iteratively move
   the rest of the cores to running the program as written on all 100 or one
   million cores, then if    
   the core running the program has 64 registers of 128 bytes each, the
   accuracy of the contents and of the computing actions at those
   registers is multiplied at the "9% of copies of a million-byte
   program loaded have integrity" level: every 111 1-THz clock cycles
   the 9% majority is utilized, or every 333 1-THz clock cycles the 3%
   majority is utilized, comparing register contents between cores, so
   the majority says "that's what the register contents or data actually
   are." So one between-cores register comparison (optionally a
   register-hash comparison) every 333 cycles; and since the divergent
   cores' register hashes should coincide at far less than 1%, a 1%
   majority, with about 1k clock cycles between hash comparisons, is possible,
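The hash check and the between-cores register vote described above can be sketched together. SHA-256, the helper names, and the sample byte values are my illustrative assumptions, not the post's:

```python
# Sketch: (1) verify which cores hold an as-written program copy by
# comparing program-image hashes; (2) recover register contents by
# majority vote across cores.
import hashlib
from collections import Counter

def program_digest(image: bytes) -> str:
    """Minimized byte hash of a core's loaded program image."""
    return hashlib.sha256(image).hexdigest()

def verified_cores(reference: bytes, core_images: list[bytes]) -> list[int]:
    """Cores whose loaded copy hashes the same as the program as written."""
    want = program_digest(reference)
    return [i for i, img in enumerate(core_images)
            if program_digest(img) == want]

def majority_register(values: list[bytes]) -> bytes:
    """The register contents the majority of cores report."""
    return Counter(values).most_common(1)[0][0]

# Illustrative data: core 1 holds a corrupted copy of the program.
as_written = b"\x90" * 1_000
images = [as_written, as_written[:-1] + b"\x91", as_written]
print(verified_cores(as_written, images))  # [0, 2]

# Register vote: the majority value survives a few stochastic flips.
reports = [b"\x2a", b"\x2a", b"\x07", b"\x2a", b"\xff", b"\x2a"]
print(majority_register(reports))
```

A real design would vote on hashes of whole register files rather than single registers, as the post suggests, to cut comparison bandwidth.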
   What about power consumption, IC area, and affordability, and which
   applications benefit from being run at 1 THz per core? Driverless
   cars work at 4 GHz but might work better at 1 THz. Some medical
   imaging, like brain and body scans such as positron emission
   tomography noting neuron type and tissue structure at less than 1 mm
   area (I may have read decimal millimeters), could process at 99.9%
   accuracy and do another scan if more accuracy was preferred; this
   omits the data bandwidth of sending the image to the cloud to be
   processed at a couple hundred computers. Also anyplace where
   bandwidth to the cloud is lengthy, like huge multipetabyte databases,
   where at some versions of this a 99.9% accurate output is sufficient
   (processing all of Facebook, a social networking site among others,
   to bring voluntary content or products: finding children that could
   easily be made happy, parents that could improve their parenting
   style, or, at the 1 per 10 million error rate, children that would
   benefit from being rescued); enterprise resource planning (ERP) data
   repositories; some large physics experiments.
   GPUs exist now; comparing a hundred or a million cores at 1 THz to
   GPUs, other than the 1 THz processing velocity, the two-out-of-three
   approach at highly overclocked GPUs has very similar benefits.
      
   40k people, 20k things each, motion processed every 10 milliseconds (see a   
   keyboard keypress, grab each    
      
   Positron sensors: do isotopically pure semiconductors, or even CCDs,
   respond more accurately, giving higher resolution?
      
   Memory CPU    
      
    or even  to make at each 1 THz clock cycle, and all the other cores
   out of 100 or one million are made available to run other programs;
   the 100- or one-million-core CPU then loads programs onto those other
   cores until perhaps all are loaded (resend the program data multiple
   times until the hash matches at the latter 9 cores); the 100- or
   one-million-core CPU is then fully loaded. Along with neural
   networks, some applications are running multiple different programs
   from different people, aggregated on the internet cloud, possibly
   servers, at personal computers, and even possibly phones, although it
   is possible to run each program, application, or component of an
   operating system on a separate core right now for waitless
   utilization, a separated-component software form, and
      
   A better way, depending on program execution time, might be 9%
   likeliness of the neural network program being as written at 100
   cores; then, if there were anything other than 7-9 identical outputs
   and hashes (the other 89 are likely to differ from each other
   stochastically and be less homogeneous than the 9%), the program
   would reload. Or, at a million-core CPU chip, with 10,000 groups of
   100 cores each at 9% likeliness of being the program as written, the
   output of the program as written could be very strongly numerically
   present.
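The group-of-100 reload rule just described might look like this; the acceptance threshold of 7 identical outputs follows the post's "7-9 identical" figure, and the output strings are illustrative:

```python
# Sketch: accept a 100-core group only when enough cores report the
# same output (and hash); otherwise reload the program into the group.
from collections import Counter

def group_accepts(outputs: list[str], min_identical: int = 7) -> bool:
    """True when at least `min_identical` cores report identical output."""
    return Counter(outputs).most_common(1)[0][1] >= min_identical

good = ["as-written result"] * 9 + [f"stochastic {i}" for i in range(91)]
bad = [f"stochastic {i}" for i in range(100)]
print(group_accepts(good))  # True  -> keep this group's output
print(group_accepts(bad))   # False -> reload the program into the group
```

The key assumption, as in the post, is that corrupted copies diverge stochastically from each other, so any sizable identical cluster is very likely the as-written program.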
      
   100 GHz and possibly 1 THz test instruments exist, I think using
   analog ICs, so loading the 100 or 1 million cores that fast is
   likely possible.
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca