

   rec.arts.sf.science      Real and speculative aspects of SF science      45,986 messages

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 44,585 of 45,986   
   Mikkel Haaheim to All   
   Re: James S.A. Corey's answer to There A   
   25 Oct 16 08:07:15   
   
   From: mikkelhaaheim@gmail.com   
      
   On Monday, 26 September 2016 at 01:19:13 UTC+2, Rick Pikul/Chakat
   Firepaw wrote:
      
   >    
   > >> Combine an array, (let's say a 3x3), with faster CCDs and processors,   
   > >> (let's say 5 FOV/s), and an out of plane sensor platform that only   
   > >> looks at half of the sky, (because everything interesting is either   
   > >> staying in the plane of the ecliptic or coming from it), and you are   
   > >> down to one full scan every 12 minutes.   
   > >    
   > > Given the bulk of data (and poverty of input), faster CCDs and   
   > > processors will be insufficient. You need more processors (or processors   
   > > capable of much greater loads, or both), and more sensitive CCDs (if you   
   > > run the CCD too fast, it will not accumulate enough energy to register).   
   >    
   > I'm not assuming a massive improvement here, less than 4X of what could    
   > be done in the early 1990s.   
   >    
   > You know, when a high-end home computer was a 486DX2 running at 66MHz.     
   > Covering the increased processing loads would be trivial.   
      
   It REALLY isn't.
   You are picking and choosing platform capabilities that are bleeding-edge
   tech (for the time, at least), and assuming that all of these capabilities
   can be combined on a single platform. That just is not the case. Virtually
   all of these bleeding-edge technologies perform their tasks admirably, but
   at the expense of other capabilities. Nor are you really analysing what a
   statement is actually saying.
   In the case of processing requirements, you are ignoring the fact that, in
   order to mask out the millions of known sources (known ONLY in the sense
   that they have been identified as discrete objects tagged with reasonably
   reliable position and tracking data), you need to track where these items
   are at any given time. You need to compute a map of these millions of
   sources, in three dimensions. You need to compute updates for the map
   locations as a function of time. You need to calculate course deviations
   resulting from planetary gravitational influence. You need to calculate
   the signal intensities, and the routine deviations of signal strength.
   You then need to convert the three-dimensional maps to the expected 2D
   image that will be in the FOV at any given time. You have to calculate
   how overlapping and "adjacent" sources are going to affect the scanned
   image. You then have to calculate your deviations for each pixel, to spot
   a "blink" (this is an over-simplification of all the filtering programmes
   you will have to run). You then need to compare past and present images
   over time (this will help you sort through unknown natural sources...
   once these are identified, the initial task jumps to BILLIONS of
   background sources that need to be mapped out, but will greatly reduce
   the analysis processing required). ASSUMING you can detect all the varied
   unnatural sources, you then have to process the activities of the
   thousands of regular traffic craft, to identify which craft are not
   behaving "normally".
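   The chain of computations above can be sketched, very roughly, in code.
   Everything here is a toy stand-in of my own: linear propagation instead
   of gravitational perturbation, an orthographic drop-the-z projection
   instead of a real pointing/optics model, and a single-threshold residual
   test in place of the battery of filtering programmes a real survey runs.

```python
import numpy as np

def expected_frame(pos, vel, flux, t, shape, scale=1.0):
    """Render the expected 2D image of a known-source catalogue at time t.

    pos, vel : (N, 3) position and velocity arrays; linear propagation
               stands in for real orbital mechanics and perturbations.
    flux     : (N,) expected per-source signal.
    """
    p = pos + vel * t                                  # propagate catalogue
    xi = np.clip((p[:, 0] * scale).astype(int), 0, shape[0] - 1)
    yi = np.clip((p[:, 1] * scale).astype(int), 0, shape[1] - 1)
    img = np.zeros(shape)
    np.add.at(img, (xi, yi), flux)                     # overlapping sources add up
    return img

def blink_pixels(observed, expected, sigma, k=5.0):
    """Flag pixels whose residual exceeds k*sigma -- the 'blink' test,
    standing in for the whole stack of per-pixel filtering."""
    return np.argwhere(np.abs(observed - expected) > k * sigma)
```

   Even this toy hints at the load: every frame needs the whole catalogue
   re-propagated, re-projected, and differenced before the first filter runs.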
   To give an example of just how hard this task is: scientists are jumping
   for joy at the tentative success of new software that somewhat reliably
   allows adjacent sources in a single cluster to be distinguished from one
   another as discrete entities in a matter of seconds to minutes, instead
   of the months that the same task has normally taken. A single cluster of
   stars. Not even a single frame. The software isn't perfect. It requires
   very specific conditions to work. If those conditions are not met, it
   still takes existing software months to determine which objects are
   discrete.
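   The naive version of that deblending step is easy to write, which is
   exactly why the real problem is instructive: the sketch below (mine,
   purely illustrative) just treats every bright local maximum as a discrete
   source, and it fails precisely where the post says real software
   struggles, when neighbouring profiles merge into a single peak.

```python
import numpy as np

def naive_deblend(img, threshold):
    """Treat every local maximum above threshold as a discrete source.

    Real deblenders use multi-threshold segmentation, PSF fitting, and
    far more; this is the seconds-long naive pass, not the months-long
    hard cases."""
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            v = img[i, j]
            # a peak is at least as bright as its whole 3x3 neighbourhood
            if v > threshold and v >= img[i - 1:i + 2, j - 1:j + 2].max():
                peaks.append((i, j))
    return peaks
```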
      
   >    
   > > Actually, you need that anyway. Astronomers are STILL looking at 50+   
   > > year old data, running them through newer, faster, higher capacity   
   > > computers, trying to extract useful info from the data that is already   
   > > there, just trying to count and catalogue the stars and other objects   
   > > already present on those plates.   
   >    
   > Again:  Scientific budgets are trivial compared to military ones.     
   > Constantly harping about "we can't do it right now with almost no    
   > resources" just highlights how weak your argument is.   
      
   The scientific budgets are trivial, but they are much more concentrated.
   Scientific institutions invest in hiring only the best and the brightest,
   and in using and creating the best hardware for completing the tasks at
   hand as efficiently as possible. Military hardware, paid for out of
   military budgets, is typically AT LEAST 20 years out of date.
      
      
   > > Yes. You can reduce it all you want. But say goodbye to your notion of   
   > > stealthless space.   
   >    
   > If you call it taking a couple minutes to detect your hour-long    
   > correction burn 'saying goodbye to stealth being impossible.'   
      
   A stealth craft would not be using single hour-long burns. It would be
   using pulsed burns that are more difficult to detect and track, even
   assuming the plume would be detectable at all. To give you some idea of
   how hard it REALLY is to detect some plumes: instruments that measure the
   plume of an ion rocket's exhaust have to be placed directly in the stream
   of the exhaust itself. With the exception of a few easily shielded
   centimeters from the nozzle, there is virtually no detectable emission
   from the plume, even at ranges of less than a meter.
      
      
   > > A fast scanner is not going to help you if it can't receive sufficient   
   > > energy input.   
   >    
   > Fast is relative.  We're not talking about doing hemispherical scans    
   > every 30s here.   
      
   WISE has extremely sensitive IR detectors (limited to 4 emission bands,
   in order to provide that sensitivity). A minimum 10-second exposure is
   required for every single 0.8° × 0.8° frame. So, no, we definitely ARE
   NOT talking about 30s scans. For that matter, the minimum exposure is 10s
   per frame for each individual emission band. Multiple overlapping passes
   are also required for each image, mostly for eliminating data errors; a
   small part is the requirement for picking up fainter emissions that are
   not always detectable, but are nevertheless recurring. With the 10s
   exposure time, it should actually take only 28 days for WISE to complete
   a scan, but the overlaps bring that scan time up to 6 months.
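   As a sanity check on those numbers (the arithmetic and its assumptions
   are mine, not the post's: a 0.8° × 0.8° frame per pointing, the four
   bands read as sequential 10 s exposures, full-sphere coverage, zero slew
   and readout overhead), the exposure-limited scan time comes out around
   four weeks, consistent with the 28-day figure; the gap to the real
   6-month figure is the overlap passes described above.

```python
# Back-of-envelope scan time from the figures quoted above.
FULL_SKY_DEG2 = 41_253          # area of the celestial sphere in deg^2
frame_deg2 = 0.8 * 0.8          # one 0.8 deg x 0.8 deg frame
exposure_s = 10                 # minimum exposure per band
bands = 4                       # assumed sequential, one exposure each

frames = FULL_SKY_DEG2 / frame_deg2
scan_days = frames * bands * exposure_s / 86_400
print(round(scan_days, 1))      # roughly a month of pure exposure time
```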
      
   >    
   > >> You pass off its potential detections to a narrow FOV, higher   
   > >> resolution system that confirms the detection.   
   > >>    
   > >> That then hands off to other platforms to confirm with their narrow FOV   
   > >> systems and lock in the location.   
   > >>    
   > >> At this point you know where it is and where it's heading and can also   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca