home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   sci.physics.research      Current physics research. (Moderated)      17,516 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 16,297 of 17,516   
   toadastronomer@gmail.com to All   
   Re: The evanescent wave at the detector.   
   23 Jul 18 09:23:54   
   
   20-JUL-2018   
      
   I'll take the liberty of digressing on the image transform algorithm,   
   in the hope it may illuminate things a bit.  Maybe helpful if this lead   
   balloon suddenly loses lift.   
      
   I've only made this thing work under NIH-Image (National Institutes of   
   Health) and Image-SXM, a scanning x-ray microscopy variant of NIH-Image   
   (University of Liverpool).   
      
   There's a Pascal-like macro command language, but it's not particularly   
   well suited to high-frame-rate, high-data-volume applications.   
      
   ImageJ, the Java-based distribution of Image, might provide a bit of a   
   boost, but I've had no luck porting NIH-Image (SXM) code to ImageJ, though   
   on the face of it you'd think it would be straightforward.  I'm not much   
   of a programmer, though.   
      
   Generally, start with a well-formed image, 1024x1024 for example, and   
   extract a 512x512 sub-field centered on a pixel of interest.  Duplicate   
   the 512x512 image.   
      
   Convolve one copy with a 5x5 Gaussian kernel   
      
   1  1  2  1  1   
   1  2  4  2  1   
   2  4  8  4  2   
   1  2  4  2  1   
   1  1  2  1  1   
      
   and the other with a 15x15 Gaussian:   
      
   2 2  3  4  5  5  6  6  6  5  5  4  3 2 2   
   2 3  4  5  7  7  8  8  8  7  7  5  4 3 2   
   3 4  6  7  9 10 10 11 10 10  9  7  6 4 3   
   4 5  7  9 10 12 13 13 13 12 10  9  7 5 4   
   5 7  9 11 13 14 15 16 15 14 13 11  9 7 5   
   5 7 10 12 14 16 17 18 17 16 14 12 10 7 5   
   6 8 10 13 15 17 19 19 19 17 15 13 10 8 6   
   6 8 11 13 16 18 19 20 19 18 16 13 11 8 6   
   6 8 10 13 15 17 19 19 19 17 15 13 10 8 6   
   5 7 10 12 14 16 17 18 17 16 14 12 10 7 5   
   5 7  9 11 13 14 15 16 15 14 13 11  9 7 5   
   4 5  7  9 10 12 13 13 13 12 10  9  7 5 4   
   3 4  6  7  9 10 10 11 10 10  9  7  6 4 3   
   2 3  4  5  7  7  8  8  8  7  7  5  4 3 2   
   2 2  3  4  5  5  6  6  6  5  5  4  3 2 2   
      
   Then subtract the 5x5 result from the 15x15 result.   
   Now Fourier transform the difference image and divide   
   that by the Fourier transform of an Airy pattern at a circular   
   aperture.   
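   
   The difference-of-Gaussians step above can be sketched in NumPy (my own   
   translation, not the original NIH-Image macro; normalizing the integer   
   kernels to unit sum, and the 64x64 toy image standing in for the 512x512   
   sub-field, are my choices):   
   
   ```python
   import numpy as np
   
   def convolve_same(img, kernel):
       """Direct 2-D convolution, zero-padded, output same size as img."""
       kh, kw = kernel.shape
       ph, pw = kh // 2, kw // 2
       padded = np.pad(img, ((ph, ph), (pw, pw)))
       out = np.zeros(img.shape, dtype=float)
       for i in range(kh):
           for j in range(kw):
               out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
       return out
   
   # 5x5 kernel as listed in the post, normalized to unit sum (my choice,
   # so a uniform field blurs to itself and the difference vanishes there).
   K5 = np.array([[1,1,2,1,1],
                  [1,2,4,2,1],
                  [2,4,8,4,2],
                  [1,2,4,2,1],
                  [1,1,2,1,1]], dtype=float)
   K5 /= K5.sum()
   
   # 15x15 kernel as listed in the post, likewise normalized.
   K15 = np.array([
       [2,2, 3, 4, 5, 5, 6, 6, 6, 5, 5, 4, 3,2,2],
       [2,3, 4, 5, 7, 7, 8, 8, 8, 7, 7, 5, 4,3,2],
       [3,4, 6, 7, 9,10,10,11,10,10, 9, 7, 6,4,3],
       [4,5, 7, 9,10,12,13,13,13,12,10, 9, 7,5,4],
       [5,7, 9,11,13,14,15,16,15,14,13,11, 9,7,5],
       [5,7,10,12,14,16,17,18,17,16,14,12,10,7,5],
       [6,8,10,13,15,17,19,19,19,17,15,13,10,8,6],
       [6,8,11,13,16,18,19,20,19,18,16,13,11,8,6],
       [6,8,10,13,15,17,19,19,19,17,15,13,10,8,6],
       [5,7,10,12,14,16,17,18,17,16,14,12,10,7,5],
       [5,7, 9,11,13,14,15,16,15,14,13,11, 9,7,5],
       [4,5, 7, 9,10,12,13,13,13,12,10, 9, 7,5,4],
       [3,4, 6, 7, 9,10,10,11,10,10, 9, 7, 6,4,3],
       [2,3, 4, 5, 7, 7, 8, 8, 8, 7, 7, 5, 4,3,2],
       [2,2, 3, 4, 5, 5, 6, 6, 6, 5, 5, 4, 3,2,2]], dtype=float)
   K15 /= K15.sum()
   
   # Toy 64x64 stand-in for the 512x512 sub-field.
   rng = np.random.default_rng(0)
   img = rng.random((64, 64))
   
   fine = convolve_same(img, K5)     # narrow blur
   coarse = convolve_same(img, K15)  # wide blur
   diff = coarse - fine              # subtract the former from the latter
   ```
   
   With unit-sum kernels the difference image is a band-pass of the original:   
   both blurs preserve the local mean, so only structure between the two   
   kernel scales survives.   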
      
   Now quadrant swap and inverse Fourier transform.   
      
   Voila!  Fringes.   
      
   I'm doing this on a Mac; the quadrant swap puts the peak of   
   the power spectrum of the image division result at the geometrical   
   center of the image, rather than distributed to the four corners.   
   It's a machine thing.   
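   
   The Fourier steps can be put together in NumPy along these lines (again a   
   sketch of my own, not the original macro; the small `eps` regularizer   
   guarding against zeros in the aperture spectrum and the random stand-in   
   images are my additions; the 0.5 aperture scale factor is described   
   further down):   
   
   ```python
   import numpy as np
   
   def fringe_transform(diff_img, airy_img, airy_scale=0.5, eps=1e-6):
       """Fourier transform the difference image, divide by the (scaled)
       Fourier transform of the Airy/aperture image, quadrant-swap so the
       spectral peak sits at the image center, then inverse transform."""
       sig = np.fft.fft2(diff_img)
       ref = np.fft.fft2(airy_img) * airy_scale
       ratio = sig / (ref + eps)        # regularized image division
       ratio = np.fft.fftshift(ratio)   # the quadrant swap
       return np.fft.ifft2(ratio)       # voila: fringes (complex-valued)
   
   # Toy stand-ins for the 512x512 difference and aperture images.
   rng = np.random.default_rng(1)
   fringes = fringe_transform(rng.random((64, 64)), rng.random((64, 64)))
   ```
   
   Here `np.fft.fftshift` plays the role of the quadrant swap: it moves the   
   DC term from the corners to the geometrical center of the array.   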
      
   Since the field being imaged is generally symmetrical, whether one   
   quadrant-swaps (when needed) after the inverse transform may matter.  For   
   my study measuring frequency and angle of polarization, I've not found   
   any significant differences, but naturally there's a difference in   
   the amplitude of the imaged field over the 2-D space, and in the location   
   of phase singularities.  For good qualitative examples of the latter see   
   White et al., "Interferometric measurements of phase singularities in the   
   output of a visible laser", Journal of Modern Optics 38(12), 1991.   
      
   As for the aperture image: I've been using one image for all data;   
   a circular aperture with a 1.5mm radius, backlit by a 5400K fluorescent   
   source (a light table).  Unfortunately I've long since lost the optical   
   config notes, but the spatial image of the Airy pattern I've produced   
   occupies the central 9.3% of the 1024x1024 source image, at threshold   
   level 206, if that helps.   
      
   To make it even more arbitrary, the aperture image is then scaled by a   
   factor of 0.5 AFTER the Fourier transform, but before image division with   
   the reduced source image.  The image division is then done on the 512x512   
   images, which can be moved within the larger original so as to sample   
   65,000+ positions in a 1024x1024 original.   
      
   I scaled by a half because it's so brutally slow the way I've implemented it.   
      
   Every time I review this it appears more and more absurd that the results   
   are fit so robustly by a simple model that looks like physics.  I suspect   
   nominally the aperture image should be produced using the actual entrance   
   aperture of the instrument producing the input images.  Perhaps it should   
   be the point spread function.   
      
   In the context of interferometry, the input image is the signal beam and   
   the aperture is the reference, I guess.  But then there's the splitting of   
   spatial frequencies in the first reduction step, so I dunno how strong an   
   analog this is for near-field interferometry with, say, a photon scanning   
   tunneling microscope (PSTM).  For such results that are qualitatively very   
   similar to the results I'm observing, see e.g. Balistreri et al., "Phase   
   Mapping of Optical Fields in Integrated Optical Waveguide Structures",   
   Journal of Lightwave Technology 19(8), Aug 2001.   
      
   Also consistent with this are results in Gatti et al., "Quantum Imaging",   
   arXiv:quant-ph/0203046.  The near field is the inverse Fourier transform   
   of the far field.  The observer is always in the far field; a bit of a   
   paradox at first glance, if you're an observer and want to get to the   
   near field.   
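   
   That Fourier-pair relation is easy to verify numerically: in the   
   Fraunhofer picture the far-field amplitude is (up to scaling) the Fourier   
   transform of the near field, so transforming a toy complex field and   
   inverse-transforming recovers it exactly (the random 64x64 field here is   
   just an illustration):   
   
   ```python
   import numpy as np
   
   rng = np.random.default_rng(0)
   # Toy complex "near field": amplitude and phase on a 64x64 grid.
   near = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
   far = np.fft.fft2(near)          # propagate to the far field (Fraunhofer limit)
   recovered = np.fft.ifft2(far)    # the inverse transform recovers the near field
   ```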
      
   Well, that's the long and the short of it.   
      
   Cheers,   
   mark jonathan horn   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca