From: marcov@toad.stack.nl   
      
   On 2012-07-26, Jim Leonard wrote:   
   >> Same principle. Code overlay is based on the idea that some code is only
   >> needed at certain times. One can do the same for data: page data into the
   >> EMS page frame when needed. This is mostly explicit, though (iow you have
   >> the responsibility to do it at the right time). XMS and EMS arrays are
   >> more or less based on this.
   >   
   > I thought you meant you could use the Overlay unit for overlaying data as
   > well as code. You're just talking about paging data in and out as needed.
      
   I don't know the overlay unit that well. I spent only the beginning of my
   16-bit time with TP (mostly TP6), moving on to TopSpeed Modula-2 later.
   When that ended I came back to FPC, so I'm not that deep into 16-bit-specific
   TP issues (including 286 real mode, which I used only once or twice).
      
   I do know that some overlay systems can also swap out global variables
   declared in the implementation section of units in an overlay. I don't know
   whether TP can.
      
   I think what I'm referring to is what XMS/EMS arrays are in TP. You just
   store pointers into the (EMS) page frame or XMS buffer, and structure your
   program so that it makes sure the appropriate memory is mapped in.
      
   My main data structure was a "larger than 64k" array (a block >64k allocated
   and then walked by manipulating the segment part of the pointer) that held
   pointers to the data, so that I could have more than 16384 elements.
      
   The pointers to the data carried the number of the EMS block to map in,
   stored in the segment part.
      
   So accessing such a pointer was something like:

   blocknr := Seg(myptr^);  { the EMS block number stored in the segment part }
   if blocknr <> currentblocknr then
    begin
    MapIn(blocknr);         { map that block into the EMS page frame }
    currentblocknr := blocknr;
    end;
   { rewrite the segment part to point into the page frame; in TP that }
   { would be something like this, though I don't remember the details: }
   myptr := Ptr(emspageframeseg, Ofs(myptr^));

   Access(myptr);
      
      
   However, when you do a lot of random access this can be slow in theory.
   IIRC I first mergesorted the bulk of the data and cached that on disk,
   adding only the most recent mutations on every run. The mergesort had a
   naive disk-only fallback in case there was not enough memory (the in-memory
   sorting was faster but required twice the space).
      
   This made sure that the number of block swaps was on the order of one per
   64kb/sizeof(data) elements, since a sequential pass maps each EMS block in
   only once.
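   The effect of that sequential pass can be sketched like this (a hypothetical
   illustration, not the original code; the 16-byte record size, the counts,
   and the MapIn call are all assumptions):

   { Counting EMS remaps for a sequential pass. With 16-byte records a
     64 KB page holds 4096 of them, so N sequential accesses need about
     N div 4096 remaps, while random access can need one remap per access. }
   const
     PageSize    = 65536;
     RecSize     = 16;
     RecsPerPage = PageSize div RecSize;   { 4096 }
     N           = 1000000;
   var
     i, blocknr, currentblocknr, swaps: LongInt;
   begin
     currentblocknr := -1;
     swaps := 0;
     for i := 0 to N - 1 do
       begin
       blocknr := i div RecsPerPage;   { sequential: block changes rarely }
       if blocknr <> currentblocknr then
         begin
         { MapIn(blocknr) would go here on real EMS hardware }
         currentblocknr := blocknr;
         Inc(swaps);
         end;
       end;
     WriteLn('remaps for sequential pass: ', swaps);
   end.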
      
   But this was all in 486 and early Pentium times, when even older machines
   were at least 386s with 4MB. Though for a brief while memory became urgent
   again, because people insisted on running under Windows (3.x and even 95)
   on machines that could barely run it, not leaving much memory for
   applications.
      
   My favorite deployment destination was DESQview (DV) or DV/X.
      
   I maintained that application in 16-bit for a while after I moved to 32-bit,
   because it was for a market (BBSes) that didn't warrant rearchitecting.
      
   And currently I happen to be thinking about making a generics-based
   tstringlist/tstringcollection type for FPC/Delphi that has proper insertion
   behaviour over 4 billion elements :-)
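   A minimal sketch of what such a type might look like (the name TBigList,
   the growth factor, and the Int64 index are my assumptions, not an existing
   FPC/Delphi API; the key point is that Count and indices are 64-bit):

   {$mode delphi}
   type
     TBigList<T> = class
     private
       FItems: array of T;   { dynamic array; SizeInt-indexed, 64-bit on x86_64 }
       FCount: Int64;
       procedure Grow;
     public
       procedure Insert(Index: Int64; const Value: T);
       property Count: Int64 read FCount;
     end;

   procedure TBigList<T>.Grow;
   begin
     if FCount = Length(FItems) then
       SetLength(FItems, (Length(FItems) * 3) div 2 + 8);
   end;

   procedure TBigList<T>.Insert(Index: Int64; const Value: T);
   var
     i: Int64;
   begin
     Grow;
     for i := FCount - 1 downto Index do
       FItems[i + 1] := FItems[i];   { shift the tail up; O(n) per insert }
     FItems[Index] := Value;
     Inc(FCount);
   end;

   Proper insertion at that scale would of course want something better than
   an O(n) shift per insert (a gap buffer or a tree of chunks, say), but the
   64-bit index plumbing is the part the stock classes lack.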
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   