From: robfi680@gmail.com   
      
   On 2026-01-30 3:35 p.m., MitchAlsup wrote:   
   >   
   > Robert Finch posted:   
   >   
   >> On 2026-01-30 6:16 a.m., Dan Cross wrote:   
   >>> In article <10lfuka$1d2c4$1@dont-email.me>,   
   >>> Robert Finch wrote:   
   >>>> Have inverted page tables fallen out of favor? And how much OS support   
   >>>> is there for them?   
   >>>   
   >>> IVTs are a terrible idea. Consider the challenge of using them   
   >>> to share memory between virtual address spaces in a   
   >>> multiprocessing system, efficiently.   
   >>>   
   >>> - Dan C.   
   >>>   
   >> I may have used misleading terminology; the IVT I refer to is a
   >> hash-table-based one. I tend to think of them as the same thing. I do
   >> not think anybody would use a plain IVT.
   >>   
   >>    
   >> Is the entire VAS covered by the hierarchical page table system?   
   >>   
   >> With the entire PAS covered by a page table in BRAM, the table can be
   >> walked in hardware very quickly, one cycle per step, as opposed to
   >> walking a page table in DRAM, which could be quite slow.
   >>   
   >> Process switch is handled by including an ASID in the mapping as for a TLB.   
   >>   
   >> For the IVT implementation, the table is twice the size needed to
   >> cover the PAS, to allow for shared memory pages.
   >>   
   >>   
   >> The table just stores VPN -> PPN mappings, so I am not quite
   >> following the challenge of using them for shared memory. Multiple
   >> VPNs mapping to the same PPN are possible. Is it the freeing up of
   >> physical pages in SW that causes an issue?
   >   
   > Consider mmap(file) in multiple different processes at different VAs.   
   > So, now one has multiple VA pages pointing at one Physical page.   
   >   
   ??? I think a hash table has this characteristic. Multiple VA pages can
   point to a single physical page using a hash table. Is it just a
   performance issue? The ASID is part of the hash/compare.
      
   I guess I should take a look at the mmap() code.   
      
   The hash table is only slightly different from a giant TLB. On a miss,
   the hash table is walked instead of walking page tables in main memory.
   Page faults would be handled the same way.
      
   The table is clustered: the hash selects a group of eight entries,
   which are searched in parallel for a match. If no match is found,
   probing continues using quadratic open addressing.
      
      
      
   >> I seem to recall at least one fellow advocating the limited use of   
   >> shared memory, using replication instead of shared libraries for instance.   
   >>   
   >> A hierarchical page table is a better solution, but I was looking for   
   >> something lower cost. My monster 2-level TLB is currently about 9000   
   >> LUTs (I have been working to reduce this). The IVT is about 900 LUTs.   
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   