   From: miles@gnu.org   
      
   Edward Rosten writes:   
   >> I frequently called vector.clear, which turned out to be a severe
   >> performance bottleneck: you would expect clear to be O(1) since the
   >> destructor is empty, but it obviously wasn't: reducing the vector size
   >> from 1024 to 17 made an orders-of-magnitude improvement in running
   >> time.
   >
   > When was this and on which compiler? I remember in the not too distant   
   > past, GCC was unable to optimize away an empty loop terribly   
   > effectively, so a container having a for-loop which called trivial   
   > default destructors in place in a block of allocated memory would not   
   > be optimized away to nothing. This flaw disappeared sometime between   
   > about 2005 and 2010, so it might be worth revisiting the problem to   
   > check if it still is a problem.   
      
   Gcc versions 4.4 through 4.6 don't optimize away the following loop:   
      
 #include <vector>

 struct s { ~s () {} int i; int j; };

 void t (std::vector<s> &v)
 {
   unsigned n = v.size ();
   for (s *p = &v[0]; p < &v[n]; p++)
     p->s::~s ();
 }
      
   [... although the loop body in the resulting assembly is empty.]   
      
Clang, up through the current trunk version, shows the same behavior.
   Gcc 4.7 _does_ eliminate the entire loop.   
      
   However, "v.clear ();" results in no loop for any of the above, so it   
   seems that the clear method in libstdc++ does not depend on such an   
   optimization to be O(1)...   
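
For reference, one standard technique for making range destruction O(1)
without relying on the optimizer is to dispatch on a
trivially-destructible trait, so the loop is skipped at compile time.
This is only a sketch: the name destroy_range and the exact dispatch are
mine, not necessarily what libstdc++ does. Note that for the struct s
above, the empty user-provided destructor makes the trait false, so this
technique alone would not explain the no-loop result for that type;
there the optimizer still has to eliminate the inlined empty body.

 #include <type_traits>

 // Element type from the example: an empty *user-provided* destructor
 // makes this NOT trivially destructible in the C++11 sense.
 struct s { ~s () {} int i; int j; };

 // Hypothetical sketch of trait dispatch: when T is trivially
 // destructible, the condition folds to false at compile time and the
 // destructor loop disappears entirely.
 template <typename T>
 void destroy_range (T *first, T *last)
 {
   if (!std::is_trivially_destructible<T>::value)
     for (T *p = first; p < last; p++)
       p->~T ();
 }

For a trivially destructible element type such as int, destroy_range
compiles to nothing even at -O0 in spirit: the loop is never entered.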
      
   -miles   
      
   --   
   Bigot, n. One who is obstinately and zealously attached to an opinion that   
   you do not entertain.   
      
      
    [ See http://www.gotw.ca/resources/clcm.htm for info about ]   
    [ comp.lang.c++.moderated. First time posters: Do this! ]   
      