Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.lang.c++.moderated    |    Moderated discussion of C++ superhackery    |    33,346 messages    |
|    Message 32,917 of 33,346    |
|    Öö Tiib to Francis Glassborow    |
|    Re: insert [sorted] a node in a linked l    |
|    12 Mar 13 11:38:58    |
From: ootiib@hot.ee

On Tuesday, 12 March 2013 12:30:05 UTC+2, Francis Glassborow wrote:
> On 12/03/2013 08:19, Öö Tiib wrote:
> > On Tuesday, 12 March 2013 00:06:27 UTC+2, James K. Lowden wrote:
> >> Bumping N up to 250,000, skipping list and adding a deque,
> >>
> >> deque:  real 0m25.088s
> >> vector: real 0m7.258s
> >> set:    real 0m1.180s
> >> empty:  real 0m0.833s
> >>
> >> which is a disappointing result for deque, if you ask me, but much
> >> to your point: deque *should* be faster because inserts are
> >> cheaper, but the effect of locality swamps it.
> >
> > A deque is usually implemented as a list of arrays. That makes it
> > efficient to insert/erase at the ends (it easily beats vector
> > there). You inserted in the middle, where it is quite weak and as
> > a rule loses to vector.
> >
> > I feel std::deque is an insufficient deque implementation. It
> > lacks a template parameter that specifies the size of that
> > internal array. My tests show that different sizes are optimal in
> > different situations. It can almost be hacked in by tailoring the
> > allocator, but that is too much to ask of the average library
> > user.
> >
> > In such tests (random input and sorted output) you might also want
> > to check out std::priority_queue. It is an adapter around another
> > container, but in some related situations it is the winner.
>
> I wonder how much of the vector cost is down to the periodic
> reallocation. I suspect that if you know ahead of time how large you
> want your vector to be that it might perform a bit better.

Very little in the current case.

See ... both the winner and the loser (set and list) allocate on every
insert. Systems pool small allocations; 250,000 allocations and
deallocations take about a quarter of a second on a modern PC.
The rest of the second that set takes goes to fighting bad locality
and re-balancing the tree.

std::vector, when growing from empty to 250,000 elements, reallocates
itself (depending on implementation) only between 18 and 31 times!
That is a tiny factor in the current case. It is an important factor
when copying or transforming into a vector from somewhere. The vast
majority of the seven seconds went to moving, on average, half of the
elements on each insert.

> However it certainly looks as if the lack of locality makes list
> (in any of its forms) uncompetitive. This is an excellent lesson for
> students to learn (that they need to take into account much more than
> simply big O)

That is especially important when teaching C++, since C++ is a
two-edged blade in our hands that cuts both ways.

One edge is that our containers and algorithms are unrivaled when used
correctly (std::sort typically takes 75% of the time of qsort).

The other edge is that a misused container or algorithm in a critical
spot may easily perform over two orders of magnitude worse than a
plain array.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
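[Editor's note] Tiib's pointer to std::priority_queue for the random-input, sorted-output case can be sketched like this (an illustration under names of my choosing, not code from the thread): heapify the input in O(n), then pop it in ascending order.

```cpp
// Random input, sorted output via std::priority_queue: build a min-heap
// (std::greater makes top() the smallest element), then pop repeatedly.
#include <functional>
#include <queue>
#include <utility>
#include <vector>

std::vector<int> sorted_via_heap(std::vector<int> input) {
    // The (compare, container) constructor heapifies the moved-in
    // vector in O(n) instead of pushing element by element.
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap(
        std::greater<int>(), std::move(input));
    std::vector<int> out;
    out.reserve(heap.size());
    while (!heap.empty()) {
        out.push_back(heap.top());
        heap.pop();
    }
    return out;
}
```

Unlike std::set, the heap's storage is one contiguous vector, so it keeps the locality that the rest of the thread shows to matter; whether it wins in a given situation is, as the post hedges, something to measure.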
(c) 1994, bbs@darkrealms.ca