
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c++.moderated      Moderated discussion of C++ superhackery      33,346 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 31,643 of 33,346   
   Dave Abrahams to All   
   Re: Looking for an elegant way to conver   
   08 Nov 11 00:09:28   
   
   6fffd410   
   From: dave@boostpro.com   
      
   on Mon Nov 07 2011, "gast128-AT-hotmail.com"  wrote:   
      
   > On 6 nov, 09:47, Dave Abrahams  wrote:   
   >> on Sat Nov 05 2011, "gast128-AT-hotmail.com"  wrote:   
   >   
   >> > Using exception handling means that every 'error' must be handled   
   >> > otherwise the application terminates.   
   >>   
   >> It's very easy to handle every error, though: just put a try/catch(...)   
   >> block in main.   
   >   
   > Not for GUI applications.   
      
   Yes, for GUI applications.  At least, for the ones I've built.  You put   
   the try/catch in the mechanism that initiates commands.   
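   In framework-neutral terms, that single choke point might look
   something like this (a minimal sketch; `dispatch` and the error
   reporting are invented names, not from any real framework):

   ```cpp
   #include <exception>
   #include <functional>
   #include <iostream>

   // Hypothetical command dispatcher: every user action funnels through
   // here, so one try/catch covers the whole GUI without wrapping each
   // of the model's entry points individually.
   void dispatch(const std::function<void()>& command) {
       try {
           command();                        // run the model operation
       } catch (const std::exception& e) {
           // Abandon this command only; the event loop keeps running.
           std::cerr << "Command failed: " << e.what() << '\n';
       } catch (...) {
           std::cerr << "Command failed: unknown error\n";
       }
   }
   ```

   An exception thrown anywhere inside a command unwinds to this one
   handler; the application itself stays alive.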
      
   > They don't have an equivalent for main (technically they do, but   
   > that's not the level at which you want to add the main try/catch).   
   > To keep the application alive, you could add them to wrap the 'model'   
   > operations. Still, most GUIs have a lot of access points to the model   
   > (both for modification and for selection).   
      
   I think that might depend on the GUI framework you're using, and on how   
   much it insists on doing for you.  But I've never seen a GUI framework   
   design that makes this an appreciably difficult problem.   
      
   > Btw the try/catch is the easy part. Rollback with transaction   
   > semantics is the difficult part as you already mentioned yourself.   
      
   Actually, if your application supports an "undo" command, you probably   
   already have all the tools you need.   
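   A sketch of that idea, assuming a hypothetical `Command` interface
   with `execute`/`undo` of the kind an undo stack already requires
   (the names are illustrative, not from the thread):

   ```cpp
   #include <cstddef>
   #include <memory>
   #include <vector>

   // Hypothetical undoable command, as an app with an "undo" feature
   // would already have.
   struct Command {
       virtual ~Command() = default;
       virtual void execute() = 0;
       virtual void undo() = 0;
   };

   // Run sub-steps transactionally: if any step throws, undo the steps
   // that already completed, then let the exception propagate to the
   // dispatcher that initiated the command.
   void runTransaction(std::vector<std::unique_ptr<Command>>& steps) {
       std::size_t done = 0;
       try {
           for (; done < steps.size(); ++done)
               steps[done]->execute();
       } catch (...) {
           while (done > 0)
               steps[--done]->undo();   // roll back completed steps
           throw;
       }
   }
   ```

   The failing step is assumed not to need undoing itself, since it
   never completed; only the finished steps are rolled back.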
      
   >> > It effectively promotes every error to the fatal level.   
   >>   
   >> Yes, but fatal for what?  It doesn't have to be fatal to the application   
   >> (see above).  The termination semantics of exceptions just mean they   
   >> terminate the current operation... which is almost always the right   
   >> default.   
   >   
   > Fatal if you don't handle them.   
      
   You're still not answering the question: fatal to what?  Yes, an   
   otherwise-unhandled exception should terminate the current command, but   
   it shouldn't necessarily have to terminate the program.   
      
   >> > With status returns, missed benign errors leave your application alive.   
   >>   
   >> Which might be worse than termination.  It's usually the wrong default,   
   >> and it burdens programmers with explicitly ignoring errors all over the   
   >> place.   
   >   
   > Indeed 'might' or 'might not'. The discussion was what happens if   
   > programmers 'forget' them. For non-fatal errors it does no harm if   
   > they are accidentally ignored.   
      
   The other question is, what's the right default?  The only way it can be   
   the right default to continue as if nothing happened is if your   
   programmers can write each statement as though all previous statements   
   might not have done what they were supposed to.  Personally, I don't   
   know anyone who can write more than a few lines of code that way without   
   getting horribly tangled in the combinatorial complexity of all the   
   possible things that might have happened... or simply giving up thinking   
   about it and assuming everything's going to be OK (it won't).   
      
   > For fatal errors they can terminate the application. The last   
   > category would be things like bad_alloc.   
      
   There's no reason bad_alloc needs to be fatal to the application.   
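   For instance, an allocation failure can be handled like any other
   failed operation. A minimal sketch (the fallback policy here is
   purely illustrative, not a recommendation):

   ```cpp
   #include <cstddef>
   #include <new>
   #include <vector>

   // Try the preferred size; if that throws bad_alloc, degrade to a
   // smaller buffer instead of letting the exception kill the program.
   std::vector<char> allocateBuffer(std::size_t preferred,
                                    std::size_t fallback) {
       try {
           return std::vector<char>(preferred);
       } catch (const std::bad_alloc&) {
           return std::vector<char>(fallback);  // degrade gracefully
       }
   }
   ```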
      
   > Btw there seems to be application even handling that   
   > situations.   
      
   I don't know what you're saying.   
      
   > This also reminds me somewhat of the MINIX discussion, which tries   
   > to keep the OS alive as long as possible (you might have data loss,   
   > but that's not as fatal as your whole OS crashing).   
   >   
   >> > This happens more than you think. We develop a 1-million-line   
   >> > application with a heavy GUI. The GUI sometimes gets a little out of   
   >> > sync with the 'model' ('model' as in mvc). If every failed request   
   >> > would terminate the program, we would be out of business very soon.   
   >>   
   >> Of course you don't terminate the program.  I don't tolerate things   
   >> getting "a little out of sync" in my code, but I can understand that you   
   >> may have a different practical reality to deal with.   
   >   
   > In an ideal world there would be no bugs and things would never get   
   > out of sync (or at least get updated asap).   
      
   Trying to program with the assumption that (other) code in the   
   application is buggy is very, very difficult (see above).  And it's   
   usually worse for the codebase and the application's reliability than   
   coding with the assumption that everything is correct... which is what I   
   bet you do 99.9% of the time anyway.   
      
   >> If you're "putting catch handlers everywhere," there's something very   
   >> wrong somewhere.  That shouldn't be necessary.   
   >   
   > The application we develop has more than 200 entry points to modify   
   > the model. Those should all be wrapped then.   
      
   Why?  I can't imagine why you'd want a try/catch block around each   
   function that modifies the model.   
      
   >> I'm not sure that's a bad thing.  If you had wanted an exact result, the   
   >> exception would have been appropriate.  If you want a "rough estimate,"   
   >> you have to decide what that means, and decide what to do when a file   
   >> can't be opened.  Just ignoring an error is not necessarily enough to   
   >> make your result right for the job: should you assume this file is the   
   >> same size as the last one we had a size for?  Should you use zero for   
   >> the size?  Should you assume it's the same as the average of other   
   >> files?  So I don't see any justification for the frowny-face at the end   
   >> of that sentence.   
   >   
   > Like I said before, rough was sufficient. It is not an 'all or   
   > nothing' situation here. In practice those unopenable files were   
   > small, so they could be ignored.   
      
   *Exactly*.  You had to make a decision that the exact semantics produced   
   by ignoring the exception were appropriate... this time.  /And/ you had to   
   make sure that the code in the surrounding context was written to have   
   acceptable behavior if the error was ignored.  For example, it's easy to   
   imagine code that could end up adding a huge negative number to the   
   running size total if the error was ignored.  The point is that there   
   are no "benign failures."  There are only "benign failures in the   
   context of a particular use-case and surrounding code."   
      
   > And even if they were big, the calculated result is some sort of   
   > minimum directory size, which is sufficient here. Btw for filesystems   
   > it can never be accurate (as specified in the Boost.Filesystem   
   > documentation) since during iteration files can be added and removed   
   > as well. Effectively I had to look for other directory iteration   
   > options as offered by Boost.Filesystem.   
      
   That seems unnecessarily drastic.  I think you could easily add a single   
   try/catch block to swallow these errors.   
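   In modern C++ that single swallowing try/catch might look like this
   (a sketch using std::filesystem, which mirrors the Boost.Filesystem
   API the thread discusses; the function name and the "rough minimum"
   policy are illustrative):

   ```cpp
   #include <cstdint>
   #include <filesystem>
   #include <system_error>

   namespace fs = std::filesystem;

   // Rough directory size with one deliberately-swallowing try/catch.
   // A file vanishing or failing mid-iteration just makes the result a
   // rough minimum, which is the stated requirement.
   std::uintmax_t roughSize(const fs::path& dir) {
       std::uintmax_t total = 0;
       try {
           for (const auto& entry : fs::recursive_directory_iterator(dir)) {
               std::error_code ec;
               if (entry.is_regular_file(ec) && !ec) {
                   auto size = entry.file_size(ec);  // non-throwing overload
                   if (!ec)
                       total += size;                // skip unreadable files
               }
           }
       } catch (const fs::filesystem_error&) {
           // Benign *in this context only*: a partial total is acceptable.
       }
       return total;
   }
   ```

   Note the point made above: swallowing is safe here precisely because
   the surrounding code was written so a partial total is meaningful.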
      
   That said, I've never been all that comfortable with the idea of   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca