
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.lang.c++.moderated      Moderated discussion of C++ superhackery      33,346 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 32,626 of 33,346   
   Ivan Godard   
   Re: Fwd: Re: Useful applications for boo   
   02 Nov 12 11:22:14   
   
   From: ivan@ootbcomp.com   
      
   On 11/1/2012 9:26 AM, Daniel Krügler wrote:   
   > [I apologize for the late response, but I had some severe problems with   
   > the configuration of my news group reader]   
   >   
   > On 2012-10-25 20:31, Ivan Godard wrote:   
   >>>   
   >>>   > I frequently use increment over an enumeration, typically when   
   >>>   > iterating over an array whose index set is an enum. This construct   
   >>>   > is not native to C/C++, but with type traits that give lower/upper   
   >>>   > bounds for enum types and a little meta-programming you can write:   
   >>>   >      enum E {f, g, h};   
   >>>   >      array a, b;   
   >>>   >      forEach(x, thru<E>()) {   
   >>>   >          a[x] = 17;   
   >>>   >          b[x + 1] = 23;   
   >>>   >          }   
   >>>   
   >>>   > The metaprogramming ensures that a[5] is illegal. ++ and -- are   
   >>>   > defined as the successor and predecessor operations in the natural   
   >>>   > way, as are E ± integral and E - E (but of course not E + E) in the   
   >>>   > obvious way.  That is, the set of arithmetic operations are the   
   >>>   > same as for pointers.   
   >>>   
   >>> I agree that this looks like a useful tool. Am I correctly   
   >>> understanding that this is a view of an enumeration type's range of   
   >>> valid values (specified by the extreme values b_min and b_max in the   
   >>> standard)?   
   >>   
   >> Strictly speaking it is defined by the lwb and upb values supplied to   
   >> the macro that sets up the traits for the enumeration, and could be any   
   >> value coerceable to a constexpr of the enum. In practice they are always   
   >> the extrema of the declared values of the enum's list. Neither I nor a   
   >> Google search is familiar with std::b_min/max.   
   >   
   > The symbols b_min and b_max are defined in 7.2 [dcl.enum] (I'm using   
   > underscore _ to indicate a subscript):   
   >   
   > "for an enumeration where e_min is the smallest enumerator and e_max is   
   > the largest, the values of the enumeration are the values in the range   
   > b_min to b_max, defined as follows: Let K be 1 for a two’s complement   
   > representation and 0 for a one’s complement or sign-magnitude   
   > representation. b_max is the smallest value greater than or equal to   
   > max(|e_min| − K, |e_max|) and equal to (2^M) − 1, where M is a   
   > non-negative integer. b_min is zero if e_min is non-negative and −(b_max   
   > + K) otherwise. The size of the smallest bit-field large enough to hold   
   > all the values of the enumeration type is max(M, 1) if b_min is   
   > zero and M + 1 otherwise. It is possible to define an enumeration that   
   > has values not defined by any of its enumerators. If the enumerator-list   
   > is empty, the values of the enumeration are as if the enumeration had a   
   > single enumerator with value 0."   
      
   Yes, this defines the width. However, it *doesn't* require the compiler   
   to expose the values of b_min/b_max to the program, which is what I'm   
   looking for.   
   >> It would be very nice if these extrema and some of the other information   
   >> well known to the compiler but hidden by the language were exposed to   
   >> the user, instead of requiring manual maintenance of traits. The most   
   >> badly needed IMO is an array of strings containing the printnames of the   
   >> enumerates.   
   >   
   > I agree that deducing this information via some "reflection" mechanism   
   > would be useful.   
   >   
   >> There are two issues being confused here. My concern is functionality or   
   >> lack thereof. The second is the legacy of C and its lack of   
   >> functionality that would treat an enum as more than the collection of   
   >> #defines that was all C had at the beginning.   
   >   
   > I'm not sure that C++ will really change the good old C enums more than   
   > necessary, since you can use enum classes in C++11. For these enums the   
   > value-range is *exactly* identical to the value-range of the underlying   
   > type of the enum (which again can be queried via the trait   
   > std::underlying_type).   
      
   I do not suggest changing C enum; it is what it is and every language   
   has some burden of compatibility.   
      
   It's enum class that concerns me. Making the value range be the same as   
   for the underlying type is a mistake. The underlying type is a   
   representation, not a value set. It is common to see a three-valued enum   
   lodged by itself in a four-byte MMIO word. The value set lets the   
   compiler complain (usefully) if an invalid value is assigned, while the   
   representation (usefully) determines the physical layout in structs and   
   MMIO. These are different notions, and should not be conflated.   
      
   In my posting to the C++ group I advocated extending the representation   
   specification to any type, and admitting value sets for numeric objects.   
   It is as meaningful to be able to say:   
   	enum class num3 : short {1,3,5};   
   as to say:   
   	enum class enum3 : short {a,c,e};   
   They both physically occupy two bytes (or whatever short is; don't get   
   me started) and have delimited explicit value sets. Consequently:   
   	num3 numv1 = 3;	// good   
   	num3 numv2 = 4;	// error   
   	num3 numv3 = enum3::a;	// error   
   	enum3 ev1 = a;  // good   
   	enum3 ev2 = 1;	// error   
   and no, you would not be able to say:   
   	enum class numx : short {1 = 2, 3 = 4, 5 = 6};   
   even though the compiler could easily produce the mapping table :-)   
      
   I recognize that having the value set be co-extensive with that of the   
   underlying type means that a check at coercion is unnecessary. This is a   
   bug, not a feature. If the programmer is taking the time to write a   
   bounded type then he wants a bounded type and wants the run time to   
   verify that it is not out of range. A programmer who wants to avoid the   
   check and trust the rest of the program to be bug free can write:   
   	enum class bad : unsigned char {   
   		dummy0 = 0, a, b, c, dummy255 = 255};   
   and any reasonable compiler will omit the meaningless check. And yes, I   
   realize that unsigned char is not necessarily 0..255, but I asked you   
   not to get me started, remember :-)   
      
   > Note that *if* you are interested in traversing over the enumerators of   
   > an enum, you seem to make special assumptions, because there is no   
   > guarantee that they are ascending or unique in general. I   
   > emphasize this, because I think that your iteration facility depends on   
   > those guarantees. As you say above, the programmer typically defines them   
   > as the first and last values, but I think there exists more than one   
   > reasonable choice here. But you know that.   
      
   I do not iterate over the enumerators; I iterate over the value set.   
   If a language permits anonymous enumerates as C does (and it should not   
   IMO) then I will iterate over the anonymous ones too. If the language   
   does not permit anonymous enumerates then I will iterate only over the   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca