From: mailbox@dmitry-kazakov.de   
      
   On Sun, 14 Sep 2003 09:35:00 GMT, Don Geddis wrote:   
      
   >"Dmitry A. Kazakov" writes:   
>> procedure Read_Inc_Write (From, To : access Root_Stream_Type'Class) is
>>    X : Base_Type'Class := Base_Type'Class'Input (From);
>> begin
>>    Increment (X);
>>    Base_Type'Class'Output (To, X);
>> end Read_Inc_Write;
   >> The actual type of X is determined during reading. First the type tag is   
   >> read. Then according to the tag it dynamically dispatches to an   
   >> appropriate read subprogram to read the object.   
   >   
   >What is this "type tag" that is read?   
      
   Type information.   
      
   > I was talking about the user typing   
   >some characters in a terminal. If the user types "53", you ought to generate   
   >an integer. If the user types "53.4", you ought to generate a float. If   
   >the user types "(1 2)", you ought to generate some kind of list structure.   
      
Why? If the user types 53, it could mean file descriptor number 53,
corresponding to a serial line attached to port 53. And 53.4 should
never be a float, because my business application is bound to use
fixed-point arithmetic, so 53.4 has to be a rational number with a
decimal (note!) base. And "(1 2)" means, in my application, a
substring range from 1 to 2 but in reverse order. (:-))
      
Seriously, except in purely mathematical applications, "number" is not
a domain term. Applications do not deal with numbers, but with "age",
"weight", "number of nodes", etc. All of these are types whose values
merely happen to be mapped to numbers. I am very suspicious of mixing
them.
      
   >Where does the type tag come from?   
      
It is written when you output an object of the type. The example
illustrated stream I/O, i.e. I/O between typed programs.
      
For text I/O, due to its nature, nobody uses that; usually an
application knows exactly the type being input. So the code looks
like:
      
   X : Base_Type'Class := Read_From_Input;   
      
Here Read_From_Input is responsible for detecting the actual type. For
instance, it could return a numeric type of unlimited precision, to
deal with any possible number.
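A minimal sketch of such a reader, assuming hypothetical concrete
types returned by the hypothetical constructors Make_Integer and
Make_Decimal, both derived from Base_Type (all names here are
illustrative, not from the code above):

```ada
--  Sketch: choose the concrete type from the input's lexical form.
--  Make_Integer and Make_Decimal are hypothetical constructors
--  returning concrete types derived from Base_Type.
function Read_From_Input return Base_Type'Class is
   Line : constant String := Ada.Text_IO.Get_Line;
begin
   if Ada.Strings.Fixed.Index (Line, ".") = 0 then
      return Make_Integer (Line);   --  e.g. "53"
   else
      return Make_Decimal (Line);   --  e.g. "53.4"
   end if;
end Read_From_Input;
```

The specific return types are hidden behind Base_Type'Class, so the
caller's code is unchanged no matter which concrete type is chosen.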
      
   >> This code indeed may fail by raising an exception if the input is   
   >> illegal or because of an I/O fault. But it *cannot* fail because   
   >> 'Input, 'Output or Increment is not implemented. This is checked   
>> statically. If Base_Type did not have Increment defined, the above
>> wouldn't compile.
   >   
   >But that's exactly the case I outlined! Increment is _not_ defined on all   
   >objects; only on numbers. And you can't ensure that the Input will return   
   >only a number; the user might enter a list or symbol or some other non-number   
   >object.   
      
   Then it will be a user fault, because the application *expects* a type   
   having Increment. IF a *legal* input should allow other types THEN the   
   program is incorrect. OTHERWISE, the input is *illegal* and this will   
   be signalled by raising an exception.   
      
   >Hence my example! In a statically-typed language, you _can't_compile_ this   
   >code. But it runs just fine in a dynamically-typed language. And, if the   
   >developer writing the prototype happens to only type numbers when he runs it,   
   >he'll even get the correct answer!   
      
An incorrect program may happen to work. Does that mean it is better
to create programs using a random number generator? Why should a
knowingly illegal program be compilable?
      
   >> >But you seem to deny that there are _also_ programming tasks for which this   
   >> >is not the critical factor. And I claim that AI research tends to be this   
   >> >latter kind, where the challenge is finding _any_ algorithm that solves the   
>> >problem, not so much implementing the algorithm correctly once you know
>> >what it is.
   >>   
   >> Well, but if does not solve, how can I be sure that it is because of   
   >> the algorithm and not of a stupid programming fault?   
   >   
   >Of course you can't be sure. But you find in rapid prototyping that the code   
   >often runs without errors, but it exposes a design flaw in the assumptions you   
   >made as you architected the system. In other words, if the algorithm _is_   
   >incorrect, you can often realize that as you run it, and know for certain   
   >that it wasn't a programming fault.   
      
The problem is that many AI programs perform a huge amount of
computation over extremely large and tricky data structures. Consider
image processing or decision trees. Often these programs are also
concurrent. Such programs are practically impossible to debug using
any conventional debugging tools. This is the case where strict
compiler checks are extremely useful, especially in languages like
Ada, where I can have distinct integer types which cannot be mixed:
      
   type Index is range 1..1024;   
   type Pixel is range 0..255;   
      
   X : Index;   
   Y : Pixel;   
      
   X + Y; -- Compile error!   
      
If I know that no reasonable program should sum apples and oranges,
then I have the ability to tell the compiler to detect all such cases,
just by specifying types. I don't feel this price is too high.
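When mixing really is intended, Ada still allows it, but only through
explicit conversions that make the intent visible in the source (a
sketch continuing the declarations above):

```ada
--  Adding a Pixel offset to an Index must be spelled out explicitly;
--  the conversions document the programmer's intent:
X := Index (Integer (X) + Integer (Y));
```

The conversion may still raise Constraint_Error at run time if the
result leaves the range 1..1024, but accidental mixing is caught at
compile time.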
      
   >> >I was only criticizing _forced_ type declarations as always being a good   
   >> >idea. Sometimes they hamper rather than help.   
   >>   
>> It depends on what you mean by a type declaration. With type
   >> inference and class-wide types such declaration could be very vague   
   >> leaving much room for flexibility, yet discarding *knowingly* illegal   
   >> cases. What is wrong with that? I saw no algorithm applicable for   
   >> *any* type.   
   >   
   >The problem can be that it takes effort to figure out the exact class of   
   >types for which the algorithm applies. For example, in my Increment example   
   >above, perhaps at the beginning you're only thinking about positive integers.   
   >It's nice to write the code just keeping in mind your current concrete case.   
   >   
   >But then it turns out that the same code also works for negative numbers,   
   >and rationals, and floats, and complex numbers. Extra bonus for you!   
   >   
   >Before you've written the algorithm, though, to force you to decide ahead of   
   >time that "all numbers" is the proper set of types, is putting effort in the   
   >wrong place at the wrong time.   
      
   I agree with that. But note that the only thing said about   
   Base_Type'Class is that it has Increment. This automatically covers   
   all numeric types. So nothing is lost. You can start with an abstract   
   base type specifying as little as you need:   
      
type Base_Type is abstract tagged ...;
procedure Increment (X : in out Base_Type) is abstract;
      
Then, as you investigate new algorithms, you can either add new
methods to Base_Type or, even better, derive new types from it which
support a richer set of operations. In the end, you can even analyze
the resulting type tree and extract much interesting information from
it.
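For instance (a hypothetical sketch; Counter_Type and its operations
are invented for illustration):

```ada
--  A concrete type derived from the abstract base: it implements
--  the required Increment and adds a richer operation of its own.
type Counter_Type is new Base_Type with record
   Value : Natural := 0;
end record;

procedure Increment (X : in out Counter_Type);  --  overrides the abstract one
procedure Reset (X : in out Counter_Type);      --  new, richer operation
```

Code written against Base_Type'Class keeps working unchanged; code
that needs Reset simply names Counter_Type (or its own class) instead.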
      
   >> In mathematics you always specify the class of objects you   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   