   From: cr88192@hotmail.com   
      
   "Le Chaud Lapin" wrote in message   
   news:dafe9b6f-baaa-477f-a12e-508ee46788f6@b2g2000yqi.googlegroups.com...   
   On Nov 16, 10:27 pm, "BGB / cr88192" wrote:   
   > "Char Jackson" wrote in message   
   > what could help the cause of IPv6?...   
   > maybe the people who run ISPs and write router firmware bothering to   
   > support   
   > it.   
      
   <--   
   Many prominent researchers have been saying for years that the lack of   
   adoption of IPv6 is the fault of ISPs. My personal view is that IPv6   
   is a bit hackish, and ISPs legitimately do not understand what they get   
   for integrating it. The "documentation" is necessarily monstrous,   
   spread out over many RFCs, but the problem is not so much the   
   documentation as the underlying model, and a lack of new   
   applications that ride on the model.   
      
   IPv6 also suffers from "Lack of Sufficient Specificity" (LOSS). This   
   is what happens when a group of designers does not yet have a clear   
   understanding of a problem or its solution. Their approach is to   
   provide egregiously copious amounts of "flexibility" in the   
   specification. Then, the specification allows so much flexibility that   
   the size of the implementation space explodes combinatorially. The   
   likelihood of incompatibility rises to certainty, unless every   
   implementation includes all reachable points in the system space,   
   which would then, by definition, remove the flexibility originally   
   sought, lest the end-to-end principle be compromised. As an example,   
   consider the possible modes of IPv6 security. It appears there is   
   great flexibility, until one realizes that the set intersection of   
   supported modes must not be empty for two systems to communicate,   
   which implies that, if every system is to be able to communicate with   
   every other system, the universal set of modes must be present in each   
   system. So lack of specificity in a specification is not virtuous.   
   The "flexibility" is actually a disservice to the implementor.   
   -->   
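   the set-intersection argument above can be sketched concretely. this   
   is a hypothetical illustration (the mode names are made up, not actual   
   IPv6 security modes): two systems can talk only if their supported-mode   
   sets intersect, and the only set guaranteed to intersect every possible   
   choice is the whole universe of modes.   

```python
from itertools import combinations

# made-up mode names, for illustration only
UNIVERSE = {"AH", "ESP-transport", "ESP-tunnel", "none"}

def can_talk(a, b):
    # two systems interoperate iff they share at least one mode
    return bool(a & b)

# two implementations that each picked a "flexible" subset:
sys1 = {"AH", "ESP-transport"}
sys2 = {"ESP-tunnel", "none"}
print(can_talk(sys1, sys2))   # False: flexibility without specificity

# which subsets are guaranteed to intersect every non-empty subset?
all_subsets = [set(c) for r in range(1, len(UNIVERSE) + 1)
               for c in combinations(UNIVERSE, r)]
guaranteed = [s for s in all_subsets
              if all(can_talk(s, t) for t in all_subsets)]
print(guaranteed == [set(UNIVERSE)])  # True: only the full set qualifies
```

   so universal connectivity forces every implementation to carry every   
   mode, which is the point made above.   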
      
   granted, I don't believe IPv6 is exactly perfect, but alas it is probably   
   still better than the world of multi-layered NAT we are likely to soon be   
   running into...   
      
      
   > granted, personally a less-overly-large address format would have been   
   > preferable (say, going to 64 bits), since likely this would have had less   
   > bandwidth impact. maybe if it had been designed with a little nicer of a   
   > migration path, things would have also been better.   
      
   <--   
   I remember the week when 128-bit was finally chosen over 64-bit. It   
   seemed to me that the primary reason, more than fundamental technical   
   considerations, was that "One cannot go wrong with 128, even if we do   
   not have everything figured out yet." IPv6 candidates like SIP, TUBA,   
   and NIMROD were still being evaluated. The 128-bit address space of   
   IPv6 was very much in the early stages of its design, even though 128   
   bits had already been chosen.   
      
   Surely, past a certain size, it does not matter so much how big   
   the address is (256 bits, anyone?), but what the address represents -   
   the contextual model in which it fits. It seems that too much   
   attention has been given to the size of the address, and not enough to   
   what the addresses represent. The contextual model, one that is   
   soundly theoretical, whose regular form is readily apparent   
   retrospectively, has not yet been devised.   
      
   I think, once someone finds this model for computer networking, they   
   can then ask the question "Ok, now that we understand what is going   
   on, how big should the addresses be?", and the model itself   
   will reveal the answer. My gut feeling is that the answer is "64".   
   -->   
      
      
   more or less agreed.   
   for anything where addresses are assigned, 64 bits is likely more than   
   enough, and 128 likely just wastes bandwidth...   
      
   granted, if IPv6 were a self-organizing system based around the large-scale   
   use of random number generators, 128 bits would make more sense, but alas it   
   is not...   
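   a quick back-of-envelope check of that intuition (my numbers, not from   
   the post), using the standard birthday-bound approximation for the   
   collision probability of randomly self-assigned addresses:   

```python
# birthday bound: n random picks from a b-bit space collide with
# probability roughly n^2 / 2^(b+1)  (approximation, valid when small)
def collision_prob(n, bits):
    return n * n / 2 ** (bits + 1)

n = 10**10  # ten billion self-assigned hosts, an illustrative figure
print(collision_prob(n, 64))    # > 1: collisions essentially certain at 64 bits
print(collision_prob(n, 128))   # ~1.5e-19: negligible at 128 bits
```

   i.e. 128 bits really only pays for itself if addresses are drawn at   
   random rather than assigned, which matches the point above.   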
      
      
   > for example, rather than completely redesigning the protocol, a hack could   
   > have been added to expand the address space while still keeping raw IPv4   
   > packets as a de-facto transport (only, with plans in place to eventually   
   > dissolve it in a piecewise manner), such that the network would have been   
   > up   
   > and running quickly, rather than very slowly and in bits and pieces.   
      
   <--   
   Yes, this is possible. Since routing is fundamental, and routing today   
   requires IPv4 headers [mostly], then any non-disruptive solution will   
   need to piggy-back on IPv4 in some way.   
      
   1. One could create an entirely new protocol stack, then use IPv4 in   
   "bitch mode" (tunneling).   
   2. One could skip tunneling and embed IPv4 addresses in IPv6-like   
   addresses, a kind of hybrid addressability that does not explicitly   
   use tunneling but still uses IPv4.   
      
   The distinction is subtle, but important. In the former case, there is   
   a truly new protocol stack that is using IPv4 tunnels as point-to-   
   point links. No link, no network. If the links were replaced by   
   point-to-point Ethernet links, IPv4 would be eliminated from the   
   network, so the intermediate routers must be routing the new protocol.   
      
   In the latter case, IPv4 remains endemic to the new stack. This   
   distinction becomes pertinent when trying to solve other networking   
   problems, like mobility and multicast. #1 is the method by which   
   mobility/multicast would be unimpeded by the use of IPv4, whereas #2   
   would prevent [regular] mobility and [regular] multicast.   
   -->   
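   option #2 above already exists in one form: the IPv4-mapped range   
   ::ffff:0:0/96 from RFC 4291 embeds a v4 address inside a v6 address.   
   a minimal sketch using Python's stdlib `ipaddress` module:   

```python
import ipaddress

# embed an IPv4 address in the IPv4-mapped IPv6 range (RFC 4291, 2.5.5.2):
# 80 zero bits, then 0xffff, then the 32-bit v4 address
v4 = ipaddress.IPv4Address("192.0.2.1")
hybrid = ipaddress.IPv6Address("::ffff:" + str(v4))

print(hybrid.ipv4_mapped)   # 192.0.2.1 -- the embedded v4 address is recoverable
```

   which is exactly why this style of addressing keeps IPv4 "endemic to   
   the new stack": the v4 address never stops being load-bearing.   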
      
   yeah.   
      
   what I had imagined essentially combined both options.   
      
      
   I guess an issue at the present time, with the current NAT mess, is that   
   directly basing a network on a v4 address is problematic (as in the 6to4   
   case), but tunneling is lame (it is a hassle to set up, is not very   
   reliable, and, more subtly, Windows may be stupid and end up trying to run   
   its local v4 traffic over the tunnel, interfering with one's ability to   
   access stuff on the LAN, and also making internet access slow, requiring   
   one to fiddle with the network settings, ...).   
      
      
   an idle thought is that a hybrid strategy could be used, where there are   
   "tunnel routers", which sit around and keep track of where networks are in   
   v4 space (similar to a broker), but traffic may be routed indirectly (if   
   possible, this may depend some on the specific NAT routers).   
      
      
   for example, a local host (behind NAT) connects to a tunnel router, which   
   opens up a UDP port on a global v4 address; packets sent to that port reach   
   the "router" (and it keeps track of this mapping).   
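   the registration step described above could be sketched roughly as   
   follows (a hypothetical loopback-only demo; the message format and   
   names are made up for illustration):   

```python
import socket

# the relay socket stands in for the tunnel router's global v4 UDP port
relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
relay.bind(("127.0.0.1", 0))
relay.settimeout(2)

# the client stands in for the NATed local host
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.settimeout(2)

# 1. the NATed host "connects": the relay records the source IP:port it
#    sees (behind a real NAT this would be the router's public mapping)
client.sendto(b"REGISTER net-A", relay.getsockname())
payload, seen_addr = relay.recvfrom(1024)
registry = {b"net-A": seen_addr}   # network -> observed IP:port pair

# 2. later traffic addressed to that network is bounced back out to the
#    recorded IP:port pair
relay.sendto(b"hello net-A", registry[b"net-A"])
data, _ = client.recvfrom(1024)
print(data.decode())               # hello net-A

client_addr = client.getsockname()
relay.close()
client.close()
```

   a real version would need keepalives to hold the NAT mapping open, but   
   the broker-style bookkeeping is just this network->IP:port table.   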
      
   sending traffic to this network may hit this port, and the traffic may be   
   bounced to the outgoing IP:port pair. however, maybe this could be passed   
   along (in   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   