So there’s been this really interesting thread on nanog of late, about allocation in the IPv6 address space, which hinges on a very strange question: can you be too wasteful with something that seems like it shouldn’t run out?
The background, for the non-technically inclined, is this: at some point in the future, we are going to have to abandon the addressing scheme that has brought you the Internet so far (IPv4) and transition to a new scheme (IPv6), because we’re running out of addresses. Most people are aware that the name of a particular machine on the network is just an alias for a numeric address; it’s those numeric addresses we’re running out of, thanks to the limitations of the addressing scheme. IPv4 has a theoretical maximum of 4,294,967,296 addresses; I say “theoretical” because large chunks of the address space are reserved and can’t actually be assigned.
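If you want to see where that number comes from, here’s a quick back-of-the-envelope sketch in Python; the reserved ranges listed are just a few illustrative examples I’ve picked out, not a complete accounting of everything that’s off-limits.

```python
# IPv4 addresses are 32-bit numbers, so the theoretical ceiling is 2**32.
ipv4_total = 2**32
print(f"IPv4 theoretical maximum: {ipv4_total:,}")  # 4,294,967,296

# Large reserved ranges (private space, multicast, loopback, etc.) never
# reach the public Internet, so the usable pool is noticeably smaller.
reserved_examples = {
    "10.0.0.0/8 (private)":     2**24,
    "172.16.0.0/12 (private)":  2**20,
    "192.168.0.0/16 (private)": 2**16,
    "224.0.0.0/4 (multicast)":  2**28,
    "127.0.0.0/8 (loopback)":   2**24,
}
print(f"A few reserved chunks alone: {sum(reserved_examples.values()):,}")
```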
It’s a little bit like the problem we all had about a decade ago, when we discovered we were running out of phone numbers because suddenly everyone had cell phones and fax machines and modem lines. The difference is that we can’t just open up a whole new whack of prefixes by changing the area codes and introducing ten-digit dialing. In Internet land, we’ve hacked around this problem for a long time, and pushed the day of reckoning back a couple of times with elegant and not-so-elegant solutions, but we’re going to have to face the music eventually and deploy the new addressing scheme. Wikipedia, uncharacteristically, has a nice summary of the scope of the problem.
IPv6 offers the possibility of having 3.4 x 10^38 hosts. That’s a lot of addresses. The way it works now is that when you call up your ISP to provision service to your house, you typically get an address. In IPv6-land, we can basically allocate you, as an individual customer, something like a current Internet’s worth of addresses for you to do with as you please. These wouldn’t be private or reserved addresses; they’d be globally routeable and globally accessible, and things like NAT and hiding the number of machines hooked up to your connection wouldn’t be necessary anymore. This has some profound implications.
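To put those two numbers side by side, here’s a rough sketch. The /64-per-customer figure is an assumption on my part, since delegation sizes vary, but it’s a common one, and even that single block dwarfs today’s entire Internet.

```python
# The full IPv6 space is 128 bits wide, hence the 3.4 x 10^38 figure.
ipv6_total = 2**128
print(f"IPv6 total: {ipv6_total:.3e}")  # ~3.403e+38

# A single /64 delegation (a common size for one customer site) holds
# 2**64 addresses, which works out to about four billion copies of the
# entire 32-bit IPv4 Internet.
one_slash_64 = 2**64
ipv4_internets_per_slash_64 = one_slash_64 // 2**32
print(f"IPv4 Internets inside one /64: {ipv4_internets_per_slash_64:,}")
```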
The nanog thread I linked to has a simple question at its core: given the exceptional size of the IPv6 address space, is it in fact a good idea to hand out that many addresses in one go? Should we be conserving addresses by not handing out a couple billion to people who might use one or a dozen individual addresses? IPv4 worked like this for a while, at the beginning, when we handed out huge blocks of IPv4 space to people who never actually ended up using them (see visual example), and the various registries haven’t really worked very hard to reclaim them. We’d have the same problems under IPv6, too, but 3.4 x 10^38 is a really big number, staggeringly big.
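For a sense of just how staggering, here’s a rough illustration of my own, assuming every customer on Earth got a /48 (one of the more generous delegation sizes people argue about) and glossing over the portions of the space that are reserved:

```python
# There are 2**48 possible /48 delegations in the full 128-bit space.
# (Global unicast is actually carved out of a smaller region, so this
# overstates things a bit, but not in a way that changes the conclusion.)
total_slash_48s = 2**48
print(f"Possible /48 delegations: {total_slash_48s:,}")  # 281,474,976,710,656

# Hand one to each of ten billion customers and see what fraction is gone.
customers = 10_000_000_000
print(f"Fraction consumed: {customers / total_slash_48s:.2e}")  # ~3.55e-05
```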
If you accept the premise that running out of addresses will take an absurdly long time, and/or require networking many many many things in our lives that may or may not come to fruition (and even then it’ll still take an absurdly long time), do you support giving people way more than they’d need? There are technical arguments for and against this strategy, but the philosophical question remains: given a really large resource, where you’d have to be staggeringly stupid and unlucky over a shockingly long period of time to run out of it, is there such a thing as being wasteful with its allocation?