On 12 Feb 2009, at 15:45, Andrew McMillan wrote:
On Thu, 2009-02-12 at 14:45 +1300, Ian Batterbee wrote:
But wasn't there a big discussion at NZNOG about how applications shouldn't know anything (or care) about the underlying communication protocols being used?
Yes, and wouldn't it be nice if *libc* didn't want applications to make different calls when using IPv6 addresses vs. IPv4 addresses?
Surely the bits of libc that are address-family dependent are those that remain entrenched for historical reasons and rely upon data structures that carry endpoint addresses in 32-bit words. That being the case, requiring the use of different APIs seems fairly understandable.

It'd be glorious if there were a non-lossy mapping from 128-bit addresses into 32-bit holes in data structures, but if we could do that then we could also encode infinite amounts of data into a single bit and travel backwards in time, so perhaps the libc optimisation would not seem so exciting, in context.

Joe
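(For what it's worth, the family-agnostic path does exist alongside the legacy one: getaddrinfo() hands back a ready-to-use sockaddr and lets the caller avoid ever touching in_addr vs in6_addr directly. A minimal sketch, using numeric addresses so no DNS lookup is involved; the helper name family_of is just for illustration:)

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Resolve a numeric address string without caring which family it is.
   getaddrinfo() fills in ai_family, ai_socktype and a ready-made
   sockaddr, so the caller never unpacks the address words itself. */
static int family_of(const char *host)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        /* v4 or v6, whichever fits */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;    /* no DNS lookup needed */

    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return -1;
    int fam = res->ai_family;
    freeaddrinfo(res);
    return fam;
}

int main(void)
{
    /* The same call handles both families transparently. */
    printf("127.0.0.1 -> %s\n",
           family_of("127.0.0.1") == AF_INET ? "AF_INET" : "other");
    printf("::1       -> %s\n",
           family_of("::1") == AF_INET6 ? "AF_INET6" : "other");
    return 0;
}
```

(The legacy gethostbyname()/sockaddr_in path is the entrenched 32-bit-word interface being complained about; getaddrinfo() is its address-family-independent replacement.)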