----------------------------------------------------------------------

Date: 11 Aug 1996 21:03:05 -0700
From: Kent Crispin
Subject: Re: CONSENSUS PN TOP LEVEL DOMAIN NAMES

Dan Busarow allegedly said:
>
> On Sun, 11 Aug 1996, Kent Crispin wrote:

[snip]

> > This would be a potentially enormous, high-transaction rate database
> > that contained the entire second-level domain name structure.
>
> Let's say there are 150 new TLDs (per Postel draft, I think fewer
> would be prudent) and let's say each one is wildly successful,
> rivaling COM.
>
>   150 TLDs * 400000 domains/TLD -> 60M domain entries.
>
> Granted this is a big number but it is nowhere near the limits of
> current database technology.  I have customers with more records in
> databases running on 486's with 0.1 second response time to queries.

Yes, good point.  Undoubtedly, however, your customers run on a private
network.  In our case, there is another wrinkle to consider.  Perhaps
you could add some insight:

This database needs some kind of authorization control -- it can't just
be on-line, listening to port 12345, for example, because then anyone
could reserve any name in any TLD.  Obviously you wouldn't want
plain-text passwords flying around the net, either.  The most
straightforward model would be for each registry for a TLD to be given
a public/private key pair to authorize reservations for that TLD.
However, PK solutions with decent key sizes demand significant CPU,
and a denial-of-service attack is quite easy to imagine...

I believe that a private network is out of the question.  Any ideas
about how to handle this problem?

Perhaps we should move this to the shared-tld list
(shared-tld@higgs.net).  There is considerably less noise there...

- --
Kent Crispin                            "No reason to get excited",
kent@songbird.com,kc@llnl.gov           the thief he kindly spoke...
PGP fingerprint:   B6 04 CC 30 9E DE CD FE 6A 04 90 BB 26 77 4A 5E

----------------------------------------------------------------------

Date: 11 Aug 1996 21:03:35 -0700
From: Dan Busarow
Subject: Re: CONSENSUS PN TOP LEVEL DOMAIN NAMES

On Sun, 11 Aug 1996, Kent Crispin wrote:
> plain-text passwords flying around the net, either.  The most
> straightforward model would be for each registry for a TLD to be given
> a public/private key pair to authorize reservations for that TLD.
> However, PK solutions with decent key sizes demand significant CPU,
> and a denial-of-service attack is quite easy to imagine...

Two methods come to mind.

1) A virtual private network using on-the-fly encryption.  Off the
   shelf hardware and virtually no performance hit at T1.

2) Digital certificates.  Both server and client auth are now
   available, but I don't really know how much CPU they take to
   authenticate.  Secure web servers are reported to be about three
   times slower than normal ones, but I would guess that the
   transaction could be completed in the clear after an authorization
   handshake.  The time interval should be too short for a connection
   to be hijacked.  Any coderpunks here?

> Perhaps we should move this to the shared-tld list
> (shared-tld@higgs.net).  There is considerably less noise there...

I thought that's where we were, oops, better pay more attention to
headers.

Dan
- --
 Dan Busarow                                            714 443 4172
 DPC Systems                                            dan@dpcsys.com
 Dana Point, California   83 09 EF 59 E0 11 89 B4 8D 09 DB FD E1 DD 0C 82
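To make the "authorization handshake, then transact in the clear" idea
above a little more concrete, here is a minimal sketch under assumed
mechanics: a registry proves knowledge of a shared secret once, gets a
short-lived session token, and subsequent reservation requests carry
only that token, so the expensive step happens once per session rather
than per transaction.  All names, the token format, and the use of
HMAC-SHA256 are illustrative choices only, not part of any proposal on
the table.

    # Illustrative sketch only -- hypothetical names, not a worked-out
    # protocol.  One authentication handshake, then cheap per-request
    # checks against a short-lived session token.
    import hashlib
    import hmac
    import os
    import time

    SESSION_LIFETIME = 300  # seconds; assumed value

    class RegistryServer:
        def __init__(self, shared_secrets):
            # shared_secrets: {registry_id: secret bytes}, exchanged
            # out of band when the registry signs up.
            self.shared_secrets = shared_secrets
            self.pending = {}    # registry_id -> outstanding challenge
            self.sessions = {}   # token -> (registry_id, expiry time)

        def issue_challenge(self, registry_id):
            # Server issues a fresh random challenge for this registry.
            nonce = os.urandom(16)
            self.pending[registry_id] = nonce
            return nonce

        def finish_handshake(self, registry_id, response):
            # Client proves knowledge of the shared secret by MACing
            # the challenge with it.
            nonce = self.pending.pop(registry_id)
            expected = hmac.new(self.shared_secrets[registry_id], nonce,
                                hashlib.sha256).digest()
            if not hmac.compare_digest(expected, response):
                raise PermissionError("authentication failed")
            token = os.urandom(16).hex()
            self.sessions[token] = (registry_id,
                                    time.time() + SESSION_LIFETIME)
            return token

        def reserve(self, token, domain):
            # Cheap per-request check: valid, unexpired session token.
            registry_id, expiry = self.sessions.get(token, (None, 0))
            if registry_id is None or time.time() > expiry:
                raise PermissionError("invalid or expired session")
            # ... check the registry is delegated this TLD, then record
            # the reservation ...
            return f"{domain} reserved by {registry_id}"

    # Example use (both sides hold `secret`, exchanged out of band):
    secret = b"example shared secret"
    server = RegistryServer({"lwr-a": secret})
    challenge = server.issue_challenge("lwr-a")
    response = hmac.new(secret, challenge, hashlib.sha256).digest()
    token = server.finish_handshake("lwr-a", response)
    print(server.reserve(token, "example.web"))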
----------------------------------------------------------------------

Date: 11 Aug 1996 21:03:47 -0700
From: "David R. Conrad"
Subject: Re: Coordination protocols (was Lightweight vs. heavyweight registries)

Hi,

>There are some unique characteristics to the problem that might make
>it tractable -- in particular, the fact that collisions should be
>almost totally non-existent -- what are the odds that two separate
>parties are asking for exactly the same domain name at exactly the
>same time?

I'd say pretty high -- you're dealing with a competitive environment
where the various players could conceivably make gobs of money if they
were able to obtain a domain before one of their competitors.  I guess
you have to treat this like a security problem -- everyone is out to
try to break the system...

>You could, for example, just forget coordination altogether, assign
>the domain name, and eventually get back an error from DNS itself
>saying the name had been assigned already.

Ah, the chaotic approach :-)

>Use a timestamp to resolve such conflicts, with the caveat that any
>name that has been in DNS for a week takes precedence, regardless of
>the date of initial registration.  Introduce a week's delay between
>initial registration and finalization of the deal.
>
>What do you think of such a scheme?

Not sure I see what the week's wait gets you, and wouldn't you have to
worry about the global clock problem -- I guess you'd have to assume
people would move their clocks back in order to obtain a domain they
wanted (or did I misunderstand your idea?).  One way of avoiding having
a global clock would be to use random numbers and some
non-deterministic algorithm in the case of collisions...

>If the LWRs are run by a centralized entity they become a much bigger
>deal, because the centralized entity then essentially maintains a
>database for *all* domain names.  Such a database could become very
>large.

Unless you want to redeploy the entire "whois" installed base and
habits (arguably a good idea, but I'm trying to keep things simple), I
am assuming the final recording of the domain information is done with
another entity, e.g., the existing Internet registries -- LWRs would be
responsible for updating the appropriate registry as part of their
service.

In any event, to deal with scaling issues, one approach could be to
assume a significant number of "D" servers, with the IANA delegating a
few domains to one, a few others to another, etc.  E.g., say you have
100 new TLDs and 10 "D" servers; you could allocate 10 domains to each
"D" server.  If the concept of "for the good of the Internet" is truly
dead, the IANA could put out RFQs and the "D" operators could derive
some (fixed?) level of profit simply by operating the servers.

Given that the registration data is maintained at one of the existing
registries, if an LWR goes out of business, no information is lost.
Similarly, if a "D" operator goes out of business, since D only
provides a mutual exclusion and zone file building service, no
information would be lost, with the possible exception of the most
recent version of the zone file.

I have to think about the heavy-weight registries a bit more...

Cheers,
- -drc
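A minimal sketch of the mutual-exclusion role described above for a
"D" server, assuming nothing beyond what is stated: the server knows
which TLDs the IANA has delegated to it and grants each second-level
name to the first LWR that asks for it; the authoritative registration
data is assumed to live at an existing registry, so losing a D server
loses at most the latest zone snapshot.  Class and method names here
are hypothetical.

    # Sketch of a "D" server that only does mutual exclusion for the
    # TLDs delegated to it.  Illustrative only; the real registration
    # data is assumed to live at an existing Internet registry.

    class DServer:
        def __init__(self, delegated_tlds):
            self.delegated_tlds = set(delegated_tlds)  # e.g. {"web", "info"}
            self.reserved = {}                         # "name.tld" -> LWR id

        def reserve(self, name, tld, registrar):
            if tld not in self.delegated_tlds:
                raise ValueError(f"{tld} is not delegated to this server")
            key = f"{name}.{tld}"
            # First request wins; any later request for the same name
            # is reported as a collision with the existing holder.
            if key in self.reserved:
                return False, self.reserved[key]
            self.reserved[key] = registrar
            return True, registrar

    # e.g. 100 new TLDs spread over 10 D servers, 10 TLDs each:
    d1 = DServer(["web", "info"])
    print(d1.reserve("example", "web", "lwr-a"))  # (True, 'lwr-a')
    print(d1.reserve("example", "web", "lwr-b"))  # (False, 'lwr-a') -- collision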
----------------------------------------------------------------------

Date: 11 Aug 1996 21:03:58 -0700
From: "Dave Collier-Brown"
Subject: Re: WG charter timeline

Kent Crispin wrote:
>I also added a sentence about motivation.  Please, more comments!

Ok, but it's minor (:-))

Michael Dillon, Kent and a cast of thousands said:
>The Shared Top Level Domains Working Group is concerned with the
>technical and logistic requirements of creating shared domain name
>registration databases, and the distribution of updates to and the
>administration of delegated top level domains by multiple domain name
>registries.  The motivation for this concern is to minimize
>centralized management of all components of the name space.

I think this is useful as it directs attention to the mechanisms (zone
transfer???).  It was triggered by John R Levine's comment about the
number of actual DNS servers a shared domain might well have!

- --dave
- --
David Collier-Brown,    | Always do right.  This will gratify some
185 Ellerslie Ave.,     | people and astonish the rest. -- Mark Twain
Willowdale, Ontario     | davecb@hobbes.ss.org, unicaat.yorku.ca
N2M 1Y3. 416-223-8968   | http://java.science.yorku.ca/~davecb

----------------------------------------------------------------------

Date: 11 Aug 1996 21:14:07 -0700
From: johnl@iecc.com (John R Levine)
Subject: Re: authorization

>This database needs some kind of authorization control --
>I believe that a private network is out of the question.  Any ideas
>about how to handle this problem?

Given that the set of valid clients will change slowly, you could
probably get away with requiring that each client connect only from a
fixed IP address.

Alternatively, each client could have a shared secret key with the
server and use that for validation, e.g. for each message sent, create
an MD5 checksum of the message XORed with the client's secret key and
append that to the message.  Then key exchange need only be done once,
when the client signs up for the server.

- --
John R. Levine, IECC, POB 640 Trumansburg NY 14886 +1 607 387 6869
johnl@iecc.com "Space aliens are stealing American jobs."
  - Stanford econ prof

----------------------------------------------------------------------

Date: 11 Aug 1996 23:01:27 -0700
From: Kent Crispin
Subject: Re: authorization

John R Levine allegedly said:
> >This database needs some kind of authorization control --
>
> >I believe that a private network is out of the question.  Any ideas
> >about how to handle this problem?
>
> Given that the set of valid clients will change slowly, you could
> probably get away with requiring that each client connect only from a
> fixed IP address.

That would make spoofing the IP address far too tempting.  In good
conscience I don't think I could recommend it.

> Alternatively, each client could have a shared secret key with the
> server and use that for validation, e.g. for each message sent, create
> an MD5 checksum of the message XORed with the client's secret key and
> append that to the message.  Then key exchange need only be done once
> when the client signs up for the server.

Something like this sounds much, much better.  I take it you mean
send(msg+md5(xor(msg,key))), rather than send(msg+xor(md5(msg),key)).
The latter would trivially reveal the key...

- --
Kent Crispin                            "No reason to get excited",
kent@songbird.com,kc@llnl.gov           the thief he kindly spoke...
PGP fingerprint:   B6 04 CC 30 9E DE CD FE 6A 04 90 BB 26 77 4A 5E
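For concreteness, a small sketch of the construction as Kent reads it,
send(msg + md5(xor(msg, key))).  The helper names below are made up,
and a present-day design would reach for HMAC (RFC 2104) rather than a
hand-rolled keyed checksum; the closing comment spells out why the
other ordering, send(msg + xor(md5(msg), key)), gives the key away.

    # Sketch of the keyed-checksum idea discussed above.  Hand-rolled
    # and illustrative only -- a modern design would use HMAC instead.
    import hashlib
    from itertools import cycle

    def xor_bytes(msg: bytes, key: bytes) -> bytes:
        # XOR the message with the key, repeating the key as needed.
        return bytes(m ^ k for m, k in zip(msg, cycle(key)))

    def seal(msg: bytes, key: bytes) -> bytes:
        # send(msg + md5(xor(msg, key))) -- the ordering Kent prefers.
        return msg + hashlib.md5(xor_bytes(msg, key)).digest()

    def verify(packet: bytes, key: bytes) -> bytes:
        # Recompute the 16-byte MD5 tag and compare.
        msg, tag = packet[:-16], packet[-16:]
        if hashlib.md5(xor_bytes(msg, key)).digest() != tag:
            raise ValueError("bad checksum")
        return msg

    # The alternative, send(msg + xor(md5(msg), key)), is weaker: an
    # eavesdropper can compute md5(msg) from the cleartext message and
    # XOR it with the appended tag to recover the key directly.

    key = b"registry shared secret"
    pkt = seal(b"RESERVE example.web lwr-a", key)
    print(verify(pkt, key))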