Amazon has announced a new authoritative DNS service – Route 53.
Sign-up is straightforward – click a few buttons on aws.amazon.com, and a few moments later you’ll have an email confirming your access to the service. If you dig into the Getting Started Guide, you’ll note that “Part of the sign-up procedure involves receiving a phone call and entering a PIN using the phone keypad”; however, that wasn’t necessary for me. Perhaps it’s only for new AWS accounts?
There is no user interface in the AWS Console, although there are indications one is on its way. The Route 53 developer tools are fairly bare-bones at this point – four Perl scripts. Those scripts require relatively uncommon Perl modules that aren’t in the default Ubuntu (Lucid) repositories, although they are available through CPAN.
However, the third-party Boto Python interface to Amazon Web Services already includes Route 53 support, and other tools are presumably adding it quickly, if they don’t have it already.
Using the Perl tools, I created a zone for an example domain – gearlister.org – and was given four name servers:
- ns-1945.awsdns-51.co.uk
- ns-39.awsdns-04.com
- ns-690.awsdns-22.net
- ns-1344.awsdns-40.org
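Under the hood, the Perl scripts are thin wrappers around Route 53’s REST API, and creating a zone is a single POST of an XML document. A sketch of what that request body looks like, assuming the launch-era 2010-10-01 API version (the caller reference is just any unique string you supply):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- POST https://route53.amazonaws.com/2010-10-01/hostedzone -->
<CreateHostedZoneRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
   <Name>gearlister.org.</Name>
   <!-- CallerReference makes the request idempotent; any unique string works -->
   <CallerReference>gearlister-2010-12-06</CallerReference>
   <HostedZoneConfig>
      <Comment>Test zone for Route 53</Comment>
   </HostedZoneConfig>
</CreateHostedZoneRequest>
```

The response includes the delegation set – the four name servers listed above.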
(See bottom of post for an update running similar tests on OpenDNS.)
Methods: I searched Google for keywords that I believed fell somewhere between obscure and common and collected the first ten hostnames printed on the screen. I then used local installations of dig to query a collection of DNS servers for the hostnames’ A records and collected the response times. The different resolvers used were:
- A local BIND installation (127.0.0.1, cache empty) with Comcast Internet connectivity;
- A Comcast DNS server via Comcast Internet connectivity;
- My employer’s internal caching DNS;
- Google Public DNS via my employer’s Internet connectivity (mostly Level 3);
- Google Public DNS via Comcast; and
- Google Public DNS via an Amazon EC2 instance in us-east-1a.
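The response times come straight from dig’s own report – the “Query time” line in its output. A minimal Python sketch of that extraction step (the helper name and the sample output below are illustrative, not from these runs):

```python
import re

def query_time_ms(dig_output: str) -> int:
    """Extract the ';; Query time: N msec' value from dig's output."""
    match = re.search(r";; Query time: (\d+) msec", dig_output)
    if match is None:
        raise ValueError("no query time found in dig output")
    return int(match.group(1))

# Illustrative fragment of dig output (not a real measurement):
sample = """\
;; ANSWER SECTION:
gearlister.org.  300  IN  A  192.0.2.10

;; Query time: 41 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
"""

print(query_time_ms(sample))  # 41
```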
Anticipating a bimodal distribution of results, I assumed high-latency responses were cache misses and low-latency responses were cache hits, and categorized the results accordingly.