What’s Wrong With OpenDNS

First off, before I get to anything that’s wrong, there’s a lot that’s right about OpenDNS: it’s a simple, effective, and flexible tool for content filtering. As a company, they’re trying to improve the state of DNS for end users with tools like DNSCrypt. You can’t beat their entry-level price – free. Their anycast network is good, especially if you’re on the west coast of the United States, like I am (in fact, for me it outperforms the surely much larger Google network behind 8.8.8.8 and 8.8.4.4). Their dashboard is pretty neat, too.

Second, let’s get the most common complaint about OpenDNS – one that isn’t going to be discussed here any further – out of the way: their practice of returning ads on blocked or non-existent sites in your browser, via a bogus A RR of 67.215.65.132 (unless you go with one of their paid options). OpenDNS is upfront about doing this, so you can decide whether the trade-off is worthwhile before you sign up – and you can quit using them any time you want.
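
If you want to see this behavior for yourself, something like the sketch below works; it assumes your system resolver is pointed at OpenDNS (208.67.222.222 / 208.67.220.220), and the random hostname is only illustrative:

    import socket
    import uuid

    # A minimal check, assuming the machine's configured resolver is OpenDNS:
    # look up a hostname that almost certainly does not exist. A resolver that
    # rewrites NXDOMAIN answers returns an A record (OpenDNS has used
    # 67.215.65.132) where an honest resolver returns an error.
    bogus = "nxdomain-check-" + uuid.uuid4().hex + ".example.com"

    try:
        print("NXDOMAIN redirected to", socket.gethostbyname(bogus))
    except socket.gaierror:
        print("Got a proper name-resolution error for", bogus)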

Those two preliminaries covered, here’s a case study of what I think is a serious problem with OpenDNS, plus some thoughts on how they could fix it.

Test Driving Google Public DNS (Updated with OpenDNS comparison)

Google announced its Public DNS service this morning, claiming enhanced performance and security; I took it for a brief test drive with the following results.

(See bottom of post for an update running similar tests on OpenDNS.)

Methods: I searched Google for keywords that I believed fell somewhere between obscure and common and collected the first ten hostnames that appeared in the results. I then used local installations of dig to query each resolver for the hostnames’ A records and recorded the response times (a rough sketch of the timing loop follows the resolver list). The resolvers were:

  • A local BIND installation (127.0.0.1, cache empty) with Comcast Internet connectivity;
  • A Comcast DNS server (68.87.69.150) via Comcast Internet connectivity;
  • My employer’s internal caching DNS;
  • Google (8.8.8.8) via my employer’s Internet connectivity (mostly Level 3);
  • Google (8.8.8.8) via Comcast; and
  • Google (8.8.8.8) via an Amazon EC2 instance in us-east-1a.
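
For anyone who wants to reproduce something similar, here’s a rough sketch of the timing loop, not the exact script I used; the resolver list and hostnames below are placeholders, and it simply scrapes the “;; Query time:” line that dig prints:

    import re
    import subprocess

    # Placeholder resolvers and hostnames; substitute your own.
    RESOLVERS = {
        "local BIND": "127.0.0.1",
        "Comcast":    "68.87.69.150",
        "Google":     "8.8.8.8",
    }
    HOSTNAMES = ["www.example.com", "www.example.net"]  # stand-ins for the ten collected hosts

    QUERY_TIME = re.compile(r";; Query time: (\d+) msec")

    def query_ms(server, hostname):
        """Run dig against a specific server and return its reported query time in ms."""
        out = subprocess.run(
            ["dig", "@" + server, hostname, "A"],
            capture_output=True, text=True, timeout=10,
        ).stdout
        match = QUERY_TIME.search(out)
        return int(match.group(1)) if match else None

    for label, server in RESOLVERS.items():
        for host in HOSTNAMES:
            print(f"{label:12s} {host:24s} {query_ms(server, host)} ms")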

Anticipating a bimodal distribution of response times, I treated high-latency responses as cache misses and low-latency responses as cache hits, and categorized the results accordingly.
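
In code, that split amounts to nothing more than a latency threshold; the sketch below uses an arbitrary 100 ms cutoff rather than any value derived from these measurements:

    # A minimal sketch of the hit/miss split, assuming the latency distribution
    # really is bimodal. The 100 ms cutoff is purely illustrative.
    THRESHOLD_MS = 100

    def classify(times_ms):
        hits = [t for t in times_ms if t < THRESHOLD_MS]
        misses = [t for t in times_ms if t >= THRESHOLD_MS]
        return hits, misses

    hits, misses = classify([12, 15, 240, 9, 310, 18])
    print("cache hits:", hits)
    print("cache misses:", misses)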