My small contribution to the update-your-DNS-server panic

How to find the version of BIND that you’re running:

> dig @localhost version.bind txt chaos

; <<>> DiG 9.3.2 <<>> @localhost version.bind txt chaos
; (2 servers found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7775
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;version.bind. CH TXT

;; ANSWER SECTION:
version.bind. 0 CH TXT "9.3.5-P1"

;; AUTHORITY SECTION:
version.bind. 0 CH NS version.bind.

;; Query time: 57 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Jul 14 11:45:14 2008
;; MSG SIZE rcvd: 65

On Parity Lost

I just finished reading a paper presented at FAST ’08 by researchers from the University of Wisconsin–Madison (the first and senior authors) and NetApp: Parity Lost and Parity Regained. The paper discusses the concept of parity pollution, where, in the words of the authors, “corrupt data in one block of a stripe spreads to other blocks through various parity calculations.”

With the NetApp employees as (middle) authors, the paper has a slight NetApp orientation, but it does discuss other filesystems, including ZFS – both specifically and later by implication when covering filesystems that use parental checksums, RAID, and scrubbing to protect data. (Interestingly, the first specific mention of ZFS contains a gaffe: the authors refer to it as using “RAID-4” rather than RAID-Z.) The authors also attempt to quantify the probability of loss or corruption, arriving at 0.486% per year for RAID + scrub + parental checksum (implying ZFS) and 0.031% per year for RAID + scrub + block checksum + physical ID + logical ID (WAFL), assuming nearline drives in a 4-data-disk, 1-parity-disk RAID configuration and a read-write-scrub access mix of 0.2-0.2-0.6.
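To put those per-year figures in perspective, here's a quick back-of-the-envelope calculation (mine, not the paper's) compounding them over a five-year deployment, assuming each year's risk is independent:

```python
def risk_over_years(p_per_year, years):
    """Probability of at least one loss/corruption event over `years` years,
    assuming independent, identically distributed years."""
    return 1 - (1 - p_per_year) ** years

# Per-year figures from the paper
zfs_like = 0.00486  # RAID + scrub + parental checksum
wafl = 0.00031      # RAID + scrub + block checksum + physical/logical ID

# Over a five-year drive lifetime:
print(f"{risk_over_years(zfs_like, 5):.2%}")  # about 2.41%
print(f"{risk_over_years(wafl, 5):.2%}")      # about 0.15%
```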

Kickstarting CentOS 5.1 – Not from a yum repository any more

In the past, I’ve used our local mirror of the CentOS yum repository to kickstart machines booted using PXE; apparently, this no longer works with CentOS 5.1, although it did with 5.0. If you attempt to do so, after the initial PXE boot, you get the following message:

The CentOS installation tree in that directory does not seem to match your boot media.

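For reference, the kind of pxelinux.cfg entry I'm talking about looks something like this – the label, file names, and mirror URL here are all hypothetical, and the tree the kickstart file points at is what CentOS 5.1 now seems to be picky about:

```
default centos
label centos
  kernel vmlinuz
  append initrd=initrd.img ks=http://mirror.example.com/ks/centos51.cfg
```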

Running FreeBSD 6.3 on VMware ESX (Updated)

So, you recognize that FreeBSD isn’t officially supported on VMware ESX, but you want to give it a try anyway? Here’s what I did to get it installed, with VMware Tools and using e1000 Ethernet drivers:

Installation was for the most part straightforward – I chose “Other” for the operating system type and allocated resources as I would for pretty much any other operating system. The install from an NFS-mounted ISO image worked fine; I’ve only run into two issues so far: installing VMware Tools and changing the Ethernet drivers from the default Lance drivers.
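On the Ethernet side, the usual trick is to have ESX present an Intel e1000 NIC instead of the emulated Lance device by editing the VM's .vmx file while the VM is powered off; the device name below is the standard one, though the adapter number (`ethernet0`) will vary with your configuration:

```
ethernet0.virtualDev = "e1000"
```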