Keeping your RHEL VMs from crushing your storage at 4:02am

Running a lot of Red Hat VMs in your virtual infrastructure, on shared storage? CentOS and Scientific Linux, versions 4 and 5, count for these purposes too, and Fedora should likely be included as well. Do you have the slocate (version 4.x and earlier) or mlocate (version 5.x) RPM installed? If you’re uncertain, check using the following:

> rpm -q slocate
slocate-2.7-13.el4.8.i386

or

> rpm -q mlocate
mlocate-0.15-1.el5.2.x86_64

If so, multiple RHEL VMs plus mlocate or slocate may be adding up to an array-crushing 4:02am shared storage load and latency spike for you. Before being addressed, this spike was bad enough at my place of employment (when combined with a NetApp Sunday-morning disk scrub) to cause a Windows VM to crash with I/O errors. Ouch.

Details and ideas for resolution:

By default, a line in /etc/crontab runs the scripts within /etc/cron.daily at 4:02am each morning:

02 4 * * * root run-parts /etc/cron.daily

One of those scripts – mlocate.cron or slocate.cron, depending on your OS version – launches updatedb; as the man page says, “updatedb creates or updates a database used by locate(1).” (The “locate” binary is a filesystem search tool; see “man locate” for more information.) updatedb refreshes its database by walking the filesystem, generating a fair amount of I/O even on a single system. Imagine upwards of thirty of these running in parallel through VMDKs on one shared storage system that is also carrying out its own internal maintenance at the same time, and you’re pretty much picturing the problem my employer had.
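If you want to see the single-system effect for yourself, something like the following is enough (iostat comes from the sysstat package; the five-second interval is just a convenient choice). In one terminal, watch per-device I/O:

# iostat -x 5

and in another, as root, time a full database refresh:

# time updatedb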

I see three options for addressing this issue:

1) Uninstall mlocate or slocate. If you don’t currently use “locate” and you’re not interested in learning to use a tool that will likely make you more effective at your job (again, see “man locate”), this is probably the best option. (Yeah, I know, people that fit this bill generally don’t read blogs more technical than this one, so I could probably have skipped it here. Consider it an option for completeness, or if you really need to strip down an install.)
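If you go this route, the removal itself is just an RPM erase of whichever package the earlier rpm -q check turned up:

# rpm -e slocate

or

# rpm -e mlocate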

2) Disable the scheduled job by removing mlocate.cron or slocate.cron from /etc/cron.daily. This keeps locate available for your use, but requires that you update locate’s database ad-hoc and interactively by running the following as root:

# updatedb

This will take a few minutes to return, depending on the size of your file systems.
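The disabling step itself is just getting the script out of run-parts’ reach; I’d move it aside rather than delete it, in case you change your mind later (the filename is mlocate.cron on 5.x, slocate.cron on 4.x, and the destination shown here is arbitrary):

# mv /etc/cron.daily/mlocate.cron /root/mlocate.cron.disabled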

I don’t recommend this option either; at least it doesn’t fit the way I work. I often find myself using locate in high-pressure situations in which I need to quickly get a file location on a system. Waiting minutes for updatedb to return is extra painful when every second counts.

3) Stagger when updatedb runs by inserting a random delay into the script. This is my preferred alternative; locate’s database is kept current automatically, and your storage doesn’t have to bear a sudden spike in load. I implemented this by adding lines 2-7 of the script below:

#!/bin/sh
# sleep up to two hours before launching job:
value=$RANDOM
while [ $value -gt 7200 ] ; do
value=$RANDOM
done
sleep $value

nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
renice +19 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"

The added code inserts a pseudo-random sleep delay of up to two hours before updatedb runs, with the key being Bash’s built-in $RANDOM variable. In our environment, this removed a 2000 IOPS spike at 4:02am and eliminated a corresponding jump in filer latency. Obviously, adjust the delay period as appropriate for your environment. Additionally, be sure to add this change to your configuration management or installation management tools so that all of your RHEL and RHEL-derived VMs get the updated script.
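Incidentally, if the while loop bothers you, the same delay can be had in one line; $RANDOM returns an integer between 0 and 32767, so taking it modulo the window size keeps the result in range. A sketch of the equivalent one-liner (not what I actually deployed):

sleep $(( RANDOM % 7200 ))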

Using $RANDOM to avoid this variant of the thundering herd problem also works nicely for a range of similar problems; I believe I first saw it at Moundalexis.com.

(This problem may apply to other Linux distributions being run as VMs, and FreeBSD does something equivalent – weekly – with /etc/periodic/weekly/310.locate. A similar solution can be applied to these environments, if necessary.)
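(A sketch of what that might look like on FreeBSD, untested here: FreeBSD’s /bin/sh has no $RANDOM, but jot can supply the random value, so a line like the following near the top of 310.locate should do it. Alternatively, if you don’t use locate on those hosts at all, weekly_locate_enable="NO" in /etc/periodic.conf turns the job off entirely.)

sleep $(jot -r 1 0 7200)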

3 comments

  1. nate

    Also disable the makewhatis cron; I found that to be expensive as well.

    I just checked my configs and it turns out I don’t have updatedb disabled on my VMs; I thought I did, so I suppose I should go and disable it. I do have my cfengine installation push out a version of the makewhatis cron file that just has an exit 0 at the top of it.

  2. Andy

    Thanks for the comment, Nate. I hadn’t considered makewhatis when I wrote this article, but I believe that it is called with the “-u” flag which greatly reduces its I/O load; a couple quick experiments on my workstation seem to bear this out, but, obviously, test in your own environment.

    Adding the random delay loop in a separate file in /etc/cron.daily – say “1randomdelay” so it runs after the “0*” files – may also be effective (and cleaner); since run-parts executes the scripts in sorted order, it will stagger everything that runs after it.
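    Something like this is what I have in mind (untested sketch; the filename is arbitrary as long as it sorts after the “0*” files):

    #!/bin/sh
    # /etc/cron.daily/1randomdelay: run-parts runs these scripts one after
    # another in sorted order, so sleeping here delays everything after it
    sleep $(( RANDOM % 7200 ))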

  3. AskApache Apache

    This causes sleep to wait for a minimum of 10 minutes and a maximum of 2 hours 10 minutes; if you use the s suffix instead you can get more specific.
    sleep $(( $RANDOM % 120 +10 ))m;

    That plus this turns it into one command, so at least you aren’t creating processes that just hang out all day *shudder*.

    ( renice +19 -p $$ &>/dev/null; ( sleep $(( $RANDOM % 120 + 10 ))m; /usr/bin/updatedb --prunefs="$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')"; ) & )

    sleep $(( $RANDOM % 120 + 10 ))m && /usr/bin/updatedb --prunefs="`sed -e '/nodev/!d; s/nodev[\t ]*\(.*\)/\1/' < /proc/filesystems`"

    Or you can do this:
    nice -n 19 sh -c '( ( sleep $(( RANDOM % 120 + 10 ))s; /usr/bin/updatedb --prunefs="`sed -e "/nodev/!d; s/nodev[\t ]*\(.*\)/\1/" < /proc/filesystems`"; ) & )'

    Also make sure to put the correct limits on the slocate group in /etc/security/limits.conf, or use ionice also.

    Since locate is only used by interactive users, I have it set up to run from /etc/cron.hourly – before it starts running, it tests whether the user is logged in and has been active within the past hour; if not, it just exits. Having this program run every night regardless of whether anyone even uses the command is dumb. On top of running every hour, I also run updatedb on login (again checking the last login time, to make sure it’s not repeatedly running).

    And I also use this alias:

    alias updatedb='( ( nice -n 19 updatedb &>/dev/null ) & )'

    locate has been one of my favorite utils since day 1 on unix; sadly, it’s come to the point where a simple find command can do much better than locate (part of the findutils after all).