Don't Defrag the Whole SAN

I see a number of environments that run routine defragmentation scripts on Xsan volumes. I don't agree with this practice, but given certain edge cases I have watched it happen. When defragmenting a volume, there is no reason to do the entire volume, especially if much of the content is static and not changing very often. And files that don't have a lot of extents are easily skipped. Let's look at a couple of quick ways to narrow down your defrag using snfsdefrag.

The first is by specifying a path: use the -r option followed by the starting path under which you want to recursively seek out fragmented files. The second is the -m option, which restricts the operation to files with more than a given number of extents, so files with only a few extents are skipped. To combine these, let's assume that we are looking to defragment a folder called Seldon on an Xsan volume called Harry:

snfsdefrag -r -m 25 /Volumes/Harry/Seldon

You should also build some logic into your scripts if you are automating these events. For example, you could use the -c option to just report how many extents there are and perform the actual defragmentation, as part of an if/then, only when the count crosses a specified threshold; a sketch of that check follows below.
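Here's a minimal sketch of that threshold check. It assumes snfsdefrag -c prints an extent count for the file you hand it; the exact output format can vary between StorNext releases, so treat the parsing, the threshold, and the foundation.mov path as placeholders to adapt:

#!/bin/bash
# Sketch: only defragment when a file's extent count crosses a threshold.
# Assumes snfsdefrag -c reports an extent count; adjust the parsing to match
# the output of your StorNext release.
threshold=25
target="/Volumes/Harry/Seldon/foundation.mov"

extents=$(snfsdefrag -c "$target" | grep -o '[0-9][0-9]*' | tail -1)

if [ "${extents:-0}" -gt "$threshold" ]; then
  snfsdefrag -m "$threshold" "$target"
else
  echo "$target only has ${extents:-0} extents, skipping."
fi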

Another example is to check that there isn't already an snfsdefrag process running; if there is, don't fire up yet another instance:

currentPID=$(ps -ewo pid,user,command | grep snfsdefrag | grep -v grep | awk '{print $1}')
echo "The current snfsdefrag PID is ${currentPID} so we are aborting the process." > "$logfile"
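Wrapped in a quick guard so the script actually bails out, a minimal sketch (assuming $logfile was defined earlier in the script):

# Abort if another snfsdefrag run is already in flight
if [ -n "${currentPID}" ]; then
  echo "The current snfsdefrag PID is ${currentPID} so we are aborting the process." > "$logfile"
  exit 1
fi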

If you insist on automating the defragmentation of an Xsan volume, then there are lots of other little sanity checks you can do as well, like the one sketched below. Oh, you're backing up, right?
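While we're on sanity checks, here is one more sketch, carrying over the Harry volume from the earlier example: make sure the volume is actually mounted before anything kicks off.

# Abort if the Xsan volume is not mounted
if ! mount | grep -q "/Volumes/Harry"; then
  echo "Xsan volume Harry is not mounted, aborting." > "$logfile"
  exit 1
fi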

5 Comments

  • sh
    February 14, 2010 - 9:58 am

    snfsdefrag does not defragment the free space. So you end up with contiguous files, but not with contiguous free space. It means that every time you defragment your volume, you're setting it up for even more fragmentation in the future. It is my opinion that snfsdefrag should never be used.

    • February 14, 2010 - 10:09 pm

      I agree in some cases. Defragmenting the SAN for the sake of defragmenting the SAN is a bad idea. However, snfsdefrag can be useful in reallocating data across LUNs, reducing speed issues in some cases, etc. One would hope that the issue with free space will be resolved when the latest StorNext code is ported in since it will handle free space.

  • sh
    February 16, 2010 - 7:48 pm

    Are you saying that StorNext 4.0 snfsdefrag actually defrags the free space?

  • Michael
    September 14, 2010 - 1:37 pm

    I stumbled across this website while browsing via Google. If there are any more issues regarding defragmentation on StorNext, just hit me up. I am working at an international consulting agency with an excellent connection to Quantum's dev team and have a lot of experience in designing / fixing snfs issues.

  • Pingback: Removing A LUN Label in Xsan | Krypted.com
