[wplug] linux system administration

Bill Moran wmoran at potentialtech.com
Thu Jul 8 20:13:41 EDT 2004


Michael Skowvron <skowvron at verizon.net> wrote:
> Bill Moran wrote:
> 
> > No.
> 
> Despite Bill's optimism about Unix filesystems, all filesystems 
> fragment. Some filesystems are better than others at minimizing 
> fragmentation, but no filesystem is immune. Fragmentation is also 
> going to be somewhat related to how full your filesystem is and how 
> much "flux" (file creation/deletion/appending) the filesystem 
> experiences.

I hope this doesn't start a flame war ...

The first problem is that the term "fragmentation" doesn't really mean
anything, because it means different things for different filesystems.

On a fixed-block filesystem (such as FAT or NTFS) fragmentation refers
to file data becoming non-contiguous.  The blocks themselves never
become fragmented because they are fixed in size.  The random
fragmentation of file data causes performance degradation because the
heads have to seek all over the place to read the file.
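
To make that concrete, here's a toy first-fit allocator in Python.  It's
not how any real FAT or NTFS driver works; it just shows how create/delete
churn leaves a new file's blocks scattered across the disk, which is all
"fragmentation" means on those filesystems:

  DISK_BLOCKS = 16
  free = list(range(DISK_BLOCKS))   # every fixed-size block starts out free
  files = {}                        # file name -> list of block numbers

  def create(name, nblocks):
      # first-fit: grab the lowest-numbered free blocks, wherever they are
      files[name] = [free.pop(0) for _ in range(nblocks)]

  def delete(name):
      free.extend(files.pop(name))
      free.sort()

  create("a", 4)   # gets blocks 0-3
  create("b", 4)   # gets blocks 4-7
  delete("a")      # frees 0-3, leaving a hole in front of "b"
  create("c", 6)   # gets 0,1,2,3,8,9 -- non-contiguous, i.e. "fragmented"
  print(files["c"])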

On a filesystem such as FFS, the disk space is organized into a hierarchy.
The basic unit of storage is a block, and a block is usually about 16K.
If you have a file that isn't a multiple of 16K in size, the filesystem
will either break a free block into "fragments" to store the leftover
data, or find an existing fragmented block with enough free space.  The
interesting thing is that FFS _never_ attempts to allocate files in 100%
contiguous blocks ... thus FFS _intentionally_ creates what Windows would
call "fragmentation", but in a predictable and efficient manner (instead
of the random file data fragmentation that Windows causes).
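
A quick back-of-the-envelope sketch of that allocation (the 16K block and
2K fragment sizes are just assumptions for illustration; the real numbers
are set when the filesystem is created, with fragments typically 1/8 of a
block):

  BLOCK = 16 * 1024
  FRAG  = BLOCK // 8            # 2K fragments, assumed

  def ffs_layout(file_size):
      full_blocks = file_size // BLOCK
      tail = file_size % BLOCK
      # the tail goes into fragments so a small file
      # doesn't eat a whole 16K block
      frags = (tail + FRAG - 1) // FRAG
      return full_blocks, frags

  # a 37000-byte file: two full 16K blocks plus a 4232-byte tail,
  # which takes three 2K fragments
  print(ffs_layout(37000))      # -> (2, 3)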

As far as I know, ext2 works in about the same way as FFS (although I
could be wrong), but I can't speak for XFS or many of the other filesystems.

The simple fact of the matter is that it would be impossible to
completely "defragment" an FFS partition.  Certain usage patterns can
cause fragmentation to become severe, but this happens so seldom that
the typical solution is simply to dump the filesystem and restore it.
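
If you ever do need to do that, the dump/restore cycle looks roughly like
this (sketched in Python here just to show the order of operations; the
paths are made up, and in real life you'd newfs and remount the partition
between the two steps):

  import subprocess

  DUMP_FILE  = "/backup/usr.dump"   # hypothetical destination on another disk
  FILESYSTEM = "/usr"               # hypothetical filesystem to rewrite

  # level-0 (full) dump of the filesystem to a file
  subprocess.run(["dump", "-0", "-a", "-f", DUMP_FILE, FILESYSTEM], check=True)

  # ... newfs the partition, mount it again, then restore into the
  # freshly created (empty) filesystem
  subprocess.run(["restore", "-rf", DUMP_FILE], cwd=FILESYSTEM, check=True)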

> When there is lots of free space in the filesystem, large expanses of 
> blocks can be allocated to files. This tends to minimize 
> fragmentation. However, if your filesystems are close to full, 
> filespace must get used "wherever it can be found" and this can lead 
> to lots of fragmentation.

This isn't 100% true.  You're right that filling up a partition will cause
the filesystem code to use a much less optimal scheme for laying out the
data, but that phenomenon is _not_ the same as fragmentation.  They are two
completely different, unrelated things.

> I am most familiar with the XFS filesystem. The engineers that 
> developed it swore that it didn't need to be defragmented. They worked 
> hard on its design and were confident that they built a virtually 
> fragment-proof filesystem.
> 
> Eventually they conceded to customer demand for a defragmenter. 
> Customers (especially those running realtime streaming of data) were 
> able to prove that the filesystem fragmented despite its design. 
> Quite a few years after XFS had been widely deployed, SGI bundled a 
> defragmenter for XFS and set it to run regularly from cron.

Interesting.  I'm not very familiar with XFS, so I appreciate the
background.

-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com


