[wplug] Re: MD atop LVM (or LVM2's native RAID), instead of LVM on top of MD -- WAS: Install Question

Bryan J. Smith b.j.smith at ieee.org
Wed Jul 25 16:20:36 EDT 2007


[ Sorry I forgot to change the subject in my prior response ]

On Wed, 2007-07-25 at 16:05 -0400, Bryan J. Smith wrote:
> It's all about "localization."
>  ...  
> Yes, it's mainly for "ease of use."  Having to create an MD for each
> and every set of LVM logical volumes gets tiresome.
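
For anyone following along, the layering being discussed -- one MD,
one VG atop it -- looks something like this (device names, VG name and
sizes are just examples):

    # mirror two partitions with MD, then layer LVM on the result
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    pvcreate /dev/md0             # the MD device becomes the one LVM PV
    vgcreate vg0 /dev/md0         # one VG atop the one MD
    lvcreate -L 10G -n var vg0    # individual slices come out of the VG

Versus one MD per set of logical volumes, which is where the tedium
comes in.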

I can't emphasize this "issue" enough in the Linux world.  So many
people just "want to get it working" and don't think about the
consequences down the road.  From relying on a single MD volume to
compressing backups onto CD, etc..., resilience is a key consideration
that gets forgotten.

Furthermore, people often propagate the wrong "root cause" of an
issue.  E.g., LVM "completely failing" when it sits atop a single MD
(because the MD's meta-data was lost) -- yet you rarely hear the
opposite going the other way, because with MD atop LVM it's typically
only 1 slice of the LVM that fails, not the entire LVM disk label.  Or
3Ware "hot-swap issues" that are really a lack of proper kernel
hot-plug support, because 3Ware is merely presenting devices as JBOD
(so it's up to the software to manage that, not 3Ware's on-board
microcontroller).
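
That's also why it's worth knowing that LVM keeps plain-text backups
of its metadata.  A rough recovery sketch (the VG name, device and
UUID placeholder are hypothetical; paths are stock LVM2 defaults):

    vgcfgbackup vg0                    # metadata copy lands in /etc/lvm/backup/vg0
    pvcreate --restorefile /etc/lvm/backup/vg0 \
             --uuid <old-pv-uuid> /dev/sdb1    # re-label a lost PV
    vgcfgrestore -f /etc/lvm/backup/vg0 vg0    # then restore the VG metadata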

Some even say they won't use hardware RAID because of "proprietary"
arguments.  DeviceMapper not only removes most of those, but
established vendors (e.g., 3Ware) have a very, very long history (8+
years) of newer models reading _all_ older volume organization.  It
should _always_ be about what you have experienced "first hand," not
what you assume or have heard "second hand" -- especially when you
don't stop and understand the issues.
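
In fact, if "proprietary" metadata worries you, dmraid will tell you
exactly which vendor formats it can read and activate through
DeviceMapper:

    dmraid -l     # list the vendor metadata formats dmraid understands
    dmraid -r     # report any RAID metadata found on the local disks
    dmraid -ay    # activate discovered sets as device-mapper devices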

The only issues I've ever had with LVM are:  
- Boot-time (e.g., initrd)
- Losing a slice
But those are the same issues with MD too.
Don't rely on "one big anything" for "anything."  ;)
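
Neither has to be fatal if you know the knobs.  E.g., on a Red
Hat-style box of this vintage (the VG name is just an example):

    # rebuild the initrd so it can assemble MD/LVM at boot
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

    # drop a lost PV so the remaining VG will activate again
    vgreduce --removemissing vg0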

Experience teaches segmentation.

A common, constant issue is the "single / partition" -- chosen, again,
for "ease of use."  "Oh, I don't want my individual filesystems to
fill up."

Considering there are always "runaway" /var logs, users (assuming
you're not using quotas, etc...), you're often going to "fill up" one
way or another.  Then there are the corruption issues (/var is a
common problem, /tmp somewhat as well), fragmentation (again, /var and
/tmp are the culprits -- /usr should _never_ fragment, as it's
static), etc...
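
Concretely, a segmented layout looks something like this in /etc/fstab
(filesystem type, VG name and mount options are just illustrative):

    /dev/vg0/root   /       ext3   defaults           1 1
    /dev/vg0/usr    /usr    ext3   defaults,ro        1 2
    /dev/vg0/var    /var    ext3   defaults           1 2
    /dev/vg0/tmp    /tmp    ext3   defaults,noexec    1 2
    /dev/vg0/home   /home   ext3   defaults,usrquota  1 2

A runaway log or a corrupted /tmp now stays inside its own slice.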

Experience teaches segmentation.

E.g., have you ever had a journal miss and required a fsck?  What do
you do in those situations, when you've got a full fsck running and
your users are stuck?

I segment volumes, which means if I've got 1 bad data filesystem -- or
possibly something like /tmp or /var (which I can point at another
slice temporarily) -- I can _still_ bring up the server, and then work
on just that one filesystem.
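
In practice that looks something like this (LV names hypothetical):

    # box came up with the damaged slice left out of fstab;
    # borrow a spare LV so /var is writable in the meantime
    mount /dev/vg0/spare /var
    fsck -f /dev/vg0/var     # full check while users keep working
    umount /var              # then swap the repaired slice back in
    mount /dev/vg0/var /var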

Done that more than once in my career.  ;)


-- 
Bryan J. Smith         Professional, Technical Annoyance
mailto:b.j.smith at ieee.org   http://thebs413.blogspot.com
--------------------------------------------------------
        Fission Power:  An Inconvenient Solution


