Fwd: [wplug] Re: NFS Availability Issues -- why I should just help off-list ...

Brandon Poyner bpoyner at gmail.com
Thu Sep 13 16:15:22 EDT 2007


On 9/12/07, Bryan J. Smith <thebs413 at yahoo.com> wrote:

>  Then what do you use?  Or more relevant to the thread, what do you
> see as a solution here?

I don't know nearly enough details of the OP's problem, budget, and
remaining time to implement to jump to a solution.  I'm not taking a
stand against NFS; I'm saying that it leaves much to be desired.  I'll
fully admit I've done some interesting things with NFS, such as
diskless FreeBSD servers, but it takes money to make it reliable
and resilient.

> > and not everybody has the money to buy a SAN for their tiny
> project.
>
> But that's the _only_ way you get _compatibility_ with what he needs.
>

Not exactly true, but here I hit the limits of things I've
actually done myself.  There are other ways available to export
block devices over the network.  GNBD is a Red Hat project that does
it in the kernel (with userland tools to manage it).  You can also try
ATA over Ethernet (AoE); NASA's Goddard Space Flight Center is using
AoE.  I don't see any reason why I can't mention technologies I've
never tried myself in case people are interested and want to
investigate further themselves.
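
To give a flavor of the AoE route (this is a sketch from the
documentation rather than something I've run myself; the device name
and interface are just examples):

    # on the storage box: export /dev/sdb as AoE shelf 0, slot 1 over eth0
    vblade 0 1 eth0 /dev/sdb

    # on the client: load the aoe driver and discover exported targets
    modprobe aoe
    aoe-discover
    aoe-stat        # the target shows up as /dev/etherd/e0.1

From there /dev/etherd/e0.1 is just another block device you can
partition, mkfs, or hand to something like LVM or DRBD.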

> > I'm not disagreeing that NFS over DRBD _could_ break on you,
>
> It _will_.  ;)
>
> Every single link and other commentary you posted was for supporting
> multiple _web_ servers (or other, stateless Internet protocols) from
> the same data pool.

That's a pretty broad claim, that one couldn't find a way to make it
work.  I'm still talking out my @ss because I've never done this in
production, but I have done it in testing and read of others doing it
in production.  You can use the DRBD device as the backing storage for
a Xen guest that serves up NFS.  You can migrate the guest with no
downtime if you know you need to perform maintenance on the host.  The
significant risk is the host failing unexpectedly while the guest is
running on it.  Some people have _wildly_ different expectations of
service uptime.  You'd _never_ recommend the above to a bank; banks
don't like risk and are willing to pay to avoid it.  Joe Schmoe
running a SOHO might be perfectly OK with that risk.
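
To make that concrete, here is roughly what I mean (hypothetical names
and paths; again, I've only done this in testing):

    # /etc/xen/nfsguest.cfg -- Xen guest whose disk is the DRBD device
    name   = "nfsguest"
    memory = 512
    # the DRBD resource must be usable on whichever host runs the guest
    # (live migration needs it on both, e.g. DRBD 8 dual-primary mode)
    disk   = [ 'phy:/dev/drbd0,xvda,w' ]
    vif    = [ 'bridge=xenbr0' ]

    # planned maintenance: live-migrate the guest to the peer node
    xm migrate --live nfsguest other-host

The guest itself just runs a stock NFS server exporting whatever
filesystem sits on that disk.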

> And make no mistake, I've even seen someone else lose their job over a LUG
> recommended solution.  Luckily I was able to "deal that guy back in"
> after I got them back up'n running, and explained what I call
> "Linux's 97% problem" to his employer.  ;)

Anybody out there who isn't doing their own research after getting
advice, from any source, is begging for trouble.

> So, again, I ask ... _what_ "network filesystem" do you use for
> remote home directories then?  What about general, local subnet file
> sharing?  At a "mount" level?
>
> Or what strategies do you use to _avoid_ that?  Local home
> directories with rsync?  Selective remote network filesystems as
> little as possible?  What?
>
> You can say "I'm not a fan of NFS" and that you don't have anything
> against network filesystems, but to date, you have offered _nothing_
> that helps him.  And even that aside, and I don't mean to harp on
> this, but you're showing me you have _no_ familiarity with the issues
> involved here.
>
> Every single one of your analogies and examples has been web or
> simple Internet protocol related, 0 related to home directories and
> management.  At most, I have "thin client/terminal servers" to go on
> in a brief statement by you.

Yes, I've used NFS for exporting /home, exactly what you're claiming
I've never done.  Yes, I understand that NFS is full of pitfalls and
that you can end up with a hosed mount.  You don't know what I have
and haven't done except for what I've told you.  By all means, tell me
my life story or stop with the personal attacks.
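
For the record, the kind of /home export I'm talking about is nothing
exotic (the subnet is just an example):

    # /etc/exports on the NFS server
    /home   192.168.1.0/24(rw,sync,no_subtree_check)

    # on a client (normally via fstab or the automounter)
    mount -t nfs server:/home /home

The pitfalls show up in what happens to those client mounts when the
server goes away, not in the export itself.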

> There's nothing more dangerous than a Linux consultant (or any
> consultant for that matter) who doesn't know the application or
> context they are getting into.  Worse yet is the one that recommends
> a solution they have never used under a context, much less never used
> under _any_ context, one that utterly _breaks_ under the context.
>
> That's what we have here.

First off, I'm not a 'Linux consultant' and never claimed to be one.
Second, I don't see any reason I can't chime in with a message that
says 'consider using method X' if I've heard that it works for people,
which is what I did in the first place.

I guarantee that there are people doing exactly what you're claiming
breaks under this context.  They're willing to take the risk that, if
there is an _unexpected_ failover, some clients might get hosed in
the process.  That's their own choice, and there's little you or I can
say that will change their minds.

-- 
Brandon
Kiva.org - Make a small loan, Make a big difference
