[wplug] NFS Availability Issues -- more info on your network ...

Bryan J. Smith b.j.smith at ieee.org
Tue Sep 11 02:33:17 EDT 2007


Michael Semcheski <mhsemcheski at gmail.com> wrote:
> Regarding the UPS...  The persons responsible for that will be
> sacked.

Understood.  Although if it's a room- or grid-level UPS, and something
like this can still happen, I'd demand a local UPS on the box itself.

> Question one:    I believe the only option is "no root access", but
> I don't have it in front of me.

I didn't mean /etc/exports on the server, I meant the _client_ mount
options.  Hard v. soft, intr v. nointr, etc...

> Question two:    No.  /home is the only share.

You can still use the automounter.  Yes, with only one share it kinda
defeats the purpose of doing so.  Although I'm pretty much an
"auto.home" + "auto.local" type of guy, and I share out on a per-user
(auto.home) and per-share (auto.local) basis.

I use /export/(server-purpose)/user or
/export/(server-purpose)/share.  I then mount to
/home/(domain-purpose)/user or /home/(domain-type)/share for each NIS
domain or LDAP lowest-CN.  I know that's overkill for most single
networks, but it's a good habit to get into as you find your NFS
shares sprawling.
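For concreteness, a sketch of what those maps might look like -- the
mount points, export paths, and the server name "fileserver" below are
all made-up examples, not your setup:

```
# /etc/auto.master -- one line per automount point
/home/eng        /etc/auto.home
/home/eng-share  /etc/auto.local

# /etc/auto.home -- wildcard map: the key is the user name,
# and '&' substitutes that key back into the server path
*         -rw,hard,intr   fileserver:/export/eng/&

# /etc/auto.local -- one entry per shared project area
projects  -rw,hard,intr   fileserver:/export/eng/projects
```

The wildcard in auto.home is what makes the per-user layout cheap:
one map line covers every user under that export.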

Let alone for multiple servers, or when the same trees are exported
over SMB as well, since SMB creates all kinds of issues of its own.

> Question three:  Depends.  Am I at home when it happens?  Usually
> not a long time.

You mean this has happened before?

> Question four:   They can wait.  I'd rather they didn't have to,
> but the world did not end at 18:47 today, as you can plainly see.

I asked #3 and #4 because NFS (v2/v3) is designed to be stateless, so
clients "resume peacefully" when the server comes back up.  That's
also why I asked #1, because client mount options can _change_ that or
give you "unwanted behavior" as a result.

E.g., I typically stick with "hard" and _never_ "soft" for a reason:
with "soft", a timeout returns an I/O error to the application, which
can silently lose or corrupt data.  Although I _do_ combine "hard"
with "intr" so software that can properly handle interrupts can still
be killed while the server is down.
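For example, a client-side fstab line along those lines (the server
name and paths are hypothetical):

```
# /etc/fstab -- hard: block and retry forever rather than returning
# I/O errors to applications; intr: let signals interrupt a process
# that's blocked on an unreachable server
fileserver:/export/home  /home  nfs  rw,hard,intr  0 0
```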

> A working UPS will solve this particular problem in the future, but
> there are always problems which you can't anticipate.

And you want to throw more complexity into the pile?  ;)

The first thing I used to run into in the late '90s, shortly after
Linux 2.2 added kernel NFS support, was no shortage of long-time web
"experts" trying to sell Linux-based NFS servers.  They thought
failover was basically like web.

He he he.  He he he.  He he.  Not.

In all honesty, I think you have your answer, a local/working UPS. 
If you want to really complicate things, then look at NFS failover. 
It's nothing like web failover.  And, in all honesty, there are
better ways to solve the problem than just having your users
"sit there for 2-3 minutes."

If your NFS server is having other issues, then we need to talk. 
There are many things that could cause that.  But if it's not
going down on its own, then don't worry about it.

> Regarding rsync'ing the home directories:  Doesn't work.  Disk on
> the server is that much bigger than the disks on the workstations.
> That is, all the disks in the workstations combined don't equal
> the size of the home partition on the server.

Then use a combination.  It's _not_ an "all-or-nothing" proposition.

Use local home directories with rsync for just basic home directory
data -- settings, user-specific files, etc...  In the case of
automounter, this means no "auto.home" (aka "auto_home") map.

Then use NFS server shares for project data, which users know to use
for various projects, sub-projects, etc...  This is where I use the
"auto.local" (aka "auto_local") map. 
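A minimal sketch of the rsync half of that split.  To keep the
example self-contained it syncs one local directory to another; in
practice the source would be something like
"fileserver:/export/home/$USER/" (that server name is hypothetical):

```shell
#!/bin/sh
# Sync just the small per-user settings/data to a local home,
# excluding the bulky project trees, which stay on the NFS share.
SRC=${1:-/tmp/demo-src/}
DEST=${2:-/tmp/demo-dest/}
mkdir -p "$SRC" "$DEST"
echo "alias ll='ls -l'" > "$SRC/.bashrc"   # a settings file to carry over
mkdir -p "$SRC/projects"                   # project data stays server-side
# -a preserves permissions/times; --delete mirrors removals;
# --exclude keeps project data out of the local copy
rsync -a --delete --exclude 'projects/' "$SRC" "$DEST"
```

Same idea in reverse (local to server) if the local copy is the
working one and the server is the backup.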

Data management 101.  You've told me how much experience you've had
in this area, so you know how it works.  Change the solution if it's
not working for you.  But do _not_ add complexity you do _not_ need.


-- 
Bryan J. Smith   Professional, Technical Annoyance
b.j.smith at ieee.org    http://thebs413.blogspot.com
--------------------------------------------------
     Fission Power:  An Inconvenient Solution

