[wplug] Re: NFS setup -- the can'o worms (but there's a simple HOWTO as well)

Bryan J. Smith b.j.smith at ieee.org
Tue Jul 31 12:37:08 EDT 2007


On Tue, 2007-07-31 at 10:45 -0400, Michael H. Semcheski wrote:
> At present, I've got a server and five workstations.  We may get
> another server later if we need it, and we'll be getting five more
> workstations in another six months.
> What I would like is to have each of the workstations mount /home from
> a network share on the server.
> This seems like a not uncommon thing, but I've never done it.

Understood.  The legacy UNIX concept of the network filesystem (roughly
what Microsoft's Distributed File System, DFS, is to Server Message
Block, SMB) is nice because home directories are the same everywhere.

> Any thoughts on what I should do?  If left to my own devices, I'd
> probably try to setup NFS on the server, and mount the workstations
> through it.
> I don't really know what questions to ask, but I thought it would be
> good to put out some feelers with the folks...

- But is it secure?  Well, even Samba is often not secure either ...

First off, when people first talk NFS, two things are often thrown out:  
A. Easy
B. Insecure

In reality, NFS is as easy or as secure as you want to make it.  For
the most part, 90%+ of SMB ("Windows Networking") setups I see are just
as insecure as most NIS/NFS setups.  I.e., there are a lot of "false
security" aspects to 90%+ of SMB implementations.  E.g., SMB password
hashes traverse the network and can be used in a "replay attack" just
as easily as legacy NIS passwd maps.

In other words, I have to do a lot of "deprogramming" of SMB advocates,
away from the alleged, marketing-driven "superiority" of SMB, and break
it down into what actually goes on.  Furthermore, there _are_ many
authentication capabilities (e.g., Kerberos) that can and do make even
NFSv3 far more secure from an authentication standpoint.  But most
people don't want to deal with those.

- Just the HOWTO ...

So, secondly, "staying shallow" to start, the [aging] NFS HOWTO:  
  http://nfs.sourceforge.net/nfs-howto/  

This will basically cover running NFS using "auth_sys".  I.e., if your
NFS server trusts the NFS client system, then it simply uses the user ID
mapping on the client.  Other than "squashing root" (the root user of a
workstation is mapped to an unprivileged user, so it can't write to the
server as root), it will trust any other user ID.  It's not the most
secure, but it works.
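As a sketch, an auth_sys export on the server side might look like the
following (path and subnet are hypothetical; root_squash is the default
on Linux, shown here explicitly):

```
# /etc/exports on the server -- hypothetical path/subnet
#   rw:          read-write
#   sync:        commit writes to disk before replying
#   root_squash: map client root to an unprivileged user (the default)
/export/home  192.168.1.0/24(rw,sync,root_squash)
```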

BTW, my _highest_ recommendations for NFS bliss on kernel 2.4/2.6 are
the following options ...
  vers=3,bg,hard,intr,rsize=8192,wsize=8192,udp

That's "version 3" (_never_ use NFSv2 in Linux, it is _not_ remotely
IETF compliant -- the 2.2.18+/2.4+ SGI/Trond+Higgen implementation is
much, much better), "background" mount, "hard" mount (soft mounts are
always an issue with Linux's NFS client), "interruptible" (so software
can break off their NFS operation/RPC), read/write size of 8KiB (NFSv2
default is 8KiB, NFSv3 default is 32KiB -- but many Linux kernels are
only built with 1-8KiB -- that was a long-gone Alan Cox decision, long
story) and UDP (_if_ you're on the same subnet -- you may want to use
TCP if you have a lot of subnets).
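For a one-off test mount with those options (server and path names are
hypothetical), that would be something like:

```
# one-off client mount with the recommended options (needs root)
mount -t nfs -o vers=3,bg,hard,intr,rsize=8192,wsize=8192,udp \
    server:/export/home /mnt/home
```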

*BUT* that's not the full story.  NFS is _not_ a good "mount" to just
throw into /etc/fstab.

- Wield the kernel automounter!  It's easier done than said!

So, third, I highly recommend you learn the built-in kernel
_automounter_ as well, so NFS shares are only mounted when people
actually need them -- which it does automatically.  Since "mount" is
usually a root-level-only privilege, the automounter does this for
"regular users" without issue.  The "autofs" user-space daemon is for
map updates (so you need to start it at boot).  Although most people
confuse the kernel automounter with NFS, they are _separate_ concepts
entirely.

[ SIDE NOTE:  Linux's automounter, even version 4, is rather _immature_
compared to Solaris (just like NFS as well, but don't get me started ;).
You should _never_ use "direct" automounts (in /etc/auto.master) on a
Linux system, as its automounter regularly tries to "take over" control
of the parent directory.  In other words, _always_ create indirect
"maps" (in /etc/auto.master) -- such as home (/etc/auto.home) and local
(/etc/auto.local) -- to manage select subdirectories under /home
(like /home/subdomain and /home/local), and not /home itself. ]

Furthermore, most people don't realize that you want to use
network-distributed (NIS, LDAP, etc...) automounter maps so you don't
have to update automounter tables on systems.  In your case, you
probably don't need to worry, you only have 6 systems -- just put an
"auto.master", "auto.home" and, optionally, "auto.local" map in each
system's /etc directory.  But in an enterprise, you need to create
automounter maps.
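For a small setup, the per-system files might look something like this
(all names are hypothetical; note the indirect-map layout per the side
note above):

```
# /etc/auto.master -- indirect maps only, never a direct map
/home/subdomain  /etc/auto.home  --timeout=300
/home/local      /etc/auto.local

# /etc/auto.home -- wildcard key: mount server:/export/home/<user>
# on demand at /home/subdomain/<user>
*  -vers=3,hard,intr  server:/export/home/&
```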

[ ANALOGY:  The best analogy I can make for "automounter maps" to the
Windows world is that they are like "publishing a share in
ActiveDirectory."  UNIX, by default, doesn't have the "I'm here, come
hack me!" auto-broadcast like Windows Networking, so you have to create
map entries for anything to be reachable.  In retrospect, this is how
you *ARE* supposed
to *PROPERLY* manage Windows Networks as well -- don't "broadcast"
shares but "publish" in ActiveDirectory.  But 90%+ of MCSEs don't know
that either ... sigh. ]

- Beyond NFSv3, NFSv4 and drastically improved security ...

Lastly, if you hit the main NFS SourceForge.NET page:  
  http://nfs.sourceforge.net/  

You'll see a lot of reference to newer capabilities, including NFSv4.
NFSv4 is really not so much about the kernel-level server/client
capabilities, but the whole slew of new, user-space RPC capabilities
built around the NFS protocol.  This includes GSSAPI for security, IDMAP
for user ID mapping, etc...  (There was also a past effort to create a
Self-certifying Filesystem, SFS, which basically tunneled NFSv2/v3 over
its own secure, authenticated transport.)

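As one concrete example of those user-space pieces, the NFSv4 IDMAP
daemon is configured separately from the mounts themselves.  A minimal
sketch (the file and keys are from nfs-utils; the domain value is
hypothetical and must match on server and clients):

```
# /etc/idmapd.conf -- maps NFSv4 "user@domain" strings to local UIDs
[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
```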
Also, despite common rhetoric, NFS and SMB are _complementary_.  The
_only_ time I've seen SMB-exported shares (via Samba) have an issue when
they were also exported via NFS is when Samba was _misconfigured_ (often
with grave security holes in the smb.conf!).  NFS _forces_ you to use
proper filesystem-level permissions; Samba can bypass them.  So actually
using NFS with SMB will _force_ you to have proper filesystem-level
permissions (that you can't bypass easily in the smb.conf).  You'll also
see people comment about "pam_smb" being "more secure" or "superior"
to using NFS/automounter.  That depends on your viewpoint of
"superior" ("pam_smb" has its own limitations).

In fact, it is often _easier_ to just configure Samba (a _single_ line
in /etc/smb.conf) to leverage the auto.home (auto_home) NIS/LDAP
automounter map than to create Samba DFS symlinks.  If you don't know
what DFS is, then I assume you've only maintained 1 Samba server.  ;)
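That configuration refers to Samba's NIS automount-map support
(parameter names per the smb.conf man page; requires a Samba built with
automount support -- check your distribution's package):

```
# /etc/smb.conf (global section) -- look up each user's home
# directory in the auto.home NIS map instead of hard-coding a path
nis homedir = yes
homedir map = auto.home
```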

[ SIDE NOTE:  I don't know why, but NFSv4 has its own Access Control
Entry (ACE) approach different than POSIX Access Control Lists (ACLs)
that UNIX/Linux filesystems use.  This can cause some issues with NFS
+Samba, although I've noted several extensions to NFSv4 that now support
POSIX ACLs and people _are_ using them.  I haven't had to use them
myself yet, simply because I don't like people dorking with permissions
too much -- or I'd just run AFS. ;) ]

I mean, if people are really "anal" about running only one protocol,
then forget both NFS and SMB and run a _real_ secure protocol.  People
at CMU know it:  the Andrew Filesystem (AFS) under a Kerberos realm.
Especially since AFS is a virtualized filesystem (not "direct" server
filesystem-level access), which solves a _lot_ of the issues that NFS,
SMB, etc... run into with the host server OS' limitations (like
ACE/ACLs, permissions, locking, etc...).

[ SIDE NOTE:  Despite its proliferation, you do _not_ need to use Samba
for Windows integration -- _only_ if you want the SMB (file/print)
service, NMB (name) service or [legacy] Winbind (legacy mapping).
_Real_, very large enterprise implementations actually use a _real_ set
of directory servers and objects, use services against those objects,
and then use Samba just for the SMB service (not NMB or Winbind, which
are issues in themselves -- not because of Samba's implementation, but
because of how the protocols "act" by their very nature).  E.g., the
same IDMAP for NFSv4 and SMB, typically over LDAP -- _not_ direct Samba
services.  In many cases, I've used "peer trees" of iPlanet (now
Fedora/Red Hat) LDAP and MS ActiveDirectory.  I.e., I don't believe in
"making Linux ActiveDirectory's bitch" -- I want my Linux network up
when ActiveDirectory self-toasts, and peer LDAP replication gives me
that. ;) ]

-- Bryan

P.S.  I created a pair of "run-on/long-winded" blog entries back in 2005
attempting to document my standard NFS/SMB filesystem paths/mount
points.  Back in 1999, I begged the main author of "Samba Unleashed" to
add a chapter on "distributed filesystem" concepts in general, but he
was rather utterly ignorant of them (and of basic UNIX stuff in
general -- long story).

  "Filesystem Fundamentals and Practices"  
  http://thebs413.blogspot.com/2005/08/filesystem-fundamentals-and-practices.html  

  "Linux Servers:  Eccentric Practices for Disk Slicing"  
  http://thebs413.blogspot.com/2005/09/linux-servers-eccentric-practices-for.html  

Some excerpts ...




"
The Data Volume (sdb/vg01/lvm1## used in examples)
...

With that all said, there is actually only 1 essential Data
Volume, /export/systemname.  If this filesystem will never be exported
via NFS, then I symlink /home/systemname to it.  If it is unlikely that
you will ever have more than -- again, Golden Rule -- 1GB of user
home-directory data, then you can probably just leave /home on the
root (/) filesystem.  But for countless recovery, security and other
reasons, I recommend you always create a separate /home filesystem, and
on a LAN (where network filesystems are in use), I highly recommend it
be /export/systemname.  Do this even if you have no plans for NFS or
another network filesystem protocol, and symlink it to /home/systemname.
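The convention above can be sketched as follows (a throwaway-directory
demo using the "bssrv" hostname from the examples; on a real system
these paths would be rooted at / and the commands would need root):

```shell
#!/bin/sh
# Demonstrate the /export/systemname + /home/systemname symlink
# convention in a scratch directory.
root=$(mktemp -d)

mkdir -p "$root/export/bssrv"     # the real data filesystem lives here
mkdir -p "$root/home"

# /home/bssrv is just a symlink to /export/bssrv
ln -s "$root/export/bssrv" "$root/home/bssrv"

# Users see /home/bssrv; the data actually sits under /export.
readlink "$root/home/bssrv"
```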

Now if your server is a network fileserver, then I recommend you
actually create at least two (2) /export/systemname filesystems.  I
covered why in my other blog on filesystem fundamentals -- in case you
have to fsck one user data volume, you can still bring up the other.  And
then there's the standard localization of corruption, etc...  I typically
name the second "home" filesystem /export/systemname2 -- e.g., on a file
server named "bssrv", I would have at least a "/export/bssrv" and
a "/export/bssrv2".  Note, I like to start numbering the second-onward
volumes at "2" instead of "0" or "1" -- reserving the first two numbers
for inserting another filesystem (possibly on a different Data Volume),
"just in case" (possibly as a symlink to another volume, or countless
other, eccentric operations I've done in the past).

But a more "real world" approach on a file server is to break them down
by department/usage/users.
E.g., /export/accounting, /export/engineering, /export/marketing, etc...
Groups, filesystem types, filesystem sizes/creation/usage (important for
fragmentation considerations), etc... tend to be similar for those in
the same department, using the same applications, etc...  But probably
the "common denominator" of doing things by department is security --
not just Groups or even for Discretionary Access Controls (DACs,
traditional UNIX as well as newer Extended Attributes like Access
Control Lists, ACLs), but more for Mandatory Access Controls (MACs, like
Extended Attributes such as SELinux, or various alternatives). I could
literally write a book on this (and I just might at some point ;-),
which is why I feel strongly about it being a "best practice."

Which is why we definitely use /export for user home directories and
do not symlink it to /home on a LAN NFS file server -- because our
NIS/LDAP Automounter maps will mount those /export filesystems
into /home.  I.e., /home becomes the root for an automounted subdirectory
(typically the standard auto.home/auto_home map -- but I won't go any
deeper).  There is no performance loss in accessing the
local /export/systemname from the /home/systemname NFS mount, because it
occurs over loopback (with direct inode access resulting at the kernel
level).
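A sketch of that distributed map (hypothetical names, using the "bssrv"
example): the same auto.home entry works on every machine, and on the
file server itself it simply resolves over loopback.

```
# auto.home (distributed via NIS/LDAP) -- hypothetical names
# Mounts bssrv:/export/bssrv/<user> at /home/<user> on demand,
# identically on clients and on the file server itself.
*  -vers=3,hard,intr  bssrv:/export/bssrv/&
```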

      * Again, there should always be at least an /export/systemname
        (possibly symlinked to /home/systemname), although for file
        servers, you should avoid just creating many /export/systemname#
        filesystems -- while also not creating the proverbial "all eggs
        in one basket" as I've preached before.  In a nutshell,
        the /export/systemname[#] convention is what I use for creating
        local, exportable data filesystems on UNIX workstations, cluster
        nodes, etc... where there are cross-automounts of each other, as
        well as by UNIX desktops (accessing workstation, cluster, etc...
        storage).  For LAN file servers, you're typically serving up data
        to different departments and, again, those departments have
        similar needs, usage, criteria, resulting security, resulting
        fragmentation, resulting applications, etc...  Build your
        filesystem names (for all your servers) around those
        departments, including any "shared" directories for a department
        underneath (e.g., /export/dept/project/XYZ, with a "catch-all"
        of /export/dept/temp -- and I explicitly use "temp" because I
        want users to realize there is no permanent catch-all directory
        for arbitrary projects; usage, security, groups, etc... need to
        be defined when they are formalized).
      * For Andrew Filesystem (AFS), my source filesystems that house
        the virtualized AFS volumes (for those that don't know, you do
        not share "local files" out via AFS -- AFS is a virtualized
        filesystem that is more like a set of "database files" that you
        can't directly read) have their own tree. Since /afs is where
        they are typically mounted, I call the local filesystems on AFS
        file servers /expafs (for exported filesystems via afs). Again,
        the /expafs/systemname* filesystems are NEVER exported
        themselves, but putting them in a different tree PREVENTS me
        from accidentally exporting them or otherwise making them
        available via another protocol (which would not only do nothing,
        but possibly cause corruption).
"







-- 
Bryan J. Smith         Professional, Technical Annoyance
mailto:b.j.smith at ieee.org   http://thebs413.blogspot.com
--------------------------------------------------------
        Fission Power:  An Inconvenient Solution


