[wplug] NFS Availability Issues -- more info on your network ...

Bryan J. Smith b.j.smith at ieee.org
Tue Sep 11 17:50:10 EDT 2007


Jonathan Billings <billings at negate.org> wrote:
> AFS certainly has read-write support, and has for decades, and
> OpenAFS has existed since 2000.

Yes, I know.  But I was never able to get OpenAFS write-caching to
work when the server was off-line.  That was my point: it was
effectively "read-only," and therefore limited for "server down"
purposes.

> OpenAFS is a possible solution, however, the "local caching" won't
> help you if your only volume server goes offline, it's really only
> useful for speeding up frequent reads to the same file.

Oh, so we are on the "same wavelength."  ;)

Even IETF NFSv3 offers async operation (which Linux implements
properly, unlike its IETF NFSv2 implementation).  I know it's not
exactly the same, and there are various AFS advantages since it's a
"virtual" volume.
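As a minimal sketch of what I mean by async operation (the path and
hostname here are made up, not from any real setup), the export-side
knob lives in /etc/exports:

```shell
# /etc/exports on the server -- "async" lets the server acknowledge
# writes before they reach stable storage (faster, but recent writes
# can be lost if the server crashes); "sync" is the safer default.
/export/home  client.example.edu(rw,async,no_subtree_check)

# Client side, mounting with NFSv3 explicitly:
#   mount -t nfs -o vers=3 server.example.edu:/export/home /home
```

That buys you write latency hiding, not the disconnected operation
AFS was designed around.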

> You can have multiple replicas of the same volume, in case one
> volume server goes down, but that'll only work for read-only
> replicas, and since AFS has a built-in preference to use the
> read-only volume, it's not terribly useful for /home.

Yep.  And that's the "whole can-o-worms" I was talking about when you
get into failover.

I was addressing more of the "can my users still work" for the "few
minutes" they are down.  If they are just reading existing files,
sometimes the read-cache of AFS (single server) "does the job" for
those "few minutes."  But there is still some of the "hang" issues on
writes.

> Also, keep in mind that running an AFS server isn't as simple as 
> exporting a filesystem, like in NFS.  You need dedicated partitions
> that store the data, and the only way to access that data is through
> the AFS service.  You need to run several other AFS daemons which
> manage the database and volume management.

Although that has advantages as well.  But, as I was saying, it's
more complexity.  Still, it's certainly "less complex" than dealing
with multi-targeted storage for NFS failover (let alone multiple AFS
servers).

> Also, you'll need a kerberos 5 server, and your users will need
> to have krb5 principals for their AFS tokens.

Yeah, but that's not necessarily a "bad thing" in a university
environment.  And it could be leveraged for rsync (or rsync over SSH)
as well.  But it certainly is more complex too.
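For anyone who hasn't seen the token dance, it's roughly this (the
realm, cell, and username are placeholders, and this assumes an
OpenAFS client with aklog installed):

```shell
# Get a Kerberos 5 TGT from the KDC (realm is a placeholder)
kinit jdoe@EXAMPLE.EDU

# Convert the Kerberos ticket into an AFS token for the local cell
# (-c names the cell; omit it to use the client's default cell)
aklog -c example.edu

# Sanity checks: klist shows the Kerberos tickets,
# tokens shows the AFS token and its expiry
klist
tokens
```

So once krb5 is deployed for AFS, the same principals can do
double-duty for SSH and the like.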



-- 
Bryan J. Smith   Professional, Technical Annoyance
b.j.smith at ieee.org    http://thebs413.blogspot.com
--------------------------------------------------
     Fission Power:  An Inconvenient Solution
