4

At my university department we are about to upgrade the computers in our student lab (about 25-30 machines). The machines will be running Linux.

One notable thing about the new machines is that they come with huge (1 TB) hard disks (we did not ask for them, but these days you cannot find significantly cheaper disks anyway!).

Currently the users' home directories are stored on a central file server and mounted via NFS.

So the question is: is there any way we could put all this disk capacity to use? I am thinking of

  • expanding our central file store, or
  • replicating the home directories for faster access.

The main issue would be that the lab machines are not guaranteed to be up all the time.

Browsing around this site I read about GlusterFS and AFS.

GlusterFS seems to have many fans and looks like a nice general-purpose solution.

What about AFS? I've read that it has performance problems; does anyone have experience with it?

nplatis
  • 141

2 Answers

6

I've been there, not wanting to "waste" what looks like good storage. But it's a fool's errand to try to use that storage as anything but local disk. The system would have to keep a full copy of everything on every machine, since it could never know which machines will be turned on or off at any given moment. The replication traffic alone would have a noticeable impact on your network.

If you really want to use those disks, pull them out of the workstations (and PXE-boot the workstations), then use the disks in a SAN (though there are many arguments against using consumer-grade disks in a SAN, too!).

Chris S
  • 78,455
0

Have you looked at the Ceph filesystem? http://ceph.com/ceph-storage/

Also, regarding caching: if you really want that, you can try CacheFS. Here is a nice article about it: http://www.c0t0d0s0.org/archives/4727-Less-known-Solaris-Features-CacheFS.html
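That article covers Solaris CacheFS; the rough Linux analogue is FS-Cache with the cachefilesd daemon, which lets NFS cache reads on a local disk. A minimal sketch (package names, paths, and the export name `fileserver:/export/home` are assumptions; check your distribution's documentation):

```shell
# Install and start the cache daemon (Debian/Ubuntu package name assumed)
apt-get install cachefilesd

# Point the cache at a directory on the big local disk
echo 'dir /var/cache/fscache' >> /etc/cachefilesd.conf
systemctl enable --now cachefilesd

# Mount the home directories with the 'fsc' option to enable caching
mount -t nfs -o fsc fileserver:/export/home /home

# Or persistently, in /etc/fstab:
# fileserver:/export/home  /home  nfs  fsc,_netdev  0  0
```

Note this is a read cache local to each workstation, so it sidesteps the replication problem the other answer describes: nothing needs to stay in sync between lab machines, and a machine being switched off only loses its own cache.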

BVA
  • 101