
I'm planning to implement a 'poor man's storage' using Openfiler or NexentaStor CE.

I need the filer solution to provide iSCSI target and CIFS sharing abilities. The iSCSI target and CIFS share will later be mounted as XenServer Storage Repositories.

I might also need replication ability, but no need for RAID since the filer will be installed on an 'elderly' server that already has an honest-to-goodness hardware RAID 1+0.

Between the two solutions above, which one do you recommend, and why?

Or, do you have in mind another solution besides Openfiler and NexentaStor CE?

ewwhite
pepoluan

6 Answers


A quick note about Openfiler (and I hear NexentaStor is the same) when used as an iSCSI target - you are almost guaranteed to see timeout errors and targets dropping offline, requiring a reboot of the server to correct. This usually happens under heavy load (though I've seen it happen under light loads, too).

We went through hell with Openfiler using iSCSI for several weeks while we tried to nail down the problem. The issue isn't really Openfiler itself, but the iSCSI target module it uses (IET, the iSCSI Enterprise Target). There was some talk about them converting to SCST, which doesn't have the problem, but so far not a whole lot has happened. A Google search for "Openfiler cmd_abort" will tell you all about the current problems.
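If you want to check whether you're hitting the same bug yourself, the aborts show up in the kernel log. A minimal check, assuming a stock Openfiler install (the exact message string and log path can vary by version):

    # Hypothetical quick check for the IET abort errors described above;
    # adjust the pattern and log path to your version if needed.
    grep -i "cmd_abort" /var/log/messages | tail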

What we ended up doing is dumping iSCSI and just using NFS with Openfiler, and everything has been fine since - but since you mentioned iSCSI, I thought I'd mention the problem before you build everything and later find yourself with nothing but problems.
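For what it's worth, attaching an NFS export as a XenServer Storage Repository is a one-liner with the xe CLI. A minimal sketch; the filer address and export path below are placeholders:

    # Create a shared NFS SR pointing at the filer's export
    # (server address and path are hypothetical).
    xe sr-create name-label="Filer NFS SR" shared=true type=nfs \
        content-type=user \
        device-config:server=192.168.10.50 \
        device-config:serverpath=/mnt/vg0/xen_sr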

Also, Openfiler's management interface has a couple of interesting bugs. We had continuous issues managing large volumes (4 TB), with the interface not letting us delete volumes, or not letting us recreate them after we finally managed to delete them. It appeared that the interface wasn't always cleaning up after itself and was leaving things behind. When we later tried to work with the volume, those leftovers would cause errors behind the scenes as the script issued new commands to the OS, and the web interface would simply refuse to do what it was asked - without reporting the underlying error, so you had no idea why it wasn't working.

On another occasion, we wandered into the NIC setup to look at interface bonding. We walked through the setup just to take a look, then clicked Cancel - Openfiler applied the changes anyway, disconnecting us from the server in the process. We had to go to the server's console, log in locally, and edit the configuration files by hand to regain access.

All in all, not a great experience with Openfiler - and with the project being virtually abandoned since 2009, I'd be inclined to avoid it, or be prepared to fight with it to get it set up, then not touch it for fear of breaking something and losing your data.

Paul

There seems to be more momentum behind NexentaStor. You haven't provided much detail on the hardware arrangement other than that it's old - what are the CPU/RAM numbers? One reason I'd go the NexentaStor route is inline compression on its storage volumes. Your setup probably isn't suitable for the deduplication features, but compression carries a negligible penalty on ZFS-based storage systems.
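For a rough idea of what enabling compression looks like at the ZFS level (Nexenta also exposes this through its GUI; the pool/dataset name here is hypothetical):

    # Turn on inline compression for the dataset backing your storage.
    zfs set compression=on tank/xenstore

    # Check the achieved ratio later to see what it's saving you.
    zfs get compressratio tank/xenstore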

Nexenta is reasonably easy to manage, and the GUI covers most day-to-day tasks. Can you provide more detail on the disk setup?

ewwhite
  • If you plan to continue using your hardware RAID, you should be aware that ZFS (on Nexenta) really needs direct access to the individual disks in order to be fully effective.
  • CIFS is somewhat limited on Nexenta CE: it currently can't use LDAP users for access control (that only works with NFS), so for CIFS you need to create local users on the Nexenta appliance. For me this is a major drawback, though the documentation claims there is ongoing work to fix it. I'm not sure whether it works when bound to an AD domain, but maybe that's irrelevant for you anyway.
  • Snapshots on ZFS are really nice. You can create a virtually unlimited number of snapshots with essentially no overhead (see the sketch after this list). Openfiler works with Linux LVM as far as I know, so I'd expect the LVM-typical, quite heavy performance penalty when doing snapshots.
  • For Xen storage, the deduplication offered by Nexenta could come in very handy, but it needs loads of RAM.
  • I don't want to spread FUD, but the future of Nexenta is still a bit unclear to me with Oracle controlling ZFS and Solaris.
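For reference, here is what the snapshot and dedup points look like as plain ZFS commands; a minimal sketch with a hypothetical tank/xenstore dataset (Nexenta wraps the same operations in its NMC shell and web GUI):

    # Snapshots are instantaneous and initially occupy no space,
    # since ZFS is copy-on-write; only diverging blocks cost anything.
    zfs snapshot tank/xenstore@before-upgrade
    zfs list -t snapshot

    # Dedup is a single property, but plan on the order of 1-2 GB of RAM
    # per TB of unique data, or performance will fall off a cliff.
    zfs set dedup=on tank/xenstore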
Sven

I personally use Solaris 11 Express with my VMware cluster presenting the various ZFS pools to VMware via NFS.

I'm quite comfortable with the Solaris 11 console and prefer the direct ZFS control this presents.
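Sharing a ZFS dataset over NFS from Solaris is similarly direct; a minimal sketch, with the pool name and subnet as placeholders (Solaris 11 also accepts the newer share.nfs property syntax):

    # Create a dataset for VM storage and export it over NFS.
    zfs create tank/vmware
    zfs set sharenfs=on tank/vmware

    # Optionally restrict access to the hypervisor subnet.
    zfs set sharenfs=rw=@192.168.10.0/24,root=@192.168.10.0/24 tank/vmware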

I tried Openfiler (no ZFS), NexentaStor CE (limited web interface for some functions), FreeNAS (its ZFS version was too old) and OpenSolaris (the current fork situation needs to settle down) before deciding to just use Solaris 11 natively.


If you plan on sticking with the hardware RAID controller, go with OpenFiler. If you can invest in a JBOD controller, go with NexentaStor, simply because of the better features the ZFS filesystem has to offer over Linux's LVM+EXT4/XFS/ReiserFS/etc. I would make the ZFS investment because its end-to-end checksumming lets it detect (and, with redundancy, repair) silent data corruption. But if you're just testing stuff and don't want to spend a penny, OpenFiler is a good distro.
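To make the corruption point concrete: a scrub reads every block in the pool and verifies it against its checksum. A minimal sketch (tank is a hypothetical pool name):

    # Read and verify every block; with mirror/raidz redundancy,
    # corrupted copies are repaired automatically from good ones.
    zpool scrub tank
    zpool status -v tank   # scrub progress plus any checksum errors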

churnd

I should point out that NexentaStor CE is not free for business use. I've spoken directly to Nexenta about this: up to 18 TB is free for hobbyist and lab use, but production use is always paid, no matter what.

Check out NAS4Free and FreeNAS for cheap alternatives to OpenFiler and NexentaStor that don't have the iSCSI issues or licensing costs associated with them.

And if you don't need the web interface, just use Linux or FreeBSD on its own.
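If you go that route, an NFS export suitable for a XenServer SR takes only a few lines on a stock Linux box; a minimal sketch (the directory and subnet are hypothetical, and the service name varies by distro):

    # /etc/exports - export a directory to the hypervisor subnet.
    /srv/xen-sr  192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)

    # Reload the export table and make sure the NFS server is running.
    exportfs -ra
    service nfs start    # or: systemctl start nfs-server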

http://www.smbitjournal.com/2012/04/choosing-an-open-storage-operating-system/

But I agree with everyone that NFS is the best option regardless.