4

I set up ZFS-HA following the excellent description on Github (see here). After extensive testing, I rolled the setup out to production using 5x12 disks in RAIDZ3, connected to two nodes via HBA controllers. This ran quite smoothly until last night, when one of the two storage pools suddenly faulted with "The pool metadata is corrupted." during a scrub run. At this point I can only speculate about the cause: both pools were set up with SCSI fencing in pacemaker, and the disk reservations worked flawlessly in all failure scenarios I tested before going into production. The only major incident recently was two complete power outages without UPS support (read: the power was simply gone from one moment to the next). However, the true reason for the corruption might also be something else entirely.
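
For reference, the layout corresponds to five 12-disk RAIDZ3 vdevs built on the multipath devices listed at the end of this question. A creation command of roughly this shape would produce it (a sketch only: the last four vdevs are elided, and the ashift setting is an assumption, not the command actually used):

# zpool create -o ashift=12 tank \
      raidz3 /dev/mapper/35000c5008472696f /dev/mapper/35000c5008472765f \
             /dev/mapper/35000c500986607bf /dev/mapper/35000c5008472687f \
             /dev/mapper/35000c500847272ef /dev/mapper/35000c50084727ce7 \
             /dev/mapper/35000c50084729723 /dev/mapper/35000c500847298cf \
             /dev/mapper/35000c50084728f6b /dev/mapper/35000c50084726753 \
             /dev/mapper/35000c50085dd15bb /dev/mapper/35000c50084726e87 \
      raidz3 ...                        # raidz3-1 through raidz3-4: 12 disks each, same pattern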

The situation now is that I cannot import the pool anymore (please see the output of zpool import at the end of this question). So far, all my attempts to rescue the pool have failed:

# zpool import -f tank
cannot import 'tank': one or more devices is currently unavailable

# zpool import -F tank
cannot import 'tank': one or more devices is currently unavailable

This puzzles me a bit, since it does not actually say that the only remaining option is to destroy the pool (which would be the expected response for a fatally corrupted pool).
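
To narrow down which device the import scan actually considers unavailable, it may help to look at what the scan sees and at the on-disk labels directly, e.g. (assuming the labels are readable on the whole multipath devices; if ZFS partitioned the disks itself, the labels sit on the first partition and the partition naming depends on the multipath setup):

# zpool import -d /dev/mapper              # list the pools and member devices found in that directory
# zdb -l /dev/mapper/35000c5008472696f     # dump the ZFS label(s) of one member disk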

# zpool clear -F tank
cannot open 'tank': no such pool

I also manually removed all SCSI reservations, e.g.:

# DEVICE=35000c5008472696f
# sg_persist --in --no-inquiry --read-reservation --device=/dev/mapper/$DEVICE    # show the current reservation
# sg_persist --in --no-inquiry --read-key --device=/dev/mapper/$DEVICE            # show the registered keys
# sg_persist --out --no-inquiry --register --param-sark=0x80d0001 --device=/dev/mapper/$DEVICE    # register a key of our own
# sg_persist --out --no-inquiry --clear --param-rk=0x80d0001 --device=/dev/mapper/$DEVICE         # clear all reservations and keys using that key
# sg_persist --in --no-inquiry --read-reservation --device=/dev/mapper/$DEVICE    # verify that the reservation is gone
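
To confirm that no reservation is left on any member disk, the read-reservation call can be looped over all multipath devices (a sketch; the glob matches the 17-character WWNs from the pool listing below and may need adjusting on your system):

for DEV in /dev/mapper/35000c500????????; do
    echo "== $DEV"
    sg_persist --in --no-inquiry --read-reservation --device="$DEV"
done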

I further tried cutting A/C power to the disk shelves to clear any transient state that might remain in the disks.

I am, quite frankly, running out of options. The only item left on my list is the -X option of zpool import, which I will only try after all other measures have failed.
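
For reference, the man page documents a -n dry run that can be combined with -F/-X to report whether the recovery would succeed without actually performing it, and an import can also be attempted read-only (whether readonly can be combined with the rewind options depends on the zfs version):

# zpool import -f -FXn tank                 # dry run: report whether -FX recovery could import the pool
# zpool import -f -o readonly=on tank       # attempt a read-only import, without writing to the pool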

So my question is: have you run into anything like this before and, more importantly, did you find a way to resolve it? I would be very grateful for any suggestions you might have.

=========

Pool layout/configuration:

   pool: tank
     id: 1858269358818362832
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-72
 config:

        tank                   FAULTED  corrupted data
          raidz3-0             FAULTED  corrupted data
            35000c5008472696f  ONLINE
            35000c5008472765f  ONLINE
            35000c500986607bf  ONLINE
            35000c5008472687f  ONLINE
            35000c500847272ef  ONLINE
            35000c50084727ce7  ONLINE
            35000c50084729723  ONLINE
            35000c500847298cf  ONLINE
            35000c50084728f6b  ONLINE
            35000c50084726753  ONLINE
            35000c50085dd15bb  ONLINE
            35000c50084726e87  ONLINE
          raidz3-1             FAULTED  corrupted data
            35000c50084a8a163  ONLINE
            35000c50084e80807  ONLINE
            35000c5008472940f  ONLINE
            35000c50084a8f373  ONLINE
            35000c500847266a3  ONLINE
            35000c50084726307  ONLINE
            35000c50084726897  ONLINE
            35000c5008472908f  ONLINE
            35000c50084727083  ONLINE
            35000c50084727c8b  ONLINE
            35000c500847284e3  ONLINE
            35000c5008472670b  ONLINE
          raidz3-2             FAULTED  corrupted data
            35000c50084a884eb  ONLINE
            35000c500847262bb  ONLINE
            35000c50084eb9f43  ONLINE
            35000c50085030a4b  ONLINE
            35000c50084eb238f  ONLINE
            35000c50084eb6873  ONLINE
            35000c50084728baf  ONLINE
            35000c50084eb4c83  ONLINE
            35000c50084727443  ONLINE
            35000c50084a8405b  ONLINE
            35000c5008472868f  ONLINE
            35000c50084727c6f  ONLINE
          raidz3-3             FAULTED  corrupted data
            35000c50084eaa467  ONLINE
            35000c50084e7d99b  ONLINE
            35000c50084eb55e3  ONLINE
            35000c500847271d7  ONLINE
            35000c50084726cef  ONLINE
            35000c50084726763  ONLINE
            35000c50084727713  ONLINE
            35000c50084728127  ONLINE
            35000c50084ed0457  ONLINE
            35000c50084e5eefb  ONLINE
            35000c50084ecae2f  ONLINE
            35000c50085522177  ONLINE
          raidz3-4             FAULTED  corrupted data
            35000c500855223c7  ONLINE
            35000c50085521a07  ONLINE
            35000c50085595dff  ONLINE
            35000c500855948a3  ONLINE
            35000c50084f98757  ONLINE
            35000c50084f981eb  ONLINE
            35000c50084f8b0d7  ONLINE
            35000c50084f8d7f7  ONLINE
            35000c5008539d9a7  ONLINE
            35000c5008552148b  ONLINE
            35000c50085521457  ONLINE
            35000c500855212b3  ONLINE

Edit:

Servers are 2x Dell PowerEdge R630, the controllers are Dell OEM versions of a Broadcom SAS HBA (should be similar to the SAS 9300-8e), and all 60 disks in this pool are Seagate ST6000NM0034. The enclosure is a Quanta MESOS M4600H.

Edit 2:

OS is CentOS 7

ZFS is zfs-0.7.3-1.el7_4.x86_64

Michael

5 Answers

5

In the end I resorted to using the -X option for the import. This exercised all disks by reading at 2 GB/s for about 36 hours. After that, no error message was given, the file system was mounted, and it is now fully accessible again. So far, no data inconsistencies have been detected (a zfs scrub is still running). Thanks for all your replies.

However, for future readers I want to pass on the warning about the -X option from the man page: "This option can be extremely hazardous to the health of your pool and should only be used as a last resort."
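
For anyone in the same situation, the obvious sanity checks after such a recovery are along these lines:

# zpool status -v tank     # scrub progress, plus a list of any files with unrecoverable errors
# zfs list -r tank         # verify that the datasets are present and mounted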

Michael
1

It seems upstream does not have many options here. The following is from the Oracle Solaris ZFS Troubleshooting and Pool Recovery document, which states that zpool import -F is really the only option you have, short of hiring a ZFS guru who would actually look into how the metadata is corrupted:

If the pool cannot be recovered by the pool recovery method described above, you must restore the pool and all its data from a backup copy.

And I don't think OpenZFS has brought much that would change this situation, which is indeed sad news.

P.S. This has nothing to do with how the pool got into its current state, but don't you think that building 12-disk-wide RAIDZ3 vdevs is a problem in itself, even with three parity disks per vdev? Cold data and so on, you know.

drookie
0

For future reference, you can try these commands when all else fails:

zpool import -F "pool-name", then -FX, then -T in that order. (First take a backup of the original media.)

P.S. T stands for Terminator.
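
For context, -T rewinds the import to a specific transaction group, so it needs a txg number as its argument. A rough sketch of how that is typically used (the exact zdb flags and whether read-only can be combined with the rewind vary between releases, so verify against your man pages):

zdb -lu /dev/mapper/35000c5008472696f          # dump the label and its uberblocks; note their txg values
zpool import -f -o readonly=on -T <txg> tank   # attempt a read-only import rewound to one of those txgs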

SamTzu
0

I had 10 x 6 TB disks (unfortunately) in a linear array under XenCenter. I later realized that if one of the disks drops, all data on the array is lost, so my first lessons were to i) do regular full backups and ii) use mirroring or RAIDZ.

# zpool import
  pool: tank
    id: 13141478163464403695
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
       The pool may be active on another system, but can be imported using
       the '-f' flag.
  see: http://zfsonlinux.org/msg/ZFS-8000-72
config:
    hdd         FAULTED  corrupted data
      sda       ONLINE
      sdb       ONLINE
      sdc       ONLINE
      sdd       ONLINE
      sde       ONLINE
      sdf       ONLINE
      sdi       ONLINE
      sdj       ONLINE
      sdk       ONLINE

Running zpool import -f tank did not work:

cannot import 'tank': I/O error
    Destroy and re-create the pool from
    a backup source.

Following the instructions at https://www.truenas.com/community/threads/zfs-pool-corrupted.98445/post-742141 I first ran

 zpool import -f -FXn tank

which ran for roughly 6 hours and exited without any output. Then I ran

zpool import -f -FX tank

which finally gave an error:

cannot import 'tank': one or more devices is currently unavailable

This made me realize that the 10th disk was offline, which turned out to be due to the hardware RAID controller. I re-enabled the 10th disk and got the array working again. After that I took a full backup, destroyed the array, and recreated it as RAIDZ2.
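
For completeness, the recreate step was roughly of this shape (a sketch only: the device list must be your ten disks, ideally addressed by persistent /dev/disk/by-id names rather than sdX, and <tenth-disk> stands for the re-enabled drive; the pool name follows the import commands above):

zpool destroy tank                    # only after the full backup has been verified
zpool create tank raidz2 \
    sda sdb sdc sdd sde sdf sdi sdj sdk <tenth-disk>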

0

What are the hardware details? Makes and models of servers, disks, enclosures and controllers.

I would disable all HA features and focus on working on one system.

  • Put one node in standby: pcs cluster standby or just disable pacemaker.

  • Manually import the pool on the node you'll be working on:

    zpool import tank -d /dev/mapper/ and observe the result.

Also, what do you see in dmesg after you do this?
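
Put together, that sequence might look roughly like this (the node name is a placeholder; the -d directory and pool name are taken from the question):

pcs cluster standby <other-node>     # keep the second node away from the disks while debugging
zpool import -d /dev/mapper/ tank    # scan only the multipath devices and import the pool
dmesg | tail -n 50                   # look for SCSI, multipath or ZFS errors triggered by the import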

ewwhite