
    ZFS Pool Online but Cannot Import

    IT Discussion
    zfs truenas proxmox storage
    • scottalanmiller
      last edited by

      We have a ZFS pool from a Proxmox server that died (not an install that we did). There is no backup (not an environment we set up). The drives didn't fail; they are clean and healthy. We moved the drives to another host and they show up fine, and everything registers fine. Running zpool import lists the pool, but when we actually try to import it we get "one or more devices is currently unavailable", even though every device clearly shows as available.

      There used to be more pools showing here as well; the others have disappeared over time. Originally they all imported with only minor problems, but they have since stopped importing. The device names have changed over time too, but they are correct.

      root@pve1:/usr/local/mesh_services/meshagent# zpool import
         pool: rpool-pmx3
           id: 9234020319468906434
        state: ONLINE
      status: The pool was last accessed by another system.
       action: The pool can be imported using its name or numeric identifier and
              the '-f' flag.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
       config:
      
              rpool-pmx3  ONLINE
                mirror-0  ONLINE
                  sdb3    ONLINE
                  sdc3    ONLINE
                mirror-1  ONLINE
                  sdd     ONLINE
                  sde     ONLINE
      root@pve1:/usr/local/mesh_services/meshagent# zpool import -f rpool-pmx3
      cannot import 'rpool-pmx3': one or more devices is currently unavailable
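
      One commonly suggested next step in this situation (not something tried above, so treat it as a sketch) is to point zpool import at a stable device directory and attempt a read-only, no-mount import first, so nothing gets written while investigating. The pool name comes from the output above; /dev/disk/by-id is just one candidate directory.

      # Re-scan using persistent device names instead of sdX
      zpool import -d /dev/disk/by-id

      # Read-only, no-mount import so nothing is written while investigating
      zpool import -o readonly=on -N -f -d /dev/disk/by-id rpool-pmx3

      # If it comes in, inspect it before going any further
      zpool status rpool-pmx3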
      
      • scottalanmiller
        last edited by scottalanmiller

        Some useful nuggets of info from this link...

        https://serverfault.com/questions/656073/zfs-pool-reports-a-missing-device-but-it-is-not-missing

        This bit from "Jim" in 2020 is super useful for background...


        I know this is a five year-old question, and your immediate problem was solved. But this is one of the few specific search results that come up in a web search about missing ZFS devices (at least the keywords I used), and it might help others to know this:

        This specific problem of devices going "missing", is a known problem with ZFS on Linux. (Specifically on Linux.) The problem, I believe, is two-fold, and although the ZOL team could themselves fix it (probably with a lot of work), it's not entirely a ZOL problem:

        • While no OS has a perfectly stable way of referring to devices, for this specific use case, Linux is a little worse than, say, Illumos, BSD, or Solaris. Sure, we have device IDs, GUIDs, and even better--the newer 'WWN' standard. But the problem is, some storage controllers--notably some USB (v3 and 4) controllers, eSATA, and others, as well as many types of consumer-grade external enclosures--either can't always see those, or worse, don't pass them through to the OS. Merely plugging a cable into the "wrong" port of an external enclosure can trigger this problem in ZFS, and there's no getting around it.

        • ZOL for some reason can't pick up that the disks do actually exist and are visible to the OS, just not at any of the previous locations ZFS knew before (e.g. /dev, /dev/disk/by-id, by-path, by-guid, etc.) Or the one specific previous location, more to the point. Even if you do a proper zpool export before moving anything around. This is particularly frustrating about ZOL or ZFS in particular. (I remember this problem even on Solaris, but granted that was a significantly older version of ZFS that would lose the entire pool if the ZIL went missing...which I lost everything once to [but had backups].)
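
        As an illustration of the addressing problem described in the two points above (not part of the quoted answer), these commands show which persistent names (by-id, by-path, WWN) the OS actually exposes for each disk, and whether the ZFS label on a member device is still readable; /dev/sdb3 is just the example device from the first post.

        # Which persistent names exist for each disk?
        ls -l /dev/disk/by-id/
        ls -l /dev/disk/by-path/
        lsblk -o NAME,SIZE,SERIAL,WWN,MODEL

        # Is the ZFS label on a member device still readable?
        zdb -l /dev/sdb3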

        The obvious workaround is to not use consumer-grade hardware with ZFS, especially consumer-grade external enclosures that use some consumer-level protocol like USB, Firewire, eSATA, etc. (External SAS should be fine.)

        That specifically--consumer grade external enclosures--has caused me unending headaches. While I did occasionally have this specific problem with slightly more "enterprise"-grade LSI SAS controllers and rackmount chassis with a 5x4 bay, moving to a more portable solution with three external bays pretty much unleashed hell. Thankfully my array is a stripe of three-way mirrors, because at one point it literally lost track of 8 drives (out of 12 total), and the only solution was to resilver them. (Which was mostly reads at GBs/s so at least it didn't take days or weeks.)

        So I don't know what the long-term solution is. I wouldn't blame the volunteers working on this mountain of code, if they felt that covering all the edge cases of consumer-grade hardware, for Linux specifically, was out of scope.

        But I think that a more exhaustive search of the metadata that ZFS itself manages on each disk would fix many related problems. (Btrfs, for example, doesn't suffer from this problem at all. I can move stuff around willy-nilly completely at random, and it has never once complained. Granted, Btrfs has other shortcomings compared to ZFS (the list of pros and cons is endless), and it's also native Linux--but it at least goes to show that the problem can, in theory, be solved, at least on Linux, specifically by the software itself.)

        I've cobbled together a workaround to this problem, and I've now implemented it on all my ZFS arrays, even at work, even on enterprise hardware:

        • Turn the external enclosures off, so that ZFS doesn't automatically import the pool. (It is frustrating that there still seems to be no way to tell ZFS not to do this. Renaming the cachefile or setting it to "none" doesn't work. Even without the addressing problems, I almost never want the pools to auto-mount but would rather an automatic script do it.)

        • Once the system is up and settled down, then turn on the external enclosures.

        • Run a script that exports and imports the pool a few times in a row (frustratingly, this is sometimes necessary for it to see even legitimate minor changes). The most important thing here is to import in read-only mode to avoid an automatic resilver kicking off.

        • The script then shows the user the output of zpool status for the read-only pool, and prompts the user to confirm that it's OK to go ahead and import in full read-write mode.

        Doing this has saved me (or my data) countless times. Usually it means I have to move drives around, or more often just cables, until the addressing gets back to where it was. It also provides me with the opportunity to try different addressing methods with the -d switch. Some combination of that, and changing cables/locations, has solved the problem a few times.

        In my particular case, mounting with -d /dev/disk/by-path is usually the optimal choice, because my former favorite, -d /dev/disk/by-id, is actually fairly unreliable with my current setup. Usually a whole bay of drives is simply missing from the /dev/disk/by-id directory. (And in this case it's hard to blame even Linux. It's just a wonky setup that further aggravates the existing shortcomings previously noted.)

        Sure, it means the server can't be relied upon to come up automatically without manual intervention. But considering 1) it runs full-time on a big battery backup, 2) I've knowingly made that tradeoff for the benefit of being able to use consumer-grade hardware that doesn't require two people and a dolly to move... that's an OK tradeoff.
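
        The script itself isn't posted anywhere in the quoted answer; a minimal sketch of the workflow it describes, assuming the pool name and a device directory are passed in as arguments, might look like this.

        #!/bin/sh
        # Sketch of the export/re-import-read-only workflow described above.
        # Usage: ./careful-import.sh <pool> [device-dir]
        POOL="$1"
        DEVDIR="${2:-/dev/disk/by-path}"

        # Export and re-import read-only a few times so ZFS re-scans the devices
        for i in 1 2 3; do
            zpool export "$POOL" 2>/dev/null
            zpool import -o readonly=on -N -d "$DEVDIR" -f "$POOL" && break
            sleep 2
        done

        # Show the result and ask before committing to a read-write import
        zpool status "$POOL"
        printf 'Re-import %s read-write? [y/N] ' "$POOL"
        read answer
        if [ "$answer" = "y" ]; then
            zpool export "$POOL"
            zpool import -d "$DEVDIR" -f "$POOL"
        fi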

        • scottalanmiller
          last edited by

          Current status... getting additional drives mounted so that we can take block level images of these devices so that we can more safely experiment.
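
          The imaging commands aren't spelled out in the thread; assuming standard tools and an example destination path, taking a block-level image of each member disk would look something like this.

          # Copy each member disk to an image file before experimenting
          # (destination paths are examples)
          ddrescue /dev/sdb /mnt/images/sdb.img /mnt/images/sdb.map

          # Or with plain dd, if ddrescue isn't installed and the disks are healthy
          dd if=/dev/sdb of=/mnt/images/sdb.img bs=1M conv=noerror,sync status=progress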

          • scottalanmiller
            last edited by

            One big thing we've learned about ZFS risk is that it forces us to deal with enormous pools of block data in order to do anything. The ability to copy, image, move, or back up is heavily curtailed because we are forced to work at the array level: ZFS merges the RAID, LVM, and filesystem layers into a single monolith that, if it fails, leaves you dramatically exposed.

            • travisdh1 @scottalanmiller
              last edited by

              @scottalanmiller said in ZFS Pool Online but Cannot Import:

              One big thing we've learned about ZFS risk is that it forces us to deal with enormous pools of block data in order to do anything. The ability to copy, image, move, or back up is heavily curtailed because we are forced to work at the array level: ZFS merges the RAID, LVM, and filesystem layers into a single monolith that, if it fails, leaves you dramatically exposed.

              Yep. LVM and MD being separate things is not necessarily a bad thing, especially if you've got devices that can change where they are in the /dev system.

              • scottalanmiller @travisdh1
                last edited by

                @travisdh1 said in ZFS Pool Online but Cannot Import:

                @scottalanmiller said in ZFS Pool Online but Cannot Import:

                One big thing we've learned about ZFS risk is that it forces us to deal with enormous pools of block data in order to do anything. The ability to copy, image, move, or back up is heavily curtailed because we are forced to work at the array level: ZFS merges the RAID, LVM, and filesystem layers into a single monolith that, if it fails, leaves you dramatically exposed.

                Yep. LVM and MD being separate things is not necessarily a bad thing, especially if you've got devices that can change where they are in the /dev system.

                Really, it's a very important good thing. ZFS merging all of that together adds so much confusion and risk exposure, it's nuts. There is a reason that no other production storage platform has ever done that.

                • scottalanmiller
                  last edited by

                   Just a quick update. We are imaging the drives, converting the images to qcow2, and mounting them on an Ubuntu desktop; UFS Explorer is, so far, able to see the data in them. Not ideal, but it's working so far.

                  https://www.ufsexplorer.com/articles/how-to/recover-data-zfs-volume/
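
                   The exact conversion commands aren't given above; assuming raw images like the ones being taken, the qcow2 step would be roughly this, with the paths as placeholders.

                   # Convert a raw disk image to qcow2
                   qemu-img convert -p -f raw -O qcow2 /mnt/images/sdb.img /mnt/images/sdb.qcow2

                   # Sanity-check the result
                   qemu-img info /mnt/images/sdb.qcow2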

                  • EddieJennings
                    last edited by

                    You may want to seek out Jim Salter's content concerning ZFS. This is the community he's started since leaving the ZFS subreddit.

                    https://discourse.practicalzfs.com/

                    • scottalanmiller @EddieJennings
                      last edited by

                      @EddieJennings said in ZFS Pool Online but Cannot Import:

                      You may want to seek out Jim Salter's content concerning ZFS. This is the community he's started since leaving the ZFS subreddit.

                      https://discourse.practicalzfs.com/

                      Like everywhere else, not one single thing similar to this issue 😞

                      • scottalanmiller
                        last edited by

                        After scouring countless sites and articles, the only thing that could fully read the drives was UFS Explorer. $700 and many, many crashes later, we are starting to have a reliable process for recovering the data: use UFS Explorer to recover the disks as raw disk images, attach those raw images to new VMs manually, then do a Windows recovery on each one.
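
                        The attach step isn't shown in the thread; if the new VMs live on a Proxmox host, importing a recovered raw image into a VM would look roughly like this, where the VM ID, image path, and storage name are all placeholders.

                        # Import the recovered raw image as a disk for VM 101 on storage 'local-lvm'
                        qm importdisk 101 /mnt/recovered/server1.img local-lvm

                        # Attach the imported disk (importdisk reports the created volume name)
                        qm set 101 --scsi0 local-lvm:vm-101-disk-0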
