Posts made by NerdyDad
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@scottalanmiller Not yet, but plan to.
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@scottalanmiller I'm looking at proposals right now for a storage refresh covering at least 6 TB of production data, priced somewhere in the vicinity of $35k to $55k. How much more would I be looking at spending on hyperconvergence for the same setup? I also have concerns about it, of course, such as: how is hyperconvergence better than the current hosts/storage setup if it's all in one box? Wouldn't it by nature be a worse single point of failure? And how about backing up outside of the box, say to a local NAS or private cloud storage?
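Purely for comparison's sake, here is a quick per-TB normalization of those quotes. The 6 TB and $35k-$55k figures come from the numbers above; the rest is just arithmetic, not a statement of what hyperconvergence would actually cost.

```python
# Rough price-per-TB normalization of the storage refresh quotes.
# Figures from the post: ~6 TB of production data, quotes of $35k-$55k.
capacity_tb = 6
low_quote, high_quote = 35_000, 55_000

print(f"Low quote:  ${low_quote / capacity_tb:,.0f} per TB")   # ~$5,833 per TB
print(f"High quote: ${high_quote / capacity_tb:,.0f} per TB")  # ~$9,167 per TB
```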
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
@scottalanmiller I likely won't put in my last spare drive unless I absolutely have to. My main end goal is to somehow migrate the data and retire the SAN. The rebuild went from 0% to 17% in about 3 hours. I'm going to let it continue and hopefully it will be done by morning. I will check on it once I get back to the office.
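As a quick sanity check on whether "done by morning" is realistic, here's a back-of-the-envelope extrapolation. The 17% in roughly 3 hours comes from the observation above; the steady-rate assumption is mine.

```python
# Extrapolate total RAID 6 rebuild time from observed progress.
# Observed (from the post): 17% complete after ~3 hours.
progress = 0.17
elapsed_hours = 3.0

rate_per_hour = progress / elapsed_hours             # ~5.7% per hour
total_hours = 1.0 / rate_per_hour                    # ~17.6 hours for a full rebuild
remaining_hours = (1.0 - progress) / rate_per_hour   # ~14.6 hours still to go

print(f"Estimated total rebuild time: {total_hours:.1f} h")
print(f"Estimated time remaining:     {remaining_hours:.1f} h")
```

At that rate a full rebuild lands around 17-18 hours, so an overnight finish is plausible but not guaranteed, especially if the rate drops while the array is under load.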
-
RE: Replacing the Dead IPOD, SAN Bit the Dust
Thanks @scottalanmiller for helping me out with this predicament.
Current status of the SAN: the firmware is as up to date as it can go right now (the controllers are one revision behind the latest). Two drives are rebuilding in a RAID 6 array, and one more drive is warning about a potential failure, but I'm not going to replace it until the other two are done rebuilding. The SAN is a Dell EqualLogic PS5000X.
The host is a Dell PowerEdge R610 with 86 GB of RAM and 16 vCPUs running VMware ESXi 6.0. It currently supports 3 VMs totaling about 350 GB of production data. Two of those VMs are on the host's local datastore, but one VM is on the SAN we need; it totals 220 GB of data. There are no backups (my mistake).
We've tried flip-flopping failovers between the controllers, but that only buys us so much time: long enough to boot the VM back up, but not long enough to actually back up the data. The backplane has been replaced. We also tried swapping in replacement controllers, and all of the disks turned orange instead of green; when we went back to the original controller, the array began to operate normally again.
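To put rough numbers on why that window isn't enough, here's a best-case copy-time estimate. The 220 GB figure comes from above, while the ~110 MB/s gigabit-iSCSI throughput is an assumption of mine for illustration.

```python
# Best-case time to copy the 220 GB VM off the SAN over gigabit iSCSI.
# 220 GB is from the post; ~110 MB/s is an assumed practical ceiling for 1 GbE.
data_gb = 220
throughput_mb_s = 110  # assumed; a degraded, rebuilding array will push far less

transfer_minutes = (data_gb * 1024) / throughput_mb_s / 60
print(f"Best-case copy time: {transfer_minutes:.0f} minutes")  # roughly 34 minutes
```

In practice the struggling array won't sustain anywhere near that rate, so the window actually needed is considerably longer than half an hour.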
Dell support has advised us to let the array continue rebuilding (it was at 17% at the time). Once it's done, I'm going to attempt to connect to it again and try to pull the data off. The support tech thought we were overtaxing the SAN and basically freezing it up.
Besides retiring the thing, are there any pointers I should consider to ensure that the backup or migration is a success?