@johnhooks said
Our cafe had pulled pork and Mac n cheese today. It was pretty good.
That would call for an immediate afternoon nap.
So there is no "this always happens".
It's a "make sure you have a good backup and see what happens" kind of thing?
Was rocking the MangoCon 2016 shirt at soccer training tonight.
Was going to take a picture and forgot.
Powered down my last DELL server from 2004 today.
Don't judge.
@Dashrender said:
Because they are sales people who don't care about your data, only selling their stuff.
ML Useful Tip #43:
Salespeople are always out to get you!
When you post pictures of people, can you tag them so newbies can start matching names to faces?
#watchingfromafar
How else do you expect a little startup like Microsoft to make money?
I have a new server that has seen me go from an H310 to an H710, and from 7200 RPM SATA drives to EDGE SSDs.
I'd like to post the numbers I got from testing and have some questions answered. I'm sure this all makes sense somewhere; I'm just not sure where.
I'll post the numbers, and then my questions.
Hopefully this thread can bring about some configuration settings for anyone looking to configure their RAID cards optimally.
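If anyone wants a quick way to get rough, comparable numbers while we discuss settings, here is a minimal sketch of a sequential-write throughput check. This is not the benchmark behind my numbers; the path and sizes are placeholders you would adjust for your own array, and a dedicated benchmarking tool will give far more detail (random I/O, queue depths, and so on).

```python
# Minimal sequential-write throughput sketch for comparing controller/drive setups.
# Assumptions: D:\bench is a placeholder path on the volume under test, and the
# sizes are only a starting point.
import os
import time

TEST_FILE = r"D:\bench\testfile.bin"   # placeholder path on the volume being tested
BLOCK_MB = 1                           # write in 1 MiB chunks
TOTAL_MB = 2048                        # 2 GiB total, well above typical controller cache

def sequential_write_mb_per_s(path=TEST_FILE, total_mb=TOTAL_MB):
    """Write total_mb of random data in 1 MiB chunks and return MB/s."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    block = os.urandom(BLOCK_MB * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // BLOCK_MB):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())           # make sure the data actually hits the array
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"Sequential write: {sequential_write_mb_per_s():.1f} MB/s")
```

Keep the total size well above the controller cache so write-back caching doesn't flatter the numbers.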
We just use their phones.
Much like oxygen, our kids would die without constant touching of their phones, so we are assured they will always have them.
@YevP said in Something Happen at BackBlaze?:
@scottalanmiller Yea, happy to be here! Mango Lassi's been something I've known about for a while, but just like some very technical subreddits I tried to stay away as my expertise is more cat memes and silly videos - but I'm here now, so feel free to ping me when/if you got a Backblaze question! I'll do my best
You'll find a good mix of cat memes and silly videos here!
@olivier said
Imagine if only I had a bigger team
Well at least you have some willing testers here at ML.
I mean thanks to ML it's a pretty easy install and upgrade.
If running a script is too much, then doing backups and stuff like that might be too much, as well.
You can do it right from the graphical user thing on the host.
@Dashrender said in DPI - Deep Packet Inspection in Unifi:
Seems to be working so far.
Hmmm, what's up with the redacted streaming media...
@travisdh1 said in Is this the right place to troll SAM?:
@BRRABill said in Is this the right place to troll SAM?:
@AshleyJR said in Is this the right place to troll SAM?:
@scottalanmiller I have to say, Scott, I haven't seen any interesting conversations going on here yet. You promised me arguments.
Are you kidding?
I've been arguing with him for days on that one thread.
lol, yeah, been great popcorn time over here
I'm responsible for a lot of popcorn on this site.
And a lot of FFS references.
I like cards that stand out, yet are in the traditional format.
NOTE: Anything I think I've learned here, discussed here, or that needs to be discussed here, I'll mark with *ML.
My day started with only a mildly ominous touch. It was raining, and my youngest daughter thought she was going to throw up. My wife decided to stay home, so I went in to work. Good thing I did.
The day was going fine until around 3:00 PM, when I noticed our e-mail server was moving slowly. I tried Remote Desktop; it was not responding. Time to go to the local box, possibly for the old-fashioned hard reboot.
When I got into the server room, I noticed one of the drives on our main (and ONLY) data server was blinking amber. I go from a 1 to a 5 on the 1 to 10 anxiety scale because that kind of stuff always makes me nervous. Anyway, no problem, I have spare drives on the shelf ready to go. I pull out the old drive. No problem. I put in the new drive, no problem. I go to log in to start rebuilding the array, and I notice that the server is rebooting. Hmm, that's odd. I look at the drive. Now TWO of the four are blinking amber. I've now gone to a 10, LOL.
Turns out a second drive failed after I did the hot plug. I now realize my data server array is gone. 25 years of data possibly gone forever. Let's hope our Datto device is as good as advertised.
I started up a hybrid virtualization on our Datto. (Our Datto Alto 2 device cannot virtualize locally, only in the cloud.) Within about 15 minutes of the "event", we had a virtual server up and running on the LAN. The users could go about working as they had been. I made an announcement not to save anything to the server, and began the process of doing a BMR.
*ML1: I have extra non-OEM licenses of Server 2003, so this was actually a legit use of the technology.
*ML2: The reason I said not to save is that the Datto does allow you to save, but then you need to take a backup of the virtual device and do the BMR from that. Since our device can only virtualize in the cloud, that would not be a great option. All other Datto devices virtualize both locally and in the cloud, so for them it would be more feasible.
The BMR is where the trouble began. We have a brand new server, but I did not want to use that, as it will be the platform for our new Hyper-V VMs. I grabbed a spare desktop we had around that also had an Intel RAID controller in it, plugged in an SSD, and began the BMR. In my earlier tests I had some issues with BMR, so just in case, I only restored the boot drive. In those tests I was able to fix the issues with the StorageCraft Recovery Environment. (Datto uses ShadowProtect as its backup program.) But we were not able to fix this particular issue: it booted to a black screen. After a while on the phone with Datto support, I decided to BMR another machine while the tech did some backend work on the Datto box to try another BMR method. I got the second desktop restored, hit a STOP 7B error, and was able to fix that with the StorageCraft recovery CD. But then I got another strange error, a C0000135 error. I started Googling this while the Datto tech did another BMR on the Intel RAID machine.
Google told me this error was caused by a recent Windows Update. I was able to boot into the SC recovery environment and manually "uninstall" the KB update that caused the 135 error (by copying files from the KB's uninstall folder). With fingers crossed, I rebooted the machine, and it came up. I started to restore the data drive image.
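For illustration only, here is roughly what that manual "uninstall" amounts to. I did it by hand from the recovery environment, not with a script, and the KB number and paths below are placeholders, so treat this as a sketch of the idea rather than something to run blindly.

```python
# Rough sketch of the manual KB rollback described above.
# Assumption: the restored Server 2003 image keeps pre-update copies of the
# patched files in a $NtUninstallKB...$ folder under the Windows directory.
# The KB number and drive letter are placeholders, not from the actual incident.
import os
import shutil

WINDIR = r"C:\WINDOWS"                                        # offline Windows folder of the restored image
KB_FOLDER = os.path.join(WINDIR, "$NtUninstallKB0000000$")    # placeholder KB uninstall folder
SYSTEM32 = os.path.join(WINDIR, "system32")

def roll_back_kb(uninstall_dir=KB_FOLDER, target_dir=SYSTEM32):
    """Copy each saved pre-update file back over its updated counterpart."""
    for name in os.listdir(uninstall_dir):
        src = os.path.join(uninstall_dir, name)
        if not os.path.isfile(src):
            continue                                          # skip the spuninst subfolder, etc.
        dst = os.path.join(target_dir, name)
        if os.path.exists(dst):
            shutil.copy2(src, dst)                            # put the older file back in place
            print(f"rolled back {name}")

if __name__ == "__main__":
    roll_back_kb()
```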
Restoring the 100GB data drive image took about 75 minutes. I rebooted again, and everything was exactly how it was at 2:59.
So it took about 12 hours, but I had the server back up as it was. About 9 of those hours were spent getting the BMR to work. I think most of the wasted time was due to a driver issue with the Intel RAID card.
*ML3: There has been a lot of discussion about BMR and why it's not always a great idea. I'm not sure how I could have made this go better. I've considered keeping a machine around that I know I can BMR to, but I'm not sure whether the image itself (and the files it has loaded) makes a difference in what you can BMR to.
The server is now running on a DELL desktop with a single SSD, but it's up. Considering the age of the server, this is honestly probably a better solution! The data on this machine will be moved to the new server once that is up and running (whenever Server 2016 comes out).
The main thing I took away from this is ... working backups are so, so important. I also understand why virtualizing is so awesome ... no need to worry about these hardware issues.
I'll be running a session on how to keep your cool online.