@Pete-S They dropped the price to $1,061.24 since I posted. lol Interesting. Yes, but that is a max of 12 NVMe. I may have misunderstood that option with 8 SAS/SATA. I am guessing that the max of 12 would allow for more SAS/SATA, although it doesn't mention it. My issue was also with the available drive capacities and the cost per TB for spinning disks in the 2.5" form factor.
Yeah, especially direct from the OEM. Have you thought about buying the storage from xByte instead?
Are their drives brand new? I did price out a server with specs as similar to Dell's as possible and it was only off by a couple grand.
IMHO, I consider their drives 99.9% brand new; it's possible an OEM install was done on the drive or something like that. Plus the drive has been tested by both the OEM and xByte.
Their hardware is manufacturer refurbished, not used. Big difference.
If you can get a Dell ProSupport (with or without Plus) 7-year warranty on the server, with the drives from xByte, it doesn't really matter whether they are new or not. They are under warranty for 7 years and you have no worries.
What did he mean when he said there's only a limited number of writes on a USB flash drive?
So this is a small mistake in the A+ material. They are making assumptions about the storage media based on the communications protocol. But it's based on very common things. A standard USB flash drive uses a flash memory technology that "wears out" as you write to it, but essentially never wears out as you read it.
So the way that a USB stick is "meant to be used" is that you store things on it and read it a lot. You can change it, but it isn't meant for constant writes.
So under "normal" use, USB sticks last a long time. But if you write to one constantly (like when using it as swap) you will cause the memory chip(s) to die quickly.
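To put rough numbers on it, here's a back-of-envelope sketch. The capacity, P/E cycle count, and write rate below are illustrative assumptions, not specs for any particular stick:

```python
# Rough lifetime estimate for a flash drive under constant writes.
# All figures are illustrative assumptions, not vendor specs.

capacity_gb = 64        # assumed stick capacity
pe_cycles = 1_000       # assumed program/erase cycles for cheap flash
write_rate_mb_s = 5     # assumed sustained swap write rate

# Total data that can be written before the cells wear out,
# assuming perfect wear leveling across all cells.
total_endurance_gb = capacity_gb * pe_cycles
seconds = total_endurance_gb * 1024 / write_rate_mb_s

print(f"Endurance: ~{total_endurance_gb / 1024:.1f} TB written")
print(f"At {write_rate_mb_s} MB/s constant writes: ~{seconds / 86_400:.0f} days")
```

And that's the optimistic case: cheap sticks usually have little or no wear leveling, so the same cells get hammered repeatedly and the drive can fail long before the theoretical endurance is reached.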
However, the / (root) partition can be placed on RAID. I did this and I will show you how, but please don't do it.
Here is where it gets bad.
You will want to update the system and kernel, and all of that can affect the RAID on /. The RAID configuration is handled by the kernel and stored in the boot partition, so you are constantly updating it and constantly keeping the disks busy, even down to the network-packet level. Everything that happens will happen on the /.
What you want to do is look into purchasing a small, reliable drive,
like a 64 GB or 128 GB NVMe SSD, and use that as your /.
Or, if that is too fancy or expensive, get a USB thumb drive, pair it with a Samsung SD card, and install the root partition there.
Then create a RAID 10 on the other 4 drives, because even if the / drive fails it won't matter. Pop in another one and it will auto-detect the RAID. With Cockpit it is very easy to detect the RAID, activate it, and mount it.
[Screenshots: CentOS guest (Ansible 2) in VMware Workstation showing the RAID setup in Cockpit]
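If you go this route, it's also worth scripting a quick health check so you notice when a member drops. A minimal sketch, assuming Linux's md RAID and just parsing /proc/mdstat (adapt for your array names):

```python
# Minimal md RAID health check by parsing /proc/mdstat (Linux only).
import re

def mdstat_status(path="/proc/mdstat"):
    # Status lines look like "[4/4] [UUUU]"; an underscore marks a
    # failed or missing member, e.g. "[4/3] [UU_U]" for a degraded array.
    with open(path) as f:
        text = f.read()
    for match in re.finditer(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]", text):
        expected, active, flags = match.groups()
        yield int(expected), int(active), flags

for expected, active, flags in mdstat_status():
    state = "healthy" if active == expected else "DEGRADED"
    print(f"{active}/{expected} members active [{flags}] -> {state}")
```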
SSD NV protection is to allow the SSD's cache to flush safely should power be lost. RAID NV / battery protection is to allow the RAID's cache to flush safely should power be lost. Each is important on its own, neither covers for the other one.
That's technically slightly incorrect.
The non-volatile cache memory on the RAID controller is there to preserve the data that has not yet been written to the drives, until power is restored again.
On the SSD, the capacitors hold enough charge that the drive can write the remaining data in the cache memory to the actual flash memory after the power is gone. The cache is DRAM, so it will lose its contents within a few seconds.
The only time details like this matter is if, say, you remove the battery from a RAID card: your data might be lost.
I'm missing how that is different than what I said. What you said is correct, but I feel like you just reworded what I said, with the added detail that the RAID card flush is not until power is restored, which one hopes is obvious.
Sorry Scott, you're right. I was just thrown off because you said "SSD NV protection" and because you worded both things the same way. Obviously both are there to protect against data loss at power failure.
OIC, you are saying that the SSD is volatile, but has a battery in most cases? Makes sense.
Almost, let me explain. Below is a picture of a Samsung enterprise SSD, the SM863.
The SSD controller (yellow) is the brain. The flash memory (green cross) is non-volatile, so it will not lose data when power is removed. There is also more flash memory on the backside.
The cache memory, however, is the blue ring, and it will lose its contents as soon as power is removed. It's the same type of memory as in your computer: DRAM. That would cause immediate data loss, which is not good, and that is why enterprise drives have a lot of capacitors (red circles).
The capacitors (red) act like small rechargeable batteries. When the drive loses its external power, these small capacitors work as reserve power for the entire drive. The controller (yellow) knows that it has lost external power, so it quickly writes the data from the cache memory (blue) to the flash memory (green) before the reserve power from the capacitors (red) is exhausted. That way data loss is prevented. This takes a couple of seconds at most.
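If it helps to see that sequence laid out, here's a toy model of it in code. Purely illustrative, not how real firmware works:

```python
# Toy model: data in the SSD's volatile DRAM cache is lost on power cut
# unless capacitors give the controller time to flush it to flash first.

class ToySSD:
    def __init__(self, has_capacitors):
        self.has_capacitors = has_capacitors
        self.dram_cache = []   # volatile: gone when power drops
        self.flash = []        # non-volatile: survives power loss

    def write(self, block):
        self.dram_cache.append(block)  # acknowledged while still in DRAM

    def power_loss(self):
        if self.has_capacitors:
            # Reserve charge lets the controller flush DRAM to flash.
            self.flash.extend(self.dram_cache)
        self.dram_cache.clear()        # DRAM contents vanish either way

for plp in (True, False):
    ssd = ToySSD(has_capacitors=plp)
    for block in range(5):
        ssd.write(block)
    ssd.power_loss()
    print(f"capacitors={plp}: {len(ssd.flash)} of 5 blocks survived")
```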
PS. I had a look at the guest side of things just now, because that is what Microsoft talked about.
Most OSs are virtualization-aware. I had a look at Debian running as a guest under Xen, from a clean install without any Xen guest tools. The Debian installation automatically senses that it's running on virtualized hardware and sets its I/O scheduler to "none", thereby letting the host handle whatever I/O scheduling is needed. This also makes sense because the guest doesn't know what kind of storage the host is using.
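You can check this yourself on any Linux guest. A small sketch that reads the active scheduler for each block device out of sysfs (the kernel marks the active one with brackets):

```python
# Print the active I/O scheduler for each block device (Linux sysfs).
from pathlib import Path

for sched_file in Path("/sys/block").glob("*/queue/scheduler"):
    options = sched_file.read_text().split()
    # The active scheduler is bracketed, e.g. "none [mq-deadline] bfq".
    active = next((o.strip("[]") for o in options if o.startswith("[")),
                  options[0])
    print(f"{sched_file.parts[3]}: {active}")
```

On a virtualized guest like the Debian install described above, you'd expect the virtual disks to show "none".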
Web servers typically do not benefit from extra space, either. Basically, web servers barely use storage at all. But SSDs at least give you faster boot times, swap, updates, etc. Extra space would give us literally nothing in our case.
Are your DBs for your WP servers running on the same or different hosts from WP?
You want them on the same host for performance. There is no advantage to centralization except a tiny bit of consolidation, and that would come at a cost of performance and reliability for the customers. If the DB is on a separate box there are more points of failure to worry about, more noisy-neighbour risk, and a big latency bump on every query.
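If you want to put a number on that latency bump, here's a rough sketch that times TCP connections to a DB port. The remote hostname is a placeholder, and this only measures connection setup, not query time:

```python
# Compare TCP connect latency to a DB port: local vs remote.
import socket
import time

def avg_connect_ms(host, port=3306, tries=20):
    total = 0.0
    for _ in range(tries):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += time.perf_counter() - start
    return total / tries * 1000

for host in ("127.0.0.1", "db.example.lan"):  # hypothetical remote host
    try:
        print(f"{host}: {avg_connect_ms(host):.2f} ms avg connect")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```

A busy WordPress page can fire dozens of queries per load, and each one pays that round trip, so even a millisecond or two per query adds up fast.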