Is this server strategy reckless and/or insane?
-
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@creayt said in Is this server strategy reckless and/or insane?:
Interesting, I'd never heard that before and RDBMS has been so great for any use case I've hit so far that I'd kind of written off NoSQL as being extraneous in any project I've needed a db for. Will look into it, thank you.
Until ~10 years ago, RDBMS were so dominant that it was just "how everything was done." But as SaaS started to explode, growth and performance needs changed, and NoSQL systems started to take off. They are really where the bulk of new stuff goes today, at least for big commercial stuff. SaaS vendors outside of finance use them for nearly everything. They are what power things like Google, Facebook, Change and other large websites that have to handle insane levels of data all over the world.
Have you found any interesting sources talking about what Facebook uses NoSQL for? Here's a recent article from one of their lead DB engineers talking about how they primarily use MySQL for what sounds like most of the persistent stuff that needs to scale to large numbers of users (it mentions shares, comments, and likes explicitly). Apparently they've written their own storage engine for MySQL which dominates InnoDB, and they actively maintain their own branch of MySQL itself, which was last committed to 2 hours ago.
https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/
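Tangent, but if anyone wants to check which engine their own tables are on, a minimal sketch (assuming MySQL Connector/Python; the host, credentials, and schema name are placeholders) would look something like this:

```python
# Hypothetical example, not Facebook's tooling: list which storage engine
# (InnoDB, MyRocks/ROCKSDB, etc.) each table in a schema is using.
# Host, credentials, and schema name are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="app", password="secret")
cur = conn.cursor()
cur.execute(
    "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES "
    "WHERE TABLE_SCHEMA = %s",
    ("my_app_db",),
)
for table, engine in cur.fetchall():
    print(f"{table}: {engine}")
conn.close()
```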
-
In the article I linked to, dude says this: "There are many reasons why we use MySQL at Facebook. MySQL is amenable to automation, making it easy for a small team to manage thousands of MySQL servers..."
Gulp. Thousands. Of. Nodes. Those guys.
-
@creayt said in Is this server strategy reckless and/or insane?:
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@creayt said in Is this server strategy reckless and/or insane?:
Interesting, I'd never heard that before and RDBMS has been so great for any use case I've hit so far that I'd kind of written off NoSQL as being extraneous in any project I've needed a db for. Will look into it, thank you.
Until ~10 years ago, RDBMS were so dominant that it was just "how everything was done." But as SaaS started to explode, growth and performance needs changed, and NoSQL systems started to take off. They are really where the bulk of new stuff goes today, at least for big commercial stuff. SaaS vendors outside of finance use them for nearly everything. They are what power things like Google, Facebook, Change and other large websites that have to handle insane levels of data all over the world.
Have you found any interesting sources talking about what Facebook uses NoSQL for? Here's a recent article from one of their lead DB engineers talking about how they primarily use MySQL for what sounds like most of the persistent stuff that needs to scale to large numbers of users (it mentions shares, comments, and likes explicitly). Apparently they've written their own storage engine for MySQL which dominates InnoDB, and they actively maintain their own branch of MySQL itself, which was last committed to 2 hours ago.
https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/
That's a weird article. I'm not sure how much I'd trust that; even though it is hosted on Facebook, it doesn't feel logical. And it doesn't match anything we see anywhere else. It sounds like, from how they describe it, it's one small piece used for isolated processes. But even in what they describe, it's not how you are picturing it. They are using a NoSQL database that is just managed by MySQL. MySQL itself is a management platform, not a database. Rocks is their database, and that is non-relational. So nothing they are talking about there applies to you. That they manage it via MySQL is interesting, but not useful in your case.
Generally, though, Hadoop and Cassandra are what is behind Facebook's main services.
-
@creayt said in Is this server strategy reckless and/or insane?:
In the article I linked to, dude says this: "There are many reasons why we use MySQL at Facebook. MySQL is amenable to automation, making it easy for a small team to manage thousands of MySQL servers..."
Gulp. Thousands. Of. Nodes. Those guys.
This is the NoSQL behind the scenes of what they are using.
-
This topic definitely exploded today! It did not seem like it was that busy when it was going on. But nearly 200 posts on a single topic!
-
@dustinb3403 I'm running it on microSD .... Brrrr
-
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@dustinb3403 I'm running it on microSD .... Brrrr
So many posts... running what?
-
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@dustinb3403 I'm running it on microSD .... Brrrr
So many posts... running what?
Running Hyper-V on microSD. HPE microSD.
-
About benchmarking: I ran some tests on my new server before deployment. Disabling the controller and disk cache helped a lot in understanding the real performance of the disks.
I've seen a 4x SATA SSD RAID 5 outperform a 4x 15k SAS RAID 10.
Enabling the cache at the controller level blends things together, making the benchmarks a bit blurrier even with big files.
-
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
I've seen a 4x SATA SSD RAID 5 outperform a 4x 15k SAS RAID 10.
4x SSDs in any RAID level will outperform 4x 15k HDDs in any configuration
You're looking at a max of like 250ish realistic IOPS with 15k HDDs. Sure, you can get more at like 100% sequential reads, but not in typical use.
An SSD will give at least tens of thousands of IOPS, up to hundreds of thousands per drive. There really is no comparison.
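Rough back-of-the-envelope math on why a 15k spindle tops out around there (the average seek time below is an assumed typical value, not a spec for any particular drive):

```python
# Quick sanity check on the ~250 IOPS ceiling for a 15k HDD.
# The average seek time is an assumed "typical" figure, not a measured spec.
rpm = 15000
avg_seek_ms = 3.5                          # assumed average seek for a 15k SAS drive
rotational_latency_ms = 60_000 / rpm / 2   # half a revolution on average = 2 ms
service_time_ms = avg_seek_ms + rotational_latency_ms
print(f"~{1000 / service_time_ms:.0f} random IOPS per 15k HDD")  # roughly 180
```

Real drives land a bit higher or lower depending on the seek profile and queue depth, but that's the ballpark.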
-
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@dustinb3403 I'm running it on microSD .... Brrrr
So many posts... running what?
Running Hyper-V on microSD. HPE microSD.
Gotcha. Thanks.
-
@tim_g said in Is this server strategy reckless and/or insane?:
An SSD will give at least tens of thousands of IOPS, up to hundreds of thousands per drive. There really is no comparison.
And that's on SATA. Go to PCIe and you can breach a million per drive!
-
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@dustinb3403 I'm running it on microSD .... Brrrr
So many posts... running what?
Running Hyper-V on microSD. HPE microSD.
You can do it. It is not recommended, and Windows will not install itself there. You have to work around the installer to do it to an SD card.
-
@tim_g said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
I've seen a 4x SATA SSD RAID 5 outperform a 4x 15k SAS RAID 10.
4x SSDs in any RAID level will outperform 4x 15k HDDs in any configuration
You're looking at a max of like 250ish realistic IOPS with 15k HDDs. Sure, you can get more at like 100% sequential reads, but not in typical use.
An SSD will give at least tens of thousands of IOPS, up to hundreds of thousands per drive. There really is no comparison.
I know. My point was that the cache tends to blur things. Disabling it is the best way to compare I/O across different configurations.
-
@jaredbusch said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@scottalanmiller said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
@dustinb3403 I'm running it on microSD .... Brrrr
So many posts... running what?
Running Hyper-V on microSD. HPE microSD.
You can do it. It is not recommended, and Windows will not install itself there. You have to work around the installer to do it to an SD card.
What happened to me was that I had to run the install commands from the command line because the HPE controller didn't set the removable bit on the microSD. It's actually the "removability" of the device that drives the installer crazy.
Btw, HPE certifies this config, so it is something like "OEM allowed". Let's see!
-
@tim_g said in Is this server strategy reckless and/or insane?:
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
I've seen a 4x SATA SSD RAID 5 outperform a 4x 15k SAS RAID 10.
4x SSDs in any RAID level will outperform 4x 15k HDDs in any configuration
You're looking at a max of like 250ish realistic IOPS with 15k HDDs. Sure, you can get more at like 100% sequential reads, but not in typical use.
An SSD will give at least tens of thousands of IOPS, up to hundreds of thousands per drive. There really is no comparison.
If you deactivate the cache, SSD write performance is not so stellar on SATA. Reads are still fast.
-
@tim_g said in Is this server strategy reckless and/or insane?:
4x SSDs in any RAID level will outperform 4x 15k HDDs in any configuration
Ehhhhh... there are some low-end SSDs that are "read optimized" that will fall over and implode if I throw steady-state write throughput at them (latency shoots through the roof, especially when garbage collection kicks in or the drive starts to get full).
If you don't have NVDIMMs doing write coalescing or a two-tier design (a write-endurance tier to absorb the writes), you can get really unpredictable latency out of the lower-tier SATA SSDs.
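If you want to see that for yourself, something like this toy sketch (Linux-only, hypothetical scratch path, nothing like a real vendor test tool) hammers a drive with synchronous 4 KiB random writes and reports the tail latency where GC pauses show up:

```python
# Not a vendor tool, just a toy: synchronous 4 KiB random writes against a
# scratch file, reporting latency percentiles (where GC pauses appear).
# The path and sizes are placeholders; point it at a disposable SSD.
import mmap, os, random, time

PATH = "/mnt/testssd/scratch.bin"   # hypothetical scratch file on the SSD under test
BLOCK = 4096
SPAN = 2 * 1024**3                  # write randomly across a 2 GiB region
WRITES = 20_000

fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_DIRECT | os.O_SYNC)
os.truncate(fd, SPAN)
buf = mmap.mmap(-1, BLOCK)          # page-aligned buffer, as O_DIRECT requires
buf.write(os.urandom(BLOCK))

lat_ms = []
for _ in range(WRITES):
    offset = random.randrange(0, SPAN // BLOCK) * BLOCK
    start = time.perf_counter()
    os.pwrite(fd, buf, offset)      # one synchronous, cache-bypassing 4 KiB write
    lat_ms.append((time.perf_counter() - start) * 1000)

lat_ms.sort()
print(f"p50 {lat_ms[len(lat_ms) // 2]:.2f} ms, "
      f"p99 {lat_ms[int(len(lat_ms) * 0.99)]:.2f} ms")
os.close(fd)
```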
-
@scottalanmiller said in Is this server strategy reckless and/or insane?:
And that's on SATA. Go to PCIe and you can breach a million per drive!
Technically the "Million IOPS" card is NVMe (also it's reads only and that is most CERTAINLY reporting numbers coming from the DRAM on the card and not the actual NAND).
-
@matteo-nunziati said in Is this server strategy reckless and/or insane?:
About benchmarking: I ran some tests on my new server before deployment. Disabling the controller and disk cache helped a lot in understanding the real performance of the disks.
I've seen a 4x SATA SSD RAID 5 outperform a 4x 15k SAS RAID 10.
Enabling the cache at the controller level blends things together, making the benchmarks a bit blurrier even with big files.
Running benchmarks is a dark art, especially with cache.
-
Some workloads are cache friendly (so a hybrid system of a DRAM or NAND cache plus magnetic drives will perform the same as an "all flash" one).
-
There are a lot of caches. There are controller caches, and there are DRAM caches inside of drives (you can't disable these on SSDs, and you can only sometimes turn them off on SATA magnetic drives and others). Some SDS systems also use one tier of NAND as a write cache, and some do read/write caches.
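One quick way to see one of those layers on Linux (assuming a reasonably recent kernel that exposes /sys/block/&lt;dev&gt;/queue/write_cache) is something like:

```python
# Linux-only peek: which block devices report a volatile write cache
# ("write back") vs. "write through". Paths assume a recent-ish kernel.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    cache_file = dev / "queue" / "write_cache"
    if cache_file.exists():
        print(f"{dev.name}: {cache_file.read_text().strip()}")
```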
-
Trying to max out drives when testing them for throughput or IOPS is different from trying to profile steady-state latency at low queue depth.
99% of people I talk to who are testing something are doing something fairly terrible that doesn't test what they want. They are running CrystalDiskMark or some desktop-class tool against a single disk, on a single vHBA, on a single VM that's touching only a handful of the disks or a single cache device.
For benchmarking on VMware vSAN with HCIBench, there is now a cloud analytics platform that will diagnose whether you are properly creating a workload and configuration that is truly trying to maximize something (throughput, latency, IOPS). If it's not optimized, it will suggest improvements (maybe stripe objects more, tune disk groups, or generate more queued IO with your workers). This is actually pretty cool in that it helps make sure you are doing real benchmarking and not just testing the speed of your DRAM.
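On the "testing the speed of your DRAM" point, here's a toy illustration (not HCIBench; hypothetical device path, Linux-only, run as root against a disk you don't care about) of what bypassing the page cache with O_DIRECT looks like for random 4 KiB reads:

```python
# A toy, not HCIBench: random 4 KiB reads with O_DIRECT so the page cache
# (i.e. your DRAM) can't answer them. Device path and span are placeholders.
import mmap, os, random, time

DEV = "/dev/sdb"            # hypothetical device under test
BLOCK = 4096
SPAN = 10 * 1024**3         # read randomly across the first 10 GiB
SECONDS = 10

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
f = os.fdopen(fd, "rb", buffering=0)
buf = mmap.mmap(-1, BLOCK)  # page-aligned buffer, as O_DIRECT requires

ios, deadline = 0, time.time() + SECONDS
while time.time() < deadline:
    f.seek(random.randrange(0, SPAN // BLOCK) * BLOCK)
    f.readinto(buf)         # one uncached random 4 KiB read
    ios += 1

print(f"~{ios / SECONDS:.0f} random-read IOPS with the cache bypassed")
f.close()
```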
-
@storageninja said in Is this server strategy reckless and/or insane?:
@tim_g said in Is this server strategy reckless and/or insane?:
4x SSDs in any RAID level will outperform 4x 15k HDDs in any configuration
Ehhhhh... there are some low-end SSDs that are "read optimized" that will fall over and implode if I throw steady-state write throughput at them (latency shoots through the roof, especially when garbage collection kicks in or the drive starts to get full).
If you don't have NVDIMMs doing write coalescing or a two-tier design (a write-endurance tier to absorb the writes), you can get really unpredictable latency out of the lower-tier SATA SSDs.
Yup, there are always exceptions to everything.