How to tell if your hardware is compatible w/ I/O Acceleration Technology
-
@MattSpeller said:
H700 datasheet:
https://www.dell.com/downloads/global/products/pvaul/en/perc-technical-guidebook.pdf
"4.2 CacheCade
CacheCade provides cost-effective performance scaling for database-type application profiles in a
host-based RAID environment by extending the PERC RAID controller cache with the addition of Dell-qualified
Enterprise SSDs.
CacheCade identifies frequently-accessed areas within a data set and copies this data to a Dell-qualified
Enterprise SSD (SATA or SAS), enabling faster response time by directing popular Random
Read queries to the CacheCade SSD instead of to the underlying HDD.
Supporting up to 512 GB of extended cache, CacheCade SSDs must all be the same interface (SATA or
SAS) and will be contained in the server or storage enclosure where the RAID array resides.
CacheCade SSDs will not be a part of the RAID array.
CacheCade is a standard feature on, and only available with, the PERC H700/H800 1 GB NV Cache
RAID controller.
CacheCade SSDs can be configured using the PERC BIOS Configuration Utility or OpenManage."
Sorry, posted that before this popped in.
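If OpenManage Server Administrator is installed, a quick way to confirm the controller qualifies is to query it from the OS. A minimal sketch (assumes the `omreport` CLI from OMSA; falls back to a notice if it is absent):

```shell
# Hedged check: CacheCade is only on the PERC H700/H800 with 1 GB NV cache,
# so look for the controller name and cache size in the omreport output.
if command -v omreport >/dev/null 2>&1; then
  omreport storage controller | grep -i 'name\|cache memory'
else
  echo "omreport not installed; check via the PERC BIOS (Ctrl+R at boot)"
fi
```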
-
@creayt great minds etc
-
Kind of leads into another question, which is: if I'm running 100% high-performance SSDs, should I go ahead and turn off the cache on the RAID controller itself? I guess I could benchmark it with and without.
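One way to answer that empirically: run the same fio job with the controller cache in write-back mode, flip it to write-through, and run it again. A minimal sketch, assuming fio is installed and run from a directory on the array (the file name is a placeholder):

```shell
# Hedged A/B benchmark sketch: run this once per controller cache setting
# (toggle write-back vs write-through in the PERC BIOS between runs)
# and compare the reported IOPS and latency.
if command -v fio >/dev/null 2>&1; then
  fio --name=cache-ab --filename=./fio-cachetest.tmp --size=64M \
      --rw=randwrite --bs=4k --ioengine=psync --direct=1 \
      --runtime=15 --time_based --group_reporting || echo "fio run failed"
  rm -f ./fio-cachetest.tmp
else
  echo "fio not installed; skipping benchmark"
fi
```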
-
@creayt said:
Kind of leads into another question, which is: if I'm running 100% high-performance SSDs, should I go ahead and turn off the cache on the RAID controller itself? I guess I could benchmark it with and without.
Even if there was an answer out there already to this, I'd still encourage you to do it and post more benchmark porn.
-
@MattSpeller said:
Even if there was an answer out there already to this, I'd still encourage you to do it and post more benchmark porn.
-
Overall, super disappointing write performance.
It's possible that the RAID-level underprovisioning does nothing. Really, really wish Rapid Mode worked across more than one drive, it'd be the perfect solution for use cases like this.
-
/me drools uncontrollably
-
In the Dell R720 "Lifecycle Controller" --> "System BIOS Settings" --> "Integrated Devices" subsection, both default to disabled:
-
"I/OAT DMA Engine" defaults to disabled
-
"SR-IOV Global Enable" defaults to disabled
Hoping "I/OAT DMA Engine" enables Remote DMA (RDMA) / RDMA over Converged Ethernet (RoCE) for hyperconverged storage. Thoughts?
If running xcp-ng, would you turn both of these on nowadays, or just the SR-IOV "Virtualization Mode" under "Integrated NICs" in "Device Level Configuration"?
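One way to sanity-check those BIOS toggles from a running Linux host: when enabled and supported, I/OAT typically shows up as an Intel QuickData/ioatdma PCI function plus DMA channels in sysfs, and SR-IOV-capable NICs expose `sriov_totalvfs`. A hedged sketch (output will vary by hardware; the sysfs paths assume a reasonably modern kernel):

```shell
# Check whether the "I/OAT DMA Engine" toggle is visible to the OS
echo "--- I/OAT ---"
lspci 2>/dev/null | grep -i 'ioat\|quickdata' || echo "no I/OAT PCI function visible"
ls /sys/class/dma/ 2>/dev/null | grep '^dma' || echo "no DMA channels registered"

# Check whether any NIC advertises SR-IOV virtual functions
echo "--- SR-IOV ---"
found=0
for f in /sys/class/net/*/device/sriov_totalvfs; do
  [ -e "$f" ] || continue
  found=1
  echo "$(basename "$(dirname "$(dirname "$f")")") supports $(cat "$f") VFs"
done
[ "$found" -eq 1 ] || echo "no SR-IOV-capable NICs visible"
```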
-
-
@rjt said in How to tell if your hardware is compatible w/ I/O Acceleration Technology:
In the Dell R720 "Lifecycle Controller" --> "System BIOS Settings" --> "Integrated Devices" subsection, both default to disabled:
-
"I/OAT DMA Engine" defaults to disabled
-
"SR-IOV Global Enable" defaults to disabled
Hoping "I/OAT DMA Engine" enables Remote DMA (RDMA) / RDMA over Converged Ethernet (RoCE) for hyperconverged storage. Thoughts?
If running xcp-ng, would you turn both of these on nowadays, or just the SR-IOV "Virtualization Mode" under "Integrated NICs" in "Device Level Configuration"?
AFAIK neither will enable RDMA / RoCE. SR-IOV requires that the NICs and hypervisor be compatible; it basically splits a physical NIC into some number of virtual functions that are passed through to the assigned guests as real hardware.
FWIW, I/OAT DMA appears to have been deprecated in both Linux and Windows, according to its Wikipedia entry.
-
-
@notverypunny, I had some wishful thinking after seeing references to NetDMA and Andy Grover (developer of the iSCSI targetcli "free branch" and now Stratis), who posted all the benchmarks and papers on I/OAT. Speeding up iSCSI-type I/O using RDMA / RoCE between hypervisors is my goal. But I couldn't find Andy Grover's actual I/OAT patches or benchmarks to look at myself - all 404s.