Here is a nice document https://communities.vmware.com/servlet/JiveServlet/previewBody/21181-102-1-28328/vsphere-oversubscription-best-practices[1].pdf that might shed some more light on the question
Posts made by Net Runner
-
RE: how to assign vCPU and memory on virtual machines on VMware vSphere?
-
RE: "Home" Lab - Is it Cost-Effective to Run at Home?
Here is a nice review https://www.starwindsoftware.com/blog/choosing-ideal-mini-server-for-a-home-lab I found while I was planning my own home lab. The part with the LEGO-based rack is just incredible.
As for the hypervisor, I would recommend going with Hyper-V first, simply because it offers the simplest and fastest start, and moving on to KVM once you have had enough of it. -
RE: Xenserver and Storage
Almost any vSAN works pretty much the same way: it mirrors the data and caches between two or more hosts and keeps them intact. The above-mentioned StarWind Free https://www.starwindsoftware.com/starwind-virtual-san-free is a great fit for 2-node deployments since it can run on top of hardware RAID and has intelligent split-brain protection, either over an additional Ethernet link or using a witness node. The nice thing is that you still get community support even with the free version. XOSAN/GlusterFS is overkill here (not to mention the performance), and using/supporting a DRBD-based scenario looks like shooting yourself in the foot unless you are completely familiar with it and know what you are doing.
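For context, witness-based split-brain protection boils down to a majority vote. Here is a toy sketch of that general quorum idea (generic logic with illustrative numbers, not StarWind's actual implementation):

```shell
# Toy majority-vote quorum check, the general idea behind a witness node.
# Not StarWind's actual logic; the numbers are illustrative.
voters=3    # node1, node2, and the witness
visible=2   # e.g. node1 still reaches the witness after the sync link drops
if [ "$visible" -gt $((voters / 2)) ]; then
  echo "partition keeps storage online"
else
  echo "partition pauses to avoid split-brain"
fi
```

A partition that sees only itself (1 of 3 voters) pauses, so two isolated halves can never both stay writable.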
-
RE: Enterprise 15K SAS drives vs consumer grade SSD in a Dell server?
I would also recommend not wasting two drive slots on a RAID1 OS partition, especially if it's a hypervisor host. A USB flash drive/SD card/SATADOM/whatever will do the job, leaving you with 8 free bays for OBR10 or OBR5/6. Speed everything up with StarWind (as mentioned above) and you are good to go.
-
RE: Caddy vs. Nginx
I would consider the web server you have the most experience with; it is usually going to be the most secure.
Security depends on all of the layers, not just the web server. If you pick one with very few vulnerabilities but don't understand how to configure it, you will most likely not configure it securely.
-
RE: Question regarding lab setup for Starwind Virtual San Hyperconverged install on Hyper-V Server 2016
As for the homelab 3-node scenario, you could choose either 3-way or 2-way replication.
3-way replication means you have 3 copies of the data across all three hosts. 2-way replication in a 3-node cluster means that:
target1 is on Node1 & Node2
target2 is on Node2 & Node3
target3 is on Node3 & Node1
And there is no need to partition the drive. We'll place a container file and mirror it across the nodes.
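That placement is just a round-robin pairing, which a quick sketch can reproduce (illustrative only, not how StarWind actually assigns targets):

```shell
# Round-robin sketch of 2-way replication in a 3-node cluster:
# each target lives on its "home" node and the next node in the ring.
nodes=(Node1 Node2 Node3)
count=${#nodes[@]}
for i in "${!nodes[@]}"; do
  echo "target$((i + 1)) is on ${nodes[$i]} & ${nodes[$(((i + 1) % count))]}"
done
# → target1 is on Node1 & Node2
# → target2 is on Node2 & Node3
# → target3 is on Node3 & Node1
```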
-
RE: Is it possible to install GitLab on Fedora 26?
The solution might be to download that file and move it into place manually:
wget -O gitlab.repo 'https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.repo?os=fedora&dist=26&name=centauri&source=script'
sudo mv gitlab.repo /etc/yum.repos.d/gitlab.repo
sudo restorecon /etc/yum.repos.d/gitlab.repo
sudo dnf makecache
sudo dnf install gitlab-ce
-
RE: Storage Spaces and ReFS
Other than a performance drawback when comparing a Storage Spaces mirror using ReFS against one using NTFS, there are no issues I've heard of.
If you're interested in any kind of stress test, take a look here - https://www.starwindsoftware.com/blog/log-structured-file-systems-microsoft-refs-v2-investigation-part-1
-
RE: Migrating away from XenServer
@danp said in Migrating away from XenServer:
Have you taken a look at StarWind V2V Converter?
Thanks for mentioning StarWind. I just wanted to add a note about the features available in the V2V Converter, which include a Windows Repair Mode that may become useful in the process of converting to VHDX. The end result is automatic VM adaptation to the given hardware environment, negating any possible compatibility problems.
Take a look here - https://www.starwindsoftware.com/converter for any additional information. -
RE: What's Running in your Home Lab? - July 2017
2 x SuperMicro MiniServers
Running:
- Free Hyper-V Server 2016 (was Nano initially, but MSFT killed bare-metal deployments)
- Free StarWind VSAN
- NethServer
- FreePBX
- Dozens of VMs with various stuff for experiments.
-
RE: Should I build it myself (iSCSI Storage) or use AS5008T ?
Have you considered a backup-ready node, like StarWind offers, for example https://www.starwindsoftware.com/starwind-storage-appliance? They usually come preconfigured with all the required licenses, including VEEAM. As far as I know, iSCSI/SMB/NFS are all present, and there is an option to seamlessly offload your backups to the cloud. Unfortunately, it is currently out of my budget, so I am using their free product https://www.starwindsoftware.com/starwind-virtual-san-free, which turns two of my older storage servers into a single mirrored backup pool. Works great so far.
-
RE: Vendor Mistake - VMware Infrastructure Decisions
I have a VMware-based cluster of two ready nodes purchased from StarWind https://www.starwindsoftware.com/starwind-hyperconverged-appliance half a year ago, so I will try to share my experience on that matter. They are completely DELL-based, and the pricing is very fair compared to what DELL OEM partners want for the same configurations.
As already mentioned above, in this particular scenario, StarWind runs inside a VM on each host. The underlying storage is presented over a standard datastore. Alternatively, you can pass through the whole RAID controller to the StarWind VM in case your ESX boots from a USB/SD/SataDOM/whatever, which is a common and good practice nowadays. Using hardware RAID makes the overall performance of a single server much faster than you can achieve with the software RAIN provided by either VMware vSAN or MSFT S2D (I've done some benchmarking on that matter).
ESX hosts are connected over iSCSI to both StarWind VMs simultaneously. These VMs mirror the internal storage and present it back to ESX as a single MPIO-capable iSCSI device. Since a round-robin policy is used, there is no storage downtime when one StarWind VM is softly restarted for patching or when a whole physical host suddenly dies. In the case of a single-host power outage, only the migration of production VMs takes place, while storage remains active, which I find quite awesome.
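For reference, checking and setting the round-robin policy on an ESXi host looks roughly like this (the naa identifier is a placeholder; verify the exact syntax against your ESXi build):

```shell
# List devices and their current path selection policies
esxcli storage nmp device list

# Switch a given iSCSI device to the round-robin path selection policy
# (naa.xxxxxxxxxxxxxxxx is a placeholder for your device identifier)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```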
Another thing I do enjoy about StarWind is that it uses RDMA-capable networks (I have Mellanox ConnectX-3) for synchronization, which leaves a lot of CPU resources for primary tasks instead of serving storage requests.
Right now I am waiting for the Linux-based StarWind VSA implementation, which is said to arrive soon. -
RE: Port - Exporting VM from Hyper-V and into XenServer - having issues
@Lennertgenbr Looks like either read permissions are failing, or there are drive surface/filesystem issues.
-
RE: Port - Exporting VM from Hyper-V and into XenServer - having issues
I would recommend trying StarWind V2V Converter https://www.starwindsoftware.com/converter. It does hardware patching during conversion and can automatically enable PC rescue mode. It has saved me many times, and a lot of time. Of course, it supports most of the common virtual machine drive formats.
-
RE: Cheap Cloud Storage for Offsite Backup.
Well, there are various options for doing a cloud backup, in terms of both the cloud vendor and the technology. We are using a couple of them:
- Amazon Storage Gateway - https://aws.amazon.com/storagegateway/ - a virtual machine that acts as a virtual tape library and transparently offloads your tapes to Amazon S3 and Amazon Glacier. Works nicely with MSFT DPM and VEEAM.
- StarWind VTL - https://azure.microsoft.com/en-us/marketplace/partners/starwind/starwindvtl/ - a virtual machine in Azure with virtual tape library that is connected over iSCSI to a local backup virtual machine with VEEAM.
- AcloudA - http://www.aclouda.com/ - a very cool thing: a hardware SAS/SATA cloud gateway that presents itself to the host as an ordinary drive, offloading all the data at block level directly to the cloud over iSCSI or SMB.
-
RE: SANs in the Enterprise?
If you need a separate SAN, then Nimble https://www.nimblestorage.com/technology-products/ is a good choice, but it depends on the deployment size. Directly attached storage is usually much faster because of lower latency. That is why we have been using virtual SAN instead of a traditional SAN for quite a while already. We are using VMware vSAN http://www.vmware.com/products/virtual-san.html for our ESX cluster and StarWind VSAN https://www.starwindsoftware.com/starwind-virtual-san for Hyper-V. Both are very good; however, we are going to replace VMware vSAN with StarWind too, because of better performance and RDMA/iSER support.
-
RE: Understanding 3-2-1 backup rule and son/father/grandfather model backups.
Here are some good explanations on the rule:
https://knowledgebase.starwindsoftware.com/explanation/the-3-2-1-backup-rule/
https://www.veeam.com/blog/the-3-2-1-0-rule-to-high-availability.html
We have a highly available cluster based on StarWind https://www.starwindsoftware.com/starwind-virtual-san and thus keep 2 copies of data as a synchronous replica plus a third copy as an on-site backup (which covers the "3" in the rule). Obviously, the cluster runs on primary internal storage while the backups are stored on a separate NAS (which covers the "2"). And we have a VTL virtual machine https://azure.microsoft.com/en-us/marketplace/partners/starwind/starwindvtl/ running in Azure that hosts our offsite backups (which covers the "1").
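As a toy illustration, a layout like ours can be tallied against the rule. The location names below are made up:

```shell
# Hypothetical 3-2-1 tally for a layout like the one described above.
# Format is location:class, where class is primary | onsite | offsite.
copies=("replica-1:primary" "replica-2:primary" "nas:onsite" "azure-vtl:offsite")

total=${#copies[@]}
media=$(printf '%s\n' "${copies[@]}" | cut -d: -f2 | sort -u | wc -l)
offsite=$(printf '%s\n' "${copies[@]}" | grep -c ':offsite$')

# 3-2-1 asks for at least 3 copies, on 2 different media, 1 of them off-site
echo "copies=$total media_types=$((media)) offsite=$((offsite))"
# → copies=4 media_types=3 offsite=1
```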
-
RE: Announcing the Death of RAID
I would treat RAID as a kind of hardware offload, since RAIN is known to consume more resources and thus deliver lower performance from the storage array. That is probably one of the major reasons why vendors like StarWind keep using hardware RAID, especially on smaller deployments (storage capacities).