• Debian VM/cloud optimized kernel

    IT Discussion
    0 Votes
    2 Posts
    990 Views

    I don't know if Ubuntu has a similar cloud kernel.

    Update: It looks like there is a linux-kvm kernel amongst others. Haven't tried it though.

    BTW, in Debian/Ubuntu and other distros the different kernels are often referred to as kernel flavors. Good to know if you want to search for them.
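
    A minimal sketch of how to find and install one of these flavors on a Debian or Ubuntu system (the metapackage names below are assumptions from memory; verify what apt actually offers on your release):

    # List the kernel flavors available on this release
    apt-cache search linux-image | grep -Ei 'cloud|kvm|virtual'

    # Debian: install the cloud-optimized flavor (assumed package name)
    sudo apt install linux-image-cloud-amd64

    # Ubuntu: install the KVM-optimized flavor mentioned above (assumed package name)
    sudo apt install linux-image-kvm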

  • 1 Votes
    6 Posts
    1k Views
    DustinB3403

    @stuartjordan said in Convert Between Popular VM Formats for Free with StarWind:

    Looks good, though, if it does it all. I assume it's a Windows-based product though?

    IIRC, yes you have to install it on a Windows system and then connect to the hypervisor in question to create / export the VM you're wanting to convert.

    It's been a while since I've last had to convert anything.

  • Virtual appliances?

    IT Discussion
    1 Votes
    27 Posts
    3k Views
    travisdh1

    @stacksofplates said in Virtual appliances?:

    @travisdh1 said in Virtual appliances?:

    @stacksofplates What the what?

    Install Fedora
    sudo dnf install -y kubernetes
    systemctl enable --now podman1

    That's all it takes.

    Yeah I see you haven't actually done that.

    Podman is not Kubernetes. Also, when you install Kubernetes you don't get a podman1 service (or any type of podman service). When you install Kubernetes that way you don't get a Kubernetes service either. You seemingly have to start kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver, and the kubelet separately. It installs Docker, which is deprecated in k8s now; they have switched to containerd, which is pretty much the standard runtime now.

    So I'll stick with my original recommendation.

    Yep, this is why I need to mess with this stuff in my home lab. I can't even talk about it intelligently yet!
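
    For reference, a rough sketch of what "starting the pieces separately" looked like with the old Fedora kubernetes RPM (unit and package names are assumptions; the packaging has changed across releases and this is not a supported way to build a cluster):

    # Control-plane services shipped as separate systemd units (assumed names)
    sudo systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler

    # Node-side services
    sudo systemctl enable --now kubelet kube-proxy

    # Podman is a separate container engine, installed on its own; it is not part of Kubernetes
    sudo dnf install -y podman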

  • 6 Votes
    108 Posts
    41k Views
    Obsolesce

    @callimarie said in Run virt-manager on Windows 10:

    uhh I keep getting this error: "The libvirtd service does not appear to be installed. Install and run the libvirtd service to manage virtualization on this host."

    So did you do what it said?
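
    When virt-manager runs on a machine that is not itself the hypervisor, the usual fix is to point it at a remote libvirt daemon rather than installing one locally; a minimal sketch (host and user are placeholders):

    # Connect virt-manager to a remote KVM host over SSH instead of a local libvirtd
    virt-manager --connect qemu+ssh://root@kvm-host.example.com/system

    # Or, if the error appears on the KVM host itself, install and start the daemon there
    sudo dnf install -y libvirt
    sudo systemctl enable --now libvirtd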

  • 1 Votes
    30 Posts
    3k Views

    @JaredBusch said in Dashrender why did you migrate to Hyper-V from XenServer:

    The XCP-NG team had a horrible business model that they were trying to implement around XenServer (XOA). Great concept, poor business model.

    I wish them well but they are fighting a few things...

    Citrix couldn't make any real money even when they charged more and people were taking the product seriously.

    Last time I checked they were just replacing some management components and packaging some storage stuff. They are not investing in upstream, and there are a lot of... changes coming in hardware that are going to require non-trivial investments for hypervisors to remain relevant.

    The real problem with Xen is upstream investment is drying up. Citrix has pulled back, Amazon and other cloud providers have moved on to KVM, SuSE doesn't even market virtualization (SAP HANA support, containers, OS is as close to bare metal as they get). Outside of some people in ARM/automotive virtualization I haven't seen anyone picking it up for net new projects. In the enterprise Oracle is the only champion of it these days. KVM won the open source hypervisor war (although at this point does anyone really care?)

  • XCP-NG/XenServer tapdisk error

    IT Discussion
    1 Votes
    10 Posts
    2k Views
  • 3 Votes
    27 Posts
    3k Views

    So to answer my own question:
    "Which hosts belong in what pool when running local storage?"

    The answer is none, at least with XenServer. Don't use pools when using local storage.

  • Geekbench observations

    IT Discussion
    1 Votes
    6 Posts
    895 Views
    scottalanmiller

    @Pete-S said in Geekbench observations:

    @dafyre said in Geekbench observations:

    @Pete-S said in Geekbench observations:

    The relationship between the single-core and multi-core scores is roughly this: the multi-core score should be about 80% of the theoretical maximum (single-core score times the number of cores).

    So if the single-core score is 3000 and you have 4 vCPUs, then the multi-core score should be about 80% of 3000 x 4 cores = 9600. If the host is under heavy load the multi-core score will go lower and lower.

    I think you are on the right track. This is largely due to how the underlying hypervisor handles multi-core VMs. The way I understand it is that in a multi-core VM, the hypervisor has to wait for that number of cores to be ready to process before it signals to the VM that it can keep running.

    I.e., in your example of a 4-core VM, the underlying hypervisor has to wait until 4 cores are free for work before it tells the VM that its cores are available.

    I've read that before but I think it is some old feature of very old hypervisors called strict co-scheduling. It's not used anymore.

    Nowadays basically every hypervisor has its own scheduler that puts vCPUs on real pCPUs according to the time-share principle, so every vCPU gets a piece of the pie. But it has to account for hyperthreading, more than one CPU socket (NUMA), power saving, VM priority and other things. The underlying principle, though, is that all VMs and their vCPUs should get their fair share of CPU time.

    Some hypervisors have different scheduler algorithms so you can pick other ways of scheduling that might be more optimized for your workload.

    Depends. SMP doesn't really allow for that; all cores have to be in lock step. Only if AMP is supported can the hypervisor do that. It requires the hypervisor and the system above it working together to do non-SMP processing.
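
    A quick back-of-the-envelope check of the rule of thumb above (the 80% factor is the poster's estimate, not a Geekbench constant):

    # Estimate the expected multi-core score from the single-core score
    single_core=3000
    vcpus=4
    scaling=0.8   # rough efficiency factor from the post above
    echo "$single_core * $vcpus * $scaling" | bc   # prints 9600.0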

  • 1 Votes
    14 Posts
    2k Views
    wirestyle22

    @scottalanmiller It's not. I changed after I posted this.

  • 0 Votes
    6 Posts
    5k Views

    Thanks guys.

    Unfortunately, the link @dbeato provided is for adding a new disk to XenServer when you want it to be a Storage Repository, a place to store VM partitions. So if you already have a disk with data on it, XenServer will wipe it clean and put LVM or EXT3 with VDI files on it.

    When it's passed through as a block device to a VM it will have whatever filesystem the VM formats it with.

    The problem with the info in the link @black3dynamite provided is that it's for XenServer 5.x, so it doesn't work straight up with XenServer 7.x (I'm running 7.4).

    What I ended up doing was adding a RAID 1 array instead of just a disk. The principle is the same though, just another name on the block device.

    The array /dev/md0 is passed through to the VM as a block device.

    I did it by adding a rule to /etc/udev/rules.d/65-md-incremental.rules almost at the end.

    KERNEL=="md*", SUBSYSTEM=="block", ACTION=="change", SYMLINK+="xapi/block/%k", \ RUN+="/bin/sh -c '/opt/xensource/libexec/local-device-change %k 2>&1 >/dev/null&'"

    This rule will pass all md arrays to the VMs as Removable Storage in Xenserver (so you can attach it to whatever VM you want).

    Note that * in KERNEL=="md*" is a wildcard, so this will match the devices /dev/md0, md1, md2, etc. Just replace md* with whatever block device you want to pass through.

    The array is 2TB so I don't know if this works with bigger arrays.
    After trying some larger drives I can verify that it works fine with larger than 2TB arrays.
    Also the disks were empty, so I'm not sure if XenServer will wipe the disk when you set this up the first time.
    After some experimenting it looks like XenServer will not touch the drive.

    I'll add the complete file for reference.

    KERNEL=="td[a-z]*", GOTO="md_end" # This file causes block devices with Linux RAID (mdadm) signatures to # automatically cause mdadm to be run. # See udev(8) for syntax # Don't process any events if anaconda is running as anaconda brings up # raid devices manually ENV{ANACONDA}=="?*", GOTO="md_end" # Also don't process disks that are slated to be a multipath device ENV{DM_MULTIPATH_DEVICE_PATH}=="?*", GOTO="md_end" # We process add events on block devices (since they are ready as soon as # they are added to the system), but we must process change events as well # on any dm devices (like LUKS partitions or LVM logical volumes) and on # md devices because both of these first get added, then get brought live # and trigger a change event. The reason we don't process change events # on bare hard disks is because if you stop all arrays on a disk, then # run fdisk on the disk to change the partitions, when fdisk exits it # triggers a change event, and we want to wait until all the fdisks on # all member disks are done before we do anything. Unfortunately, we have # no way of knowing that, so we just have to let those arrays be brought # up manually after fdisk has been run on all of the disks. # First, process all add events (md and dm devices will not really do # anything here, just regular disks, and this also won't get any imsm # array members either) SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \ RUN+="/sbin/mdadm -I $env{DEVNAME}" # Next, check to make sure the BIOS raid stuff wasn't turned off via cmdline IMPORT{cmdline}="noiswmd" IMPORT{cmdline}="nodmraid" ENV{noiswmd}=="?*", GOTO="md_imsm_inc_end" ENV{nodmraid}=="?*", GOTO="md_imsm_inc_end" SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="isw_raid_member", \ RUN+="/sbin/mdadm -I $env{DEVNAME}" LABEL="md_imsm_inc_end" SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}=="?*", \ RUN+="/sbin/mdadm -If $name --path $env{ID_PATH}" SUBSYSTEM=="block", ACTION=="remove", ENV{ID_PATH}!="?*", \ RUN+="/sbin/mdadm -If $name" # Next make sure that this isn't a dm device we should skip for some reason ENV{DM_UDEV_RULES_VSN}!="?*", GOTO="dm_change_end" ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}=="1", GOTO="dm_change_end" ENV{DM_SUSPENDED}=="1", GOTO="dm_change_end" KERNEL=="dm-*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \ ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}" LABEL="dm_change_end" # Finally catch any nested md raid arrays. If we brought up an md raid # array that's part of another md raid array, it won't be ready to be used # until the change event that occurs when it becomes live KERNEL=="md*", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \ ACTION=="change", RUN+="/sbin/mdadm -I $env{DEVNAME}" # Added line # Pass-through of all /dev/md* arrays. # Will end up as Removable Storage that can be assigned to a VM. KERNEL=="md*", SUBSYSTEM=="block", ACTION=="change", SYMLINK+="xapi/block/%k", \ RUN+="/bin/sh -c '/opt/xensource/libexec/local-device-change %k 2>&1 >/dev/null&'" LABEL="md_end"
  • 1 Votes
    65 Posts
    6k Views
    wirestyle22

    @emad-r said in Nested hypervisors?:

    also as vendor they dont want the complexity advantages of Virtualization

    ftfy

  • Problems Upgrading XenOrchestra

    IT Discussion
    1 Votes
    26 Posts
    3k Views

    @black3dynamite I used the old one, and it worked. I successfully installed XO now. Normally I'd like to know what went wrong, but it's too late and it's Friday 🙂

  • 7 Votes
    17 Posts
    3k Views
    scottalanmiller

    @jaredbusch said in Ubuntu Set Default Resolution Through GRUB for VM:

    @scottalanmiller said in Ubuntu Set Default Resolution Through GRUB for VM:

    @aaronstuder said in Ubuntu Set Default Resolution Through GRUB for VM:

    @scottalanmiller Ubuntu is a surprising choice for you 😉

    Runs ScreenConnect.

    So does Fedora.

    It always fails when I do it; there is some trick to it. Happen to know what's needed so that it doesn't just put hashes on the screen forever?
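
    For the thread's actual topic, the usual approach on Ubuntu is to set the GRUB graphics mode and regenerate the config; a minimal sketch (the resolution value is only an example):

    # /etc/default/grub
    GRUB_GFXMODE=1280x1024
    GRUB_GFXPAYLOAD_LINUX=keep

    # Then regenerate the GRUB config and reboot
    sudo update-grub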

  • 1 Votes
    27 Posts
    5k Views
    black3dynamite

    @krisleslie said in Adding additional Drive to XenServer host, get error:

    @black3dynamite I use 1 HDD 😞

    I meant virtual disks. VM1 with virtual disk 1 = OS and virtual disk 2 = data

  • 2 Votes
    29 Posts
    10k Views
    olivier

    Also, we could achieve hyperconvergence "the other way": instead of having a global shared filesystem like Gluster or Ceph, use fine-grained replication (per VM/VM disk). That's really interesting (data locality, tiering, thin provisioning, etc.). Obviously, we'll collaborate to see how to integrate this into our stack 🙂

  • If all hypervisors were priced the same...

    IT Discussion
    4 Votes
    102 Posts
    16k Views
    stacksofplates

    @stacksofplates said in If all hypervisors were priced the same...:

    @storageninja said in If all hypervisors were priced the same...:

    @stacksofplates said in If all hypervisors were priced the same...:

    Also, decisions are often more nuanced than simple TCO calculations. If you have compliance requirements, this often shifts you toward commercial solutions that have validated FIPS 140-2 modules/solutions. If you need a DISA STIG at a given level, paying some money and being able to deploy a single VIB to harden compliance, versus going through checklists and arguing with auditors, can be a big deal. How do you quantify the cost of applying to NIST for validation with a do-it-yourself setup vs. a turnkey solution?

    RHEL/RHV have a good solution here. Auditors go through OpenSCAP scans with nice HTML reports and we justify any “failures.” It’s a pretty nice system.

    That just audits if it was set. What I'm talking about is a single package you deploy that goes ahead and sets the configuration settings up for you.

    On ESXi you can use Update Manager to track compliance with the DISA VIB: just attach it as a baseline to your clusters and let Update Manager keep it up to date. Ed Groggin, I think, has a tool that will auto-generate a report against the hardening guidelines.

    Looking online, I'm not seeing Server 2016 in STIG viewer yet. Has Microsoft not gotten a STIG out yet?

    Also, Red Hat Virtualization licensing costs as much as (or more than) vSphere Standard. At that point, if you don't need/want Red Hat support, VMware looks a lot more attractive. Oddly, the only STIG for SUSE I'm seeing is for the Z series.

    Well yes and no. They have built in remediations with OpenSCAP, so you can have it auto remediate your machine. We ran an auto remediate to get the correct settings and then pushed it all out with Ansible since we can apply specific rules or not based on the type of machine since they are all RHEL based (workstations, servers, hypervisors, etc). We don’t use RHV, but they have a subset of rules for RHV which is why I mentioned it. We use bare KVM for systems and it works out pretty well. Ya I’m not sure about 2016 but I wouldn’t be surprised seeing how slow they are.

    The remediations are in Bash, Ansible, and I think Puppet? Anyway I have written a few of the Ansible remediations for them and have had them pulled into the project.
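
    As a sketch of the OpenSCAP scan/auto-remediation workflow described above (the data-stream file and profile ID vary by distro and release, so treat them as placeholders):

    # Scan against the DISA STIG profile and produce an HTML report for the auditors
    sudo oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig \
        --results results.xml --report report.html \
        /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

    # Re-run with --remediate to let OpenSCAP apply its built-in fixes
    sudo oscap xccdf eval --remediate --profile xccdf_org.ssgproject.content_profile_stig \
        /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml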

  • Xen 4.10 Released

    News
    1 Votes
    1 Post
    785 Views
    No one has replied
  • 5 Votes
    127 Posts
    20k Views
    matteo nunziati

    @storageninja

    (I know some Nutanix guys tried it but kept burning out SATA DOMs).

    SATA DOM? Who the f*** still uses them? Even when I was a C++ coder in the automation industry we were phasing them out 3 years ago! Just maintenance of production stuff.
    Btw, I see your point. For these applications MS releases Windows Embedded Compact only.

  • Updating XenOrchestra to 5.11

    IT Discussion
    0 Votes
    40 Posts
    6k Views
    Danp

    FWIW, xo-server updated to 5.11.1 without issue.

  • 1 Votes
    3 Posts
    3k Views
    DustinB3403

    Read this, especially if you're using the community version.