Just tried to update UNMS (running Debian 10 thx to @JaredBusch guide) from 1.14 to 1.15
The update failed.
It gets "stuck" at a point where it's checking if ports 80 & 443 are free.
It was late last night & I did not further investigate.
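For anyone hitting the same wall: before re-running the update, the quickest check is just trying to bind the ports yourself. A minimal Python sketch (the host and port list are only examples, and binding ports below 1024 needs root):

```python
import socket

def port_is_free(port, host="0.0.0.0"):
    """Return True if we can bind the TCP port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Note: binding ports below 1024 requires root, so without sudo
# these will report "in use" even when they are actually free.
for port in (80, 443):
    print(port, "free" if port_is_free(port) else "in use")
```

If one of them really is held, `ss -tlnp` on the Debian box will show which process owns it.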
Catalogic vProtect is https://storware.eu/products/vprotect/
@scottalanmiller said in HyperVisor:
We mostly use Fedora, but are looking at switching.
Why?
Switching to what?
This looks like a really, really nice laptop.
Might have to get one....
One downside to KVM (in virt-manager) is lack of snapshots for UEFI VMs.
Not to threadjack...
Now that I've experienced Fedora WS & Server updates, why do Windows updates suck so bad?
@Obsolesce said in Hypervisors: revisit your choices!:
Exactly my point. It's entirely subjective, based on personal experience and expertise, etc., no basis behind the question.
How you read the original post & concluded with VPS is a head scratcher.
This thread is about choice of hypervisors.
Feel free to start your own thread about VPS's, if you prefer VPS's, good for you.
@Obsolesce said in Hypervisors: revisit your choices!:
That is already sounding expensive for no reason. Why not a $2.50 / mo VPS? That's way cheaper than server hardware, and all the time and resources spent dealing with that.
Where did I say anything about VPS's?
No Type 2 discussion here.
Was having a discussion with myself & just wondering out loud...
Tomorrow, if I was starting from scratch, I would use "........"
Started in 05/06 with Virtual Iron (VI got "swallowed" by Oracle).
Then switched to Hyper-V (it was beta, back in the '06 timeframe).
Since then, I've also tried some XenServer, oVirt & ESXi.
Currently use KVM.
Now back to that tomorrow statement....
For a standalone setup, my 1st choice would be KVM + Virt-Manager.
For a hyperconverged setup, my 1st choice would be VMware + VSAN.
I'd never even bother with Hyper-V.
@Pete-S said in Windows Server licensing for HA?:
If you have two servers and run HA, does that mean that you have to license Windows Server standard for the maximum number of VMs running when you have a failure?
So for example,
Server A: 16 cores, runs 6 VMs normally
Server B: 16 cores, runs 6 VMs normally
So each server has to be licensed for all 12 VMs running on 16 cores - so 6 x Windows Server Standard licenses for each server, total of 12 licenses?
But if you didn't run HA, you would only license each server for 6 VMs, with 3 x Windows Server Standard, a total of 6 licenses?
Is this correct?
I don't believe this is correct.
You must license for all the cores, 16 per server, in your case.
Even if you choose to run 1 VM, you must still license for 16 cores: "All physical cores on the server must be licensed, subject to a minimum of 8 core licenses per physical processor and a minimum of 16 core licenses per server."
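If that reading is correct, the arithmetic in the thread can be sketched as follows. This just mirrors the numbers quoted above, not official Microsoft guidance; the 2-OSEs-per-full-license rule for Standard is the assumption to verify, and the 8-cores-per-processor minimum is ignored here for simplicity:

```python
import math

def std_licensing(cores_per_server, vms):
    """Sketch of Windows Server Standard core licensing:
    - license all physical cores, minimum 16 per server
      (per-processor 8-core minimum not modeled here)
    - each full set of core licenses grants 2 OSEs (VMs)
    - running more VMs means licensing the full core count again ("stacking")
    Returns (standard_license_sets, total_core_licenses)."""
    core_set = max(cores_per_server, 16)
    sets = math.ceil(vms / 2)  # each full set covers 2 VMs
    return sets, sets * core_set

# 6 VMs on a 16-core host: 3 x Standard (48 core licenses) per server.
# 12 VMs for the HA failover case: 6 x Standard (96 core licenses) per server.
```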
@wrx7m said in Netgear Insight Managed Switches:
@pmoncho I am guessing that none of those have a dedicated stacking interface?
Is there any switch (Unifi or Edge line) that is "stackable"? I don't think so.
@JaredBusch said in UNMS backup question:
If you want to restore an individual unit, that process is already built into the system so what are you trying to get exported?
I'm just asking for info purposes in case of a future restore.
@JaredBusch said in UNMS backup question:
@FATeknollogee said in UNMS backup question:
How are you doing/handling your UNMS backups?
The automatic back up process runs daily. And I currently randomly download them. I keep meaning to script those off to another system.
Does this auto backup process include a "backup" of all the individual devices that are being managed by UNMS such that one can restore a single ER or switch etc?
Would be nice to "move" that auto backup "offsite".
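In case it helps with scripting those off to another system, a minimal Python sketch; the directories and the *.tar.gz pattern are assumptions, so point them at wherever your UNMS instance actually writes its automatic backups:

```python
from pathlib import Path
import shutil

def sync_backups(src_dir, dest_dir, pattern="*.tar.gz"):
    """Copy any backup archives not already present in dest_dir.
    Returns the list of file names copied this run."""
    src, dest = Path(src_dir), Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.glob(pattern)):
        target = dest / f.name
        if not target.exists():
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied
```

Run from cron (or a systemd timer) against an NFS or SSHFS mount, and the daily backups end up offsite automatically.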
Update: this is what I ended up with.
Route based VPN using this guide as a template.
Master site: 1x ER 12 + 1x ER 4
Sites A, B, C & D: 1x ER4 each location
Colo: 1x ER4 & 1x pfSense (SM x10SDV-TLN4F+)
@scottalanmiller said in MailCow in Production Datacenter:
Docker is truly terrible here, IMHO. So much unnecessary complexity. But at least it is working.
Basically, no other choice but to "accept" Docker (in this use case)!
Is it good enough for you guys to ditch Zimbra & switch?
@scottalanmiller said in MailCow in Production Datacenter:
@FATeknollogee said in MailCow in Production Datacenter:
@scottalanmiller said in MailCow in Production Datacenter:
@scottalanmiller said in MailCow in Production Datacenter:
Great, glad to hear that that works. I'll be testing it more once I see if it comes up at all. The first time it didn't at all, so you are two steps farther than we got
Been a while, and now it is a different project. This time for a customer. But we have MailCow up and running now and behind an Nginx proxy in a datacenter. Working great so far.
Using Docker?
Is there any other way? MailCow appears to be 100% dependent on Docker.
I don't know of any other way which is why I asked!
Previously, we had that forum discussion & I thought the general consensus was Docker for this use case wasn't a great idea?