Networking and 1U Colocation
-
@jaredbusch said in Networking and 1U Colocation:
Or just not be super paranoid and just have it on the LAN always.
Ha!
-
@eddiejennings said in Networking and 1U Colocation:
@jaredbusch said in Networking and 1U Colocation:
Or just not be super paranoid and just have it on the LAN always.
Ha!
It is no different in that way than one in your office.
In fact you will need a way to the internet for ZeroTier to come online.
-
@scottalanmiller said in Networking and 1U Colocation:
@eddiejennings said in Networking and 1U Colocation:
@scottalanmiller said in Networking and 1U Colocation:
@eddiejennings said in Networking and 1U Colocation:
@aaronstuder said in Networking and 1U Colocation:
What are the specs of the server?
Intel Xeon CPU Quad Core X3430 2.4GHz
32 GB RAM
Two 2 TB SATA drives in RAID 1
Could be worth calling xByte and getting something a little beefier.
I'll give it some thought. I need to think through how I intend to use it beyond just building and destroying VMs to tinker. Might start a "spec my server" thread.
Your CPU is fine, but 64GB of RAM might be worthwhile.
That CPU dates back to when I began my IT career. I'm fairly certain Intel hasn't released Meltdown/Spectre Microcode patches for it.
-
@storageninja said in Networking and 1U Colocation:
@scottalanmiller said in Networking and 1U Colocation:
@eddiejennings said in Networking and 1U Colocation:
@scottalanmiller said in Networking and 1U Colocation:
@eddiejennings said in Networking and 1U Colocation:
@aaronstuder said in Networking and 1U Colocation:
What are the specs of the server?
Intel Xeon CPU Quad Core X3430 2.4GHz
32 GB RAM
Two 2 TB SATA drives in RAID 1
Could be worth calling xByte and getting something a little beefier.
I'll give it some thought. I need to think through how I intend to use it beyond just building and destroying VMs to tinker. Might start a "spec my server" thread.
Your CPU is fine, but 64GB of RAM might be worthwhile.
That CPU dates back to when I began my IT career. I'm fairly certain Intel hasn't released Meltdown/Spectre Microcode patches for it.
Dell says a BIOS update for that stuff is in progress; other x10s have already been patched.
-
Experimented with this tonight, so I figured I'd share what I did. I was curious to see if I could access the iDRAC web interface without exposing iDRAC to the Internet.
The solution I used seemed simple enough. I created another NIC in a VM and set it to be bridged to the 2nd NIC on the host via macvtap. I gave the NIC a static IP. On the host, I connected the iDRAC port to the 2nd NIC using a crossover cable and added an appropriate static IP in the iDRAC settings. I connected to my VM via ScreenConnect (which for now is how I'll be connecting remotely to my management VM), and was able to browse to the iDRAC web page.
I don't plan on doing this when I ship my server off, but I was curious to see if I could do it and make it work.
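In case anyone wants to reproduce it, here's a rough sketch of the two pieces (a macvtap NIC attached to the VM, plus a static IP inside the guest). The VM name, interface names, and addresses below are placeholders, not my actual configs:
# Attach a macvtap (direct) NIC bound to the host's 2nd NIC to the management VM
cat > idrac-nic.xml <<'EOF'
<interface type='direct'>
  <source dev='eno2' mode='bridge'/>
  <model type='virtio'/>
</interface>
EOF
virsh attach-device mgmt-vm idrac-nic.xml --config --live
# Inside the guest, give the new NIC a static address on the iDRAC subnet
nmcli con add type ethernet ifname eth1 con-name idrac-link ipv4.method manual ipv4.addresses 192.168.0.2/24
nmcli con up idrac-link
One caveat with macvtap in bridge mode: the guest can't talk to the host over that same NIC, but guest-to-iDRAC over the crossover cable is all this needs.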
-
@eddiejennings said in Networking and 1U Colocation:
Experimented with this tonight, so I figured I'd share what I did. I was curious to see if I could access the iDRAC web interface without exposing iDRAC to the Internet.
The solution I used seemed simple enough. I created another NIC in a VM and set it to be bridged to the 2nd NIC on the host via macvtap. I gave the NIC a static IP. On the host, I connected the iDRAC port to the 2nd NIC using a crossover cable and added an appropriate static IP in the iDRAC settings. I connected to my VM via ScreenConnect (which for now is how I'll be connecting remotely to my management VM), and was able to browse to the iDRAC web page.
I don't plan on doing this when I ship my server off, but I was curious to see if I could do it and make it work.
What about setting up a ZeroTier bridge VM?
-
@black3dynamite said in Networking and 1U Colocation:
@eddiejennings said in Networking and 1U Colocation:
Experimented with this tonight, so I figured I'd share what I did. I was curious to see if I could access the iDRAC web interface without exposing iDRAC to the Internet.
The solution I used seemed simple enough. I created another NIC in a VM and set it to be bridged to the 2nd NIC on the host via macvtap. I gave the NIC a static IP. On the host, I connected the iDRAC port to the 2nd NIC using a crossover cable and added an appropriate static IP in the iDRAC settings. I connected to my VM via ScreenConnect (which for now is how I'll be connecting remotely to my management VM), and was able to browse to the iDRAC web page.
I don't plan on doing this when I ship my server off, but I was curious to see if I could do it and make it work.
What about setting up a ZeroTier bridge VM?
ZeroTier is my next thing to try. I wanted to do my little iDRAC experiment, and I had ScreenConnect handy. Connecting to my management VM via ZeroTier is probably the best way to go.
-
@eddiejennings ZT is some slick stuff. You'll like it a lot. @adam-ierymenko
-
@reid-cooper said in Networking and 1U Colocation:
@eddiejennings ZT is some slick stuff. You'll like it a lot. @adam-ierymenko
Yep. When a VPN is needed, it's the easiest/quickest solution I've found.
-
@travisdh1 said in Networking and 1U Colocation:
@reid-cooper said in Networking and 1U Colocation:
@eddiejennings ZT is some slick stuff. You'll like it a lot. @adam-ierymenko
Yep. When a VPN is needed, it's the easiest/quickest solution I've found.
One use I'm considering is installing it on my KVM host so I can manage it from home via SSH without having to do additional firewall configuration.
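If I go that route, the join itself should only be a couple of commands -- something like the following, where the network ID is just a placeholder:
curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join 8056c2e21c000001    # placeholder network ID
sudo zerotier-cli listnetworks             # should show OK once the member is authorized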
-
@eddiejennings said in Networking and 1U Colocation:
@travisdh1 said in Networking and 1U Colocation:
@reid-cooper said in Networking and 1U Colocation:
@eddiejennings ZT is some slick stuff. You'll like it a lot. @adam-ierymenko
Yep. When a VPN is needed, it's the easiest/quickest solution I've found.
One use I'm considering is installing it on my KVM host so I can manage it from home via SSH without having to do additional firewall configuration.
FreePBX is what I use it for currently. It just makes it so I don't have to arse around with setting up dynamic DNS services from the 5 different places I typically work from, let alone all the other random places I end up.
-
My final networking challenge is making sure my host is configured to be able to talk to the outside world. This is the current version of the topology.
The question becomes how to configure a default route on the host that sends traffic to virbr1.
I notice in /etc/sysconfig/network-scripts/ there are no scripts for the virtual interfaces. I could use ip route add default via 192.168.1.1; however, that will not survive a reboot.
When I run nmcli, I see the bridge (virbr1) with the IP it was assigned via virsh net-edit, since when creating the network in Virt-Manager it assigned an address of 192.168.1.1, and I wanted the LAN interface on the VyOS firewall to have 192.168.1.1.
Methinks the solution likely lies with nmcli.
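If nmcli is the answer, I'm guessing it's something along these lines -- untested, and libvirt may well fight NetworkManager for control of virbr1's config:
nmcli connection modify virbr1 ipv4.gateway 192.168.1.1
nmcli connection up virbr1
ip route show default    # confirm the route is back and check it survives a reboot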
-
Honestly, if it were me, I would just add a NIC on the LAN to your host and let it reside on your LAN like everything else.
I know other things were mentioned and recommended out of paranoid security concerns, but realistically, that is mitigating such a small risk that I generally find it not to be worth the effort.
The thought exercise has to be gone through to determine that though.
-
@jaredbusch said in Networking and 1U Colocation:
Honestly, if it were me, I would just add a NIC on the LAN to your host and let it reside on your LAN like everything else.
I know other things were mentioned and recommended out of paranoid security concerns, but realistically, that is mitigating such a small risk that I generally find it not to be worth the effort.
The thought exercise has to be gone through to determine that though.
There's a gap in my understanding. The LAN interface of the VyOS VM is connected to virbr1. For the host, do you mean creating an interface and attaching it to virbr1? Or do you mean assigning an IP address to virbr1?
This is how the virtual networks appear on the host (from nmcli):
virbr0: connected to virbr0
        "virbr0"
        bridge, 52:54:00:55:91:EB, sw, mtu 1500
        inet4 192.168.122.1/24
virbr1: connected to virbr1
        "virbr1"
        bridge, 52:54:00:BF:E7:FB, sw, mtu 1500
-
If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.
-
@stacksofplates said in Networking and 1U Colocation:
If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.
Would you expose your hypervisor to the Internet with no firewall in between?
-
@eddiejennings said in Networking and 1U Colocation:
@stacksofplates said in Networking and 1U Colocation:
If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.
Would you expose your hypervisor to the Internet with no firewall in between?
For your lab, as long as you use strong SSH keys, I don't see an issue with it. I've not tried it, but you should be able to set your hosts.allow to only allow your workstation's ZeroTier IP address. You could also just do an SSH tunnel for Cockpit if you want to use it.
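Something like this is what I had in mind -- untested sketches, the ZeroTier addresses are placeholders, and the hosts.allow piece assumes sshd still has TCP wrappers support (newer distros have dropped it):
# /etc/hosts.allow -- only the workstation's ZeroTier address may reach sshd
sshd: 10.147.17.25
# /etc/hosts.deny
sshd: ALL
# SSH tunnel so Cockpit (port 9090) is never exposed directly:
ssh -L 9090:localhost:9090 user@10.147.17.1    # host's ZeroTier IP
# then browse to https://localhost:9090 on the workstation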
-
You can also do extra hardening with something like SCAP.
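For example, a scan with one of the shipped OpenSCAP profiles -- the package names, profile, and datastream path below vary by distro and are only examples:
sudo yum install -y openscap-scanner scap-security-guide
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_standard \
  --report /tmp/scap-report.html \
  /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml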
-
The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.
It seems like there has to be a way for my host to access the Internet through one of the guests.
-
@eddiejennings said in Networking and 1U Colocation:
The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.
It seems like there has to be a way for my host to access the Internet through one of the guests.
The only way to do that is a full bridge. Either a normal bridge or an OVS bridge.
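If it helps, a plain Linux bridge is only a few nmcli commands -- the interface names here are placeholders:
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname eno1 con-name br0-port1 master br0
nmcli con up br0
# then point the VM NICs (and the host's own address) at br0 instead of virbr1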