I'm liking these non-IT technician focused videos.
Thanks! I'm hoping to broaden the audience and make tools that IT staff can either share directly with management or learn from when presenting to management themselves.
Far too often IT is pressured to make business decisions that other business stakeholders should be involved in.
And vice versa: far too often, untrained people with no IT insight or knowledge make all the critical IT decisions, and the staff classified as IT are left to deal with the mistakes rather than being able to prevent them.
Yes, but a word of caution: if you get certificates from multiple different providers, don't forget to add CAA records for all of them. Otherwise issuance will fail, and it's almost impossible to troubleshoot.
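For context, a sketch of what those records might look like. Assuming the two providers are Let's Encrypt and DigiCert (illustrative choices, not from the original comment), the zone would need a CAA record authorizing each:

```
; CAA records authorizing both CAs for example.com
; (provider names and domain are illustrative)
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issue "digicert.com"
```

If only one of these records exists, the other CA is obliged to refuse issuance, which matches the "issuance silently fails and is hard to troubleshoot" symptom.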
Yes, like this.
Pretty cool. I’ll have to try it and see how it goes.
I’ve been using Unbound for several years running on a Raspberry Pi and using a custom black list. Love not having to run ad blockers on each computer browser since it’s all taken care of with Unbound.
This all sounds very complicated. Why not use the DNS and DHCP at your datacenter, turn off all the others, and give the routers an ip helper-address config? Does your network hardware not support that?
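For reference, a minimal sketch of that setup on a Cisco-style router (interface name and IP addresses are made up); the helper address relays DHCP broadcasts from the local VLAN to the central DHCP server:

```
! On the branch router/L3 switch interface facing the clients
interface Vlan10
 ip address 10.10.10.1 255.255.255.0
 ! Relay DHCP broadcasts to the datacenter DHCP server
 ip helper-address 10.0.0.5
```

Clients then get their leases (and the central DNS servers) from the datacenter without any local DHCP service running.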
@Grey It may very well be too complicated. At the same time it has to be fast, robust and the parts have to be able to work independently if a VPN link goes down.
Ok, cut the line to the internet. Can they still function? What doesn't work? What gets cached at your app server? How much data is transferred when the line returns?
How much actual resilience does the business need vs what they can sustain, and what's the risk? Has anyone answered these questions before?
The diagram is simplified. In the drawing, it's only internal company traffic that goes over the VPN. The data centers also serve other clients that are not connected over VPN. That's actually their primary job: they are serving customers, not just internal workloads.
When it comes to resilience and risk, it's the data centers that have to be up and running. So they have redundant everything. The rest is just ordinary SMB stuff.
PS. Also in the data center we are doing HA in the application layer and not the hypervisor layer. So having two DNS servers made sense to me since that will be natural HA in the application layer.
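On the client side, that application-layer HA is just both resolvers being handed out, e.g. in a Linux resolv.conf (addresses are made up for this sketch):

```
# /etc/resolv.conf - two internal DNS servers; the stub resolver
# fails over to the second if the first stops responding
nameserver 10.0.0.53
nameserver 10.0.1.53
options timeout:2 attempts:2
```

Lowering the timeout keeps failover snappy when the primary is unreachable, which matters if a VPN link to one site goes down.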
I don't find it to be a concern. I wouldn't set it up that way myself, because it's a bizarre and problematic way to handle internal DNS, but anyone who can exploit a private IP mapping can figure it out without DNS in the first place. So I see no reason to hide it.
I was wondering how it works because we see a problem where a couple of Windows 10 clients can resolve all the internal Windows server names, but not the statically assigned names of the Linux servers.
I thought that if name resolution works over different mechanisms and uses different ports, it could be a firewall or L3 switch somewhere that has been misconfigured.
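One way to narrow that down is to query each mechanism separately from an affected Windows 10 client. The hostname and server IP below are placeholders for your own values:

```
# Ask the AD DNS server directly over port 53
nslookup linuxserver01.company.com 10.0.0.53

# PowerShell: DNS only, skipping the local cache, hosts file, LLMNR and NetBIOS
Resolve-DnsName linuxserver01.company.com -DnsOnly -NoHostsFile

# Check that port 53 to the DNS server isn't blocked along the path
Test-NetConnection 10.0.0.53 -Port 53
```

If the direct queries succeed but normal resolution fails, the Windows names are likely being resolved by LLMNR/NetBIOS rather than DNS, and the Linux records simply aren't in the zone.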
This is common in situations where Linux is not given an opportunity to auto-update the DNS entries, no one makes them manually, and they are not joined to AD.
Exactly - have you or anyone else added these servers to AD's DNS?
They have been added manually. The name of the service is also not the same as the name of the server. So if a webserver is abc001.company.com, the name in DNS that will send you to that server might be logistics.company.com.
If you're being sent to logistics, that's the entry that must be in DNS. You can have as many entries as needed for a single server.
Each name is its own entry.
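Concretely, the zone could carry both names for the same box, using the names from the example above (the records and address are illustrative):

```
; The host's own A record
abc001     IN A      10.0.20.11
; The service name, pointing at the host
logistics  IN CNAME  abc001.company.com.
```

Clients resolving logistics.company.com follow the CNAME to abc001 and end up at the same address; if the server moves, only the A record needs updating.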