@BBigford and @zuphzuph were asking about this and I figured we should talk about it here. If you are shopping for a SAN, what is your vendor short list?
I have used the tough cable on jobs with outdoor APs for wireless bridges. You should use jacks with the metal shield on them. They're not much harder to work with, but they are harder to find and more expensive.
If you had run the Bootstrap script, this is the error you would have gotten:
Fetched 1,035 kB in 1s (959 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
ca-certificates is already the newest version (20160104ubuntu1).
apt-transport-https is already the newest version (1.3.2ubuntu0.1).
apt-transport-https set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package gnupg-curl
* ERROR: Failed to run install_ubuntu_stable_deps()!!!
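If anyone else hits the same gnupg-curl error: on newer Ubuntu releases the gnupg-curl package no longer exists as a separate package, so one possible workaround (assuming that's the cause here) is to install plain gnupg plus curl before rerunning the script:

```shell
# gnupg-curl is gone on newer Ubuntu; plain gnupg (plus curl for
# fetching keys over HTTPS) covers the same ground.
sudo apt-get update
sudo apt-get install -y gnupg curl
```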
I consider those things really a personal preference.
I'm not sure what would make a lower-resolution display more compatible with a standard Linux GUI; it's just a resolution. I suppose you could mean that it has the same scaling problem that Microsoft Windows has with high-resolution screens (they are a bit better, but really only a bit).
You've nailed it with the list of differences. You have to decide for yourself what is important; that's not something anyone can decide for you.
Gotcha! This would require a service ticket - once you enter that, send me the ticket number and I can follow up and make sure it is addressed quickly.
Support has been on it and I tagged you in the thread with details.
A DNS level approach is very resource efficient because your gateway box does no heavy lifting. So you gain a lot of security without affecting performance.
Is that true? DNS requests still go out and fail, causing traffic on the router and delays for the users. Blocking on the router is actually less resource intensive because the router blocks the traffic entirely.
But how does that work with processing lists of URLs? Hundreds of thousands of URLs in a blacklist, potentially.
I suppose if you are still allowing lookups and only blocking after that, and you put the blocking on your firewall instead of on the proxy, then it would be a small hit.
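On the "hundreds of thousands of URLs" question: blockers typically load the list once into a hash set, so each lookup is constant time no matter how big the list gets. A rough sketch of the idea (the domains and helper names here are made up for illustration, not any particular product's code):

```python
# Sketch: a large blocklist loaded into a set gives O(1) lookups,
# so list size barely matters at query time.

def load_blocklist(lines):
    """Normalize one domain per line into a set for constant-time lookups."""
    return {line.strip().lower() for line in lines if line.strip()}

def is_blocked(domain, blocklist):
    """Check the domain itself and every parent domain against the set."""
    labels = domain.lower().split(".")
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels) - 1))

blocklist = load_blocklist(["ads.example.com", "tracker.example.net"])
print(is_blocked("ads.example.com", blocklist))         # True
print(is_blocked("sub.tracker.example.net", blocklist))  # True, parent matches
print(is_blocked("example.org", blocklist))              # False
```

The parent-domain walk is why one blocklist entry can cover every subdomain under it.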
I think you might have failed long before you came to this point.
Why not back this whole thing up?
On the website that talks about your services, add a button that says "Test my connection to make sure this service will work for you." Have your web team install their own speed-test tool so prospects can run a test directly against your own system, and you can make sure they won't have any issues before they even look any further.
The issue with this is that we could potentially lose clients without even knowing about them. And we can still provide our service, just a less bandwidth-intensive variety of it.
Maybe just detect the speed and adapt accordingly.
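The "detect and adapt" step can be as simple as timing a known-size transfer and mapping the result to a delivery tier. A rough sketch — the thresholds and tier names are hypothetical, not anyone's actual product logic:

```python
# Map a measured downstream rate to a delivery tier.
# Thresholds and tier names are made up for illustration.

def measured_mbps(bytes_transferred, seconds):
    """Throughput in megabits per second from a timed test transfer."""
    return bytes_transferred * 8 / seconds / 1_000_000

def pick_tier(mbps):
    if mbps >= 25:
        return "full"      # full-bandwidth experience
    if mbps >= 5:
        return "reduced"   # less bandwidth-intensive variant
    return "minimal"       # bare-minimum fallback

rate = measured_mbps(10_000_000, 4.0)  # 10 MB in 4 s -> 20 Mbps
print(rate, pick_tier(rate))           # 20.0 reduced
```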
Our system does this already, and the result is an unacceptable live product. There are other solutions, but others put the cart before the horse here and bought expensive software and are determined to "make it work".
LOL - sunk cost fallacy
Yes it really is... I had a solid solution, explained why, and for one minuscule reason it was vetoed. Yet it offers us the best option, without the need to worry about the clients' internet bandwidth...
The error says to contact support, so I'd start there. While waiting for them, I'd do routine testing: reseat everything, pull some memory out, etc. If it's not time sensitive, I'd just wait for Dell to get back to you.
I've never had to wait more than 5 minutes for Dell chat support.
I have several VLANs, but I'm only providing DHCP for two of them. As it turns out, my Cisco 4400 wireless controller was handling the ip helper-address portion, not my router.
My router didn't have a helper at all.
Once I added the helper IP to the router, all is well, at least on the laptop. Now to test wireless.
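For anyone hitting the same thing, the router-side fix typically looks like this on Cisco IOS (the interface name and addresses below are examples, not from my network):

```
interface Vlan20
 description Client VLAN with no local DHCP server
 ip address 10.0.20.1 255.255.255.0
 ! Forward DHCP broadcasts to the DHCP server on another subnet
 ip helper-address 10.0.10.5
```

Without the helper, the router just drops the clients' DHCP broadcasts instead of relaying them to the server's subnet.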