If LAN is legacy, what is the UN-legacy...?
-
@scottalanmiller said:
@wirestyle22 said:
I guess what I'm asking is what should I be studying? Network+ MCSE CCNA exam study material?
So the Network+ I recommend to everyone. It's just good base knowledge.
The MCSE is good if you want to work as a Windows Systems Admin or Engineer, but not if you don't.
The CCNA is the first baby step on the path to working as a Cisco-focused network admin. This does not align in any way with your descriptions of jobs you are interested in. This is a wholly different path than you have been alluding to. And on its own it is a useless cert, too junior to get you even an entry level job as a Cisco Admin and too focused to be useful to a generalist.
If you get your Network+ cert and decide you want to go more in depth into networking, I'd definitely recommend the CCNA classes. If you get a good instructor, you'll be in good shape. The beauty of things like Network+ and CCNA is that the ideas are all the same, no matter which networking vendor you ultimately settle on.
I got my CCNA, and a year later landed a job that had 1 Cisco router and 50 HP switches. The terms and a lot of the jargon changed... but the ideas remained the same.
-
@scottalanmiller "designed solely around maintaining the LAN ideologically rather than replacing it."
I'd disagree with that, at least insofar as ZeroTier is concerned. It emulates a LAN because it's convenient to do so: everything just works and software can just speak TCP/IP (or any other protocol). But if anything the goal is to embrace the post-LAN world and evolve away from the LAN model. Making LANs work like Slack channels is a step in this direction.
I really like what you wrote above and some of it is exactly what I was thinking when I first started working on ZeroTier years ago.
ZT solves multiple problems: (1) a better p2p VPN/SDN, (2) mobility and stable mobile addressing, (3) providing (1) and (2) everywhere including on vast numbers of WiFi, carrier, and legacy networks that do not permit open bi-directional access to the Internet. Internally we view the existing Internet/Intranet deployment topology with its NAT gateways and such as "the enemy." NAT in particular is the enemy and "break NAT" is an internal development mantra.
An analogy would be RAID, which seeks to achieve reliability using arrays of unreliable disks. In our case we want to achieve a flat reliable global network by running on top of an inconsistent, half-broken, gated, NATed spaghetti mess.
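The "break NAT" idea can be sketched concretely. The usual technique is UDP hole punching: both peers learn each other's public endpoint from a rendezvous server, then send packets simultaneously so each NAT sees outbound traffic first and opens a mapping. This is a toy local illustration only (two sockets on loopback, no real NAT in the path), not ZeroTier's actual implementation:

```python
# Toy sketch of UDP hole punching, the core "break NAT" trick: both sides
# send first (the "punch") so any stateful middlebox would see outbound
# traffic before inbound. Runs entirely on loopback for illustration.
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))

# The step a rendezvous server performs in real life: tell each peer
# the other's (public) address and port.
addr_a, addr_b = a.getsockname(), b.getsockname()

# Both sides punch simultaneously, then each can receive the other's packet.
a.sendto(b"punch", addr_b)
b.sendto(b"punch", addr_a)
print(a.recvfrom(16)[0], b.recvfrom(16)[0])
```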
IPv6 should have done these things but didn't, and probably won't unless IPv6 mobility becomes a real thing and we can convince millions upon millions of IT admins to drop the concept of the local firewall. Even if IPv6 ever does do these things, we'll probably have to wait for the 2030s. If that ever happens, ZT was designed with migration paths in mind. Hint: 64-bit network ID + 40-bit device ID < 128-bit IPv6 address.
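That hint can be made concrete: 64 + 40 = 104 bits, leaving room inside a 128-bit IPv6 address. The sketch below mirrors ZeroTier's RFC 4193-style layout, but treat the exact byte positions and marker bytes as illustrative rather than normative:

```python
# Sketch: embedding a 64-bit ZeroTier network ID and a 40-bit node (device)
# ID into a 128-bit IPv6 address. 1 + 8 + 2 + 5 = 16 bytes exactly.
import ipaddress

def zt_rfc4193(network_id: int, node_id: int) -> ipaddress.IPv6Address:
    assert network_id < 2**64 and node_id < 2**40
    raw = bytes([0xfd])                   # RFC 4193 unique-local prefix
    raw += network_id.to_bytes(8, "big")  # 64-bit network ID
    raw += bytes([0x99, 0x93])            # fixed marker bytes (illustrative)
    raw += node_id.to_bytes(5, "big")     # 40-bit device ID
    return ipaddress.IPv6Address(raw)

addr = zt_rfc4193(0x8056c2e21c00001, 0x1234567890)
print(addr.exploded)
```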
Our long term target is not AD or other LAN-centric ways of doing things, which is why we haven't built deeply into AD the way Pertino has. Our long term target is the Internet of Things, mobile, and apps. If you pull the ZT source you can see this: the ZT network virtualization core is absolutely independent of any OS-dependent code and is designed to be able to (eventually, with a bit more work) be built for embedded devices.
-
The biggest concern I see with something like ZT and Pertino is the breakdown of the protections that users get from simple routers, not even counting firewall features. I.e., Ethernet frames (MAC-based) traditionally can't traverse routers, so devices can't be attacked with the lower-level MITM attacks that we hear about on wireless networks, etc.
Am I concerned for nothing?
-
@Dashrender The answer is a huge pile of "it depends." It depends on protocol, application, OS, etc.
If you're running a closed/private ZeroTier network, then you're not at much greater risk than if you have a VPN. A public ZeroTier network is obviously exposing you a lot more, but keep in mind that every time you join a coffee shop, hotel, university, or other public WiFi network you are doing the same thing. Every time you join someone's WiFi you are exposing L2.
So the risk is not as great as you might think. A lot of people think "ZOMG! my machine is exposed I will get hax0r3d in seconds!" This is mostly an obsolete fear. OSes today are a lot more secure than they were in the late 90s / early 2000s when we had remote Windows vulnerability of the week and LAN worms were commonplace. You can still have problems if you have a bunch of remote services enabled but most OSes no longer ship this way.
If you have ZeroTier and join 8056c2e21c00001 (Earth, our public test net) and ping 29.44.238.229, that's my laptop. If you don't get a ping reply it probably means it's asleep. Obviously I am not worried about it. Of course the only remote service I run is ssh and I don't allow password auth so there isn't a lot of exposed surface area.
There is still some risk of course. The only way to perfectly secure a computer is to turn it off.
As far as MITM goes, there are a couple of answers there and it depends on the nature of the attack. Network virtualization layers like ZeroTier are generally more secure than cheapo switches or WiFi routers in that the MAC addresses of endpoint devices are cryptographically authenticated. It's harder to spoof endpoints, though it's not impossible. On ZT you can't spoof L2 traffic without stealing someone's identity.secret file. It's a bit like a wired network with 802.1X. The only wrinkle is Ethernet bridging, and that's why bridging must be allowed on a per-device basis. Normal devices are not allowed to bridge.
But... the real answer to MITM is: never trust the network. If you are not authenticating your endpoint cryptographically then you are vulnerable to MITM on every network. Use SSL, SSH, etc. and check certificates or you are not safe.
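"Never trust the network" in practice means strict certificate verification at the endpoint. A minimal sketch in Python (the settings shown are in fact the defaults of `create_default_context`; they're set explicitly here for emphasis, and the hostname is a placeholder):

```python
# Authenticate the endpoint cryptographically instead of trusting the
# network path. With these settings, the TLS handshake fails with
# ssl.SSLCertVerificationError on a MITM'd link, however hostile the
# underlying L2/L3 network is.
import socket
import ssl

ctx = ssl.create_default_context()   # loads the system CA roots
ctx.check_hostname = True            # refuse certs for the wrong name
ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unauthenticated peers

def open_verified(host: str, port: int = 443) -> ssl.SSLSocket:
    sock = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(sock, server_hostname=host)
```

Usage would be `open_verified("example.com")`; anything that intercepts the connection without a valid certificate for that name causes the handshake to fail.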
-
@Dashrender Finally, you can count me in the "firewalls are obsolete" camp. I've worked in infosec before. During my tenure we had many attacks, and zero were naive remote attacks that the firewall did anything to stop.
A short summary of real world attack vectors we saw: phishing, phishing, phishing, phishing, phishing, malware, phishing, drive-by downloads, phishing, and phishing. Did I mention phishing? The least secure thing on the network is the meat bag behind the screen, but in all of the above cases the firewall is worthless. That's because all those threat vectors are "pull" based, not "push" based. We had malware get in through the web, e-mail, Dropbox (with phishing), etc., and in all cases it was pulled in over HTTPS and IMAPS links that happily went right through the firewall.
Firewalls are dead. Thank the cloud.
-
@adam.ierymenko said:
A short summary of real world attack vectors we saw: phishing, phishing, phishing, phishing, phishing, malware, phishing, drive-by downloads, phishing, and phishing. Did I mention phishing?
ROFLOL - I almost fell out of my chair - I love it!
-
@adam.ierymenko said:
Firewalls are dead. Thank the cloud.
huh - you're the first that I can recall ever saying that firewalls are dead. From your above post about IPv6 and killing local firewalls, I can see you really mean it.
How do you propose keeping out the baddies that are trying to attack you over the web? I understand pull vectors, but what about the push ones?
-
@adam.ierymenko said:
@Dashrender The answer is a huge pile of "it depends." It depends on protocol, application, OS, etc.
If you're running a closed/private ZeroTier network, then you're not at much greater risk than if you have a VPN. A public ZeroTier network is obviously exposing you a lot more, but keep in mind that every time you join a coffee shop, hotel, university, or other public WiFi network you are doing the same thing. Every time you join someone's WiFi you are exposing L2.
Because I run a local firewall, I worry less about this (though of course my phone doesn't have one, that I know of - Windows Mobile). I've been considering purchasing a portable wireless router for just this reason: your device connects to it, the portable device connects to the local free WiFi, and a VPN is created out of the building. Sure, things are a bit slower, but the L2 problem is completely gone.
But it might really be overkill since I can do a VPN from my phone and laptop directly, so short of them MITM'ing me and still being forced to send my VPN traffic to my VPN provider, they really aren't gaining anything. I'm still weighing my options to see if it's worth the hassle.
-
@Dashrender Here open this attachment!
No joke though. I really honestly think we could have just taken our firewall down and given every machine a public IP and there would have been little or no change to security posture. If anything, firewalls encourage the "soft underbelly" problem by giving people the illusion that the local network is secure. Take that old obsolete crutch away and people who do things like bind unpassworded databases to ::0 will look like dummies real fast and the problem will take care of itself over time.
It's been a while since I've seen a completely deadpan naive remote vulnerability in a consumer OS. By "naive" I mean one that can be exploited in the real world with no credentials, special knowledge, or participation from the user. OSes really have gotten better and if you turn off unnecessary services you're probably not in too terribly much danger. The danger isn't nonexistent but it's probably a lot less than, say, browsing the web with five different plugins enabled or the always popular:
curl http://note_lack_of_https.itotallytrustthissitelol.com/ | sudo bash
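A safer pattern than piping straight to a root shell is download, verify, read, then run. A sketch (the URL is a placeholder, and the "published" checksum is simulated locally so this is self-contained; in real use it comes from the publisher out-of-band):

```shell
# Safer than `curl ... | sudo bash`: fetch to a file, check it against a
# checksum the publisher lists separately, read it, and only then run it.
set -e

# In real use: curl -fsSLo install.sh https://example.com/install.sh
# Simulated locally so the sketch actually runs:
printf '#!/bin/sh\necho "install ok"\n' > install.sh

# In real use, paste the publisher's advertised SHA-256 here instead:
expected=$(sha256sum install.sh | awk '{print $1}')

actual=$(sha256sum install.sh | awk '{print $1}')
[ "$expected" = "$actual" ] || { echo "checksum mismatch, refusing to run" >&2; exit 1; }

# Read install.sh yourself, then run it deliberately (sudo only if needed):
sh install.sh
```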
-
@Dashrender "How do you propose keeping the baddies that are trying to attack you over the web? I understand pull vectors, but what about the push ones?"
Local firewalls aren't obsolete. They're a pretty good way to limit your surface area. But personally I just like to make sure I'm not running anything I don't need. Also make sure you are up to date on patches, etc.
But the bottom line is that 90% of baddies aren't attacking you over the web anymore. They're trying to phish, scam, sneak malware, and get you to visit malicious URLs. They've moved "up the stack," abusing vectors like social media, Dropbox/Google Drive, e-mail, etc. This is partly in direct response to the firewall and partly because these types of attacks are a lot more effective.
Based on real-world experience, the only exception I'd give to the above is web apps. There was a case where a vulnerable PHP web app was attacked. But this of course was in the DMZ, so the firewall also did nothing. It was supposed to be exposed! Most people don't run PHP web apps on desktops and mobile devices.
I suppose you could still ask: if we got rid of firewalls tomorrow (neglecting unpatched and obsolete OSes), would we again see an epidemic of remote attacks? I can't say for sure that we wouldn't, but I personally doubt it. You'd see remote attacks against old vulnerable junk, but newer patched systems would not fare too badly, and the exposure would probably help harden things further. Firewalls promote immune-system atrophy.
Of course ZeroTier has private certificate-gated networks and that's what most people use. Those are similar to VPN endpoints in risk profile. You can still have your boundary. It's just software defined.
A bit beyond IT pragmatism, but I gave this presentation a while back about how firewalls contribute to Internet centralization, surveillance, and the monopolization of communication by closed silos like Facebook and Google: https://www.zerotier.com/misc/BorderNone2014-AdamIerymenko-DENY_ALL.pdf
The core argument I'm making there is that the firewall is a grandfathered-in hack to get around very very bad endpoint security and the fact that IP has no built-in authentication semantics. It's also a fundamentally broken security model since it uses a non-cryptographic credential (IP:port) as a security credential. Non-cryptographic credentials are worthless.
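The difference is easy to demonstrate: an IP:port is just a string anyone on the path can claim, while a cryptographic credential can only be produced by a key holder. A toy illustration using an HMAC tag (the key and message are hypothetical, and real systems would use full authenticated protocols, not this sketch):

```python
# Why a non-cryptographic credential (an IP:port) is worthless: nothing
# binds the claim to a secret. A cryptographic credential - here an HMAC
# tag over the message - cannot be forged without the key.
import hashlib
import hmac

key = b"shared-secret"  # hypothetical pre-shared key

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag(message), t)

msg = b"src=203.0.113.7:443"        # an IP:port is just a forgeable string...
good = verify(msg, tag(msg))        # ...but a valid tag requires the key
forged = verify(msg, b"\x00" * 32)  # forgery without the key fails
```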
In a later presentation I distilled the "Red Queen's Race" slides to a "law of protocol bloat": any protocol allowed through the firewall accumulates features until it encapsulates or duplicates all functionality of all protocols blocked by the firewall. Examples: SSH, HTTP. In the end you just end up running an inferior version of IP encapsulated within another protocol.
-
@adam.ierymenko said:
@Dashrender Here open this attachment!
No joke though. I really honestly think we could have just taken our firewall down and given every machine a public IP and there would have been little or no change to security posture. If anything, firewalls encourage the "soft underbelly" problem by giving people the illusion that the local network is secure. Take that old obsolete crutch away and people who do things like bind unpassworded databases to ::0 will look like dummies real fast and the problem will take care of itself over time.
It's been a while since I've seen a completely deadpan naive remote vulnerability in a consumer OS. By "naive" I mean one that can be exploited in the real world with no credentials, special knowledge, or participation from the user. OSes really have gotten better and if you turn off unnecessary services you're probably not in too terribly much danger. The danger isn't nonexistent but it's probably a lot less than, say, browsing the web with five different plugins enabled or the always popular:
curl http://note_lack_of_https.itotallytrustthissitelol.com/ | sudo bash
haha I had to fire up a container to see if that was an actual bash script lol.
-
@adam.ierymenko said: It's also a fundamentally broken security model since it uses a non-cryptographic credential (IP:port) as a security credential. Non-cryptographic credentials are worthless.
Hear, hear!
In a later presentation I distilled the "Red Queen's Race" slides to a "law of protocol bloat": any protocol allowed through the firewall accumulates features until it encapsulates or duplicates all functionality of all protocols blocked by the firewall. Examples: SSH, HTTP. In the end you just end up running an inferior version of IP encapsulated within another protocol.
That is no lie - So I can't get what I want, you'll give me this little thing over here, OK I'll just create a way to get what I want through that little thing.. done.. yeah - huge problem!
-
@johnhooks said:
@adam.ierymenko said:
@Dashrender Here open this attachment!
No joke though. I really honestly think we could have just taken our firewall down and given every machine a public IP and there would have been little or no change to security posture. If anything, firewalls encourage the "soft underbelly" problem by giving people the illusion that the local network is secure. Take that old obsolete crutch away and people who do things like bind unpassworded databases to ::0 will look like dummies real fast and the problem will take care of itself over time.
It's been a while since I've seen a completely deadpan naive remote vulnerability in a consumer OS. By "naive" I mean one that can be exploited in the real world with no credentials, special knowledge, or participation from the user. OSes really have gotten better and if you turn off unnecessary services you're probably not in too terribly much danger. The danger isn't nonexistent but it's probably a lot less than, say, browsing the web with five different plugins enabled or the always popular:
curl http://note_lack_of_https.itotallytrustthissitelol.com/ | sudo bash
haha I had to fire up a container to see if that was an actual bash script lol.
though the lack of HTTPS really doesn't make you more or less protected in this example.
-
@Dashrender said:
@johnhooks said:
@adam.ierymenko said:
@Dashrender Here open this attachment!
No joke though. I really honestly think we could have just taken our firewall down and given every machine a public IP and there would have been little or no change to security posture. If anything, firewalls encourage the "soft underbelly" problem by giving people the illusion that the local network is secure. Take that old obsolete crutch away and people who do things like bind unpassworded databases to ::0 will look like dummies real fast and the problem will take care of itself over time.
It's been a while since I've seen a completely deadpan naive remote vulnerability in a consumer OS. By "naive" I mean one that can be exploited in the real world with no credentials, special knowledge, or participation from the user. OSes really have gotten better and if you turn off unnecessary services you're probably not in too terribly much danger. The danger isn't nonexistent but it's probably a lot less than, say, browsing the web with five different plugins enabled or the always popular:
curl http://note_lack_of_https.itotallytrustthissitelol.com/ | sudo bash
haha I had to fire up a container to see if that was an actual bash script lol.
though the lack of HTTPS really doesn't make you more or less protected in this example.
The container was for if there actually was a shell script that was going to run.
-
@Dashrender "That is no lie - So I can't get what I want, you'll give me this little thing over here, OK I'll just create a way to get what I want through that little thing.. done.. yeah - huge problem!"
You can't secure things by breaking them. People will find ways around your barriers because they need things to work, and the things they cobble together will probably be less secure than what you started with. You have to secure things by actually securing them.
Fundamentally the endpoint is either secure or it is not. If it's not, all someone has to do is get into something behind your firewall and they own you. Increasingly that something could be a printer, a light bulb, or a microwave oven. How often do you patch your light bulbs? If the cloud killed the firewall, then IoT will dig it up and cremate it and encase it in concrete and re-bury it.
My approach to security is: secure everything as if it will be totally exposed on the public Internet, then add firewalls and such as an afterthought if appropriate. If something is not secure enough to be exposed to the public Internet without a firewall, it's not secure enough to be connected to any network ever.
-
@adam.ierymenko said:
@Dashrender "That is no lie - So I can't get what I want, you'll give me this little thing over here, OK I'll just create a way to get what I want through that little thing.. done.. yeah - huge problem!"
You can't secure things by breaking them. People will find ways around your barriers because they need things to work, and the things they cobble together will probably be less secure than what you started with. You have to secure things by actually securing them.
Fundamentally the endpoint is either secure or it is not. If it's not, all someone has to do is get into something behind your firewall and they own you. Increasingly that something could be a printer, a light bulb, or a microwave oven. How often do you patch your light bulbs? If the cloud killed the firewall, then IoT will dig it up and cremate it and encase it in concrete and re-bury it.
My approach to security is: secure everything as if it will be totally exposed on the public Internet, then add firewalls and such as an afterthought if appropriate. If something is not secure enough to be exposed to the public Internet without a firewall, it's not secure enough to be connected to any network ever.
So what would be an appropriate situation to use a firewall if nothing that is secure enough to be exposed to the public internet without a firewall should be connected to a network?
-
@wirestyle22 I was describing a guiding principle. Obviously not everything measures up to that and firewalls are still needed for a lot of situations. I just consider them "legacy" and think that if you're designing or building something new it's best to design it to be secure in itself rather than assuming your private network is always going to stay private. Never trust the network, especially if it might have light bulbs and cloud connected printers on it.
I also think the firewall's obsolescence is a fact regardless of how I or anyone else might feel about it. IoT, BYOD, and the cloud are killing it so best plan for its death and prepare accordingly. I just happen to be in the camp that's quietly cheering for its demise because I think it's a bad ugly hack that breaks the functionality of networks and endpoint-centric security is better.
Edit: this is good too: http://etherealmind.com/why-firewalls-wont-matter-in-a-few-years/
I basically agree with all of that.
-
@adam.ierymenko said:
@wirestyle22 I was describing a guiding principle. Obviously not everything measures up to that and firewalls are still needed for a lot of situations. I just consider them "legacy" and think that if you're designing or building something new it's best to design it to be secure in itself rather than assuming your private network is always going to stay private. Never trust the network, especially if it might have light bulbs and cloud connected printers on it.
I also think the firewall's obsolescence is a fact regardless of how I or anyone else might feel about it. IoT, BYOD, and the cloud are killing it so best plan for its death and prepare accordingly. I just happen to be in the camp that's quietly cheering for its demise because I think it's a bad ugly hack that breaks the functionality of networks and endpoint-centric security is better.
Edit: this is good too: http://etherealmind.com/why-firewalls-wont-matter-in-a-few-years/
I basically agree with all of that.
This appsec keynote is terrifying. I mean, you kind of expect security to be somewhat lax at the 25-million-dollar level, but these Fortune 500 companies too? Man. The stuff of nightmares.
-
If the goal is application security, what is the point of SDNs if not to offer a stopgap in the meantime until the apps get themselves where they need to be?
I don't understand why that article mentioned using SDNs for east-west communications; why wouldn't you just have the apps themselves be secure? Using SDNs is just another layer of the problem he speaks of.
-
@Dashrender SDNs are about connectivity and manageability, not security per se -- though they can of course be secure and have lots of security related features. SDN is about being able to have mobile devices with stable addresses, fail-over without interrupting flows, control over where flows go, ability to provision new network paths without pulling cable, seamlessly link locations, fail-over across ISPs and clouds, etc.