What do you think, did we do this right?
-
@jospoortvliet said in What do you think, did we do this right?:
Most companies give 90 days before making things public. That would have been the only change from what you've posted.
-
@JaredBusch said in What do you think, did we do this right?:
@jospoortvliet said in What do you think, did we do this right?:
this seems to be an issue with firefox, it works in chrome, but we're looking into it
EDIT: Even more fun, this is something to do with https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol and the OCSP server being down... Sigh. Not sure we can fix this...
Bonus!
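(For anyone wondering what that OCSP check actually involves: the browser asks the certificate authority's OCSP responder whether the certificate has been revoked, so when that responder is down the check can stall. Here's a rough Python sketch of the same lookup, using the requests and cryptography libraries, purely as an illustration - it isn't anything from the Nextcloud or browser code.)

```
# Illustrative OCSP lookup for a certificate whose issuer cert you have.
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes, serialization

def ocsp_status(cert_pem: bytes, issuer_pem: bytes) -> str:
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # The certificate lists its OCSP responder in the Authority Information
    # Access extension; this is the server that was down.
    aia = cert.extensions.get_extension_for_class(
        x509.AuthorityInformationAccess).value
    responder = next(
        desc.access_location.value
        for desc in aia
        if desc.access_method == AuthorityInformationAccessOID.OCSP
    )

    # Build a request for this one certificate and POST it, much like the
    # browser does; if the responder never answers, the check just hangs.
    request = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(cert, issuer, hashes.SHA1())
        .build()
    )
    answer = requests.post(
        responder,
        data=request.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
        timeout=5,
    )
    response = ocsp.load_der_ocsp_response(answer.content)
    return response.certificate_status.name  # GOOD, REVOKED or UNKNOWN
```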
So, to answer your question: I think waiting only 3 weeks was hugely over-optimistic of you. A couple of months would honestly be needed for any meaningful updating to happen. Businesses do not work at the speed you seem to expect.
Beyond that opinion, I find the entire thing a good thing.
I agree. I think it was handled well overall. 90 days would have been nice, but this is more than most companies would do with a completely free open source product. So I think you handled it VERY well.
For those saying 3 weeks is too short for large companies: I agree, but I would also mention that they should understand the risk of running a completely free product without paid support. I'm not sure why you would complain when somebody warns you of security risks in a completely free, outdated product.
-
@IRJ The only reason I say 3 weeks is overly optimistic is how long it takes for notifications to make their way all the way down the chain from the CERT notification, and then for the business to react. Small or large business is not important: updates to anything in production need to be reviewed at the least, and tested if possible. Once that is done, the patches can be rolled out to production.
Things simply take time.
-
Nothing will make everyone happy. Overall, it sounds like it was handled well to me. People had plenty of time to update and keep their systems maintained.
-
Thanks for the replies, guys.
WRT the 3 weeks vs 90 days: yeah, this was a bit of a balancing act. We wanted to keep this under the radar as long as possible, but of course people started tweeting and talking about it, so black hats could pick up on it. We asked a few journalists who contacted us with questions not to talk about it, and they were kind and responsible enough to keep quiet, but we didn't expect to be able to keep it quiet for 3 months... In the end, three weeks is what the CERTs asked for, and they're the experts, so we went with that.
-
@jospoortvliet said in What do you think, did we do this right?:
Thanks for the replies, guys.
WRT the 3 weeks vs 90 days: yeah, this was a bit of a balancing act. We wanted to keep this under the radar as long as possible, but of course people started tweeting and talking about it, so black hats could pick up on it. We asked a few journalists who contacted us with questions not to talk about it, and they were kind and responsible enough to keep quiet, but we didn't expect to be able to keep it quiet for 3 months... In the end, three weeks is what the CERTs asked for, and they're the experts, so we went with that.
It seems to me like you guys have handled it well, and as others have said, you can't please everybody.
Hat tip to you guys for actually listening to the experts!
-
Maybe a mechanism to push out REALLY emergency alerts to the majority of deployments would make sense in a future release? Not the normal update notifications, but something that makes it essentially impossible to ignore if the system is being used but only comes out for situations like this. It would still not hit 100%, but it might hit 90% of the people that you had to reach out to.
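Something like this, as a purely hypothetical sketch in Python - the feed URL, key handling and alert format are all made up, nothing here exists in the product today:

```
# Hypothetical emergency-alert poller; nothing like this ships today.
# Assumes the vendor publishes a signed JSON feed at a made-up URL and
# the matching public key is baked into the release.
import json
import requests
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

FEED_URL = "https://updates.example.org/emergency-alerts.json"  # placeholder
VENDOR_PUBKEY = bytes.fromhex("00" * 32)  # placeholder, shipped with release

def critical_alerts(installed_version: str) -> list:
    reply = requests.get(FEED_URL, timeout=10)
    reply.raise_for_status()
    envelope = reply.json()

    # Only trust alerts signed with the vendor key, so a hijacked mirror
    # cannot inject fake "emergencies".
    try:
        Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY).verify(
            bytes.fromhex(envelope["signature"]),
            envelope["alerts"].encode(),
        )
    except InvalidSignature:
        return []

    alerts = json.loads(envelope["alerts"])
    # Surface only critical alerts that apply to the running version; the
    # admin UI would show these as a banner that cannot be dismissed until
    # the fixed release is installed.
    return [
        a for a in alerts
        if a.get("severity") == "critical"
        and installed_version in a.get("affected_versions", [])
    ]
```

The point is that it only needs a cron job on the box itself and one small signed file from the vendor, so even installs that never check the normal update channel would still see a true emergency.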
-
@scottalanmiller said in What do you think, did we do this right?:
Maybe a mechanism to push out REALLY emergency alerts to the majority of deployments would make sense in a future release? Not the normal update notifications, but something that makes it essentially impossible to ignore if the system is being used but only comes out for situations like this. It would still not hit 100%, but it might hit 90% of the people that you had to reach out to.
Yeah, that's an idea: separate security updates from the rest. The downside is that you can't "hide" the bad stuff from the bad guys. Right now, we release an update which has both bugfixes and security updates; then 2 weeks later we release the security advisories so admins can check if they had been pwned before patching, for example.
So you essentially have 2 weeks to update...
If the update is ONLY security stuff, it is very easy for a bad actor to quickly look at what's in there and start exploiting it.
-
@jospoortvliet said in What do you think, did we do this right?:
@scottalanmiller said in What do you think, did we do this right?:
Maybe a mechanism to push out REALLY emergency alerts to the majority of deployments would make sense in a future release? Not the normal update notifications, but something that makes it essentially impossible to ignore if the system is being used but only comes out for situations like this. It would still not hit 100%, but it might hit 90% of the people that you had to reach out to.
Yeah, that's an idea: separate security updates from the rest. The downside is that you can't "hide" the bad stuff from the bad guys. Right now, we release an update which has both bugfixes and security updates; then 2 weeks later we release the security advisories so admins can check if they had been pwned before patching, for example.
So you essentially have 2 weeks to update...
If the update is ONLY security stuff, it is very easy for a bad actor to quickly look at what's in there and start exploiting it.
Can you ever hide the bad stuff from the bad guys? Bad guys will just run the product and get any announcement that is sent out, no matter what. That's a given. But the most important thing is letting good admins know what to do; bad admins that don't update - that's their decision and risk.
-
@scottalanmiller said in What do you think, did we do this right?:
Can you ever hide the bad stuff from the bad guys? Bad guys will just run the product and get any announcement that is sent out, no matter what. That's a given. But the most important thing is letting good admins know what to do; bad admins that don't update - that's their decision and risk.
Well, not fully, of course; it is all open source. But the barrier to finding the problem is a fair bit higher when there are hundreds of changes, some of which might or might not have a security impact, versus 5 changes that you KNOW impact security. It won't stop the NSA, but it might stop a script kiddie and at least give people more time to update.
I'm not saying it is a magic bullet, but it is widely considered security best practice to do it this way.
Anyway, I'm hoping for automated minor updates to solve this in a more elegant way. We've significantly decreased the target on the backs of Nextcloud users with our security scan - only 3% outdated systems is quite a small target to put time and effort into if you're looking to do something like ransomware.
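For anyone curious what such a scan boils down to, here's a simplified Python sketch (not the scanner's actual code). It assumes the instance exposes the usual status.php endpoint, and the "latest release" numbers are just illustrative:

```
# Simplified illustration of checking whether a Nextcloud install is
# outdated; not the real scanner code. Assumes status.php is reachable.
import requests

LATEST = {"9": "9.0.55", "10": "10.0.1"}  # illustrative values only

def is_outdated(base_url: str) -> bool:
    status = requests.get(base_url.rstrip("/") + "/status.php", timeout=10).json()
    installed = status.get("versionstring") or status.get("version", "0")
    branch = installed.split(".")[0]
    newest = LATEST.get(branch)
    if newest is None:
        return True  # unknown or end-of-life branch counts as outdated

    def as_tuple(version: str):
        return tuple(int(part) for part in version.split(".") if part.isdigit())

    return as_tuple(installed) < as_tuple(newest)

# e.g. is_outdated("https://cloud.example.com")
```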