@art_of_shred said:
This is so girly-looking.
@Dashrender said:
The names are in the lower right of the post instead of next to the name.
I did hit F5.. I will now close my browser.
and it seems faster already... it was always slow for me before.
It was flat out painful before.
It depends on the system. Through the use of test pools, VDI change management's pretty much baked-in. With most virtualized environments, key VMs can be cloned into a test environment.
For me, Sophos UTM includes VPN. Does your edge firewall include VPN? Most business-grade devices do.
@Nic said:
@Nara said:
And Webroot's browser plugin's still giving it a yellow.
I put in a request to have that changed.
And it's green!
As for why someone would use a DAG: it's a requirement if you want to hit 99%+ uptime without data loss. If you're down for a day recovering a corrupted database, you're at roughly 97% for that month. From a business perspective, that's a day that workers couldn't take new customer orders or communicate with vendors effectively. The other option is restoring from the previous backup, but that means losing all email received since that backup ran. There are services that can replay message transactions, but they incur additional cost, typically billed monthly.
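To put numbers on those uptime figures: a full day of downtime is about 3.3% of a 30-day month (so roughly the 97% mentioned above), but only about 0.3% of a year. A quick sanity check:

```python
def uptime_pct(downtime_hours, period_hours):
    """Percentage of the period the service was available."""
    return 100.0 * (1 - downtime_hours / period_hours)

# One full day down, measured against a 30-day month:
monthly = uptime_pct(24, 30 * 24)    # ~96.7%

# The same single-day outage measured against a full year:
yearly = uptime_pct(24, 365 * 24)    # ~99.7%
```

So a single day of recovery time blows the monthly number well below 99%, which is the gap a DAG's automatic failover is there to close.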
@scottalanmiller said:
And these days, when planning for three years out, storage gets to be a big concern. Where are people storing all of the email? If it is like Office 365, people get 25GB+ per person. That adds up fast. Obviously not everyone uses all of that, but some people use far more. Typically email usage is quite high and gets higher every year. What will storage be like in two or three years? That could be expensive to plan for, both to store and to back up.
Usage of retention policies tends to cut down total email storage for well-established organizations.
@scottalanmiller said:
Now that is a pretty nice setup, a minimum for how Exchange is meant to be run, but some of the things missing from that pricing:
- Backups. This can be a pretty expensive additional component depending on the quality of those backups.
- Ongoing support. You might not do much, but everything that you do adds up over the years. Doesn't take much to cost a lot.
- Mail bagging. Even if you get it down to $0.80/mo it is a huge factor, and if it is $2.35/mo it's hugemongous.
Backups for Exchange with a DAG don't need to be brick-level. They're more about restoring the entire server in the rare case of database corruption that failover couldn't mitigate. Most maintenance is automated right out of the box, and almost all of the rest can be automated afterwards. There shouldn't be more than 15 hours of annual maintenance.
The thing that could jack up the price of onsite Exchange is if there's only one site. Getting a Colo set up or using hosted VMs will incur additional monthly costs.
I'm not sure how much floor space you have to work with, but you could use a rack-mount unit inside a harsh-environment rack. The rack itself has all the filters and such, allowing you to run regular equipment.
It isn't a service pack, so I wouldn't expect it to show up on the system properties page.
@scottalanmiller said:
The real Windstream debacle was when someone purchased service from them.
Unfortunately for some, it's their LEC.
@Minion-Queen said:
NTG is looking for 3 willing interns to work in our lab environment.
As opposed to an unwilling intern?
Did you check your local community college? They may have a degree path that requires an internship.
You could use Cubby by LogMeIn and make the user's desktop, documents, etc. all cubbies.
@NetworkNerd said:
@Nic said:
So did you find out what happened? That reminds me of my interview with Spiceworks story. I got stuck in Houston due to weather, and they couldn't guarantee that I'd get a flight the next day. I went and rented a car and drove instead, getting in at 3am for an interview the next morning. I think that story helped seal the deal.
Yeah, I remember reading about your story - a very good one indeed. It certainly showed how badly you wanted the job and that making the interview was important to you.
In this case, I actually called the guy yesterday afternoon at 3:40 PM to see if all was well or if there was some kind of miscommunication with HR about the time (since we rescheduled once). I was only able to leave a message. There has been no word today...at all. I've been encouraged to call one more time to make sure the fellow is ok / at least find out what happened.
Have you heard anything? I have popcorn on standby.
@scottalanmiller said:
We never use dedupe outside of the backup system. Good file system management gives you most of that value.
And for a VM you'll get more from dedupe at the storage layer than inside a single VM.
2012 dedupe works well. It is a good product. It probably makes the most sense on a very large file server.
Storage-level deduplication can be quite useful. However, with the modern push toward local storage, shared deduplicating storage isn't really seen except for in rather large environments.
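For anyone curious what block-level dedupe is doing conceptually, here's a minimal sketch: hash each block, store every unique block once, and keep a per-file "recipe" of hashes to reassemble the data. This is an illustration only; the 2012 engine actually uses variable-size chunking and a dedicated chunk store rather than the naive fixed-size blocks shown here.

```python
import hashlib

def dedupe(data, block_size=4096):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): store maps block hash -> bytes, and recipe
    is the ordered list of hashes needed to rebuild the original data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks stored only once
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original bytes from the store and recipe."""
    return b"".join(store[d] for d in recipe)

# Data with lots of repetition dedupes well:
data = b"A" * 8192 + b"B" * 4096 + b"A" * 8192
store, recipe = dedupe(data)
# 5 blocks referenced in the recipe, but only 2 unique blocks stored.
```

The win is entirely dependent on how much duplicate data exists, which is why it shines on large general-purpose file servers and matters less inside a single VM.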
@scottalanmiller said:
I remember when that was all that there was. I stopped using that long, long ago though. DynDNS doesn't seem to be too important anymore, but I'm not sure why.
It was often used to host mail servers from residential or business dynamic IP addresses. As things shift to stable, often hosted, infrastructure, the need for dynamically updating external DNS diminishes. It still has uses for low-end failover scenarios.
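To make the dynamic-update part concrete, here's a minimal sketch of the kind of client a dynamic DNS setup relies on: notice the public IP changed, then hit the provider's update endpoint. The endpoint URL and its query parameters are hypothetical stand-ins; real services each define their own update API and authentication.

```python
import urllib.request

def needs_update(current_ip, last_ip):
    """Return True when the public IP changed and DNS should be refreshed."""
    return current_ip != last_ip

def update_record(update_url, hostname, ip):
    """Push the new IP to a (hypothetical) dynamic DNS update endpoint.

    Real providers vary in URL shape and auth; this just shows the flow.
    """
    req = urllib.request.Request(f"{update_url}?hostname={hostname}&myip={ip}")
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

A client like this runs on a schedule behind the dynamic connection, which is why the whole approach fades once infrastructure moves to stable, hosted IPs.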
Is this lab for hardware, software, networking, or raw research?
I'm curious how this may/may not impact their phone launch.