Removing shared storage from VMWare environment
-
@donaldlandru said:
@dafyre said:
I would suggest migrating to O365 and getting your Exchange and Sharepoint servers shut down being the first step, even before upgrading the other VM OSes.
Are you able to run your entire infrastructure on a single server at this point? Say if VMH-OPS1 explodes or has a meltdown?
Basis for the O365 first? Curious if there is benefit or other reasoning?
Yes, the system was designed to hold both servers work load if required. Neither server is currently more than 30-40% utilized.
Which leaves us with even more leftover resources after the 365 migration. Which we may want to use for a 10 user RDS environment.
This gets your data off your shared storage. It also is one less thing to migrate when the time comes.
-
@donaldlandru said:
- Active Directory/DNS (2008R2)- 2 servers (one on each host)
- DHCP - 1 server
- Exchange 2010 Standard - 1 server
- SharePoint 2010 Foundation - 1 server
- Windows File server (2008R2) 1.5TB data - 1 server
- SQL Server 2008 R2 (SharePoint,VMware) - 1 server
- dozen or so other low IO VMs for business applications, mostly CentOS
Most of this looks like it can be removed. Go to Office 365 and you remove the Exchange, SharePoint, and SQL servers. vCenter can run on SQL Server Express unless you have more than 5 VM hosts.
Then you just have your DCs and a file server to worry about. Since they are Server 2008, I'd just build new ones from scratch.
-
@dafyre and @Jason - I see where you are coming from and this makes sense. What if I do a hybrid approach to this.
Steps:
1. Refresh VMH-OPS1
a. Migrate all VMs to VMH-OPS2
b. Shut down VMH-OPS1
c. Remove SAS drives, replace with 8 × SSD and an internal SD card for the OS
d. Create RAID 5 on the P420i
e. Install VMware on the SD card, rejoin to cluster
2. Refresh VMH-OPS2
a. Migrate all VMs to VMH-OPS1
b. Shut down VMH-OPS2
c. Remove SAS drives, replace with 8 × SSD and an internal SD card for the OS
d. Create RAID 5 on the P420i
e. Install VMware on the SD card, rejoin to cluster
f. Rebalance VMs across the cluster
3. Install and configure new Windows Server 2012 R2 domain controllers
4. Remove Windows Server 2008 R2 domain controllers
5. Complete any other upgrades
Steps 1 and 2 could be done in a few hours and give me something to do before our Office 365 deployment, which is currently looking like a Q2 activity. I could then work on any remaining tasks in parallel with the Office 365 migration. This doesn't cause me to migrate any data unnecessarily, every VM I move gets the immediate bonus of better disk IO and no more IPOD, and I can do it sooner since I already have the budget for a storage upgrade this year.
Thoughts?
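As a sanity check on step 1c/1d, here is a quick sketch of what an 8-SSD RAID 5 array yields in usable capacity. The drive size is an assumption for illustration; the thread never states which SSDs are being bought.

```python
def raid5_usable_gb(num_drives: int, drive_size_gb: float) -> float:
    """RAID 5 gives up one drive's worth of capacity to parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_size_gb

# Example: 8 x 480 GB SSDs (hypothetical size, not from the thread)
print(raid5_usable_gb(8, 480))  # 3360.0 GB usable, one drive lost to parity
```

Whatever size is chosen, one drive's capacity goes to parity, so 8 drives net the capacity of 7.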
-
Since your current solution is designed to be able to run everything on a single server, after you migrate most of that load to O365 I don't see why you wouldn't retire the second server completely.
By running two servers you have:
twice the cooling cost
twice the number of servers to manage/update
twice the power consumption
twice the amount of UPS capacity
And best of all, you'd have twice the storage to purchase and an extra 10 Gb card to buy.
According to Scott, these servers have something like 4 hours of downtime every 7-8 years, on average. Unless you really need to lower that downtime, the expense of those drives and everything else I listed is pretty high.
-
You mention that you're having performance issues today - do you know where those issues are coming from? Disk IO not enough? Production network not fast enough, etc?
-
@Dashrender said:
Since your current solution is designed to be able to run everything on a single server, after you migrate most of that load to O365 I don't see why you wouldn't retire the second server completely.
By running two servers you have:
twice the cooling cost
twice the number of servers to manage/update
twice the power consumption
twice the amount of UPS capacity
And best of all, you'd have twice the storage to purchase and an extra 10 Gb card to buy.
According to Scott, these servers have something like 4 hours of downtime every 7-8 years, on average. Unless you really need to lower that downtime, the expense of those drives and everything else I listed is pretty high.
Interesting thought. It is really 1 of 7 servers in this location.
So a few bullet points to support the multiple servers:
- We are a 24/7 organization we have users in multiple locations working at anytime throughout the day. I will still need to service application and workstation authentication.
- Being 24/7 means I can't drop the whole thing for maintenance.
- The time managing 2-3 extra virtual machines is negligible
- 300 watts is what this single server consumes -- the cost that adds, in exchange for being able to service everything without maintenance downtime, is again in my opinion negligible
- The business is still out on whether or not same sign-on is sufficient for Office 365 vs single sign-on. I think the same sign-on is sufficient, but if the business wants single sign-on then ADFS will need to be deployed and available to service O365 login requests.
I would agree with your solution in a smaller, single-location business -- it just wouldn't jibe with the way we operate.
-
@Dashrender said:
You mention that you're having performance issues today - do you know where those issues are coming from? Disk IO not enough? Production network not fast enough, etc?
It is definitely the storage network that is slowing us down. I am sharing 8 SATA spindles across too many virtual machines. Plus MPIO on the 1 Gb side gets saturated quite frequently, but upgrading the controllers in the P2000 to 10 Gb iSCSI costs more than the SSDs I referenced above.
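Some napkin math on why the 1 Gb MPIO paths are a ceiling. The efficiency factor is an assumption; real iSCSI throughput lands below the raw link rate once protocol overhead is counted.

```python
def gbps_to_mbytes_per_s(gbps: float, efficiency: float = 1.0) -> float:
    """Convert a link speed in gigabits/s to megabytes/s (1 Gb = 1000 Mb, 8 bits/byte)."""
    return gbps * 1000 / 8 * efficiency

# Two 1 Gb paths with MPIO, ignoring protocol overhead (real iSCSI gets less):
mpio_ceiling = 2 * gbps_to_mbytes_per_s(1)
print(mpio_ceiling)  # 250.0 MB/s best case across both paths
```

Even a single modern SATA SSD can sustain sequential reads above that 250 MB/s ceiling, which is part of why local SSD storage looks attractive versus upgrading the P2000 controllers.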
-
@donaldlandru said:
Basis for the O365 first? Curious if there is benefit or other reasoning?
This will free up resources for the other VMs so that you're not running too close to the max with everything on one host.
Yes, the system was designed to hold both servers work load if required. Neither server is currently more than 30-40% utilized.
Okay, so everything on one host isn't such a big concern.
-
Keep in mind that with your VMware license you should be able to do Storage vMotion, etc., from the shared storage to the local storage on VMH-OPS1 after it gets rebuilt.
-
@donaldlandru said:
- We are a 24/7 organization we have users in multiple locations working at anytime throughout the day. I will still need to service application and workstation authentication.
Being 24/7 doesn't mean you can't afford downtime. @scottalanmiller has a lot of posts on this. It's about how much that downtime costs you, not about how often you work. We are a Fortune 100 and we have downtime. Heck, we have pretty regular momentary blips (once a month or so) with our Exchange systems.
-
@Jason said:
@donaldlandru said:
- We are a 24/7 organization we have users in multiple locations working at anytime throughout the day. I will still need to service application and workstation authentication.
Being 24/7 doesn't mean you can't afford downtime. @scottalanmiller has a lot of posts on this. It's about how much that downtime costs you, not about how often you work. We are a Fortune 100 and we have downtime. Heck, we have pretty regular momentary blips (once a month or so) with our Exchange systems.
Let's look at it from a different angle
- The hardware is already owned and only 3 years old (minus the $1600 for SSDs)
- The software is already owned
- The "data center" is already built out and overcooled
To me, saying let's discard a server we already own and license, in favor of creating outages for maintenance, does not make any sense.
-
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
-
@donaldlandru said:
To me, saying let's discard a server we already own and license, in favor of creating outages for maintenance, does not make any sense.
That might be true, but let's do a little napkin math...
- Why is it overcooled? That should be fixed regardless of anything else. Just wasting money.
- If you add heat, you still cool more, regardless of how much you cool now, correct? So that is more money.
- The power draw costs money.
- How much downtime does this prevent?
Add those together and see if it makes sense.
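A rough way to add those together. The electricity rate and the cooling overhead factor below are assumptions for illustration, not figures from the thread; plug in real numbers to see whether the second host pays for itself.

```python
def annual_power_cost(watts: float,
                      dollars_per_kwh: float = 0.10,    # assumed rate
                      cooling_overhead: float = 0.5) -> float:
    """Annual cost of a load running 24/7, with cooling assumed to add
    a fraction of the IT power on top (0.5 = 50% extra for cooling)."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * dollars_per_kwh * (1 + cooling_overhead)

# The thread's 300 W server, at the assumed rate and cooling overhead:
print(round(annual_power_cost(300), 2))  # ~394.2 dollars per year
```

Against that annual figure you would weigh the downtime the second server actually prevents.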
-
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Assuming a non-DFSR file server, that would be assisted by this as well.
@donaldlandru, you said you have 7 servers. Can't you install a DC on one of those? Are any of those virtualized, or are they all bare metal?
-
@Dashrender said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Assuming a non-DFSR file server, that would be assisted by this as well.
@donaldlandru, you said you have 7 servers. Can't you install a DC on one of those? Are any of those virtualized, or are they all bare metal?
DFSR would do it on a single physical host for software upgrades, too.
-
@scottalanmiller said:
DFSR would do it on a single physical host for software upgrades, too.
It would? How? If DFSR is only on one VM (or two VMs on the same host) and that host goes down (for maintenance, failure, whatever), wouldn't that data all be unavailable?
-
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure, versus spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure, versus spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
Exactly! That's why I mentioned the 4 hours of anticipated downtime over 7-8 years. If one server is expected to have only 4 hours of downtime over 7-8 years, is it worth spending $1600 plus heating/cooling/power/UPS, etc., to prevent those 4 hours?
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure, versus spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
$1600 up front plus $200 a month or whatever. That adds up over a five-year span: $200 for power and cooling is $2,400 a year, or $12,000 over five years. That's a total of $13,600, not including any effort from you or licensing or anything.
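The five-year math above, checked, and extended one step to the cost per avoided downtime hour using Scott's ~4-hour estimate from earlier in the thread:

```python
# Dashrender's five-year napkin math from the thread.
upfront = 1600          # SSDs
monthly = 200           # power/cooling estimate from the thread
years = 5
total = upfront + monthly * 12 * years
print(total)            # 13600

# Against the ~4 hours of expected downtime over 7-8 years cited earlier:
avoided_hours = 4
print(total / avoided_hours)  # 3400.0 dollars per avoided hour of downtime
```

Framed that way, the business question becomes: does an hour of outage on these VMs cost more or less than $3,400?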
-
@Dashrender said:
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
- Being 24/7 means I can't drop the whole thing for maintenance.
How much maintenance do you do? What is the annual downtime caused by VMware? Only VMware and hardware maintenance is assisted by having the second server.
Now there is an aha moment, and it presents a question to bring back to the business: how much downtime is acceptable due to server hardware failure, versus spending an additional $1600 to eliminate all but a dual-server failure from impacting the services provided by these virtual machines (other disasters of course not included)?
Exactly! That's why I mentioned the 4 hours of anticipated downtime over 7-8 years. If one server is expected to have only 4 hours of downtime over 7-8 years, is it worth spending $1600 plus heating/cooling/power/UPS, etc., to prevent those 4 hours?
The heating/cooling on this is probably an atypical situation: the building provides dedicated cooling but does not pass the cost through to our organization, since it is included in the base lease. Even on an estimated-usage basis, our lease is for 10 years and was just signed this year.
The UPS and power do come into play, but at 200 watts (the one server in question) it is a small piece of the pie.