@dustinb3403 said in SQL Virtulization:
They will be separate VM's
Got the okay to proceed with converting it to a VM. With a little more memory in the system, it will be running the SQL Server and a DC soon.
We ran across a server at a new client's that was installed in the last 6 months to host their MS Dynamics install. The system was set up as a physical server with 3 RAID 10 arrays consisting of 4 drives each. The 3 arrays are broken up for the DB, logs, and backups.
We want to convert the server to a VM and configure one large RAID 10 with all 12 drives. When doing this, is it still necessary to have 3 VHDXs if they are all going to be on the same array?
Why are vendors still installing on physical systems?
@scottalanmiller said in Ubiquiti Frontrow:
@nets said in Ubiquiti Frontrow:
Talk about mission creep.
Creepy mission?
Well, for a company most consider a top networking vendor, developing wearables seems a bit odd.
We have a client with a small 2 session host Azure RDS deployment. Everything is fine except for O365: their users are being asked to activate Office at least once a week. This deployment will grow, and we are concerned about this becoming a large burden. We are using the shared computer activation method and would be okay with activating once a month globally, but weekly is ridiculous.
We thought about purchasing volume licensing for these users, but it seems like overkill for them to be licensed twice.
Any suggestions? Anyone else have a similar deployment?
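For reference, here is a minimal sketch of how shared computer activation is declared in the Office Deployment Tool's configuration.xml; the product ID, channel, and language below are assumptions for illustration, so adjust them to the tenant's actual SKU:

```xml
<!-- Sketch of an ODT configuration.xml for RDS session hosts.
     SharedComputerLicensing=1 enables shared computer activation;
     the product ID and channel are assumed values, not from the thread. -->
<Configuration>
  <Add OfficeClientEdition="64" Channel="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Property Name="SharedComputerLicensing" Value="1" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```

Assuming SCA is set this way, each user's licensing token is normally valid for around 30 days and renews automatically, so weekly prompts on a two-host farm usually mean the token cache under the user profile is being discarded between sessions; non-persistent or reset profiles are a common culprit.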
70-100 user RDS deployment for remote and in-office employees to access multiple LOB applications.
Local infrastructure won't work due to the number of satellite locations, remote workers, and infrastructure limitations.
There would need to be 3-4 cloud-based Windows application servers and a SQL server.
Has anyone here used them for an Azure RDS deployment?
Any other suggestions for a 100 user build out?
If you're not using the other features, you can renew just the firewall portion for around half of the above cost. It would save you from ripping it out and starting over. But if you never fully implemented the firewall, then it might not be hard to replace.
If you do decide to replace it let me know, I would possibly buy it off of you.
For migration purposes, I see no reason to leave Azure; staying keeps the speeds up and the migration window short. After this is done, we can work on moving them to a cheaper, more stable option.
Yes, I see no reason to leave Azure.
If we can find out which data center it's in, we will fire up the new VM in that data center as well.
I know the data store is 500 GB in size, but I don't think they are using that much.
@scottalanmiller said in Azure Migration:
If that is correct, then you need a tool like StorageCraft or Veeam Endpoint Protection that will do an agent-based full system backup of the VM from inside of the VM itself. Then use that to restore to the new VM.
This was my plan. We have a StorageCraft subscription, so this is what we'll plan on doing. Now if we can figure out how to do a HeadStart Restore to Azure, we could do it with almost no downtime.
We are anticipating this being difficult due to lack of cooperation from the old provider.
We are going to be moving an Azure VM. We will not have access to the Azure dashboard the VM is set up under. What is the easiest way to back it up and recreate the VM in Azure?
We have a couple of Cisco SG300s that we will be connecting to our main network via fiber. There is a 500-foot run and an 800-foot run of single-mode that we will need GBICs for. It looks like there are a couple of gigabit single-mode modules available. Any suggestions on which ones we should use?
We are getting ready to build a backup server based on an R720. Should we look for a 2.5" chassis or a 3.5" chassis?
I'm thinking the density of the 2.5" chassis would be beneficial for future expansion.
This makes me want to move everyone to Meraki or Ubiquiti. It's so much cleaner.