    PhlipElder

    Posts

    • RE: Backup strategy for customer data?

      @Pete-S said in Backup strategy for customer data?:

      I did some comparisons of the cost involved for disk versus tape, disregarding the differences between the media types.

      Tape is much cheaper per TB (about $11/TB) but you need to offset the cost of the tape drive/autoloader.
      Disk on the other hand will require a more expensive server with more drive bays and also requires additional disks for parity data.

      In our case I found that at 150 TB of native storage it will break even. If you have more data in backup storage than that, then tape is cheaper.
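      That break-even arithmetic can be sketched quickly. The numbers below are illustrative only: the $11/TB media figure comes from the post, while the autoloader and per-TB disk costs are assumptions picked to reproduce the ~150 TB crossover.

```python
# Rough break-even sketch for tape vs. disk backup cost.
# $11/TB tape media is from the post; the autoloader price and
# per-TB disk cost below are assumptions, not quoted figures.

def tape_cost(tb, media_per_tb=11.0, autoloader=8000.0):
    """Total cost of tape backup: fixed drive/autoloader plus media."""
    return autoloader + tb * media_per_tb

def disk_cost(tb, disk_per_tb=45.0, server_premium=2900.0):
    """Total cost of disk backup: bigger server plus drives (incl. parity overhead)."""
    return server_premium + tb * disk_per_tb

def break_even_tb(step=1):
    """Smallest TB count at which tape is no longer more expensive than disk."""
    tb = step
    while tape_cost(tb) > disk_cost(tb):
        tb += step
    return tb

print(break_even_tb())  # with these assumed numbers, tape wins past ~150 TB
```

      Past the break-even point the fixed cost of the autoloader is amortized and the cheaper media dominates, which is the post's conclusion.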

      How many tapes in the library?

      How many briefcases to take off-premises for rotations?

      Where is the brain trust to manage the tapes, their backup windows, and whether the correct tape set is in the drives?

      If the tape libraries are elsewhere then the above goes away to some degree (distance comes into play).

      posted in IT Discussion
      PhlipElder
    • RE: Backup strategy for customer data?

      @Pete-S said in Backup strategy for customer data?:

      In our case I'm thinking about two options.

      OPTION 1
      We'll put together a backup server with a large-ish disk array (maybe 100TB or so) connected with SAS to a tape autoloader. Backups go from backup clients to the disk array and when done it's all streamed to tape. The tapes are exchanged and put off-line. Each week a full backup of disks are taken off-site as well.

      To keep the networks separated as far as possible we can put the backup server on its own hardware and its own network and firewall it off from the production servers. So if production servers or VM hosts are breached the backup server is still intact. If somehow it's also compromised we have to restore everything from tape.

      OPTION 2
      We put a smaller backup array, say 10TB or so, on each physical VM host. Backups are run on each host from the production VMs to the backup VM with the backup array. Remember our VMs are running on local storage so this will not require any network traffic.

      When done, we stream the data from each backup VM to a "tape backup"-server that just basically contains the tape drive (with autoloader) and will write the data to tape. Firewall and tape handling will be the same as option 1. Since the disks with the backups are on each host, several backup servers have to be breached to lose all disk backups.

      What do you think?

      An inside job puts option 2 to rest. Let's just say there are plenty of stories about entire setups being wiped, starting with the backups, then someone hitting go on 0000 for the SANs.

    • RE: AWS Catastrophic Data Loss

      @IRJ said in AWS Catastrophic Data Loss:

      For IaaS, using a tool like terraform can help you transition from one platform to another as terraform is compatible with many cloud hosts.

      I feel like I'm back in the early 2000s when Microsoft released Small Business Server 2000 then Small Business Server 2003 with the business owner DIY message. We got a lot of calls as a result of that messaging over the years.

      Then, there was the mess created by the "IT Consultant" that didn't know their butt from a hole in the ground. We cleaned up a lot of those over the years.

      At least in the above cases we could work with some sort of box to get their data on a roll.

      Today, that possibility is virtually nil.

      That is, for one, the business owner being knowledgeable enough to navigate the spaghetti of cloud services setup to get to a point where they are secure and backed up. For another, as mentioned above, how many folks know how to set up any cloud?

      Then toss the messaging about speed and agility into the mix and we have something deadlier than the SBS messaging and its failures: we're talking orders of magnitude more folks losing their businesses as a result of one big FUBAR.

      Ever been on the back of a bike holding a case of beer while the "driver" hit 200+ KPH? I have. Once. And lived to never, ever, ever, trust an arse like that again.

    • RE: AWS Catastrophic Data Loss

      @dafyre said in AWS Catastrophic Data Loss:

      @Pete-S said in AWS Catastrophic Data Loss:

      Message From Amazon AWS :

      Update August 28, 2019 JST:

      That is how a post-mortem write up should look. It's got details, and they know within reasonable doubt what actually happened...

      It reads like Lemony Snicket's Series of Unfortunate Events, though, lol.

      It's amazing. A data centre touted as highly available, cloud only according to some marketing folks, has so many different single points of failure that can bring things down.

      I can't count the number of times HVAC "redundant" systems have been the source, or blamed, for system wide outages or outright hardware failures.

      Oh, and ATS (Automatic Transfer Switch) systems blowing out A/B/C even though the systems are supposed to be redundant.

      A/B/C failure from one power provider causing a cascade failure.

      Generator failures as mentioned here in the first article.

      Storms.

      The moral of this story is: Back Up. Back Up. Back the eff up.

    • RE: AWS Catastrophic Data Loss

      @DustinB3403 said in AWS Catastrophic Data Loss:

      @PhlipElder said in AWS Catastrophic Data Loss:

      @DustinB3403 said in AWS Catastrophic Data Loss:

      @PhlipElder said in AWS Catastrophic Data Loss:

      Most of our clients have had 100% up-time across solution sets for years and in some cases we're coming up on decades.

      Really, decades of uptime. Not a single bad RAM module, RAID failure, CPU, PSU or MB issue. No site issues (fire, earthquake, tornado etc.) in all that time.

      @PhlipElder said in AWS Catastrophic Data Loss:

      Cloud can't touch that. Period.

      You're full of it.

      I'm quite proud of our record. It's a testament to the amount of time and money put in to research, proof, and thrash the solution sets we've sold over the years. We don't sell anything we first don't proof.

      So you're using technology that is at least a decade old for every one of your customers, because by your own word you can't possibly have had the time to test anything from this year and sold it to a customer!

      Not sure how that conclusion came about but far from it.

      We've had plenty of NDAs over the years to proof with upcoming tech so that we're on the right page and current.

    • RE: AWS Catastrophic Data Loss

      @PhlipElder said in AWS Catastrophic Data Loss:

      @Dashrender said in AWS Catastrophic Data Loss:

      @DustinB3403 said in AWS Catastrophic Data Loss:

      @PhlipElder said in AWS Catastrophic Data Loss:

      @DustinB3403 said in AWS Catastrophic Data Loss:

      @PhlipElder said in AWS Catastrophic Data Loss:

      @DustinB3403 said in AWS Catastrophic Data Loss:

      @PhlipElder said in AWS Catastrophic Data Loss:

      Most of our clients have had 100% up-time across solution sets for years and in some cases we're coming up on decades.

      Really, decades of uptime. Not a single bad RAM module, RAID failure, CPU, PSU or MB issue. No site issues (fire, earthquake, tornado etc.) in all that time.

      @PhlipElder said in AWS Catastrophic Data Loss:

      Cloud can't touch that. Period.

      You're full of it.

      I'm quite proud of our record. It's a testament to the amount of time and money put in to research, proof, and thrash the solution sets we've sold over the years. We don't sell anything we first don't proof.

      So you're using technology that is at least a decade old for every one of your customers, because by your own word you can't possibly have had the time to test anything from this year and sold it to a customer!

      Not sure how that conclusion came about but far from it.

      We've had plenty of NDAs over the years to proof with upcoming tech so that we're on the right page and current.

      You've said you've tested everything that you sell. How could this possibly be true to make claims of decades' worth of up-time? Power supplies fail, switches die, disks die, MBs die, sites lose power (which people still have jobs to do - just because the lights are out. . .)

      So you're still full of it. Not to mention performing any update will eventually require a restart. Windows updates, file server migrations etc. All require some downtime.

      all of those things can fail - as long as they have an HA solution that accounts for those failures.

      As he said earlier - the customer has NEVER been impacted - that's the point of measurement.

      Thank you sir. 🙂

      This one is relatively recent:
      http://blog.mpecsinc.ca/2018/06/our-calgary-oil-gas-show-booth-slide.html

      This is one of our PoC sets: http://blog.mpecsinc.ca/2018/01/storage-spaces-direct-s2d-sizing-east.html

      Systems we built on the generation before the current one (had my wires crossed):
      http://blog.mpecsinc.ca/2017/11/intel-server-system-r2224wftzs.html

      A half Petabyte setup: https://www.youtube.com/watch?v=OKnRzEgHHKA

      At our peak working with these we had three of them here in the shop: https://www.youtube.com/watch?v=26U6pDsdz5M&t=321s

      Drove the neighbours crazy with the jet engine sounds coming out of here. Plenty of Ear Defenders to be had. 😉

      That help?

      EDIT: Any guesses on the cost for the four node S2D setup with Mellanox 40GbE RDMA dual switch fabric?

      This post is a bit dated. But it states clearly, and concisely, exactly where we're at as far as investing in our folks here:

      http://blog.mpecsinc.ca/2016/11/whats-in-lab-profit.html

      My attitude is simple: If we ain't learning we're effing sh#t up.

    • RE: Windows Server 2012 Remote Web Access not connecting and RDS services showing error 23005

      @Osvaldo said in Windows Server 2012 Remote Web Access not connecting and RDS services showing error 23005:

      Got this error setting up Remote Web Access on Windows Server 2012 with RDW for a user (The "user" on client computer "192.168.0.31" met connection authorization policy and resource authorization policy requirements, but could not connect to resource "computer name". Connection protocol used: "HTTP". The following error occurred: "23005"). Does anybody know anything about this kind of error?

      Don't know if this is related, but this seems to have started after the computers in the clinic in question were updated from Windows 7 to Windows 10. Seems likely that this is the issue.

      The Remote Web Access system is used to access physical desktops, not a central RDS server.

      Make sure the Group Policy managing the Windows Firewall is set to allow all Remote Desktop inbound rule sets that are available.

      Make sure the Group Policy Central Store on the DC(s) has been updated with the most recent PolicyDefinition folders and files.

      Make sure the firewall is left on with logging set to ENABLED so that troubleshooting the firewall is a one-step deal of looking at the log for the word DROP.

      I'm thinking RDP TCP/UDP is being dropped at the endpoint.
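      That one-step log check can even be scripted. A minimal sketch below scans firewall log lines (W3C format, as written to the default pfirewall.log) for DROP entries hitting the RDP port; the field layout and port 3389 are the usual Windows defaults, and the sample lines are made up for illustration.

```python
# Scan Windows Firewall log lines for DROP entries hitting RDP (port 3389).
# Default log location is %SystemRoot%\System32\LogFiles\Firewall\pfirewall.log;
# logging must be set to ENABLED in the firewall profile for entries to appear.

def find_rdp_drops(lines, port="3389"):
    """Return log lines where the action is DROP and the destination port is RDP."""
    hits = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip W3C header/comment lines and blanks
        fields = line.split()
        # W3C fields: date time action protocol src-ip dst-ip src-port dst-port ...
        if len(fields) >= 8 and fields[2] == "DROP" and fields[7] == port:
            hits.append(line.rstrip())
    return hits

# Illustrative sample lines in the default log format:
sample = [
    "#Fields: date time action protocol src-ip dst-ip src-port dst-port",
    "2019-11-12 09:15:01 DROP TCP 192.168.0.31 192.168.0.10 51234 3389",
    "2019-11-12 09:15:02 ALLOW TCP 192.168.0.31 192.168.0.10 51235 443",
]
print(find_rdp_drops(sample))
```

      If the endpoint is dropping RDP, the dropped 3389 connections from the client IP show up immediately.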

    • RE: Experience with off-brand SAS cables?

      @Pete-S said in Experience with off-brand SAS cables?:

      Does anyone have experience using off-brand external SAS cables like the one below?
      https://www.amazon.com/CableCreation-External-26pin-SFF-8088-Cable/dp/B013G4F3A8

      I mean we usually wouldn't look for an HPE or Dell branded CAT6 cable. Is this any different?

      We've used a lot of AXIOM "equivalent" cables over the years in both SAS and Network settings. We've not had any issues with them.

    • RE: Installing Windows 10 without a Microsoft account

      @Obsolesce said in Installing Windows 10 without a Microsoft account:

      @Dashrender said in Installing Windows 10 without a Microsoft account:

      @PhlipElder said in Installing Windows 10 without a Microsoft account:

      @Dashrender said in Installing Windows 10 without a Microsoft account:

      @scottalanmiller said in Installing Windows 10 without a Microsoft account:

      I use this guide to walk customers through setting up a machine nearly weekly.

      I'm curious why you push them away from using a MS account?

      If the machine is pulled in to Azure AD by signing in with a MS AAD account, one cannot use that account to RDP into that endpoint. Something be broken there.

      Better to set up a local account and bind the Azure AD/MS Account in the OS settings.

      Interesting - didn't know that.

      So what - you setup a local account, then under that local account, join it to an MS AAD, then login as the MS AAD account? Then you can RDP into the computer using the MS AAD account?

      You can RDP into an AAD joined Win10 PC with an AAD account. I do it all the time.

      Perhaps that account isn't added to the local Administrators group, or the one that allows RDP.

      Log on process?

      Domain\UserName & Password

      or

      [email protected] AAD account?

      For standalone non-domain joined OS VMs/PCs [email protected] AAD does not work.

    • RE: RDS 2019 Setup and RDS License Role

      @pmoncho said in RDS 2019 Setup and RDS License Role:

      @PhlipElder said in RDS 2019 Setup and RDS License Role:

      @pmoncho

      @pmoncho said in RDS 2019 Setup and RDS License Role:

      Working on replacing two 2k8R2 TS servers with RDS 2019. These two servers host a LOB application with about 50 users.

      Due to a few issues with our current setup and LOB app limitations, my plan is to create two RDS servers, each holding its own RDSH, RDCB, and RDWA roles. Each server will have its own collection (containing only itself).

      My question is, can these two servers still connect to one RDS License server (which will be located on a DC)? (I don't see why not but figured I'd check)

      I know this is convoluted but it will change in the future to a single RDSH server with all roles as we will be upgrading to the new version of our LOB app connecting to a backend SQL Server.

      Question answered, but curious, why not have Broker/Gateway/Web roles on their own server/VM then have two collections set up along with their respective session hosts configured on that VM?

      That would make things like user profile disks (FSLogix) and load balancing, and certificates, a lot simpler to work with along with getting Single Sign-On configured in AD.

      I was not planning on using the RDGW as my users come in via an SSL VPN.

      As for the rest, my current issues are: Windows licensing costs (keeping them down as I keep trying to move services off of Windows); going from 2K8R2 to 2019, we will need to use both the old version and the new version at the same time; limitations in the old version of our LOB app keep my configuration options very limited without dorking up the current data; and trying to do this with minimal distraction to my in-house users and our remote clients from a tech side.

      Just moving to the new version of our LOB app is going to be a wallop to our users.

      This convoluted setup is temporary until the old LOB version dies. It's the best I could come up with based on my research, RDS requirements, licensing costs (Server, CALs, RDS CALs, etc.), and LOB and user requirements/limitations.

      RDGW provides a layer of protection whether via HTTPS 443 port forwarding, which is the only way to publish RDS Internet facing IMNSHO, or using a VPN of any sort.

      Using a single Broker setup would simplify management and ease of connection for users.

      One option would be to use one collection session host setup for the user's dedicated desktop and then publish the other session host's software installed as RemoteApps back to those users. That keeps things relatively clean in the user's local profiles.

      Using User Profile Disks with FSLogix on a network share makes it even easier.

    • RE: RDS 2019 Setup and RDS License Role

      @pmoncho said in RDS 2019 Setup and RDS License Role:

      @PhlipElder said in RDS 2019 Setup and RDS License Role:

      @wrx7m said in RDS 2019 Setup and RDS License Role:

      @PhlipElder said in RDS 2019 Setup and RDS License Role:

      Archiving is simpler for users that leave the org. Archive the .VHDX file.
      Profile choke fix: Rename the .VHDX file to .OLD, log the user on, migrate their data. Done.

      Is this specific to Hyper-V or is that even related to the way this works?

      The User Profile Disk is a dynamic .VHDX file that gets created in the designated storage location.

      It can be set up with a storage limit. 5GB, 10GB, or more. Whatever maximum user GB size may be needed.
      ^^^ This is another reason to use UPDs/FSLogix. Storage sprawl.

      UPD TIP: Once the RDS setup is complete and the TEMPLATE.VHDX is created in the designated location, mount the TEMPLATE.VHDX file, and shrink the partition down to a "starter size" GB, and dismount it.

      Example: We have a setup where we deployed 30GB maximum UPDs.
      We edit the template to shrink the partition to 10GB. That's all a new user gets when they log on the first time. If they hit a warning for low storage down the road, we can do one of two things:
      1: Clean-up your mess
      2: Log them off, mount the .VHDX, expand the partition by 5GB or more, dismount the .VHDX, and have the user log on. They get instant storage increases.

      Good info. You gave me much to think about.

      Are UPDs still worth it if I expect a user to use <200MB? Everything our users need on the RDSH server is in the LOB app.

      I think so. After working with them for a while, it becomes fairly clear as to the "why".

    • Want some Wasabi with that Azure Cloud?

      https://wasabi-support.zendesk.com/hc/en-us/articles/360035162251-Statement-on-Wasabi-Service-Degradation-Over-September-October-2019

      Lots of pain there folks.

      https://www.zdnet.com/article/microsoft-azure-customers-reporting-hitting-virtual-machine-limits-in-u-s-east-regions/
      Azure is hitting hardware constraints. Not the first time and won't be the last.

    • RE: 2-in-1 laptop for a C-Level

      @gjacobse said in 2-in-1 laptop for a C-Level:

      @PhlipElder

      Yea - I suppose the MS Surface would be okay. But I recommend NOT going with the Surface. I have seen more than my share of Surfaces with a swollen battery because of charge failures. Since the MS Surface is not serviceable, it's either scrap or warrantied to MS with an exchange.

      In the case of a swap - plan on data backup, and wipe before releasing it.

      Yeah, I've had more than my fair share of issues with Surface Pro.

      In hindsight, I've become a lot more tolerant of issues with my hardware than I should be. :S

    • RE: ConnectWise Zero Day?

      As an FYI: Chatting with a colleague that uses CW, they indicated that 2FA on all endpoints reduces the risk and that the vuln may have been related to MySQL being published to the Internet.

    • SmarterASP.Net - Ransomware Encrypted

      https://www.zdnet.com/article/major-asp-net-hosting-provider-infected-by-ransomware/

      Sigh

    • RE: Windows 10 1909 is Official

      @black3dynamite said in Windows 10 1909 is Official:

      @PhlipElder said in Windows 10 1909 is Official:

      @black3dynamite said in Windows 10 1909 is Official:

      Windows 10 1909 x64 is 5GB now.

      Is that the Install.WIM file you are talking about?

      The iso file.

      VLSC:
      (screenshot of the VLSC download page)

    • RE: RDS 2019 Setup and RDS License Role

      @wrx7m said in RDS 2019 Setup and RDS License Role:

      When using UPD, is there any way to access various users' profile folders and files from the RDS server file system?
      Example:
      C:\Users\Bob\Desktop

      Edit: I found that I can path to it via UNC (\\server\c$\users\Bob\Desktop), but get a permissions error when I go locally, from C:\Users\Bob\Desktop. Also, the Bob folder is only a shortcut (LNK file) in the users directory.

      The path is there in the form of a symbolic link. So, no. It's one way to limit exposure to ransomware.

      If there is a need, the user can be logged off and one can mount the .VHDX file to gain access to whatever is needed.

    • RE: Password manager for ordinary users?

      KeePass. Been using it for years.

      https://keepass.info/

      It's simple, great for organizing, and the auto-type just works.

    • RE: Need help trouble shooting GPO.

      @srdennis said in Need help trouble shooting GPO.:

      OMG!!!!! It worked!!! Thank you so much Obsolesce. I cannot believe that I didn't understand that aspect of how this all works. So if I were to put a user into this test OU and apply the test GPO that has a user GPO in it, then it will get applied?

      AD/GP best practice is to separate out the OU paths. One for Computer objects and another for User objects.

      Group Policy operates similarly to Cascading Style Sheets tailoring the way a web site looks: the GPO closest to the object wins, with few exceptions.

      Never edit the Default Domain Policy or Default Domain Controllers Policy. Always create a new GPO and link it to the required OU.

      GPOs for Computer objects should have the User section disabled, and likewise GPOs for User objects should have the Computer section disabled.

      GPResult /H C:\Temp\GPResults.html

      That's how to find out the what/where/when for GPOs applying. Computer GPOs will only show up if the command is run via an elevated shell (CMD). The Temp directory needs to exist.
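      The "closest wins" behaviour above can be pictured with a toy model. This is purely illustrative (real Group Policy adds enforced links, block-inheritance, loopback processing, and so on), but it captures the CSS-like cascade: GPOs apply in order from farthest to closest, and a closer link overrides a farther one. The scope names and settings below are made up.

```python
# Toy model of Group Policy precedence (LSDOU: Local, Site, Domain, OU).
# Purely illustrative; real GP also has enforced links, block inheritance, etc.

def effective_settings(gpo_chain):
    """Apply GPOs from farthest to closest; later (closer) values win, like CSS."""
    result = {}
    for scope, settings in gpo_chain:
        result.update(settings)  # a closer GPO overrides the farther ones
    return result

# Hypothetical chain: a domain-linked GPO, then an OU GPO closest to the object.
chain = [
    ("Domain", {"screensaver_timeout": 900, "rdp_enabled": False}),
    ("OU:Workstations", {"rdp_enabled": True}),  # closest link to the computer
]
print(effective_settings(chain))  # rdp_enabled from the OU link wins
```

      A GPResult report is essentially the real-world version of this merge, showing which link each winning setting came from.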

    • RE: How can we recover data from Hard Drives were on RAID 10 without controller?

      GetDataBack with RAID Reconstructor is a utility set we've used to recover data from a set of drives that were originally in a NAS box.
