
    Shuey

    @Shuey

    I've been tinkering with computers since 1984 and have been working professionally in the field of IT since 2005. I love learning about new technology and always enjoy figuring out how things work. People also say that I'm a freak about detail and organization - I take that as a compliment ;)

    I'm commonly known as "Shuey". I'm known for my passionate obsession with all things that interest me; particularly computers, video games (like Tetris), music, art, etc.

    Reputation 103
    Profile views 981
    Posts 225
    Followers 1
    Following 1
    Joined
    Last Online
    Age 53
    Website www.youtube.com/user/Shuey187/playlists
    Location Florida


    Best posts made by Shuey

    • Invalid Drive Movement from HP SmartArray P411 RAID Controller with StorageWorks MSA60

      Due to Hurricane Matthew, our company shut down all servers for two days. One of the servers was an ESXi host with an attached HP StorageWorks MSA60.

      When we logged into the vSphere client, we noticed that none of our guest VMs are available (they're all listed as "inaccessible").  And when I look at the hardware status in vSphere, the array controller and all attached drives appear as "Normal", but the drives all show up as "unconfigured disk".

      We rebooted the server and tried going into the RAID config utility to see what things look like from there, but we received the following message:

      "An invalid drive movement was reported during POST. Modifications to the array configuration following an invalid drive movement will result in loss of old configuration information and contents of the original logical drives".

      Needless to say, we're very confused by this because nothing was "moved"; nothing changed.  We simply powered up the MSA and the server, and have been having this issue ever since.

      I have two main questions/concerns:

      1. Since we did nothing more than power the devices off and back on, what could've caused this to happen?  I of course have the option to rebuild the array and start over, but I'm leery about the possibility of this happening again (especially since I have no idea what caused it).

      2. Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

      posted in IT Discussion raid das storageworks msa60 hpe smartarray p411 smartarray hewlett-packard storage
    • RE: How to handle this

      One word of warning I would interject: you have to be careful with BCCs and certain bosses. Some bosses aren't good at paying attention to details and won't even realize that they were included as a BCC instead of inline with the other recipient(s). That can create more trouble rather than less.

      posted in IT Discussion
    • RE: Options for deploying standardized image to desktop & laptops?

      FOG ended up being just the ticket! It was easy to deploy, easy to configure, and I already had a base VM ready to go for image capture. Capturing and deploying the image was a breeze and took about 60 minutes total (not bad at all for the size of the base image and the speed of our infrastructure).

      Thanks for talking me into giving it a try - it was well worth the time!

      posted in IT Discussion
    • ElectricSheep - An amazing screensaver!

      A collective "android dream", blending man and machine to create an artificial lifeform.

      http://electricsheep.org/

      I've been using this for over 10 years now, and I can still sit and watch it for long stints of time while listening to music. I noticed it wasn't talked about in the forums here, so I thought I'd share it for anyone who's never heard of it :).

      When you first run it, it either has to generate sheep on its own, or you can download and install sheep to get it up and running more quickly. There are some "sheep packs" available online for free, and I have a collection of the latest flocks that I'd be willing to upload if anyone wants about 50GB worth.

      posted in Water Closet
    • RE: ESXi Management Network HP ProCurve Trunking Issue...

      Oh dang!  I finally figured it out, but the solution was very counter-intuitive (at least from my perspective)!

      I needed to first connect just one physical NIC to the management network, add the host to vCenter, then go into vSwitch0 and change the "teaming and failover" options to "Route based on IP hash". Then I went back into the DCUI and added the second NIC that was part of trk1. I was still able to ping the server and access it after rebooting.
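
      For anyone who runs into the same thing, this is roughly what those steps look like from the ESXi shell - these are the standard esxcli commands, but treat it as a sketch and substitute your own vSwitch and vmnic names (mine were vSwitch0, with vmnic0 and vmnic1 on trk1):

      # 1) With only vmnic0 attached, switch vSwitch0 to IP-hash load balancing
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash --active-uplinks=vmnic0

      # 2) Then add the second NIC from trk1 and mark both uplinks active
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash --active-uplinks=vmnic0,vmnic1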

      What confused me about this process is that I'm used to only changing the vSwitch options AFTER the host is up and accessible.  But I could never get the host up and accessible with multiple NICs.

      This was a good lesson learned the hard way.  I at least know now that I have to use one NIC initially, gain access to the teaming options, then add more NICs afterwards.

      posted in IT Discussion
    • RE: Rackspace Went Private

      3 months ago: https://mangolassi.it/topic/10493/rackspace-bought-out-and-now-private/1

      posted in News
    • RE: If you are new drop in say hello and introduce yourself please!

      @scottalanmiller @dafyre Thanks guys! Scott told me about this site months ago, but I JUST NOW finally bit the bullet and signed up, lol.

      I'm looking forward to learning a lot from all the great people and years of collective experience! 🙂

      posted in Water Closet
    • RE: Invalid Drive Movement from HP SmartArray P411 RAID Controller with StorageWorks MSA60

      You guys are not going to believe this...

      First I attempted a fresh cold boot of the existing MSA, waited a couple of minutes, then powered up the ESXi host, but the issue remained. I then shut down the host and the MSA, moved the drives into our spare MSA, powered it up, waited a couple of minutes, then powered up the ESXi host; the issue still remained.

      At that point, I figured I was pretty much screwed, and there was nothing during the initialization of the RAID controller that gave me an option to re-enable a failed logical drive. So I booted into the RAID config, verified again that there were no logical drives present, and created a new logical drive (RAID 1+0 with two spare drives; same as we did about 2 years ago when we first set up this host and storage).

      Then I let the server boot back into vSphere and I accessed it via vCenter. The first thing I did was remove the host from inventory, then re-add it (I was hoping to clear all the inaccessible guest VMs this way, but it didn't clear them from the inventory). Once the host was back in my inventory, I removed each of the guest VMs one at a time.

      Once the inventory was cleared, I verified that no datastore existed and that the disks were basically ready and waiting as "data disks". So I went ahead and created a new datastore (again, same as we did a couple years ago, using VMFS). I was eventually prompted to specify a mount option and I had the option of "keep the existing signature". At this point, I figured it'd be worth a shot to keep the signature - if things didn't work out, I could always blow it away and re-create the datastore again.

      After I finished the process of building the datastore with the keep signature option, I tried navigating to the datastore to see if anything was in it - it appeared empty. Just out of curiosity, I SSH'd to the host and checked from there, and to my surprise, I could see all my old data and all my old guest VMs! I went back into vCenter, re-scanned storage, refreshed the console, and all of our old guest VMs were there! I re-registered each VM and was able to recover everything! All of our guest VMs are back up and successfully communicating on the network.
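
      For the record, the shell-side equivalent of that "keep the existing signature" mount and the VM re-registration looks roughly like this (standard esxcli/vim-cmd commands; the datastore label and VM path below are placeholders, not our real names):

      # List VMFS volumes that ESXi has detected as snapshots/unresolved
      esxcli storage vmfs snapshot list

      # Mount the volume while keeping its existing signature (the CLI version of the "keep the existing signature" option)
      esxcli storage vmfs snapshot mount --volume-label=DS1

      # Confirm the old data is still there, then register each guest VM again
      ls /vmfs/volumes/DS1/
      vim-cmd solo/registervm /vmfs/volumes/DS1/SomeGuestVM/SomeGuestVM.vmx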

      I think most people in the IT community would agree that the chances of having something like this happen are extremely low to impossible.

      As far as I'm concerned, this was a miracle of God...

      posted in IT Discussion
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      Thanks for all the feedback guys! I got on a live chat this morning with xByte and will be getting a quote later today. Can't hurt to at least see what they have to offer and get a price ;).

      posted in SAM-SD
    • RE: What are you listening to? What would you recommend?

      Mr Oizo - Flat Beat: https://www.youtube.com/watch?v=qmsbP13xu6k

      posted in Water Closet

    Latest posts made by Shuey

    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      @crustachio said in ESXi Physical to Virtual NIC Ports Configuration?:

      @shuey

      Ah, sorry, bad assumption on my part.

      No worries - I work for one of those companies that falls outside what's probably considered the "norm" - I've been here for over 5 years so I'm pretty used to it now 😄

      posted in IT Discussion
    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      Thanks for the feedback guys. We don't have a "Plus" license, so we don't have the ability to do vMotion :(.

      posted in IT Discussion
    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      Thanks for the quick reply and info Dustin!

      Yep, our switch is capable of supporting trunks with up to 8 ports. But if you think it'd be better to make a smaller trunk instead (4 ports instead of 6?) and free up two more ports for other devices, I'd definitely consider it.

      posted in IT Discussion
    • ESXi Physical to Virtual NIC Ports Configuration?

      I'm still very new to deploying/configuring/managing ESXi hosts, so I'm unsure about the best method for managing the networks on our hosts.

      One of our old hosts is configured with 3 physical NIC ports in a static trunk, and the "management network" AND the "guest traffic" are all part of the same vSwitch. We've been using the server in this way for at least the last 3-4 years.

      On the new host that I deployed a few days ago, I'm currently configuring the networking and I'm unsure of the best route to take... My initial thought was to configure the host with two vSwitches:

      • vSwitch0 will be for management (with no VLAN ID assigned, and a static IP; let's say 10.1.10.20). This has two physical NICs connected (vmnic0 and vmnic1, which are connected to physical switch ports 1 and 2 which are trk1)
      • vSwitch1 will be for guest traffic (with 8 different "VM Networks" configured; one for each VLAN that will talk on this host - as well as our other hosts; it's the same on all three hosts). vSwitch1 will have a static IP of 10.1.10.25. This has 6 physical NICs connected (vmnic2-vmnic7, which are connected to physical switch ports 3-8 which are trk2)

      Both vSwitches are configured like this for "teaming and failover":

      • Load balancing: Route based on IP hash
      • Network failure detection: Link status only
      • Notify switches: Yes
      • Failback: No

      Is this standard practice, or should I consider doing it a different way?
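
      For reference, that layout translates to roughly the following from the ESXi shell - standard esxcli commands, but the port group name and VLAN ID below are just examples (repeat the port group pair once per VLAN):

      # vSwitch1 for guest traffic, uplinked to vmnic2-vmnic7 (physical switch ports 3-8 / trk2)
      esxcli network vswitch standard add --vswitch-name=vSwitch1
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
      # ...repeat the uplink add for vmnic3 through vmnic7

      # One "VM Network" port group per VLAN (example: VLAN 10)
      esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network 10"
      esxcli network vswitch standard portgroup set --portgroup-name="VM Network 10" --vlan-id=10

      # Teaming/failover: IP hash, link status only, notify switches, no failback
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash --failure-detection=link --notify-switches=true --failback=false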

      posted in IT Discussion
    • RE: ESXi Management Network HP ProCurve Trunking Issue...

      Oh dang!  I finally figured it out, but the solution was very counter-intuitive (at least from my perspective)!

      I needed to first connect just one physical NIC to the management network, add the host to vCenter, then go into vSwitch0 and change the "teaming and failover" options to "Route based on IP hash". Then I went back into the DCUI and added the second NIC that was part of trk1. I was still able to ping the server and access it after rebooting.

      What confused me about this process is that I'm used to only changing the vSwitch options AFTER the host is up and accessible.  But I could never get the host up and accessible with multiple NICs.

      This was a good lesson learned the hard way.  I at least know now that I have to use one NIC initially, gain access to the teaming options, then add more NICs afterwards.

      posted in IT Discussion
    • ESXi Management Network HP ProCurve Trunking Issue...

      We have a new ESXi v6.5 host that we've deployed, and we're trying to get the management network configured and working, but we're having problems.  When we only have one NIC selected in the ESXi DCUI, we can ping the host.  But when we select the two NICs that are on the trunk that we created, we can't ping the host anymore...

      The new host (ESXI2) is a Dell PowerEdge R730xd with 4 NIC ports, and an additional set of 4 NIC ports from a daughter card.  The initial goal was to make a trunk with NIC ports 1 and 2 for the management network, then make a separate trunk with ports 3-8 for the VM guest traffic.  This new host is connected to an HP ProCurve 3500yl-24G.  I've created two trunks (trk1 for management and trk2 for VM guests).  trk1 is created from ports 1 and 2 on the ProCurve, and those ports are connected to the physical NIC ports 1 and 2 on the server.  I used the following command to create the trunk:

      trunk 1,2 trk1 trunk

      Quick reference:  We have another v6.0 ESXi host (ESXI-1 / an HP ProLiant DL360 G8) at another site, and it too has a trunk for the management network (created with the same command as above), and that trunk is working fine - we can talk to and manage that host without any problems.  The switch that ESXI-1 is connected to is a ProCurve E5412zl. This switch and the 3500yl at the other site are both running the same exact firmware ROM version.

      The VLAN tagging is exactly the same on the trunk connected to ESXI-1 and ESXI-2 on their respective switches.
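
      For completeness, the switch side of that is along these lines in ProCurve syntax - trk2 built from ports 3-8 as described above, with the guest VLANs tagged onto it the same way they're tagged for ESXI-1 (the VLAN IDs below are just placeholders, not our real ones):

      trunk 3-8 trk2 trunk
      vlan 20 tagged trk2
      vlan 30 tagged trk2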

      Also, the VLAN ID on the hosts themselves is "not set".  We don't use or need to have that set (and I've tried it on the new host just to rule it out).

      I either need to figure out why the trunk on the new server isn't working (and fix it of course), or I need to just use one NIC port for management and create a separate vSwitch from the other trunked ports for the guest traffic (once I can actually get the management network working and connect to the host via the vSphere client). If I go with a single NIC for the management network, how big of a deal is this? A lot of the documentation I've read says that the management network should be configured with at least two NICs.

      posted in IT Discussion vmware vmware esxi networking
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @scottalanmiller said in Advice on Current Infrastructure and Possible New VM Host Server:

      Generally with RAID 10 you don't do hot spares at all. They are costly and do extremely little. More likely you'd want 12 drives and one cold spare.

      Thanks for the info : )

      posted in IT Discussion
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @scottalanmiller said in Advice on Current Infrastructure and Possible New VM Host Server:

      In purchasing the new machine, drop the MSA and go to local storage.

      Definitely! I learned that from you many months ago ;). I was thinking about going with something like an R730 with 12 x 2TB drives (10 in RAID 10 with 2 hot spares).

      posted in IT Discussion
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @bnrstnr said in Advice on Current Infrastructure and Possible New VM Host Server:

      Is the storage the only thing holding you back from putting all 40 VMs on one host?

      Pretty much, yep.

      posted in IT Discussion
    • Advice on Current Infrastructure and Possible New VM Host Server

      We currently have a VM environment that consists of two hosts, each running ESXi v6.0, and a vCSA (also v6.0) that manages them. We're using a VMware Essentials license, so we don't have the fancy bells and whistles like vMotion. We have about 40 guest VMs spread across the two hosts, and we're using nearly 5TB of our combined total storage (nearly 8TB).

      Host 1 stats:

      • DL360 G7
      • 6-core x 2 (E5645 @ 2.40GHz)
      • 192GB RAM (12 x 16GB 1067MHz)
      • (MSA60) 12 x 750GB SATA RAID 10 (datastore "DS10")

      Host 2 stats:

      • DL360 G8
      • 8-core x 2 (E5-2440 v2 @ 1.90GHz)
      • 192GB RAM (12 x 16GB 1600MHz)
      • (MSA60) 12 x 750GB SATA RAID 10 (datastore "DS1")

      Currently, neither host could handle running all 40 of our guest VMs on its own. Management is only willing to buy (or upgrade) ONE server, and the budget is between $10k and $15k, so my initial thought is this:

      I'd like to price out a server from xByte (something with at least 10TB of storage). If management approves the purchase, the idea would be to install ESXi v6.5 on it, then migrate all of our guests to this new server. Then I'd kill off the G7 and move its storage over to the G8 and rebuild the G8 into a fresh v6.5 host. The G7 would then be available as a spare in case the G8 ever dies.

      Thoughts/feedback?

      posted in IT Discussion