    Shuey
    • Profile
    • Following 1
    • Followers 1
    • Topics 16
    • Posts 225
    • Groups 0

    Posts

    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      @crustachio said in ESXi Physical to Virtual NIC Ports Configuration?:

      @shuey

      Ah, sorry, bad assumption on my part.

      No worries - I work for one of those companies that falls outside what's probably considered the "norm" - I've been here for over 5 years so I'm pretty used to it now 😄

      posted in IT Discussion
      Shuey
    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      Thanks for the feedback guys. We don't have a "Plus" license, so we don't have the ability to do vMotion :(.

      posted in IT Discussion
      Shuey
    • RE: ESXi Physical to Virtual NIC Ports Configuration?

      Thanks for the quick reply and info Dustin!

      Yep, our switch is capable of supporting trunks with up to 8 ports. But if you think it'd be better to make a smaller trunk instead (4 ports instead of 6?) and free up two more ports for other devices, I'd definitely consider it.

      posted in IT Discussion
      Shuey
    • ESXi Physical to Virtual NIC Ports Configuration?

      I'm still very new to deploying/configuring/managing ESXi hosts, so I'm unsure about the best method for managing the networks on our hosts.

      One of our old hosts is configured with 3 physical NIC ports in a static trunk, and the "management network" AND the "guest traffic" are all part of the same vSwitch. We've been using the server in this way for at least the last 3-4 years.

      On the new host I deployed a few days ago, I'm currently configuring the networking and I'm unsure of the best route to take... My initial thought was to configure the host with two vSwitches:

      • vSwitch0 will be for management (with no VLAN ID assigned, and a static IP; let's say 10.1.10.20). This has two physical NICs connected (vmnic0 and vmnic1, which are connected to physical switch ports 1 and 2, which make up trk1)
      • vSwitch1 will be for guest traffic (with 8 different "VM Networks" configured, one for each VLAN that will talk on this host as well as our other hosts; it's the same setup on all three hosts). vSwitch1 will have a static IP of 10.1.10.25. This has 6 physical NICs connected (vmnic2-vmnic7, which are connected to physical switch ports 3-8, which make up trk2)

      Both vSwitches are configured like this for "teaming and failover":

      • Load balancing: Route based on IP hash
      • Network failure detection: Link status only
      • Notify switches: Yes
      • Failback: No

      Is this standard practice, or should I consider doing it a different way?
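
      For reference, here's roughly what that layout would look like from the ESXi shell - just a sketch, with the vmnic/portgroup names and the example VLAN ID taken from the description above (strictly speaking the static IPs would sit on VMkernel interfaces rather than on the vSwitches themselves, but the layout is the same):

      # guest-traffic vSwitch with its uplinks (repeat the uplink add for vmnic3-vmnic7)
      esxcli network vswitch standard add --vswitch-name=vSwitch1
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
      # teaming and failover: IP hash, link status only, notify switches, no failback
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash --failure-detection=link --notify-switches=true --failback=false
      # one port group per VLAN (VLAN 10 here is just a placeholder ID)
      esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network 10"
      esxcli network vswitch standard portgroup set --portgroup-name="VM Network 10" --vlan-id=10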

      posted in IT Discussion
      Shuey
    • RE: ESXi Management Network HP ProCurve Trunking Issue...

      Oh dang!  I finally figured it out, but the solution was very counter-intuitive (at least from my perspective)!

      I needed to first just connect one physical NIC to the management network, add the host to vCenter, then go into vSwitch0 and change the "teaming and failover" options to "Route based on IP hash".  Then I went back into the DCUI and added the second NIC that was part of trk1.  I was still able to ping the server and access it after rebooting.

      What confused me about this process is that I'm used to only changing the vSwitch options AFTER the host is up and accessible.  But I could never get the host up and accessible with multiple NICs.

      This was a good lesson learned the hard way.  I at least know now that I have to use one NIC initially, gain access to the teaming options, then add more NICs afterwards.
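
      In case it helps anyone hitting the same thing, the equivalent steps from the ESXi shell would be something like this (a sketch; the vSwitch and vmnic names assume the setup described above):

      # switch the management vSwitch to IP-hash load balancing while only one uplink is attached
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
      # then add the second uplink that belongs to trk1
      esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1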

      posted in IT Discussion
      Shuey
    • ESXi Management Network HP ProCurve Trunking Issue...

      We have a new ESXi v6.5 host that we've deployed, and we're trying to get the management network configured and working, but we're having problems.  When we only have one NIC selected in the ESXi DCUI, we can ping the host.  But when we select the two NICs that are on the trunk that we created, we can't ping the host anymore...

      The new host (ESXI-2) is a Dell PowerEdge R730xd with 4 NIC ports, and an additional set of 4 NIC ports from a daughter card.  The initial goal was to make a trunk with NIC ports 1 and 2 for the management network, then make a separate trunk with ports 3-8 for the VM guest traffic.  This new host is connected to an HP ProCurve 3500yl-24G.  I've created two trunks (trk1 for management and trk2 for VM guests).  trk1 is created from ports 1 and 2 on the ProCurve, and those ports are connected to the physical NIC ports 1 and 2 on the server.  I used the following command to create the trunk:

      trunk 1,2 trk1 trunk
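
      For context, the rest of the switch-side config follows the same pattern - this is just a sketch, and the VLAN IDs below are placeholders rather than our real ones (management VLAN untagged on trk1, guest VLANs tagged on trk2):

      trunk 3-8 trk2 trunk
      vlan 10 untagged trk1
      vlan 20 tagged trk2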

      Quick reference:  We have another v6.0 ESXi host (ESXI-1 / an HP ProLiant DL360 G8) at another site, and it too has a trunk for the management network (created with the same command as above), and that trunk is working fine - we can talk to and manage that host without any problems.  The switch that ESXI-1 is connected to is a ProCurve E5412zl. This switch and the 3500yl at the other site are both running the same exact firmware ROM version.

      The VLAN tagging is exactly the same on the trunk connected to ESXI-1 and ESXI-2 on their respective switches.

      Also, the VLAN ID on the hosts themselves is "not set".  We don't use or need to have that set (and I've tried it on the new host just to rule it out).

      I either need to figure out why the trunk on the new server isn't working (and fix it of course), or I need to just use one NIC port for management, and create a separate vSwitch from the other trunked ports for the guest traffic (once I can actually get the management network working and connect to the host via the vSphere client).  If I go with a single NIC for the management network, how big of a deal is this?  A lot of the documentation I've read says that the management network should be configured with at least two NICs.

      posted in IT Discussion vmware vmware esxi networking
      Shuey
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @scottalanmiller said in Advice on Current Infrastructure and Possible New VM Host Server:

      Generally with RAID 10 you don't do hot spares at all. They are costly and do extremely little. More likely you'd want 12 drives and one cold spare.

      Thanks for the info : )

      posted in IT Discussion
      Shuey
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @scottalanmiller said in Advice on Current Infrastructure and Possible New VM Host Server:

      In purchasing the new machine, drop the MSA and go to local storage.

      Definitely! I learned that from you many months ago ;). I was thinking about going with something like an R730 with 12 x 2TB drives (10 in RAID 10 with 2 hot spares).

      posted in IT Discussion
      Shuey
    • RE: Advice on Current Infrastructure and Possible New VM Host Server

      @bnrstnr said in Advice on Current Infrastructure and Possible New VM Host Server:

      Is the storage the only thing holding you back from putting all 40 VMs on one host?

      Pretty much, yep.

      posted in IT Discussion
      Shuey
    • Advice on Current Infrastructure and Possible New VM Host Server

      We currently have a VM environment that consists of two hosts, each running ESXi v6.0, and a vCSA that manages them (also v6.0). We're using a VMware Essentials license, so we don't have the fancy bells and whistles like vMotion. We have about 40 guest VMs spread across the two hosts, and we're using nearly 5TB of the roughly 8TB of combined usable storage.

      Host 1 stats:

      • DL360 G7
      • 6-core x 2 (E5645 @ 2.40GHz)
      • 192GB RAM (12 x 16GB 1067MHz)
      • (MSA60) 12 x 750GB SATA RAID 10 (datastore "DS10")

      Host 2 stats:

      • DL360 G8
      • 8-core x 2 (E5-2440 v2 @ 1.90GHz)
      • 192GB RAM (12 x 16GB 1600MHz)
      • (MSA60) 12 x 750GB SATA RAID 10 (datastore "DS1")
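
      (Quick math on that combined figure: each MSA60 has 12 x 750GB in RAID 10, which is 6 x 750GB = 4.5TB usable per host, or about 9TB across both - and that lines up with the "nearly 8TB" reported once base-2 reporting and formatting overhead are taken out.)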

      Currently, neither host could handle running all 40 of our guest VMs on its own. Management is only willing to buy (or upgrade) ONE server, and the budget is between $10k and $15k, so my initial thought is this:

      I'd like to price out a server from xByte (something with at least 10TB of storage). If management approves the purchase, the idea would be to install ESXi v6.5 on it, then migrate all of our guests to this new server. Then I'd kill off the G7 and move its storage over to the G8 and rebuild the G8 into a fresh v6.5 host. The G7 would then be available as a spare in case the G8 ever dies.

      Thoughts/feedback?

      posted in IT Discussion
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @dashrender All I can say at this point is "you have no idea what I'm dealing with at this company in terms of management perspective and expectation". I could spend another several paragraphs trying to defend myself and justify all the stuff I'm dealing with here, but I'm quite certain that none of you guys care to waste your time reading it, and it would actually just create more fuel for the fire that's already burning.

      Cue the mangolassi elitist guillotine that lops off Shuey's head

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @jimmy9008 I hear where you're coming from, but until you've walked 5+ years in my shoes, maybe you should slow your roll a bit. This forum isn't about making sure that people hurry up and make a purchase, right? It's a forum that's geared towards helping other IT pros get the information they're asking for.

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      I have an update on this thread, and more questions, lol. First of all, I got a new quote from xByte:

      • Dell PowerEdge R730 Server 3.5" chassis with up to 8 drive bays (1)
      • Intel E5-2620v3 2.4GHz/15M/1866MHz 6-Core 85W (1)
      • Dell PE R730 Heat Sink (1)
      • Dell 8GB DDR4 (2)
      • PERC H730 Mini Mono Controller 1GB NV Cache (RAID 0/1/5/6/10/50/60) (1)
      • 6TB 7.2K 3.5" 12Gbps NL SAS Hard Drive (Dell Enterprise) (8)
      • Broadcom 5720 QP 1Gb Network Daughter Card (1)
      • 4 PCIe Slots, 3 x8 and 1 x16 (1)
      • iDRAC8 Enterprise (1)
      • Dell 2U Sliding Ready Rails (1)
      • Dell 750W Power Supply (2)
      • Windows Server 2016 Standard 2-Core OLP (8)
      • 5 Year Dell NBD Onsite Warranty (1)

      Total with tax and shipping: $6360

      My boss had me ask the xByte rep about the hard drive brand/model, and this is what the rep said:

      "We have several to choose from, but the one we have most of are cobranded Dell/Toshiba hard drives. The Toshiba model # is MG04SCA60EE."

      My boss then had me contact CDW to ask for a quote from them. Here's what I told the CDW rep:

      "We're looking to buy a Dell PowerEdge server to act as a storage server. We were initially thinking of an R730, but would be open to an older server if that's more fitting for our needs (520 or 720). We'd essentially like to price out a server with the following bare minimum specs:

      • An 8-12 bay (3.5") chassis
      • A single Xeon 6-core CPU with heatsink
      • 12GB or 16GB of RAM
      • One PERC H710P RAID controller with 1GB NV Cache (or equivalent)
      • At least eight 6TB 7.2K 3.5" 12Gbps NL SAS drives
      • One Broadcom 5720 QP 1Gb NIC (or equivalent)
      • iDRAC7 Enterprise
      • Dell "sliding ready rails"
      • 2x 750W PSUs
      • Windows Server 2016
      • 3-5 Year Warranty

      Is that something you guys could put together for us? Our budget is around $6K"

      Here's what he replied back with:

      "I have the pricing back and am over 10K with this config. I know you said the budget was 6k.
      Where can we cut back? The 8 6tb drives are almost 6k alone.
      "

      I told him we got a quote from another vendor and said that the CDW quote was considerably higher.

      The CDW rep said:

      "Is it apple to apples or did xbyte use third party memory and drives? We are dell's largest partner so it should not be a large difference if truly apples to apples."

      I'm not sure what to think of all this right now.... 😕

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @scottalanmiller said in Advice on building "storage servers" with two DL380 G7 servers:

      A true bean counters job is to do bookkeeping (that's what a bean counter is) and report numbers to the decision makers, not be decision makers. Think of them like a calculator. Very handy to have, but you'd never have a calculator make your decisions.

      Our bean counters are ALSO the decision makers...

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @scottalanmiller said in Advice on building "storage servers" with two DL380 G7 servers:

      @shuey said:

      @dashrender LOL, that's an upgrade for us! We're still running Server 2008 R2 on almost every server in our fleet! 😮

      Might be an upgrade, but why deploy anything new that isn't current? Why take the time and effort to deploy old software?

      Because we still have keys for 2008 R2. It's one more thing that the bean counters can avoid spending money on (and that's not MY choice by the way...).

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @dashrender said in Advice on building "storage servers" with two DL380 G7 servers:

      @shuey said in Advice on building "storage servers" with two DL380 G7 servers:

      That's not a reason to care about the form factor.

      Basically what I was getting at was that you only care about cost/performance. As such, dropping the stated need for LFF keeps your options more open. As noted, you will likely end up with LFF drives because price/performance are at an acceptable level compared to other options.

      Stipulating LFF does nothing but limit options needlessly.

      Says the guy who's not at the mercy of a management team who only sees "the number of beans" and "the cost of the beans"...

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @dustinb3403 said in Advice on building "storage servers" with two DL380 G7 servers:

      Well, I would agree - you have to choose the literal best fiscally viable option that meets the requirements. If LFF drives come in cheaper, even by a little, while meeting every other requirement (IOPS), then what does it matter that the drives aren't SFF?

      It's cheaper and performs the same or better. If it cost the same and performed worse, then you would have something to stand on.

      Yeah, the difference in price was literally thousands of dollars when we spec'd out a same model server with 2.5" drives vs 3.5" drives (especially since the biggest drive they offer in the 2.5" size is 2TB - it takes a lot more of 'em to reach 18TB of usable space).
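
      To put rough numbers on it (assuming RAID 10 purely for illustration): 18TB usable means roughly 36TB raw, which is six 6TB LFF drives (the quote has eight), but eighteen 2TB SFF drives - more than twice the spindles and bays before you even look at per-drive pricing.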

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @dashrender said in Advice on building "storage servers" with two DL380 G7 servers:

      OK, so it's not that you care about form factor, it's that you care about cost.

      How 'bout we say that I "care about form factor because of cost..." ? 😛

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @dashrender said in Advice on building "storage servers" with two DL380 G7 servers:

      Why does storage form factor matter?

      Using eight 6TB drives is cheaper than trying to build out 18TB of usable storage with the 2.5" drives...

      posted in SAM-SD
      Shuey
    • RE: Advice on building "storage servers" with two DL380 G7 servers

      @coliver said in Advice on building "storage servers" with two DL380 G7 servers:

      Any reason to need 2U? Just the drive configuration? Or was there something else?

      Main reason is storage. We need at least 18TB usable, and we want it in the 3.5" form factor.

      posted in SAM-SD
      Shuey