@Tim_G, I can get screen grabs tomorrow and some more info. I've been trying to wrap my head around why some of the things were done the way they were.
Posts made by Kyle
-
RE: Hyper-V Failover Cluster FAILURE(S)
@tim_g said in Hyper-V Failover Cluster FAILURE(S):
Isn't the SAN network isolated both physically and logically from everything else?
Can you show a screenshot of this window: http://en.community.dell.com/resized-image/__size/550x0/__key/CommunityServer.Blogs.Components.WeblogFiles/00.00.00.37.45/0574.1.jpg
Hard to find a good one on Google, but highlight the network you use for your SAN and show the settings if you can.
@Tim_G , They are not separated. I can access the SAN and the nodes from the same network and vice versa. There are also some places where a /16 network address is paired with a 255.255.255.0 subnet mask in the config.
I know this needs to be changed, but as the FNG at the company I am currently restricted to submitting my recommendations in writing to the Director, who then asks the Sr. Admin and DBA what they think. Le sigh.......
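For the written recommendation, it may help to show the mismatches mechanically rather than argue them. A minimal sketch using Python's standard `ipaddress` module, with a hypothetical inventory (the host names and addresses below are made up for illustration; in practice they'd come from your documentation or an export of the interface configs):

```python
import ipaddress

# Hypothetical intended network and inventory, for illustration only.
INTENDED = ipaddress.ip_network("10.20.0.0/16")
inventory = [
    ("node1", "10.20.1.11/16"),
    ("node2", "10.20.1.12/255.255.255.0"),  # wrong mask: /24 left on a /16 network
    ("san1",  "10.20.2.10/16"),
]

# Flag any host whose configured mask does not match the intended prefix.
for name, cidr in inventory:
    iface = ipaddress.ip_interface(cidr)
    if iface.network.prefixlen != INTENDED.prefixlen:
        print(f"{name}: mask /{iface.network.prefixlen} "
              f"does not match intended /{INTENDED.prefixlen}")
```

Running this prints one line per misconfigured host, which makes a tidy appendix for the write-up.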
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
The logs say the switches aren’t saturated, but I wonder if network broadcast issues could be a factor here at the new network size.
I'm going to have to verify the network settings are correct again tomorrow. Since the 5 NICs associated with the nodes are all over the place, I'm going to guess it has something to do with that.
They said they had this exact same issue a few months ago, but the logs do not go back that far, so I cannot compare the events. But having the cluster fail twice in 2 days isn't sitting right with me, since it started occurring just days after switching IP ranges.
I've dumped all the logs and documented everything I have found that looks out of place.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@scottalanmiller The SAN's two 10Gb connections: should those be on the same subnet, per best practices?
-
RE: Hyper-V Failover Cluster FAILURE(S)
@scottalanmiller said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
Some things that received the new /16 addresses still carry the 255.255.255.0 instead of the 255.255.0.0, and they said it didn't matter when I questioned them about it.
Um, that means they are NOT /16. 255.255.0.0 and /16 are the same thing, just two different ways to write it. It means that they didn't do the /16 as they said, they knew it, and they lied about not needing to do it. It's true that at times you can have half-broken smaller networks inside of larger ones, but they are broken, and not all of your networking will work when it needs to.
So don't say that they received /16 addressing, because they did not; they are /24 hosts on a /24 network that is broken and can only communicate with a small fraction of the /16.
That's just broken, so that might easily be the issue.
I know. But being the FNG I'm not allowed to make changes to anything; I'm only allowed to view and make suggestions that have to be approved. The SAN also points to 192.168.x.x DNS addresses, which I believe could be causing issues as well.
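The mask mismatch described above can be demonstrated concretely. A short sketch with Python's standard `ipaddress` module, using hypothetical addresses (any 10.20.x.x values are made up for illustration): a host in an intended /16 that was left with a /24 mask considers only its own /24 to be on-link, so it sends traffic for the rest of the /16 to its default gateway instead of delivering it directly.

```python
import ipaddress

# The site intends a 10.20.0.0/16 network (hypothetical example values),
# but this host was left with a /24 (255.255.255.0) mask.
intended = ipaddress.ip_network("10.20.0.0/16")
misconfigured = ipaddress.ip_interface("10.20.30.40/255.255.255.0")

# The host's actual on-link network is only its /24 ...
print(misconfigured.network)            # 10.20.30.0/24

# ... so a peer elsewhere in the /16 is NOT considered on-link,
# even though both belong to the same intended /16.
peer = ipaddress.ip_address("10.20.99.5")
print(peer in intended)                 # True  (same /16)
print(peer in misconfigured.network)    # False (outside the host's /24)
```

This is exactly the "small fraction of the /16" situation: only peers inside the host's own /24 are reachable directly, and everything else depends on the gateway routing it back, which may or may not work.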
-
RE: Hyper-V Failover Cluster FAILURE(S)
@scottalanmiller said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
We rolled out Class B networking, then 48 hours later made the IP changes on the DFS farm, and 2 hours after that we ended up having identical Event ID 5120 errors, where the cluster lost its connection to the SAN.
But that never happened before?
An issue here is that changing the networking means a lot of things were changed, not just the subnet mask size.
That's another issue too. Some things that received the new /16 addresses still carry the 255.255.255.0 instead of the 255.255.0.0, and they said it didn't matter when I questioned them about it.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@tim_g said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
We rolled out Class B networking, then 48 hours later made the IP changes on the DFS farm, and 2 hours after that we ended up having identical Event ID 5120 errors, where the cluster lost its connection to the SAN.
DFS? What exactly are you using DFS for in relation to the cluster?
Nothing as far as the clustering goes. But that's not to say the MSP didn't change something else when they did the IP address changes on the DFS server after going from a /24 to a /16. I've read several reports of subnetting problems causing auto-pause issues in a Hyper-V environment, and 2 huge IP changes were made in this environment in a short amount of time.
-
RE: Hyper-V Failover Cluster FAILURE(S)
We rolled out Class B networking, then 48 hours later made the IP changes on the DFS farm, and 2 hours after that we ended up having identical Event ID 5120 errors, where the cluster lost its connection to the SAN.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@tim_g said in Hyper-V Failover Cluster FAILURE(S):
What are your end goals here exactly?
To just fix the error/main problem and be done?
To achieve true HA?
If not HA, then to actually set things up in a practical way that makes sense and is good for the business?
Host redundancy?
Network redundancy?
Host Storage / VM redundancy?
Fix the I/O issue to start. Deprecating all the old bare metal is on the list, but it's taking time, as we have to work with vendors and some upgrades are contingent on future upgrades that are due soon.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@scottalanmiller , Is it possible to run local and SAN storage at the same time? Each of the 6 nodes has 4 empty drive slots.
Yes.
Usually I'd load out the servers with tons of storage, but they just have 2 drives in each server, and they're only 146 GB 15K SAS drives run in RAID 1.
I'm assuming that's where Hyper-V is installed. FYI, this is a waste of 15K drives - the hypervisor is rarely in use from the drives once it's up and running.
Hyper-V is installed on the server nodes using Server 2012 Datacenter on 2 disks. All VMs are stored in 1 LUN on the SAN.
Sure, this is a typical setup.
I would have thought you'd host the critical and not-so-critical VMs in separate LUNs, and hell, across 2 separate SANs with replication.
We are a 24/7 operation and downtime is a huge no no.
If downtime is a no no, why separate the VMs?
Do you have one or two SANs?
We have 2 but the 2nd is for SQL data.
So you don't have real HA anyhow? i.e. if either SAN fails, the VMs on the failed SAN are down.
Exactly!
There's also a SQL server that's still running on bare metal.
So you have at least 2 SQL servers? One bare metal and one VM?
The entire environment is bad practice after bad practice.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
So you have at least 2 SQL servers? One bare metal and one VM?
3 SQL servers: 2 VMs and 1 bare metal that is going to be migrated in the next month.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
So you don't have real HA anyhow? i.e. if either SAN fails, the VMs on the failed SAN are down.
Exactly! There's also a SQL server still running on bare metal.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
If downtime is a no no, why separate the VMs?
Do you have one or two SANs?
We have 2 but the 2nd is for SQL data.
-
RE: Congressional Pharmacist Accidentally Tells That Some Congresspeople Have Alzheimer's!
@zachary715 said in Congressional Pharmacist Accidentally Tells That Some Congresspeople Have Alzheimer's!:
It's fairly common knowledge (speculation) that one of our Senators has Alzheimer's, dementia, or something of the like, yet somehow this knowledge wasn't enough to get him dethroned during his most recent election. When you've been there long enough and have some clout, people just get scared of losing that power and, with it, losing money for your state.
That's because voters tend to be sheep who'll vote for the person they've always voted for.
Remember the Eddie Murphy comedy about that?
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
Sure, this is a typical setup.
I would have thought you'd host the critical and not-so-critical VMs in separate LUNs, and hell, across 2 separate SANs with replication.
We are a 24/7 operation and downtime is a huge no no.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
I'm assuming that's where Hyper-V is installed. FYI, this is a waste of 15K drives - the hypervisor is rarely in use from the drives once it's up and running.
Hyper-V is installed on the server nodes using Server 2012 Datacenter on 2 disks (RAID 1). All VMs are stored in 1 LUN on the SAN.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
What size pipe is between the switch and the router? Can you see if the pipe was tapped out around the time of the failure?
It's never tapped out. We keep the Tegile metrics up on our monitor, which shows our SQL server performance and the TotalMail server that communicates with the fleet trucks.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@dashrender said in Hyper-V Failover Cluster FAILURE(S):
@kyle said in Hyper-V Failover Cluster FAILURE(S):
@scottalanmiller , Is it possible to run local and SAN storage at the same time? Each of the 6 nodes has 4 empty drive slots.
Yes.
Usually I'd load out the servers with tons of storage, but they just have 2 drives in each server, and they're only 146 GB 15K SAS drives run in RAID 1.
-
RE: Hyper-V Failover Cluster FAILURE(S)
@scottalanmiller , Is it possible to run local and SAN storage at the same time? Each of the 6 nodes has 4 empty drive slots.
-
RE: Hyper-V Failover Cluster FAILURE(S)
My mistake. iSCSI 2014.3 is actually listed for both the Quorum and the primary LUN that contains all 42 VMs.