Understanding Server 2012 R2 Clustering
-
@Dashrender said:
It is when Microsoft is making white papers for good deployment strategies.
Whoa, what brings you to that conclusion? A white paper, as people point out all the time, is nothing but a pamphlet form of advertisement. White papers are marketing, nothing more. Microsoft has things to sell; they are no different from any other vendor. They want you to do sensible things, but they want you to spend money with them first, and with their partners second, more than they want you to be sensible.
There is nothing, anywhere, that would suggest that MS white papers are some exclusion from everything else in every industry. It's just a sales tool like any other.
-
If there are no real recommendations, then how can they honestly call it Best Practices? Doesn't one have to be a lie? Either the title or the lack of real information.
-
@Dashrender said:
If there are no real recommendations, then how can they honestly call it Best Practices? Doesn't one have to be a lie? Either the title or the lack of real information.
Best Practices should come from an industry, not a vendor. Who determines the best practice in building a bridge? Certainly civil engineers and experts, not concrete salesmen.
-
@Dashrender said:
If there are no real recommendations, then how can they honestly call it Best Practices? Doesn't one have to be a lie? Either the title or the lack of real information.
Sure, or it is the best practices as they see it. Assuming that they label it as such. But most vendors lie. When something is put out as marketing, it is assumed to be a lie, or should be. It's understood that stretching the truth is part of marketing. Outright dishonesty is normally not allowed, but in areas like best practices, the whole concept is a grey area and those things do not apply.
Not that Microsoft is going to get reckless, but they aren't necessarily writing papers that are completely in YOUR interest either. They need to sell software and their marketing is not going to eschew that.
-
Here is a quote from the Microsoft Exchange Best Practices:
For an Exchange 2013 virtual machine, do not configure the virtual machine to save state. We recommend that you configure the virtual machine to use a shut down because it minimizes the chance that the virtual machine can be corrupted. When a shut down happens, all jobs that are running can finish, and there will be no synchronization issues when the virtual machine restarts (for example, a Mailbox role server within a DAG replicating to another DAG member).
That save state statement is the big one. Things like HA use save state as part of the failover mechanism. The thing that @Carnival-Boy was wondering about, why the SAN-backed HA rather than a DAG was an issue, is because this is when corruption can occur. It is the most risky time. Microsoft acknowledges this in their most recent official best practices guide for Exchange.
While they state it here only at a technology level, it tells us what we need to know. Microsoft isn't going to single out vendors whose products do this automatically. But they've given the IT pro all they need to determine which solutions are best and which are more risky.
-
So because IT changes so rapidly that any testing isn't really useful, not to mention that, as you've said, who would pay for it and then simply release the information for free... Where are newbies and SMB IT personnel supposed to glean all this knowledge to make correctly informed decisions?
-
@Dashrender said:
So because IT changes so rapidly that any testing isn't really useful, not to mention that, as you've said, who would pay for it and then simply release the information for free... Where are newbies and SMB IT personnel supposed to glean all this knowledge to make correctly informed decisions?
Now that's a hard but really important question, something I have been struggling with. The information is out there: books, websites, communities, mentoring. How do other fields do it? Mostly through university, which is cheating a bit. Not that it is bad, it's just the easy answer: lock the knowledge up in the university and expect everyone to go there. That's great, but it fails for IT and software engineering. So we lack the easy answer that, say, civil engineering has. It's not the only path for CivE either, but it is the easy, obvious and common one.
IT can't use universities, at least not for now. But the information is really not that hard to come by; it just takes effort. Books have long been the answer, outside of the mentoring system, for thousands of years in all fields. Find experts, either in person or in writing, and learn from them. Books are important because they have the opportunity to explain the theory, the background, etc. The "Google the answer" approach is what is killing us. Too many IT pros are looking for the "answer" and get hired because they can do that, but we forget that what is important is understanding the solution set, which is much harder and cannot be searched easily.
How did I learn about RAID, for example? Books, lots of them, back in the 1990s. I read books on storage, on systems, etc. The information was there, in print, on paper. Except for some new information that came about in ~2007, everything that I know about RAID was stuff that was expected, common industrial base knowledge in 1998 but seems to have gone away in the years since. But the Microsoft exams required it. The CompTIA exams required it. Any systems book covered it. Anything advanced assumed that you understood it. It was covered from tons of angles.
Today, I can't figure out how people have avoided it. Yet nearly everyone has.
-
IT does not change that rapidly, though. Good training twenty years ago would nearly completely prepare you for IT today. That's faster change than civil engineering has, arches are arches, roads are roads. The Romans had some info that even today we lack. But it doesn't change at a pace that causes real problems. Nearly everything important that I learned was in the 1990s. There is an important element of keeping up to date, but the fundamentals don't change, just the products, prices and some nuances. It is rare that something new comes along that really changes things.
-
@scottalanmiller said:
... it doesn't change at a pace that causes real problems....
Hmmmmmm maybe for some definitions of "real problems"
Also, for fun:
-
It's amazing just how much 1995 was like today. Yeah, it was all old and slow, but so much of today was already there: the Windows 95 interface, the Windows NT platform, Linux, UNIX, USB, HTML, PHP, Google (in its early phases), Amazon, SSL, Java, JavaScript, Perl, VoIP, the PPro (which later became the Intel Core), IE, Opera, wikis, Ruby, Yahoo, eBay, MSNBC, etc.
-
@scottalanmiller said:
IT does not change that rapidly, though. Good training twenty years ago would nearly completely prepare you for IT today. That's faster change than civil engineering has, arches are arches, roads are roads. The Romans had some info that even today we lack. But it doesn't change at a pace that causes real problems. Nearly everything important that I learned was in the 1990s. There is an important element of keeping up to date, but the fundamentals don't change, just the products, prices and some nuances. It is rare that something new comes along that really changes things.
Like SSDs and how they've turned RAID 5 on its head from previous conventional wisdom.
-
@Dashrender said:
@scottalanmiller said:
IT does not change that rapidly, though. Good training twenty years ago would nearly completely prepare you for IT today. That's faster change than civil engineering has, arches are arches, roads are roads. The Romans had some info that even today we lack. But it doesn't change at a pace that causes real problems. Nearly everything important that I learned was in the 1990s. There is an important element of keeping up to date, but the fundamentals don't change, just the products, prices and some nuances. It is rare that something new comes along that really changes things.
Like SSDs and how they've turned RAID 5 on its head from previous conventional wisdom.
That's not a change, though. It's an application of the same knowledge that was known in 1998. The fundamentals have not changed at all. What changes are the condensed, quick rules that have to be updated based on market prices, sizes, supply changes, failure rates, etc. But understanding RAID basics (including URE rates, which were the only bit rarely discussed in the 90s) is all that was ever needed. If RAID was understood, SSDs haven't changed anything; it's just more of the same foundational data being applied the same way.
The attempt to learn IT by rote, as a set of rules, makes it seem to change rapidly and require constant updating. But learning the foundations provides rules and concepts that are essentially timeless and provide the foundation from which the rote rules are derived.
Back when Microsoft published their big RAID guidance that was such a landmark, they didn't say "use RAID 5", they said "here is why RAID 5 works now, because of these factors" and expected everyone to understand the factors involved. Because of that, the "2009 change" for hard drives was not a change, but a continuation of the previous knowledge, and likewise the move back to RAID 5 on SSDs is, also, a continuation of the same guidance. They are not disruptions, just all applications of the same foundational rules.
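To make the URE point concrete, here is a back-of-envelope sketch. The 10^14-bits-per-URE spec and the 5 x 4 TB array are illustrative assumptions for the sake of the example, not figures from this thread:

```python
# Rough probability of hitting an unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array. All figures here are
# illustrative assumptions, not numbers from the discussion.

def rebuild_ure_probability(bytes_to_read, ure_rate_bits=1e14):
    """Chance of at least one URE while reading bytes_to_read,
    given a spec of one URE per ure_rate_bits bits read."""
    bits_read = bytes_to_read * 8
    p_per_bit = 1.0 / ure_rate_bits
    # P(at least one URE) = 1 - P(no URE on any bit read)
    return 1 - (1 - p_per_bit) ** bits_read

TB = 1e12  # decimal terabyte, in bytes

# Rebuilding a degraded 5 x 4 TB RAID 5 means reading all four
# surviving drives in full:
p = rebuild_ure_probability(4 * 4 * TB)
print(f"P(URE during rebuild) ~ {p:.0%}")
```

The same formula is why big consumer drives made RAID 5 look risky and why different error rates and sizes can swing the answer back; the foundation never changed, only the inputs.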
-
Ah, OK, that makes sense.
-
Likewise, triple parity (aka RAID 7 or RAID 5.3), while new when Sun introduced it with ZFS around 2005, really did not change anything. The rules that described RAID 5, and how it extended to RAID 6, continued on to RAID 7. The fundamental formulas needed a new entry for RAID 7, but it was all stuff that was projected in the 90s; we just didn't have an implementation yet. And still, ten years later, we only have the one. Had we talked triple parity in 1995, we could have projected the reliability, capacity and speed impacts accurately; we just would have had to wait to see it in action. We have RAID 8 or RAID 5.4 projected in the same way now. It will come someday, we suspect, and will operate almost exactly along a performance and reliability curve that matches the others in the R5 family.
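The claim that the parity family extends predictably can be sketched with a single formula: capacity and failure tolerance for single, double and triple parity all fall out of the parity count. The 12-disk array here is a made-up example:

```python
# Usable fraction and failure tolerance for the striped-parity family.
# "RAID 7" / "RAID 5.3" below means triple parity, per the discussion.

def parity_raid(n_disks, parity):
    """Return (usable_fraction, failures_tolerated) for an n_disks
    array with the given number of parity disks' worth of capacity
    (RAID 5 -> 1, RAID 6 -> 2, triple parity -> 3)."""
    if n_disks <= parity:
        raise ValueError("need more disks than parity units")
    return (n_disks - parity) / n_disks, parity

for name, parity in [("RAID 5", 1), ("RAID 6", 2), ("RAID 7", 3)]:
    usable, tolerated = parity_raid(12, parity)
    print(f"{name}: 12 disks -> {usable:.0%} usable, "
          f"survives {tolerated} drive failure(s)")
```

A hypothetical RAID 5.4 is just the `parity=4` entry in the same table, which is the sense in which the next member of the family is already projected.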
-
A great example is the UNIX interface. The API for UNIX hasn't changed in 30+ years. The Bash shell hasn't changed in 20+. Other than little updates, sitting down at a Linux, Solaris, AIX, HP-UX or BSD system today is nearly indistinguishable from doing so in 1995. I started on UNIX in 1994 and, other than switching my connection command from telnet to SSH (something that is important to security but doesn't change how we work at all), basically nothing has changed. I use all the same commands, tools, editors, etc. as I did all those years ago. Things are faster and more stable now, but the basics are really similar.
-
@Dashrender said:
So because IT changes so rapidly that any testing isn't really useful, not to mention that, as you've said, who would pay for it and then simply release the information for free... Where are newbies and SMB IT personnel supposed to glean all this knowledge to make correctly informed decisions?
For me, it often means you don't make any decision. For example, when I was looking at a SAN, even though I didn't understand the technology and the risks that well (being a newbie), there was enough fear and doubt planted in my mind that I couldn't sign off on buying one because I didn't have 100% confidence that it was the right decision. So I saw doing nothing (and carrying on the way we've always worked) as a better decision than doing something (spending $50k on new technology). And I'm relieved. Especially now that I find out that SQL Server and Exchange are really bad fits for VMware HA, something I had no idea about at the time due to my lack of knowledge and experience.
I moved to virtualisation only when it was completely proven technology and blatantly obvious that it was best practice. So as an SMB, we're not on the bleeding edge of technology. I leave that to the bigger companies, and their experiences then trickle down to me and I implement technology that is new to me, but has been in the blue-chip world for several years.
Having said that, I will take some risks and implement some new tech, but only if the cost is relatively low. So that if I crash and burn I can just scrap it without raising any eyebrows. Life would be boring otherwise. But generally, it's all about choosing mature and proven solutions rather than the latest new thing. Let the big boys with deep pockets take the risks and iron out the bugs.
Another example is that I moved to on-premise Exchange 2010 and Office 2010 licences a few years ago because I felt that Office 365 was too new and I lacked confidence. It is only now that it is completely proven that I feel confident to move to it. I have no regrets about that decision either.
-
@scottalanmiller said:
It is not the business of any of these entities except for internal IT to care or get involved in a statement of this type. VMware and Microsoft are vendors. They produce tools. They support those tools. But it is up to IT to implement and use those tools well and in the right way for their business.
Manufacturers absolutely care about how the tools they've developed are actually used. White papers are not just for marketing purposes.
-
@Carnival-Boy said:
@scottalanmiller said:
It is not the business of any of these entities except for internal IT to care or get involved in a statement of this type. VMware and Microsoft are vendors. They produce tools. They support those tools. But it is up to IT to implement and use those tools well and in the right way for their business.
Manufacturers absolutely care about how the tools they've developed are actually used. White papers are not just for marketing purposes.
I would certainly like to agree with this, but @scottalanmiller's comments do have merit as well.
-
@Carnival-Boy said:
@scottalanmiller said:
It is not the business of any of these entities except for internal IT to care or get involved in a statement of this type. VMware and Microsoft are vendors. They produce tools. They support those tools. But it is up to IT to implement and use those tools well and in the right way for their business.
Manufacturers absolutely care about how the tools they've developed are actually used. White papers are not just for marketing purposes.
I believe that this is true only insofar as they hope the tools are used well, as long as it does not impact profits. I just don't believe you have an unemotional stance on this. You work for a manufacturer and take this very personally, I think.
Maybe you work in a unique industry where manufacturers have no financial interests guiding them and exist only to serve their customers, where papers are produced to educate, not to make money. But I find that unlikely. White papers might be there to improve success rates that lead to better marketing, but not just for the sake of educating everyone.
And whether your company is like this or not, it doesn't reflect on normal companies, and certainly not on IT companies. Believing that all business people are out to do good in the world at their own expense is just not how the world works, and in the US it isn't even legal (public companies are required to work for profits).
-
@Carnival-Boy said:
@scottalanmiller said:
It is not the business of any of these entities except for internal IT to care or get involved in a statement of this type. VMware and Microsoft are vendors. They produce tools. They support those tools. But it is up to IT to implement and use those tools well and in the right way for their business.
Manufacturers absolutely care about how the tools they've developed are actually used. White papers are not just for marketing purposes.
You should look on SW at thread after thread of people complaining about worthless, pure marketing with no value whatsoever, far worse than I'm saying here. Many vendors do put decent material in marketing form. But there are many discussions about how the term "white paper" is literally just a term for "marketing brochure."