It is not uncommon to only have approved servers listed for storage access, so many shops will add servers one by one to enable access. If your servers almost never change, this works well and is extremely secure. You can do the same in the firewall for even more security. But if you are doing DevOps and creating and destroying VMs regularly, you will want to automate this in some fashion.
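The core of automating that is a reconciliation step: compare what your provisioning system says should exist against what the storage or firewall currently allows, and apply the difference. Here is a minimal sketch in Python; the IPs are made up and the printed `allow`/`revoke` lines stand in for whatever real SAN or firewall CLI you would actually call.

```python
# Sketch of the reconciliation step behind automating an access list:
# compare the IPs the inventory says should exist against the IPs
# currently allowed, and compute the changes to apply.

def reconcile(desired_ips, allowed_ips):
    """Return (to_add, to_remove) so the allow-list matches the inventory."""
    desired, allowed = set(desired_ips), set(allowed_ips)
    return sorted(desired - allowed), sorted(allowed - desired)

if __name__ == "__main__":
    inventory = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # live VMs
    allow_list = ["10.0.0.10", "10.0.0.11"]               # currently allowed
    add, remove = reconcile(inventory, allow_list)
    for ip in add:
        print(f"allow  {ip}")   # placeholder for the real allow command
    for ip in remove:
        print(f"revoke {ip}")   # placeholder for the real revoke command
```

Run on a schedule (or triggered by your VM create/destroy hooks), this keeps the allow-list tracking the fleet without anyone adding servers by hand.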
In the interest of reliability and compatibility, we opted to only support enterprise-level SSDs. We actually only recently started supporting non-Dell drives, for those same reasons. We started talking with Edge and found a partner who would co-develop firmware to limit the risk to our customers. If Dell makes a firmware change, we don't want our customers to experience any issues with their drives.
That's what made me really nervous about non-Dell drives.
Not that anything would happen, but the last thing you need is some change from Dell and then your drives stop working, or booting.
Total BS that they do that, but whatever. It's their company and there are other options.
Right now all my production Linux hosts are VMs, so I have snapshots of the backend infrastructure along with application-level backups of data as needed. Mostly these are EC2 instances running WordPress, where I have a base AMI ready to go with offsite BackupBuddy backups. It's served me well the few times I've had to use it - I can go from launching an instance to all data recovered in about 15 minutes or less.
With that said, I'm really looking forward to seeing how Veeam's Linux backup works once it's released!
First you would create users and SSH keys and then deploy them to the other boxes that you wish to connect to. This is the core of what makes the Jump Box a Jump Box. This is standard SSH key setup, nothing unique to a Jump Box.
Did you ever make a good write up on creating users and SSH keys? If so, I cannot find it.
I mean, I know how to make and use keys in general. But detail here would be good.
1. A write-up for creating the users on the jump box and generating their SSH keys.
2. A write-up for pushing users and keys to the other systems that said jump box will be allowing access to.
3. A write-up for control of said access, e.g.:
Bob and Jill have access to the Jump Box.
Bob has access to servers 1 & 2.
Jill has access to servers 2 & 3.
I know that @scottalanmiller has mentioned in another thread that he has a script to push this all out (question 2). I can only assume that the script has some controls to tell you which server to shove the key and user logon onto (question 3).
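Not speaking for that script, but the control piece (question 3) really only needs a map of who may reach what, plus something to turn that map into `ssh-copy-id` runs from the jump box. A minimal sketch, with the Bob/Jill example from above; user names, host names, and key paths are illustrative:

```python
# Sketch of the access-control piece: a map of who may reach what, and the
# ssh-copy-id commands a push script might run from the jump box.
# Users, hosts, and key paths are made-up examples.

ACCESS = {
    "bob":  ["server1", "server2"],
    "jill": ["server2", "server3"],
}

def push_plan(access):
    """Generate one ssh-copy-id command per (user, server) grant."""
    return [
        f"ssh-copy-id -i /home/{user}/.ssh/id_ed25519.pub {user}@{host}"
        for user, hosts in sorted(access.items())
        for host in sorted(hosts)
    ]

if __name__ == "__main__":
    for cmd in push_plan(ACCESS):
        print(cmd)
```

The nice side effect of keeping the map in one file is that it doubles as documentation: to revoke Jill from server3 you delete the entry and re-run the push (plus remove the old key from that host's `authorized_keys`).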
Why is a VPN a security risk? Because they give you (generally) full access to the network?
Correct. They create unnecessary exposure. Direct access to all hosts (typically) for all protocols and ports. The protections of firewalls and proxies are bypassed. They are generally the least secure form of access because they are the laziest - just expose everything and hope for the best.
If you mean what I think you mean: I use CentOS for general server stuff, basically a server that can handle anything or be more than one thing. However, because of snaps, I use Ubuntu for specific roles like:
Just talked to the MS guys. We don't even need the VDA; since they are for testing, we can just use the MSDN keys and not worry about licensing.
Assuming you already have a KMS server, you shouldn't need to do anything. The Windows 2008 R2 Server (maybe original 2008 as well) KMS key will authorize Windows 7.
Yeah, the MSDN and MSVLSC are separate. The volume licensing is for production and the MSDN is for testing purposes only. Granted, I don't think it matters too much which key is installed. We have both accounts.
Fujitsu's own numbers suggest that the upgrade (which, I checked, would quadruple the cores) would be expected to result in a 10-fold increase in petaflops... from 10 to 100!! It could "easily" be three times the performance of the top-ranked Tianhe-2.
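For what it's worth, the arithmetic implies more than just the extra cores at work: a quick sanity check on the claimed figures shows each core would also have to get considerably faster (or the interconnect/architecture would have to improve) to bridge the gap.

```python
# Quick sanity check on the claimed numbers: 4x the cores but 10x the
# petaflops (10 -> 100) means each core would also have to get faster.
old_pflops, new_pflops = 10, 100
core_factor = 4                      # the stated quadrupling of cores
total_speedup = new_pflops / old_pflops
per_core_gain = total_speedup / core_factor
print(total_speedup, per_core_gain)  # 10.0x overall, 2.5x per core
```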
Great example came in today. Someone had a Dell server with four matching drives. The system arrived with no virtualization configured and the OS installed, without RAID, on a single drive; each drive was attached as an individual drive. Obviously Dell never intended someone to use the system like that - even for a desktop that's not an acceptable setup. It's pretty clear that it was just a test install to show that the hardware was working.
But several people said "but Dell set it up this way, obviously it is okay" and it has been running in production and is now a disaster.
So HP engineering said that this should work but they had not tested it. Make sure you are fully up to date on the firmware and the P410 should be able to go to the P400 safely. But.... caveat emptor, of course.
It was the motherboard. Tested the PSUs this morning with replacements and no issues with the originals. Swapped out the motherboard and the baby fired right up. We are currently recovering everything and anticipate zero data loss. No need to even fall back to the backups.
If you're looking for something similarly spec'd you could just buy and beef up a ThinkServer TS440. They are pretty inexpensive, have 8 bays for drives, a Xeon E3-1225 v3 proc, ECC memory, redundant power supply (only comes with one slot populated), and start off under $500.
If I remember right, the DL160 is pretty much useless as a normal server. I've seen cheap people use them that way, but they are really meant to be used in a high-density web farm.
You are thinking of the right one. It's slightly better than the DL140, but only in that it has SCSI instead of ATA drives. It's not meant to be a standard server at all. Lacks all the server hardware and is no more reliable than a nice desktop. Lacks full IPMI and other tooling.
A Ubiquiti mPort and temperature sensor have been ordered so I can use mFi to turn a fan on from the PDU if needed and off if not, and also to hard-kill everything if it gets too hot.
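The control logic behind that is simple enough to sketch. The thresholds here are arbitrary examples, and the actual mFi/mPort API calls are replaced by returned action strings; the small gap between the on and off thresholds keeps the fan from flapping around one temperature.

```python
# Sketch of the temperature-driven control loop described above. Thresholds
# are made-up examples; a real script would call the mFi/mPort outlet API
# instead of returning action strings.

FAN_ON_C  = 30.0   # turn the PDU fan outlet on above this
FAN_OFF_C = 26.0   # turn it back off below this (gap avoids flapping)
KILL_C    = 45.0   # hard-kill everything above this

def decide(temp_c, fan_running):
    """Return 'kill', 'fan_on', 'fan_off', or 'noop' for one sensor reading."""
    if temp_c >= KILL_C:
        return "kill"
    if temp_c >= FAN_ON_C and not fan_running:
        return "fan_on"
    if temp_c <= FAN_OFF_C and fan_running:
        return "fan_off"
    return "noop"

if __name__ == "__main__":
    print(decide(32.0, fan_running=False))  # fan_on
    print(decide(28.0, fan_running=True))   # noop (inside the hysteresis band)
    print(decide(50.0, fan_running=True))   # kill
```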
That's awesome. Very impressive. How much did that cost?
It was less than $100 for both the sensor and the mPort (which puts the serial devices on TCP/IP)