@DustinB3403 said in I'll Show You Mine If You Show Me Yours, Home Labs:
Here's my lab, tiny I know....
It's not the size of your lab that matters, it's how you use it...
All that needs to be done to prove that Linux is an operating system is to provide a screenshot of the OS installed and running.
I'll echo what Scott is saying here: don't think of a container in the same context as a virtual machine; they are not the same thing. You don't want to overload a container with multiple services, as that defeats the purpose of using a container. Containers are meant to contain a single service and abstract that service away from the operating system, streamlining the movement of that service between development, QA, staging, and production environments.
So if you have an application you want to deploy into a production environment that requires a MariaDB database and an instance of the actual application running, you would want to deploy a minimum of two containers: one for the database and one for the application. As demand for the application grows, you might consider adding additional containerized instances of the application and possibly a load balancer or HAProxy. At that point you would spin up your load-balancing container and any additional instances of the application in separate containers.
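As a rough sketch of that two-container layout using plain docker commands (the network name, application image, and environment values below are hypothetical placeholders, not anything specific to the post above):

    # User-defined network so the containers can reach each other by name.
    docker network create appnet

    # Database container using the official MariaDB image; credentials are placeholders.
    docker run -d --name app-db --network appnet \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -e MYSQL_DATABASE=appdb \
        mariadb:latest

    # Application container (hypothetical image) that reaches the database at hostname app-db.
    docker run -d --name app-web --network appnet \
        -e DB_HOST=app-db \
        -p 8080:8080 \
        myorg/myapp:latest

Scaling out is then a matter of starting more app-web style containers and putting a load-balancing container in front of them.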
The primary use case for containerized workloads is the development and automation of large-scale distributed systems. You can run services outside of a development environment using Docker, rkt, or LXC containers, but you're more than likely not going to see any significant benefit unless you are operating at a larger scale, serving a lot of users/endpoints.
@DustinB3403 We have four environments for each application's stage of development (DEV, QA, Stage, and Production). Each application server has a different component of a product running on it, usually a Java-based microservice. Some products take 2 or 3 servers, and some take 30+. And each of these systems is by no means hefty; a lot of them are 1 vCPU, 512MB-1024MB builds. The number of systems in DEV varies depending on experimentation and any new products being worked on.
We actually just started work on building out our Stage environment so we can fully implement CI/CD across all of our products. A few of my fellow admins and I spun up 198 servers in a single sitting last week.
We manage everything using Chef.
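Just to give a feel for what bulk provisioning like that can look like from the command line, here is a rough sketch using Chef's knife bootstrap (the hostnames, SSH user, key path, and run list are made-up placeholders, not our actual setup):

    # Bootstrap a batch of freshly built VMs and register them against the Chef server
    # with a common run list.
    for host in app-dev-{01..10}.example.com; do
        knife bootstrap "$host" \
            -x deploy \
            -i ~/.ssh/provision.pem \
            --sudo \
            -N "$host" \
            -r 'role[base],recipe[myapp::service]'
    done

From there the nodes converge on whatever the run list says, which is what makes spinning up a couple hundred servers in one sitting tolerable.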
@black3dynamite said in Linux Lab Project: Building a Linux Jump Box:
How would a jump box be used when accessing a Windows environment? Would I need to set up a jump box with a desktop environment like Xfce or a window manager like i3, and then use something like Remmina to remote into a Windows admin box to manage servers and such?
You could set up SSH tunneling and just do secure RDP sessions over SSH. No desktop environment required on your jump box.
http://www.linuxjournal.com/content/ssh-tunneling-poor-techies-vpn
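As a minimal sketch of that approach (the hostnames and local port here are placeholders): forward a local port through the jump box to the Windows admin box's RDP port, then point Remmina (or mstsc) at localhost.

    # Forward local port 3390 through the jump box to RDP (3389) on the Windows admin box.
    ssh -L 3390:windows-admin.internal:3389 user@jumpbox.example.com

    # While that session is open, connect your RDP client to localhost:3390.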
I did an article about a year ago about setting up KVM/QEMU on Ubuntu 16.04, it might be of some use: https://www.ramblingbiped.com/build-a-kvm-qemu-hypervisor-on-ubuntu-16-04-server/
I also wrote a short blurb on automating VM creation using a pre-built VM template, virt-clone, and virt-sysprep: https://www.ramblingbiped.com/automate-centos-7-minimal-virtual-machine-creation-with-kvm/
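The short version of the clone-from-template workflow in that second post looks roughly like this (the template and VM names are placeholders; see the article for the full walkthrough):

    # Clone the pre-built, shut-down template into a new VM; storage is auto-named to match.
    virt-clone --original centos7-template --name web01 --auto-clone

    # Reset machine-specific state (SSH host keys, machine-id, logs, etc.) on the clone.
    virt-sysprep -d web01

    # Boot the new VM.
    virsh start web01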
Easiest way would be to give the job to the summer interns and tell them to figure it out. If they do it manually... well, it gets done. If they figure out a quicker, automated way of doing it, have them document it and give them a nice pat on the back.
Either way, less of your time is wasted.
@stacksofplates said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@scottalanmiller said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@travisdh1 said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@jaredbusch said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@wirestyle22 said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@jaredbusch said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@travisdh1 said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@scottalanmiller said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@wirestyle22 said in Fedora 26 No Space Left on Device with Plenty of Space Available:
@scottalanmiller I could've sworn there was a post here related to a reboot solving it
Someone mentioned rebooting to try to solve it, but it did not (we had already done that). It was real files causing the issue, nothing orphaned. Literally, we were creating 30K files an hour or so.
Tiny files too, right?
That is not relevant, except for the fact that large files would have filled the drive space and likely been noticed.
How is that not relevant? More files = more inodes being used.
Size of the files is not relevant.
It actually makes sense that @scottalanmiller said it was mostly directories. Files of any size will almost always run out of drive space before inodes run out in 99.9999% of situations. This is the first time I've actually heard of this happening, ever.
Only not relevant for those who actually know about inodes already. The only reason I even know about them is they got mentioned in SGI's IRIX Sysadmin courses.
The one major exception is marker files. Run "touch thishappened" automatically and never clean up, and you are using inodes without using any space. That's how you can easily learn about inode depletion. But who does that?
People writing bad scripts that use lock files and forget to remove them.
That wouldn't create enough to do this though.
If you start using configuration management tools to manage infrastructure with code, you get the chance to see some of these one-off oddities in the wild a little more frequently than you'd expect. Like having Java developers not use Java's logging facilities to manage log rotation, and then having a generic log rotation configuration completely bork things by deleting application logs that are still being accessed by the Java application.
I got to see this issue a few dozen times a few months ago before another of our Engineers disassociated the Java applications from our generic log rotation recipe.
Rebooting was the quick fix for us prior to fixing the actual problem.
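For anyone who hits "No space left on device" while df still shows free space, a quick way to confirm inode exhaustion and find the offending directory is something like the following (the path is just an example starting point):

    # Per-filesystem inode usage; IUse% at 100% means you are out of inodes, not bytes.
    df -i

    # Count files per directory under a suspect path, largest offenders first.
    find /var -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head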
@wirestyle22 said in wget vs. curl:
@travisdh1 said in wget vs. curl:
@jaredbusch said in wget vs. curl:
@wirestyle22 said in wget vs. curl:
@travisdh1 said in wget vs. curl:
@wirestyle22 said in wget vs. curl:
curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMTP, RTMP and RTSP.
wget supports HTTP, HTTPS and FTP.
wget is part of the GNU project and all copyrights are assigned to FSF.
The curl project is entirely stand-alone and independent, with no organization parenting at all.
curl offers upload and sending capabilities. Seems to me we should be using curl 99% of the time and wget when we have to deal with a lot of things simultaneously? Am I judging that correctly? Otherwise there does not seem to be a benefit.
I tend to use wget, just because it's what I'm used to. I know curl can do more, but being able to do more isn't always the simple/quick way to go about things, unless you're doing one of @scottalanmiller's one-liners.
Just a thought I had when I was testing some stuff on my vultr instance. You'll likely have both.
Curl is native to RHEL/Fedora-based systems, while wget has to be added.
Yep, and it's so automatic for me to install nano and wget that it's almost an afterthought, in addition to having them in any VM templates I have available.
Same. I'd imagine it's the same type of argument as nano vs. vi
If you're not using vi/vim, you're just wrong. No real argument.
Realistically, regardless of preference for wget, you should understand how to use curl. (same with vi)
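For the common "just download this file" case the two are close to interchangeable, for example (with a placeholder URL):

    # wget saves to the remote file name by default.
    wget https://example.com/file.tar.gz

    # curl writes to stdout by default; -O keeps the remote name and -L follows redirects.
    curl -LO https://example.com/file.tar.gz

    # On a minimal Fedora/RHEL install curl is already there; wget usually has to be added first.
    dnf install -y wget    # yum on older RHEL/CentOS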
DevOps is a culture, not really a role. If you're in a DevOps environment and you're sporting the DevOps Engineer title you're more than likely wearing a lot of hats and interfacing with multiple product teams.
DevOps Engineer
Site Reliability Engineer
Cloud Engineer
System Engineer
Many of the above titles share similar duties.
If you are looking at moving into a more modern role working for a shop that has a DevOps culture, I'd focus on the following:
Out of curiosity, and to help derail this thread just a little bit more... @scottalanmiller / @Minion-Queen and @JaredBusch, how do you set up your remote employees? Do you provide them with systems to work from and/or furnish a stipend for internet access? Or do you just build that into their salary to simplify things on your end?
It is very likely that I'll be moving to western Colorado in the not-so-distant future. Would anyone by chance have any pointers on finding work in the area? We'll be around the Rifle/Glenwood Springs area if/when we relocate. That's about an hour west of Aspen and an hour east of Grand Junction.
Most of my initial searches are coming up with tons of interesting jobs in and around the Denver/Boulder area, but not a whole lot of System Administration or Linux Admin work in and around the West.
Anybody have experience with this? I'm looking at possibly moving our webserver/webstore to Docker containers for better resource utilization and easier development/deployment. I feel like I've got the gist of how things work, but I'm looking for resources to help plan things out and any advice from those who have already done something similar.
Running Quickbooks is like....
...brushing your teeth with steel wool and battery acid.
...eating a peanut butter and broken glass sandwich.
...voting Republican.
...plucking your nose hairs through your tear ducts.
...enjoying a bud light.
I recently finished reading and recommend "The Anatomy of Peace" from The Arbinger Institute. It's a great, comprehensive, story-driven practice in conflict resolution.
It's basically "How to recognize if you're an @$$hole, fix it, and learn to communicate with other @$$holes effectively".
Our Operations team is currently looking to hire an additional Operations Systems Administrator. Qualified candidates will have experience as a Unix/Linux administrator and some automation/scripting experience in Bash, Perl, Python, and/or Ruby. Please see the full job posting below for further details.
http://www.wgu.edu/about_WGU/employment/operations-support-administrator
If you have any questions feel free to contact me directly, or post below.
I'm looking at getting a server to upgrade my home lab. I'm considering this refurbished Dell R710: http://www.ebay.com/itm/Dell-PowerEdge-R710-2x2-26GHz-Quad-Core-E5520-32GB-2x1TB-PERC6-iDRAC6-4-Port-NIC-/171528441604?hash=item27efe44f04:g:epUAAOSwe-FU3OWV&autorefresh=true
I'm planning on running XenServer and (initially) using XenCenter on Windows 10 to manage it. I may look at upgrading memory down the road, and possibly processors if necessary. (I'll more than likely invest in a comparable second hypervisor before doing that)
This will primarily be used for testing out open source applications, hosting my own private SVN and Git repositories, working on application development using Java, Python, and Ruby, systems automation using Ansible, Vagrant, etc., and more than likely supporting a couple of Docker hosts. I want something beefy enough to run upwards of 20 VMs comfortably (primarily Linux/Unix).
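Quick sizing sanity check, assuming roughly 1-1.5 GB of RAM per small Linux VM: 20 VMs at that size works out to around 20-30 GB, so the 32 GB in that listing is workable but leaves little headroom once the hypervisor's own overhead is counted, which suggests the memory upgrade may come sooner than expected if the VM count actually reaches 20.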
Thoughts? Suggestions? Recommendations?
@Nic said in Teamviewer hacked:
@RamblingBiped might be that they don't have all the info yet - who knows?
I was also being somewhat sarcastic... I also think the link I posted might be dated (May 23rd) and referencing a different incident.
I'm glad I don't use their product...