Posts made by Emad R
-
Centos 8 and Centos 8 Stream released
Seems RHEL is a lot more involved in CentOS 8 nowadays. I'm starting to think that maybe Scott was right and using those old/stable distros just adds extra hassle and effort; it seems they are saying that as well with the unexpected Stream version (stupid name).
So something like Alpine/Arch/Ubuntu/Fedora seems like a better choice for easier distro updates.
-
RE: Containers on Bare Metal
I would like to hear more about your Pi 4 cluster since the Pi 4 is fairly new. Any links, hints, or suggested products?
-
RE: Containers on Bare Metal
https://lxd.readthedocs.io/en/latest/clustering/
https://lxd.readthedocs.io/en/latest/storage/
I think the latest versions, especially with clustering, recommend ZFS storage, which is nice because it can now be added easily as a FUSE filesystem.
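For reference, here is a minimal sketch of initializing LXD with a ZFS pool. This is an assumption-based example (flags and prompts vary between LXD versions; the pool name and size are made up), not taken from the docs linked above:

```shell
# Non-interactive LXD init with a loop-backed ZFS pool named "default"
# (assumes a recent LXD; check `lxd init --help` on your version).
lxd init --auto --storage-backend zfs --storage-create-loop 20 --storage-pool default

# Verify the pool exists and inspect its configuration
lxc storage list
lxc storage show default
```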
-
RE: Containers on Bare Metal
Nice. Do you run them with Ceph storage, or do you simply go with the default ZFS?
-
RE: Containers on Bare Metal
@travisdh1 said in Containers on Bare Metal:
Type-3 hypervisors.
Never heard this term before, and I think it will expire in the future. You would just run containers on bare metal and that's it. We haven't reached that step yet, but I think we will in 10 years or so.
-
RE: Containers on Bare Metal
@black3dynamite said in Containers on Bare Metal:
To use something like LXD, you would install Ubuntu and then LXD.
https://help.ubuntu.com/lts/serverguide/lxd.html
Exactly, and not KVM -> Ubuntu -> LXD.
What would I lose if I went Ubuntu -> LXD?
That's what I am thinking: what are the negatives or potential downsides, down the road, of skipping the whole type 1 virtualization layer?
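The Ubuntu -> LXD path is short in practice. A sketch, assuming an Ubuntu server with snap available (the image alias and container name here are just examples):

```shell
# Install LXD and initialize it with defaults (storage pool + lxdbr0 bridge)
sudo snap install lxd
sudo lxd init --auto

# Launch a system container straight on the host, no hypervisor layer
lxc launch ubuntu:18.04 web01

# Get a shell inside it
lxc exec web01 -- bash
```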
-
Containers on Bare Metal
Does anyone have experience running the above? If so, are you doing it in Prod/Dev?
Please don't start a rant against a certain technology; there is more out there than Docker, like LXD, OpenVZ, etc.
-
RE: The Death of Sysadmin
Nope, it will be a set of processes and a cloud running by itself, all built by automation.
The number of servers one sysadmin handles keeps increasing; I think even you said it yourself. Back in the day one sysadmin could manage a few servers, then it was 50-100, and now you are expected to manage 100-500. One day we won't need to manage anything.
I think it all went downhill when the DevOps role was created and it became the new sysadmin. Then they easily killed that term and made SRE, site reliability engineer, and now SREs are basically people who know a bit about everything, which frees developers to do developer work. In the long run everyone will be a full stack dev and rely on cloud hosting platforms and FaaS and CaaS to run their code easily with high uptime, because of stuff like GKE and others.
-
RE: The Death of Sysadmin
And they are using us to do it, how classy and heartless
-
The Death of Sysadmin
Okay, I am high and I may have posted this topic twice, but they are killing us. They want to move everything to k8s and cloud and GKE; what next? Those clouds don't need maintenance. Sure, it is expensive now, but surely in the upcoming years it will be cheap. It will be just developers, developer tools, and cloud: push code to the cloud.
We are speeding it up; we are killing our own job role.
-
RE: CentOS 6.10 Freezing with Kernel Panic
https://bugs.launchpad.net/ubuntu/+source/module-init-tools/+bug/60716
evbug is the input driver event debug module.
It should not be autoloaded. It is listed in /etc/modprobe.d/blacklist (or should be). If it isn't on your system, then perhaps you modified that file at some point.
Does the file /etc/modprobe.conf exist? It should not exist. (On a modern system that file OVERRIDES all distribution-supplied configuration, including blacklists, rather than augmenting it.)
-
RE: CentOS 6.10 Freezing with Kernel Panic
@scottalanmiller said in CentOS 6.10 Freezing with Kernel Panic:
isa0060
isa0060 is a touchpad device; you can blacklist its driver module in the GRUB boot parameters.
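For illustration, one way to blacklist a module from the kernel command line rather than from modprobe.d. This is a generic sketch (the module name and grub.cfg path are assumptions; EFI systems use a different path):

```shell
# Edit /etc/default/grub and append modprobe.blacklist=<module> to the kernel args, e.g.:
#   GRUB_CMDLINE_LINUX="... modprobe.blacklist=evbug"

# Then regenerate the GRUB configuration (BIOS path shown; EFI differs)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```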
-
RE: AWS Catastrophic Data Loss
@Pete-S said in AWS Catastrophic Data Loss:
@IRJ said in AWS Catastrophic Data Loss:
@Pete-S said in AWS Catastrophic Data Loss:
@IRJ said in AWS Catastrophic Data Loss:
@dbeato said in AWS Catastrophic Data Loss:
@PhlipElder said in AWS Catastrophic Data Loss:
@dbeato said in AWS Catastrophic Data Loss:
@PhlipElder said in AWS Catastrophic Data Loss:
@Dashrender said in AWS Catastrophic Data Loss:
@BRRABill said in AWS Catastrophic Data Loss:
because the chances that MS's DC is going to blow up is extremely small
And yet, it is what this thread is about ... exactly that happening.
Except that it's Amazon, not MS.
MS was US Central this year or late last.
MS was the world when their authentication mechanism went down I think it was a year or so ago.
MS was Europe offline with VMs hosed and a recovery needed. Weeks.
MS has had plenty of trials by fire.
Not one of the hyper-scale folks are trouble free.
Most of our clients have had 100% up-time across solution sets for years and in some cases we're coming up on decades. Cloud can't touch that. Period.
And no updates, correct? To have 100% up-time you must never do updates.
In a cluster setting, not too difficult. In this case, 100% up-time is defined as nary a user impacted by any service or app being offline when needed.
So, point of clarification conceded.
Yes, I know you could do a cluster, and that's how cloud providers give you that 99.9% up-time SLA. Right now it is hard to believe no one has any issues; if cloud providers have issues at large scale, then smaller companies have them as well. That said, no cloud provider provides any backups for anyone unless you set them up, either through their offering or your own company.
Yeah and you can only fault yourself, if you are one AZ that fails. Most serious deployments are in different regions as well.
Well, except that:
@Pete-S said in AWS Catastrophic Data Loss:
As we have further investigated this event with our customers, we have discovered a few isolated cases where customers' applications running across multiple Availability Zones saw unexpected impact
At some point, you have to be willing to accept some risk by not using a different region; generally the risk is VERY, VERY low, which is why many customers use AZs.
You have to do risk analysis, and see how often these events occur and how likely you would be to be one of the "few" that were impacted.
You can dig in the weeds all you want, but across multiple regions this wouldn't have happened. Which is true HA.
Well, different regions wouldn't be enough for true HA. You'd need different cloud providers as well.
Otherwise you have something called common mode failure. Which is for instance that they are running on the same architecture, maybe even the same hardware and as such could be susceptible to a single problem that will affect the entire cloud.
I toyed with this idea, but it is a bit unlikely. That said, you can be vendor agnostic and have a CentOS box in DigitalOcean and another one in Vultr.
-
RE: AWS Catastrophic Data Loss
YES YES YES, SCREW AWS. They have this big marketing scheme aimed at CEOs, which forces us to work for CEOs who believe everything is better in AWS and that the server won't work properly unless it's in AWS. Then when the bill comes we have to explain to them that we can never calculate the cost accurately, because it is Amazon AWS and they charge for IOPS, and there is no way I can calculate that. It's meant to be a billing sinkhole to pay for Bezos' divorce settlement.
-
LXD/LXC Beginners Video Guide
Yes, it is very long. Yes, I had edibles today (I can only do it one day at the end of the weekend, and this project will be deleted, so I wanted to make a video before it gets deleted).
At the end my video recording system stopped. FYI, the node got rebooted, the container started automatically, and Drupal was working again.
https://drive.google.com/file/d/12VWPuudoRo4rz2_Q4to2anjOYyT2bH5F/view?usp=sharing
If you have never heard of LXD/LXC, watching this long video will give you 80-90% of what it is.
FYI:
At the moment the cluster tries to always keep 3 database nodes. If you have 4 nodes, you will have 3 database nodes. If you remove one database node out of those 4, the cluster will notice that there are only 2 database nodes left and promote the non-database node to be a database node.
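The promotion behavior quoted above can be watched happening with the cluster commands. A sketch, assuming a four-node LXD cluster already exists (node names here are hypothetical):

```shell
# List members and their roles; three should show the "database" role
lxc cluster list

# Remove one of the database members
lxc cluster remove node4

# List again: the cluster should have promoted the remaining
# non-database member so there are three database nodes again
lxc cluster list
```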
-
RE: New ISP Issues at CEO's Home
Time to find another job if you don't sort this out.