Building ELK on CentOS 7
-
Why lock it down with .htaccess? There is no hint about what is needed to log in here.
I hate this level of authentication.
Using kibanauser and the password I chose gets me into the Kibana setup.
-
@JaredBusch said:
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary disk, mount it at a data mount point, and make a symlink for the Elasticsearch database directory into there.
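A rough sketch of that layout, assuming the second VHD shows up as /dev/sdb, the mount point is /data, and Elasticsearch keeps its data in the default /var/lib/elasticsearch (all of those names are just illustrative):
pvcreate /dev/sdb
vgcreate vg_data /dev/sdb
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data
mkdir /data
echo "/dev/vg_data/lv_data /data xfs defaults 0 0" >> /etc/fstab
mount /data
# with the Elasticsearch service stopped, relocate its data directory and symlink it back
mv /var/lib/elasticsearch /data/elasticsearch
ln -s /data/elasticsearch /var/lib/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch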
-
@JaredBusch said:
Why lock it down with .htaccess? There is no hint about what is needed to log in here.
It's how DigitalOcean does it as well. Kibana doesn't have a built-in authentication scheme that I know of, and .htaccess is very simple for someone to just get started with.
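For reference, a minimal sketch of that kind of lockdown, assuming nginx is the reverse proxy in front of Kibana (the file path is just an example; kibanauser is the user mentioned above):
# htpasswd comes from the httpd-tools package installed earlier; -c creates the file and prompts for a password
htpasswd -c /etc/nginx/htpasswd.users kibanauser
# then reference the file in the nginx server block that proxies to Kibana:
#   auth_basic "Restricted Access";
#   auth_basic_user_file /etc/nginx/htpasswd.users;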
-
And simple to remove when you want to move to something else.
-
@JaredBusch said:
Line 109 needs to be commented out.
Add this right after the yum install to fix the firewall:
yum -y install wget firewalld epel-release
systemctl enable firewalld
systemctl start firewalld
yum -y install nginx httpd-tools unzip
I would just remove line 109; it serves no purpose.
Edit: Some dumbass forgot to snapshot the image so he could repeat the install...
Thanks. That was formatting I had originally put in before scripting it.
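One related sketch: once firewalld is enabled, the relevant ports still need opening. Assuming nginx serves Kibana on port 80 and Logstash listens for Beats on 5044, something like:
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-port=5044/tcp
firewall-cmd --reload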
-
-
@scottalanmiller said:
@JaredBusch said:
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary disk, mount it at a data mount point, and make a symlink for the Elasticsearch database directory into there.
So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.
-
@JaredBusch said:
@scottalanmiller said:
@JaredBusch said:
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary disk, mount it at a data mount point, and make a symlink for the Elasticsearch database directory into there.
So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.
Would CentOS do what Scott does if you gave it two drives to use, i.e. a 20 GB and a 200+ GB one? Would CentOS install the OS and everything fully on the 20 GB drive and then just mount the 200 GB one at some mount point?
-
@Dashrender said:
@JaredBusch said:
@scottalanmiller said:
@JaredBusch said:
@scottalanmiller so what do you setup your disk partitioning like in CentOS 7?
If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary disk, mount it at a data mount point, and make a symlink for the Elasticsearch database directory into there.
So this means you need to make one of your Linux admin setups on drive settings, because that is not what CentOS does by default.
Would CentOS do what Scott does if you gave it two drives to use, i.e. a 20 GB and a 200+ GB one? Would CentOS install the OS and everything fully on the 20 GB drive and then just mount the 200 GB one at some mount point?
The answer is no, not by default. It tries to make its own magic.
You can see here that I created a 20 GB and a 200 GB VHDX and told the installer to handle it all for me.
Guess what, you still end up with a 50 GB and a 170 GB partition scheme:
[root@elk ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos_elk-root   50G  882M   50G   2% /
devtmpfs                     906M     0  906M   0% /dev
tmpfs                        916M     0  916M   0% /dev/shm
tmpfs                        916M  8.3M  907M   1% /run
tmpfs                        916M     0  916M   0% /sys/fs/cgroup
/dev/sda2                    494M   99M  395M  21% /boot
/dev/sda1                    200M  9.5M  191M   5% /boot/efi
/dev/mapper/centos_elk-home  168G   33M  168G   1% /home
tmpfs                        184M     0  184M   0% /run/user/0
[root@elk ~]#
-
CentOS 7 has a thing for 50GB root mounts.
-
Yeah, the defaults suck a bit.
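If you do let the installer auto-partition like that, one hedged way to reclaim the space afterwards is to repurpose the oversized home LV as the data mount (assuming /home is still empty; the /data path is just an example):
umount /home
lvremove /dev/centos_elk/home
lvcreate -l 100%FREE -n data centos_elk
mkfs.xfs /dev/centos_elk/data
mkdir -p /data
# swap the /home entry in /etc/fstab for the new /data logical volume, then:
mount /data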
-
@scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?
I had read in another write up on the install that it works fine even if it is not the "official" method.
-
@JaredBusch said:
@scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?
I had read in another write up on the install that it works fine even if it is not the "official" method.
Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. Just because I can get the OpenJDK to work, I'd hate to have it be buggy or problematic for someone down the line because I used one that wasn't tested against.
-
@scottalanmiller said:
@JaredBusch said:
@scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?
I had read in another write up on the install that it works fine even if it is not the "official" method.
Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. Just because I can get the OpenJDK to work, I'd hate to have it be buggy or problematic for someone down the line because I used one that wasn't tested against.
I am not a fan of Oracle when it comes to Java.
-
You hardcoded a DNS name in that script...
openssl req -subj '/CN=elk.lab.ntg.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
You also used a different port here for logstash than in the logstash forwarder example.
cat > /etc/logstash/conf.d/02-beats-input.conf <<EOF
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF
You used 5000 in that other post.
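One way to avoid the hardcoded name would be to parameterize it; a sketch, assuming you want the certificate CN to follow the ELK server's own FQDN:
ELK_FQDN=$(hostname -f)
openssl req -subj "/CN=${ELK_FQDN}/" -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt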
-
@JaredBusch said:
You used 5000 in that other post.
Which post was that? I bet that one was a typo; 5044 is the standard port.
-
@scottalanmiller said:
@JaredBusch said:
You used 5000 in that other post.
Which post was that? I bet that one was a typo; 5044 is the standard port.
It was this post
I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions when connecting by IP address.
2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
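If clients are going to connect by IP, the certificate needs an IP SAN; a hedged sketch of one way to regenerate it (using the IP from the error above as the example):
# add the server IP as a SAN under the [ v3_ca ] section of /etc/pki/tls/openssl.cnf:
#   subjectAltName = IP: 10.201.1.16
# then regenerate the certificate against that config
cd /etc/pki/tls
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt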
-
As soon as I corrected all of those issues, I got this:
2016/02/24 13:42:26.611248 Connecting to [10.201.1.16]:5044 (elk.domain.local)
2016/02/24 13:42:28.167827 Connected to 10.201.1.16
2016/02/24 13:42:32.038421 Registrar: processing 1024 events
2016/02/24 13:42:33.923706 Registrar: processing 1024 events
2016/02/24 13:42:35.424984 Registrar: processing 891 events
2016/02/24 13:45:55.815543 Registrar: processing 3 events
2016/02/24 13:46:03.305215 Registrar: processing 1 events
-
@JaredBusch said:
@scottalanmiller said:
@JaredBusch said:
You used 5000 in that other post.
Which post was that? I bet that one was a typo; 5044 is the standard port.
It was this post
I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions when connecting by IP address.
2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
Oh!! That post is from a different era using different tools; you can't use that with this. We are on the "Beats" system now. A lot has changed; that's why I was doing the new write-up, as there was a lot new in the last few weeks that prompted new documentation.
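For the Beats-era client side, the matching piece would be roughly this in the Logstash output section of /etc/filebeat/filebeat.yml (a sketch against the filebeat 1.x config format, reusing the port and certificate path from above; the hostname is just an example):
output:
  logstash:
    hosts: ["elk.domain.local:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]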
-
@scottalanmiller said:
@JaredBusch said:
@scottalanmiller said:
@JaredBusch said:
You used 5000 in that other post.
Which post was that? I bet that one was a typo; 5044 is the standard port.
It was this post
I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions when connecting by IP address.
2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
Oh!! That post is from a different era using different tools; you can't use that with this. We are on the "Beats" system now. A lot has changed; that's why I was doing the new write-up, as there was a lot new in the last few weeks that prompted new documentation.
Well, I never did manage to get something working with Beats. Your "different era" was only 8 months ago.