    Building ELK on CentOS 7

    IT Discussion
    Tags: scale, ntg lab, scale hc3, centos, centos 7, elk, logging, log management, how to, linux, elasticsearch, kibana, logstash, kibana 4
    • scottalanmillerS
      scottalanmiller @JaredBusch
      last edited by

      @JaredBusch said:

      Line 109 needs to be commented out.

      [screenshot of line 109 in the script]

      Add this right after the yum install to fix the firewall:

      yum -y install wget firewalld epel-release
      systemctl enable firewalld
      systemctl start firewalld
      yum -y install nginx httpd-tools unzip
      

      I would just remove line 109; it serves no purpose.

      Edit: Some dumbass forgot to snapshot the image so he could repeat the install...

      Thanks. That was formatting I had originally put in before scripting it.
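      With firewalld enabled, the ports the stack listens on also have to be opened or nothing will reach the box. A minimal sketch, assuming the beats input stays on 5044 (as in the config later in this thread) and Kibana is reached through the nginx proxy over plain HTTP; adjust to whatever your script actually exposes:

      # open HTTP for the nginx/Kibana front end and 5044 for the beats input
      firewall-cmd --permanent --add-service=http
      firewall-cmd --permanent --add-port=5044/tcp
      firewall-cmd --reload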

      • scottalanmillerS
        scottalanmiller @JaredBusch
        last edited by

        @JaredBusch said:

        Looks like maybe you forgot to start firewalld?

        Fixed

        • JaredBuschJ
          JaredBusch @scottalanmiller
          last edited by

          @scottalanmiller said:

          @JaredBusch said:

          @scottalanmiller so how do you set up your disk partitioning in CentOS 7?

          If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary volume, mount it at /data, and symlink the Elasticsearch database directory into there.

          So this means you need to do one of your Linux admin write-ups on drive settings, because that is not what CentOS does by default.
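          A rough sketch of the second-disk layout described above, assuming the data VHD shows up as /dev/sdb and Elasticsearch keeps its data in /var/lib/elasticsearch (both of those names are assumptions; adjust to your environment):

          # put the whole data disk under LVM and format it XFS
          pvcreate /dev/sdb
          vgcreate vg_data /dev/sdb
          lvcreate -l 100%FREE -n lv_data vg_data
          mkfs.xfs /dev/vg_data/lv_data
          # mount it at /data persistently
          mkdir -p /data
          echo '/dev/vg_data/lv_data /data xfs defaults 0 0' >> /etc/fstab
          mount /data
          # move the Elasticsearch data directory onto the new volume and symlink it back
          systemctl stop elasticsearch
          mv /var/lib/elasticsearch /data/elasticsearch
          ln -s /data/elasticsearch /var/lib/elasticsearch
          chown -R elasticsearch:elasticsearch /data/elasticsearch
          systemctl start elasticsearch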

          • DashrenderD
            Dashrender @JaredBusch
            last edited by

            @JaredBusch said:

            @scottalanmiller said:

            @JaredBusch said:

            @scottalanmiller so how do you set up your disk partitioning in CentOS 7?

            If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary volume, mount it at /data, and symlink the Elasticsearch database directory into there.

            So this means you need to do one of your Linux admin write-ups on drive settings, because that is not what CentOS does by default.

            Would CentOS do what Scott does if you gave it two drives to use, i.e. a 20 GB and a 200+ GB one? Would it install the OS and everything fully on the 20, and then just mount the 200 at some mount point?

            • JaredBuschJ
              JaredBusch @Dashrender
              last edited by

              @Dashrender said:

              @JaredBusch said:

              @scottalanmiller said:

              @JaredBusch said:

              @scottalanmiller so how do you set up your disk partitioning in CentOS 7?

              If I'm doing this for production, I do 20GB for the OS and 200GB+ on a second VHD for the data. I put it all under LVM, make an XFS filesystem on the secondary volume, mount it at /data, and symlink the Elasticsearch database directory into there.

              So this means you need to do one of your Linux admin write-ups on drive settings, because that is not what CentOS does by default.

              Would CentOS do what Scott does if you gave it two drives to use, i.e. a 20 GB and a 200+ GB one? Would it install the OS and everything fully on the 20, and then just mount the 200 at some mount point?

              The answer is: not by default. It tries to do its own magic.

              You can see here that I created a 20GB and a 200GB VHDX and told the installer to handle it all for me.

              [screenshot of the installer's disk selection]

              Guess what: you still end up with a 50GB and a 170GB partition scheme.

              [root@elk ~]# df -h
              Filesystem                   Size  Used Avail Use% Mounted on
              /dev/mapper/centos_elk-root   50G  882M   50G   2% /
              devtmpfs                     906M     0  906M   0% /dev
              tmpfs                        916M     0  916M   0% /dev/shm
              tmpfs                        916M  8.3M  907M   1% /run
              tmpfs                        916M     0  916M   0% /sys/fs/cgroup
              /dev/sda2                    494M   99M  395M  21% /boot
              /dev/sda1                    200M  9.5M  191M   5% /boot/efi
              /dev/mapper/centos_elk-home  168G   33M  168G   1% /home
              tmpfs                        184M     0  184M   0% /run/user/0
              [root@elk ~]#
              
              • JaredBuschJ
                JaredBusch
                last edited by

                CentOS 7 has a thing for 50GB root mounts.
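                If you want to steer that yourself, here is a kickstart-style sketch of a partitioning section that gives the growth to / instead of /home; the sizes, names, and EFI partition are examples, not what Scott's script does:

                # example kickstart partitioning: one volume group, root takes the remaining space
                clearpart --all --initlabel
                part /boot/efi --fstype=efi --size=200
                part /boot --fstype=xfs --size=500
                part pv.01 --size=1 --grow
                volgroup centos pv.01
                logvol swap --vgname=centos --name=swap --size=2048
                logvol / --vgname=centos --name=root --fstype=xfs --size=1 --grow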

                • scottalanmillerS
                  scottalanmiller
                  last edited by

                  Yeah, the defaults suck a bit.

                  • JaredBuschJ
                    JaredBusch
                    last edited by JaredBusch

                    @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

                    I had read in another write up on the install that it works fine even if it is not the "official" method.
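                    For reference, the "from the repo" route being asked about is just OpenJDK from the CentOS base repos; a minimal sketch, assuming the 1.8.0 headless package is the one you want:

                    # OpenJDK 8 straight from the CentOS repos
                    yum -y install java-1.8.0-openjdk-headless
                    java -version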

                    • scottalanmillerS
                      scottalanmiller @JaredBusch
                      last edited by

                      @JaredBusch said:

                      @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

                      I had read in another write up on the install that it works fine even if it is not the "official" method.

                      Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. Just because I can get OpenJDK to work, I'd hate to have it be buggy or problematic for someone down the line because I used a JDK that Elasticsearch wasn't tested against.

                      • JaredBuschJ
                        JaredBusch @scottalanmiller
                        last edited by

                        @scottalanmiller said:

                        @JaredBusch said:

                        @scottalanmiller Why are you using Oracle's Java SDK and not just java from the repo?

                        I had read in another write up on the install that it works fine even if it is not the "official" method.

                        Even if it "works", Elasticsearch tests against and only officially supports the Oracle one. Just because I can get OpenJDK to work, I'd hate to have it be buggy or problematic for someone down the line because I used a JDK that Elasticsearch wasn't tested against.

                        I am not a fan of Oracle when it comes to Java

                        • JaredBuschJ
                          JaredBusch
                          last edited by

                          @scottalanmiller

                          You hardcoded a DNS name in that script...

                          openssl req -subj '/CN=elk.lab.ntg.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
                          

                          You also used a different port here for logstash than in the logstash forwarder example.

                          cat > /etc/logstash/conf.d/02-beats-input.conf <<EOF
                          input {
                            beats {
                              port => 5044
                              ssl => true
                              ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
                              ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
                            }
                          }
                          EOF
                          

                          You used 5000 in that other post.
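                          For the two ends to talk, the forwarder side has to point at whatever port the beats input above listens on. A minimal sketch of the matching output section of /etc/filebeat/filebeat.yml, using filebeat 1.x-era syntax and an example hostname (newer filebeat versions use ssl instead of tls):

                          # forwarder-side output pointing at the beats input on 5044
                          output:
                            logstash:
                              hosts: ["elk.example.com:5044"]
                              tls:
                                certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]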

                          • scottalanmillerS
                            scottalanmiller @JaredBusch
                            last edited by

                            @JaredBusch said:

                            You used 5000 in that other post.

                            Which post was that? I bet that one was a typo, 5044 is the standard port.

                            • JaredBuschJ
                              JaredBusch @scottalanmiller
                              last edited by

                              @scottalanmiller said:

                              @JaredBusch said:

                              You used 5000 in that other post.

                              Which post was that? I bet that one was a typo, 5044 is the standard port.

                              It was this post

                              I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions, since those connect by IP address.

                              2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
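                              One way around that is to issue the certificate with an IP SAN. A sketch reusing the subject and relative key/cert paths from the script above (run from /etc/pki/tls); the IP is just the example from the error and the config file path is arbitrary:

                              cat > /tmp/logstash-ssl.cnf <<EOF
                              [req]
                              distinguished_name = req_distinguished_name
                              x509_extensions    = v3_req
                              prompt             = no

                              [req_distinguished_name]
                              CN = elk.lab.ntg.co

                              [v3_req]
                              subjectAltName = DNS:elk.lab.ntg.co, IP:10.201.1.16
                              EOF

                              openssl req -config /tmp/logstash-ssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt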
                              
                              • JaredBuschJ
                                JaredBusch
                                last edited by

                                As soon as I corrected all of those issues, I got this.

                                2016/02/24 13:42:26.611248 Connecting to [10.201.1.16]:5044 (elk.domain.local)
                                2016/02/24 13:42:28.167827 Connected to 10.201.1.16
                                2016/02/24 13:42:32.038421 Registrar: processing 1024 events
                                2016/02/24 13:42:33.923706 Registrar: processing 1024 events
                                2016/02/24 13:42:35.424984 Registrar: processing 891 events
                                2016/02/24 13:45:55.815543 Registrar: processing 3 events
                                2016/02/24 13:46:03.305215 Registrar: processing 1 events
                                
                                • scottalanmillerS
                                  scottalanmiller @JaredBusch
                                  last edited by

                                  @JaredBusch said:

                                  @scottalanmiller said:

                                  @JaredBusch said:

                                  You used 5000 in that other post.

                                  Which post was that? I bet that one was a typo, 5044 is the standard port.

                                  It was this post

                                   I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions, since those connect by IP address.

                                  2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
                                  

                                   Oh!! That post is from a different era using different tools. You can't use that with this. We are on the "beat" system now. A lot has changed; that's why I was doing the new write-up, since there was a lot of new material in the last few weeks that prompted new documentation.

                                  • JaredBuschJ
                                    JaredBusch @scottalanmiller
                                    last edited by

                                    @scottalanmiller said:

                                    @JaredBusch said:

                                    @scottalanmiller said:

                                    @JaredBusch said:

                                    You used 5000 in that other post.

                                    Which post was that? I bet that one was a typo, 5044 is the standard port.

                                    It was this post

                                     I would also like to note that the certificate your script in this post creates will not work with your logstash forwarder instructions, since those connect by IP address.

                                    2016/02/24 13:21:48.989719 Failed to tls handshake with 10.201.1.16 x509: cannot validate certificate for 10.201.1.16 because it doesn't contain any IP SANs
                                    

                                     Oh!! That post is from a different era using different tools. You can't use that with this. We are on the "beat" system now. A lot has changed; that's why I was doing the new write-up, since there was a lot of new material in the last few weeks that prompted new documentation.

                                     Well, I never did manage to get anything working with beat. Your "different era" was only 8 months ago.

                                    • scottalanmillerS
                                      scottalanmiller @JaredBusch
                                      last edited by

                                      @JaredBusch said:

                                       Well, I never did manage to get anything working with beat. Your "different era" was only 8 months ago.

                                       Well, that's how paradigm changes work. ELK introduced a new scheme less than eight months ago. Whether it was a decade, eight months, or two days ago, once they've changed the underlying architecture, it's changed. It's not a gradual evolution over the years; one day they used one system, and with the next release they used another. So at some point people installing the "current" version one minute had one thing, and the next minute would have had the other. It has to change at some point in time.

                                      • scottalanmillerS
                                        scottalanmiller
                                        last edited by

                                        Updated with information on generating the logstash certificate.

                                        • JaredBuschJ
                                          JaredBusch @scottalanmiller
                                          last edited by

                                          @scottalanmiller said:

                                          Updated with information on generating the logstash certificate.

                                          You already had it in your instructions as I pointed out in a prior post in this topic.

                                          Right here.
                                           [screenshot of the certificate step already in the instructions]

                                          • DanpD
                                            Danp
                                            last edited by

                                            Interesting article on cluster health here.
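                                             For a quick look without the article, cluster health can also be pulled straight from the Elasticsearch API; a minimal check, assuming Elasticsearch is listening on localhost:9200:

                                             curl -XGET 'http://localhost:9200/_cluster/health?pretty'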
