
    Installing Gluster on CentOS 7

    Category: SAM-SD
    Tags: gluster, centos, centos 7, linux, storage, scale out storage, filesystem, scale, scale hc3, glusterfs, rhel 7, rhel
    27 Posts, 6 Posters, 9.1k Views
    • scottalanmiller

      You probably want a way to see what is going on with your Gluster storage. The info command will tell us the status, like in this example:

      # gluster volume info
       
      Volume Name: gv0
      Type: Replicate
      Volume ID: fc3d20d9-d65e-47ab-93b3-3598e1c9b751
      Status: Started
      Number of Bricks: 1 x 2 = 2
      Transport-type: tcp
      Bricks:
      Brick1: 192.168.1.80:/export/glusterdata/brick
      Brick2: 192.168.1.81:/export/glusterdata/brick
      Options Reconfigured:
      performance.readdir-ahead: on
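
      If you also want to verify that the brick processes are up and see whether any files are waiting to be healed, there are two companion commands worth knowing (a quick sketch; exact output varies by Gluster version):

      # gluster volume status gv0
      # gluster volume heal gv0 info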
      
      • dafyre

        Aside from the size of the drives, what would you change if you were putting this into production?

        Ideally, you would have a way to prevent split-brain type problems.

        • stacksofplates

          This will be helpful. We have a few servers at work whose RAID cards have failed. We are planning to set up software RAID on them and test some things out. One of the options was Ceph or Gluster. This will help a lot.

          • scottalanmiller @dafyre

            @dafyre said:

            Aside from the size of the drives, what would you change if you were putting this into production?

            Ideally, you would have a way to prevent split-brain type problems.

            For production I would have at least three nodes, and pretty typically would not put this on shared infrastructure but on dedicated hardware. Because this is a full cluster on its own, I would expect to have resources for nothing but this, custom built for the purpose.
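
            As a rough sketch of why three nodes matter: with a majority available you can enable quorum so that a node cut off from the cluster stops accepting writes instead of diverging. These are the standard volume options for that (defaults vary by version, so verify against your release):

            # gluster volume set gv0 cluster.server-quorum-type server
            # gluster volume set gv0 cluster.quorum-type auto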

            If the Raspberry Pi 3 had SATA connections, I would totally build a cluster that way for fun. That would be neat. You need very little CPU power for Gluster.

            I would likely remove LVM in production as well. Just use the raw disk and all of it.
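
            Without LVM, brick preparation is just a filesystem on the bare device. A minimal sketch, assuming the data disk is /dev/sdb (the inode size follows the common Gluster recommendation for XFS bricks):

            # mkfs.xfs -f -i size=512 /dev/sdb
            # mkdir -p /export/glusterdata
            # mount /dev/sdb /export/glusterdata
            # mkdir -p /export/glusterdata/brick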

            • stacksofplates

              I'm firing up a couple VMs on my KVM box to test it.

              Does Ceph have any advantages? I don't think I can count object storage as an advantage based on what we would be using it for.

              • scottalanmiller @stacksofplates

                @johnhooks said:

                I'm firing up a couple VMs on my KVM box to test it.

                Does Ceph have any advantages? I don't think I can count object storage as an advantage based on what we would be using it for.

                Not a lot.

                http://www.networkcomputing.com/storage/gluster-vs-ceph-open-source-storage-goes-head-head/8824853

                Now that CEPH and Gluster are both inside the RH fold, if you don't want the object flexibility of CEPH, Gluster might be for you.

                • stacksofplates @scottalanmiller

                  @scottalanmiller said:

                  @johnhooks said:

                  I'm firing up a couple VMs on my KVM box to test it.

                  Does Ceph have any advantages? I don't think I can count object storage as an advantage based on what we would be using it for.

                  Not a lot.

                  http://www.networkcomputing.com/storage/gluster-vs-ceph-open-source-storage-goes-head-head/8824853

                  Now that CEPH and Gluster are both inside the RH fold, if you don't want the object flexibility of CEPH, Gluster might be for you.

                  Yeah, we would be using it pretty much as a giant NAS. What we are experimenting with are older 24-drive servers that were NAS boxes.

                  • stacksofplates @scottalanmiller

                    @scottalanmiller said:

                    @johnhooks said:

                    I'm firing up a couple VMs on my KVM box to test it.

                    Does Ceph have any advantages? I don't think I can count object storage as an advantage based on what we would be using it for.

                    Not a lot.

                    http://www.networkcomputing.com/storage/gluster-vs-ceph-open-source-storage-goes-head-head/8824853

                    Now that CEPH and Gluster are both inside the RH fold, if you don't want the object flexibility of CEPH, Gluster might be for you.

                    Ha I just read that article like 10 minutes ago.

                    • dafyre

                      So the next question would be... which IP address do you use for connecting to the Gluster system? The IP address of Brick 1 or Brick 2... or Brick N?

                      Or do you set up some kind of master IP address with Pacemaker / Heartbeat, et al.?

                      • scottalanmiller @dafyre

                        @dafyre said:

                        So the next question would be... which IP address do you use for connecting to the Gluster system? The IP address of Brick 1 or Brick 2... or Brick N?

                        Great question. The Gluster client actually handles this. Mount from Server1, and if that server fails, the client automatically attaches to Server2. It's not 100% transparent; there is some noticeable delay during the failover, but it takes care of itself. It's self-healing.

                        At mount time, you can't do that: if Server1 is down and that's what is in your mount command, the client can't find the second server. So either you accept that limitation, or you put backup servers into the mount command itself, and then it handles failover at boot time as well.
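
                        A sketch of both approaches, using the two addresses from the example volume (backupvolfile-server is the glusterfs-fuse mount option for naming a fallback server):

                        # mount -t glusterfs -o backupvolfile-server=192.168.1.81 192.168.1.80:/gv0 /data

                        Or the /etc/fstab equivalent:

                        192.168.1.80:/gv0  /data  glusterfs  defaults,_netdev,backupvolfile-server=192.168.1.81  0 0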

                        • scottalanmiller

                          Basically, when mounting, the client appears to query the first node, ask it where the other nodes are, and then is ready to reach out to them as needed. The system remains able to read and write without any intervention even if an individual node fails.

                          • scottalanmiller @dafyre

                            @dafyre said:

                            So the next question would be... which IP address do you use for connecting to the Gluster system?

                            Any or all.

                            • stacksofplates

                              You forgot

                              gluster volume start gv0
                              

                              before you mount the volume to /data
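
                              For reference, the whole sequence with the example bricks from the volume info above would look roughly like this (a sketch; it assumes the second peer has not been probed yet, and newer Gluster releases will warn before creating a two-way replica):

                              # gluster peer probe 192.168.1.81
                              # gluster volume create gv0 replica 2 192.168.1.80:/export/glusterdata/brick 192.168.1.81:/export/glusterdata/brick
                              # gluster volume start gv0
                              # mount -t glusterfs 192.168.1.80:/gv0 /data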

                              • Emad R @scottalanmiller

                                @scottalanmiller
                                No package glusterfs-server available ???

                                I tried other articles as well. I can install centos-release-gluster, but glusterfs-server is not available.


                                Oh nvm, they changed the URL of their repo:

                                Connecting to download.gluster.org (download.gluster.org)|23.253.208.221|:443... connected.
                                HTTP request sent, awaiting response... 404 Not Found

                                This worked for me:

                                yum search centos-release-gluster #check LTS version number (centos-release-gluster310)
                                yum -y install centos-release-gluster310
                                sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-3.10.repo
                                yum --enablerepo=centos-gluster310,epel -y install glusterfs-server
                                systemctl start glusterd
                                systemctl enable glusterd

                                • stacksofplates @Emad R

                                  @emad-r said in Installing Gluster on CentOS 7:

                                  @scottalanmiller
                                  No package glusterfs-server available ???

                                  I tried other articles as well. I can install centos-release-gluster, but glusterfs-server is not available.


                                  Oh nvm, they changed the URL of their repo:

                                  Connecting to download.gluster.org (download.gluster.org)|23.253.208.221|:443... connected.
                                  HTTP request sent, awaiting response... 404 Not Found

                                  This worked for me:

                                  yum search centos-release-gluster #check LTS version number (centos-release-gluster310)
                                  yum -y install centos-release-gluster310
                                  sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-Gluster-3.10.repo
                                  yum --enablerepo=centos-gluster310,epel -y install glusterfs-server
                                  systemctl start glusterd
                                  systemctl enable glusterd

                                  It's in the Storage SIG too. So if you use a mirror local to you, you should be able to find it under storage.
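
                                  Something like this should pull it in from the Storage SIG on a stock CentOS 7 mirror (a sketch; centos-release-gluster installs whatever release the SIG currently marks as the default):

                                  # yum -y install centos-release-gluster
                                  # yum -y install glusterfs-server
                                  # systemctl start glusterd
                                  # systemctl enable glusterd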

                                  • PenguinWrangler

                                    I was thinking about doing Gluster storage for my three KVM hosts and keeping my KVM VMs there. So if I made a virtual machine for Gluster on each machine that used all of its storage, and then mounted the Gluster store in each KVM host, would there be any disadvantage to that?

                                    • travisdh1 @PenguinWrangler

                                      @penguinwrangler said in Installing Gluster on CentOS 7:

                                      I was thinking about doing Gluster storage for my three KVM hosts and keeping my KVM VMs there. So if I made a virtual machine for Gluster on each machine that used all of its storage, and then mounted the Gluster store in each KVM host, would there be any disadvantage to that?

                                      Yes, good plan.

                                      That's essentially how many commercial offerings operate today; they just hide the complexity from you.

                                      • scottalanmiller @PenguinWrangler

                                        @penguinwrangler said in Installing Gluster on CentOS 7:

                                        I was thinking about doing Gluster storage for my three KVM hosts and keeping my KVM VMs there. So if I made a virtual machine for Gluster on each machine that used all of its storage, and then mounted the Gluster store in each KVM host, would there be any disadvantage to that?

                                        That's Red Hat's HCI model.

                                        • PenguinWrangler @scottalanmiller

                                          @scottalanmiller @travisdh1 Another question. I have two SSDs for the main OS (RAID 1), CentOS, and then an 8 TB enterprise drive in each machine for the Gluster store. What are your thoughts on needing RAID for the 8 TB drives? I was going to have the Gluster store replicate itself to each machine, so we only get 8 TB of storage, but in theory we could lose two of the machines and be okay, correct? In a perfect world I would put the 8 TB drives in RAID 1 for redundancy; however, this is for my friend, who is at a school district that literally doesn't have two pennies to rub together, so the cost of the drives is an issue. He is just now starting to virtualize machines after I have been badgering him forever about it. He picked up some refurbished Supermicro servers that we will be using.

                                          • travisdh1 @PenguinWrangler

                                            @penguinwrangler said in Installing Gluster on CentOS 7:

                                            @scottalanmiller @travisdh1 Another question. I have two SSDs for the main OS (RAID 1), CentOS, and then an 8 TB enterprise drive in each machine for the Gluster store. What are your thoughts on needing RAID for the 8 TB drives? I was going to have the Gluster store replicate itself to each machine, so we only get 8 TB of storage, but in theory we could lose two of the machines and be okay, correct? In a perfect world I would put the 8 TB drives in RAID 1 for redundancy; however, this is for my friend, who is at a school district that literally doesn't have two pennies to rub together, so the cost of the drives is an issue. He is just now starting to virtualize machines after I have been badgering him forever about it. He picked up some refurbished Supermicro servers that we will be using.

                                            What you have with the Gluster configuration is already a network-based triple mirror. Adding local RAID underneath a Gluster setup quickly becomes a waste of resources.
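
                                            For the curious, that three-way mirror across the KVM hosts gets created along these lines (a sketch; the hostnames and brick path are placeholders):

                                            # gluster volume create vms replica 3 kvm1:/export/brick kvm2:/export/brick kvm3:/export/brick
                                            # gluster volume start vms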
