biggen

    Posts

    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @Pete-S said in Add 2.5" U.2 (NVMe) SSDs to custom build?:

      Intel P4510

      Yup that would work. Wish I could figure out a way to at least mount the U.2 drives in the 3.5" external drive bay since I don't need a 3.5" bay for anything else. ICY DOCK makes a twin 2.5" SATA drive bay that fits in a 3.5" bay, but they don't make a twin 2.5" U.2 NVMe drive bay that fits in the 3.5". I'm guessing the 2.5" U.2 needs either more spacing or better cooling than a standard 2.5" SATA SSD.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @Pete-S Yeah I think I'll skip U.2.

      ICY DOCK makes this bay that you can put twin M.2 drives in. Then I could run double SFF-8643 cables back to two of these adapters that would plug into the twin M.2 slots on the MB.

All that really gets me is a lot of links and drive trays. Might as well plug the M.2 drives right into the MB and be done with it. I was hoping to buy an Enterprise M.2 drive, but those are all 110mm in length and the MB only supports M.2 up to 80mm.

      Always something...

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

So I'm looking at the motherboard I'm pillaging from another system and it has two M.2 slots built into it! Doh! When I bought it last year I must have had a premonition that I'd want a dual NVMe setup for a future lab!

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

@scottalanmiller I saw that, but I still have the problem of what to plug the "backplane" into.
All I can find are $400+ adapters.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @Pete-S
      Well before @scottalanmiller ruined my life I was looking at twin MZ-QLB1T9NE. 🙂

But that was also before I had to figure on spending another $600+ on bays/HBA/cables, etc. I was figuring I could get a cheap HBA off Amazon.

I'll have close to $1500 in the system in drives alone. Seems a bit silly for a lab built from spare parts at this point.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

Yeah I wanted U.2 simply because I like the "physical rectangular" drive form factor, and it can be slipped in and out of that ICY DOCK drive tray without having to break open the case to swap a drive (although I can't imagine I'd ever actually do that). With M.2, you have to screw them in, and they just feel "flimsy" to me. Again, this is for a lab, so I'm not too worried about reliability. I just want functionality.

There are also some speed differences if you're concerned with IOPS (as far as the Samsung Data Center drives are concerned): the U.2 NVMe drives seem to be a bit faster than their M.2 NVMe counterparts.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

@scottalanmiller ICY DOCK sent me this .pdf that shows the HBA/RAID cards compatible with that bay/tray, and none of them are inexpensive.

Alright, back to M.2. Not worth spending this kind of money and not getting Enterprise-level quality end-to-end.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @scottalanmiller Yeah but, Highpoint? Man, I'd have a hard time paying them anything!

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @scottalanmiller Ok....

Yeah I've seen that Supermicro HBA reviewed over at servethehome, but it was such an old review that I figured there would be more options out since then.

      That Highpoint card is over $400 on Amazon!

      Alright, I guess M.2 it is. Just need to figure out a two drive setup.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

@scottalanmiller It's a drive holder, but what are they expecting you to connect it to? They wouldn't make them if they didn't see some type of demand.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

      @scottalanmiller Who branded it as "Enterprise Only"? There must be some demand for the prosumer version since they are building those "adapters" that I listed in the OP.

My other option if I want to go NVMe is M.2, I guess, if it's really this complicated. But I still want two drives.

      posted in IT Discussion
      biggen
    • RE: Add 2.5" U.2 (NVMe) SSDs to custom build?

@scottalanmiller Well, there's tons of Enterprise stuff that makes its way to the consumer market. ICY DOCK selling U.2-style racks/bays means they expect you to connect them to "something," and I don't think anyone thinks of ICY DOCK as Enterprise.

      posted in IT Discussion
      biggen
    • Add 2.5" U.2 (NVMe) SSDs to custom build?

Have some parts lying around and want to construct a new lab to play with. I really want to go with 2.5" U.2 NVMe SSDs, but I'm confused about how to actually connect them to the MB. Are there no PCIe HBAs (perhaps the wrong term) that allow you to add U.2 NVMe drives to the system?

      I saw this which I guess could work. But I really want to go with two drives and run a MDADM mirror so I'd have to get two of those "interfaces" and stuff two drives in the back of the chassis over the PCIe lanes if I went this route.
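
      The mdadm side should be simple enough; something like this is what I have in mind, assuming the drives enumerate as /dev/nvme0n1 and /dev/nvme1n1 (device names, mount point, and filesystem are just placeholders):

      # Mirror the two NVMe drives (RAID1) with mdadm
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
      # Put a filesystem on the mirror and mount it
      sudo mkfs.ext4 /dev/md0
      sudo mkdir -p /mnt/nvme_mirror
      sudo mount /dev/md0 /mnt/nvme_mirror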

      Icy Dock makes these cool little kits that I can add into the existing 3.5" external bays in the case. But, again, what do I connect the backplanes to?

      Surely it can't be this complicated to add U.2 2.5" NVMe SSDs to a custom build?

Edit: Found this, which I wouldn't give $1 for, much less the ridiculous price they're asking.

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

@scottalanmiller I've shied away from Proxmox over the years primarily because they didn't offer much in the way of Pro support. I'm currently running xcp-ng, which I really like. Perhaps it's worth looking at Proxmox once again.

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

I think it's safe to say I'll probably never be asked (or even want) to design a system where Gluster might be used, as it seems waaaay over my head at this stage of the game.

I don't even know what @scottalanmiller's solutions are! 🙂

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

      @travisdh1 said in Gluster and RAID question:

      @biggen said in Gluster and RAID question:

      @travisdh1 Great thanks for that info. When you say storage for VMs are you speaking of a SAN? So your VMs are running off the Gluster?

      Yeah I thought 3 nodes of storage + the hypervisor node sounded like a ton of equipment. I know you can buy single boxes that have 2 - 4 nodes inside of them to reduce the footprint.

      Something like that. Basically Gluster would replace the SAN.

Those 2-4-node-in-a-box units are just horrible solutions if you want fault tolerance. Basically, you still have a single point of failure, but now it takes down all 3 nodes instead of a single node.

      Yeah I've always wondered about that multiple nodes in one case setup. Especially since I'd imagine the PSU backplane is probably being shared between all the nodes inside in some fashion.

      @Dashrender said in Gluster and RAID question:

      @biggen said in Gluster and RAID question:

      I appreciate the explanation guys. Not being in the IT field (directly) for some time means I'm playing catch up with a lot of the stuff.

Let's say as a hypothetical one wanted to build out a 500TB Gluster cluster to be used as a backup target for VMs. It looks like you need at least 3 nodes to build out the Gluster cluster. Then, of course, you need an additional node for the hypervisor - so 4 nodes minimum.

      On the three Gluster nodes, would you be installing a Linux OS directly to them (bare metal)? I know from reading here physical servers have fallen out of style. Is this a use case where a physical server still serves a purpose?

Once the Gluster volume is up and running, you could then connect the hypervisor to the cluster, assuming the hypervisor had Gluster client support, and then you have the massive cluster attached to the hypervisor as an SR to be used appropriately.

      I'm just wondering if something like this would work.

Why would you need a fault-tolerant storage solution for your backups? I would think if it was that important you'd more likely go to tape as part of your backups, D2D2T.

You probably wouldn't. I was just trying to dream up a use for a three-node Gluster cluster. Perhaps a VM SR would be a better scenario, or perhaps a massive NAS-style Gluster cluster holding raw 4K footage for a production company. Again, it was a hypothetical. I have a hard time imagining any scenario where I would ever need this much storage unless I'm starting up my own YouTube channel of some sort. The guys over on Reddit in the r/Datahoarder sub are commonly collecting hundreds of TB of junk, but that is mostly on spare parts and cobbled-together machinery. I've never seen massive-scale storage done with my own eyes using production-level equipment and software, so I guess it's more curiosity on my part as to how it would work.

      @travisdh1 said in Gluster and RAID question:

      @Dashrender said in Gluster and RAID question:

Question for those in the know - can Gluster run on the same boxes as the hypervisor, like in a hyperconverged setup? It seems crazy to have a solution like @biggen is suggesting - 3 Gluster nodes and a single VM host using that Gluster cluster - i.e. a SPOF in that one VM host.
And as he mentioned, that's a ton of hardware.

Yes. Really easy if using Linux-based KVM. Just create your Gluster storage and mount it as your VM config and storage directory. I've not done a setup like this myself, so I'm probably missing some high points, but that's the basic idea.

I know there are lots of ways to skin the cat, but wouldn't you still need three separate Gluster nodes? Gluster recommends at least three in order to avoid split brain. With a two-physical-node system, I don't think they want you to run without an arbiter, which is something I have no idea about.

      @Dashrender said in Gluster and RAID question:

      @biggen said in Gluster and RAID question:

      On the three Gluster nodes, would you be installing a Linux OS directly to them (bare metal)? I know from reading here physical servers have fallen out of style. Is this a use case where a physical server still serves a purpose?

This seems to be a misunderstanding. There's nothing wrong with physical servers. Something has to run on the physical hardware to make it work. I don't know diddly squat about Gluster, but I imagine it works something like this:
A Linux OS is installed onto some smallish disk, possibly an SD card, and that is used to set up a Gluster cluster.
KVM, or some other hypervisor, is installed into the Linux OS as well; the hypervisor is pointed to the Gluster cluster for its SR.
VMs are made in that hypervisor.

      Now I'm guessing this can't be done with Hyper-V, since that can't run inside Linux (as far as I know), so you'd be forced to have hypervisor hosts and storage hosts (i.e. SAN/NAS) for Hyper-V and other hypervisors.

      I'm looking forward to someone shredding this post. 🙂

I don't know anything about Gluster either other than what I've gleaned in the last 24 hours. From what I toyed with, I spun up two Debian VMs and installed and configured the Gluster volume from those two VMs. I could then have installed the GlusterFS client on xcp-ng (I didn't, though) to connect to the cluster, and the hypervisor would use the cluster as an SR.

      If you were talking about ONLY two physical nodes for everything, then what you say makes sense. I think you'd have to install your base OS (Debian, Cent, whatever...) on each node, configure the cluster, and install the hypervisor inside the same OS on both nodes in order to utilize the cluster.

From what I've read, though, there is a split-brain issue with using only two nodes.
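
      From the Gluster docs, the arbiter variant looks something like this; the hostnames and brick paths here are made up, and the arbiter brick only stores metadata, so two data nodes plus a small third box is supposed to avoid the two-node split-brain problem:

      # Run from node1 after Gluster is installed on all three machines
      sudo gluster peer probe node2
      sudo gluster peer probe arbiter1
      # replica 3 arbiter 1 = two full data bricks plus one metadata-only arbiter brick
      sudo gluster volume create gv1 replica 3 arbiter 1 node1:/data/brick node2:/data/brick arbiter1:/data/brick
      sudo gluster volume start gv1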

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

      @travisdh1 Great thanks for that info. When you say storage for VMs are you speaking of a SAN? So your VMs are running off the Gluster?

      Yeah I thought 3 nodes of storage + the hypervisor node sounded like a ton of equipment. I know you can buy single boxes that have 2 - 4 nodes inside of them to reduce the footprint.

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

      I appreciate the explanation guys. Not being in the IT field (directly) for some time means I'm playing catch up with a lot of the stuff.

Let's say as a hypothetical one wanted to build out a 500TB Gluster cluster to be used as a backup target for VMs. It looks like you need at least 3 nodes to build out the Gluster cluster. Then, of course, you need an additional node for the hypervisor - so 4 nodes minimum.

      On the three Gluster nodes, would you be installing a Linux OS directly to them (bare metal)? I know from reading here physical servers have fallen out of style. Is this a use case where a physical server still serves a purpose?

Once the Gluster volume is up and running, you could then connect the hypervisor to the cluster, assuming the hypervisor had Gluster client support, and then you have the massive cluster attached to the hypervisor as an SR to be used appropriately.

      I'm just wondering if something like this would work.
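
      Something like this is what I'm picturing, with made-up hostnames (gnode1-3) and brick paths:

      # On one Gluster node, after peering the other two
      sudo gluster volume create backups replica 3 gnode1:/data/brick gnode2:/data/brick gnode3:/data/brick
      sudo gluster volume start backups
      # On the hypervisor, with the GlusterFS FUSE client installed
      sudo mkdir -p /mnt/backups
      sudo mount -t glusterfs gnode1:/backups /mnt/backups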

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

      This was the piece of the puzzle I was missing. It explains at the bottom how to configure a simple Samba share.

When one types "samba gluster" into Google, this unwieldy page is the very first hit. And since it's from the official Gluster docs, it makes it seem like that is the RIGHT way to do it. That was my confusion when I asked earlier about CTDB.

If one doesn't want to mess with CTDB, then sharing out a simple Samba share on one of the Gluster nodes is really easy, as I just found out. However, there is no fault tolerance as far as Samba goes, since you are only dealing with a single Samba server.
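
      For anyone who lands here later, the simple approach boils down to something like this (the mount point and share name are whatever you pick):

      # On the Gluster node doing the sharing: FUSE-mount the volume locally
      sudo mkdir -p /mnt/gv0
      sudo mount -t glusterfs localhost:/gv0 /mnt/gv0
      # Then add a share stanza like this to /etc/samba/smb.conf and restart smbd
      # [gluster]
      #     path = /mnt/gv0
      #     read only = no
      #     browseable = yes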

      posted in IT Discussion
      biggen
    • RE: Gluster and RAID question

@JaredBusch Once the volume is up and running, how the heck does one share it out? That's what I'm trying to do. I have a successful two-node system running:

      joe@glusternode1:/mnt$ sudo gluster volume info
      
      Volume Name: gv0
      Type: Replicate
      Volume ID: ab19d123-eb34-4186-8a03-316a3fc790e3
      Status: Started
      Snapshot Count: 0
      Number of Bricks: 1 x 2 = 2
      Transport-type: tcp
      Bricks:
      Brick1: glusternode1:/data/xvdb1/brick
      Brick2: glusternode2:/data/xvdb1/brick
      Options Reconfigured:
      transport.address-family: inet
      nfs.disable: on
      performance.client-io-threads: off

That volume must now be mounted "somewhere" to access it. How do I mount it so Windows clients can access it? Do I simply mount the volume on one of the nodes under /mnt/big_ole_gluster_space and then share out that mount point via Samba from that same Gluster node?
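
      In other words, I'm imagining something like this (untested, and the mount point name is just what I used above):

      sudo mkdir -p /mnt/big_ole_gluster_space
      sudo mount -t glusterfs glusternode1:/gv0 /mnt/big_ole_gluster_space
      # ...and then share /mnt/big_ole_gluster_space via Samba from this same node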

      posted in IT Discussion
      biggen