    The State of ARM RISC in the DataCenter

    Category: Water Closet · Tags: arm, risc
    • thwr @MattSpeller

      @MattSpeller said in What Are You Doing Right Now:

      @thwr said in What Are You Doing Right Now:

      The LeMaker Cello is a similar A1100-based board, available for 299 USD. Would be a perfect test/dev rig.

      How close is that to being able to buy a micro-ATX board, chip, and RAM, though?

      Very close: http://www.tomsitpro.com/articles/amd-opteron-a1100-enterprise-production,1-3107.html

      • thwr @MattSpeller

        @MattSpeller said in What Are You Doing Right Now:

        http://ca.pcpartpicker.com/list/kWZyxY

        Celeron, 4 GB RAM, 60 GB SSD, case, PSU... $311 CDN

        Yeah, but that's an Intel system. The whole point of the ARM architecture in servers is FLOPS per watt. They aren't as fast as Intel systems, but they are much more efficient.
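        The FLOPS-per-watt argument is easy to make concrete. A minimal sketch, using made-up placeholder figures rather than real benchmarks of either chip:

```python
def gflops_per_watt(gflops: float, watts: float) -> float:
    """Efficiency metric: sustained GFLOPS divided by power draw."""
    return gflops / watts

# Hypothetical placeholder figures, not measurements of real parts.
celeron = gflops_per_watt(gflops=60.0, watts=55.0)  # faster, but hungrier
a1100 = gflops_per_watt(gflops=35.0, watts=25.0)    # slower, lower power

# The slower chip can still come out ahead on efficiency.
print(f"Celeron: {celeron:.2f} GFLOPS/W, A1100: {a1100:.2f} GFLOPS/W")
```

        With numbers like these, the slower chip still wins on efficiency, which is the whole pitch for ARM in the rack.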

        • MattSpeller @thwr


          Let's hope the pricing trend heads lower once significant production ramps up.

          • thwr @MattSpeller


            Well, just from looking at the price of the dev board, which is considerably low, I would say we will see some interesting ARM-based server boards in the future.

            Also, Linux and BSD both don't care much about the underlying architecture, which means that a proven OS is already available.

            • Dashrender @thwr


              Sure, I get that. But when Dell/HP put their spin on it, it will probably cost nearly the same as typical servers. I'm guessing we'll only see a few hundred dollars' difference on average.

              As with other things, the CPU probably isn't where most of the cost comes from.

              • thwr @Dashrender


                Think of larger scales. While a single A11xx probably can't beat a modern Xeon in anything but CPU power per watt, a whole bunch of them can. They could be pretty much perfect web servers, in-memory database nodes, maybe even virtualization hosts for ARM-based VMs at some point.

                • Dashrender

                  Sure, I can see a time when we don't have typical 2-4 socket computers; instead, a server might have 20+ sockets.

                  • MattSpeller @thwr

                    @thwr @Dashrender It will be interesting to see how it works out between consolidation (ARM server racks) and IoT / shards / stand-alone single-board computers.

                    • scottalanmiller @Dashrender

                      @Dashrender said in What Are You Doing Right Now:

                      Sure, I see when we don't have typical 2-4 socket computers, instead a server might have 20+ sockets.

                      Actually, the move will be to single sockets, at least at first. ARM chips rarely support multiple sockets.

                      • scottalanmiller @MattSpeller

                        @MattSpeller said in What Are You Doing Right Now:

                        @thwr @Dashrender It will be interesting to see how it works out between consolidation (ARM server racks) and IoT/shards/stand-alone/single-board-computers

                        SBCs are the expectation for racks of ARM servers, like MoonShot.

                        • Dashrender @scottalanmiller

                          @scottalanmiller said in What Are You Doing Right Now:

                          Actually the move will be to single sockets, at least at first. ARMs rarely support multiple sockets.

                          So how do you see these appearing in the DC? With a single socket, I'm guessing you're not virtualizing.

                          • Dashrender @scottalanmiller

                            @scottalanmiller said in What Are You Doing Right Now:

                            SBCs are the expectation for racks of ARM servers, like MoonShot.

                            This makes great sense when looking at DevOps, but I don't understand how they would work in a typical virtualized setup. But maybe that's not who they are going up against?

                            • scottalanmiller @Dashrender

                              @Dashrender said in What Are You Doing Right Now:

                              So how do you see these appearing in the DC? Single socket, I'm guessing you're not virtualizing.

                              Why? It can go both ways, but Xen and containers are the expected standards. Why would a single socket mean you're not virtualizing?

                              • scottalanmiller @Dashrender

                                @Dashrender said in What Are You Doing Right Now:

                                This makes great sense when looking at DevOps, but I don't understand how they would work in a typical virtualized setup - but maybe that's not who they are going up against?

                                DevOps and typical virtualization overlap. These are smaller individual units than you are used to, but why do you feel that this would significantly impact virtualization or deployment decisions, beyond needing more nodes that are cheaper rather than fewer that are more expensive?

                                • thwr @Dashrender

                                  @Dashrender said in What Are You Doing Right Now:

                                  This makes great sense when looking at DevOps, but I don't understand how they would work in a typical virtualized setup - but maybe that's not who they are going up against?

                                  Don't forget that ARM SoCs can easily pack dozens of cores in a single package. For example: https://www.nextplatform.com/2016/09/01/details-emerge-chinas-64-core-arm-chip/

                                  I would expect to see servers like the aforementioned MoonShot from HP: lots of cores per SoC, but just one SoC per board, and dozens of boards per chassis.

                                  • scottalanmiller @thwr


                                    Exactly. A blade approach, but very dense. So maybe 16 servers in a chassis, each server with 64 cores. A 3U unit might be really dense in cores.
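                                    As a quick sanity check on that density claim, a sketch using the hypothetical 16-blade / 64-core figures from this discussion (not a real product spec):

```python
# Core density of a hypothetical dense ARM chassis, per the figures above.
servers_per_chassis = 16  # SBC-style blades in one 3U chassis
socs_per_server = 1       # single socket per board
cores_per_soc = 64        # many-core ARM SoC

cores_per_chassis = servers_per_chassis * socs_per_server * cores_per_soc
cores_per_rack_unit = cores_per_chassis / 3  # the chassis occupies 3U

print(f"{cores_per_chassis} cores per 3U chassis, "
      f"~{cores_per_rack_unit:.0f} cores per rack unit")
```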

                                    • Dashrender @scottalanmiller

                                      @scottalanmiller said in What Are You Doing Right Now:

                                      DevOps and typical virtualization overlap. These are smaller individual units than you are used to, but why do you feel that this would significantly impact virtualization or deployment decisions outside of needing more nodes that are cheaper rather than fewer that are more expensive?

                                      Mostly because of the added overhead of a hypervisor for every processor. If you look at a modern server today, it has two sockets and 8+ cores each, so that's like having 16+ processors.
                                      I know ARM chips can have multiple cores, so I'm not sure what the problem with multisocket is compared to multicore. But I did infer (probably wrongly) that a single ARM would be single core, and then you're talking about 16x the overhead for 16 copies of XS running to support all of those processors.

                                      Also, how do you give a single process more power over, for lack of a better term, separate machines?

                                      I'm talking completely out of my ass because I don't understand the architecture of ARM systems compared to Intel.

                                        • scottalanmiller @Dashrender

                                        @Dashrender said in What Are You Doing Right Now:

                                        Mostly because of the added overhead of a hypervisor for every processor. If you look at modern server today, has two sockets and 8+ cores. So that's like having 16+ processors.

                                        Sort of. There are a few things that you are missing here...

                                        • Additional sockets create additional overhead, which is different from cores.
                                        • Even enterprise AMD64 servers today trend towards single sockets (Scale is a big leader in that direction).
                                        • In the big-iron RISC world, single-socket Power and SPARC systems are extremely common.
                                        • ARM RISC can put 64 real cores on a single CPU, more than on AMD64.
                                        • Hypervisor overhead is absolutely tiny, and because these are servers, Xen and PV are the norm, with nearly zero overhead.
                                        • Containers have even lower overhead than PV.
                                          • scottalanmiller @Dashrender

                                          @Dashrender said in What Are You Doing Right Now:

                                          I know ARM chips can have multiple cores, so I'm not sure where the problem is running multisocket is compared to multicore.

                                          Multisocket and multicore have very different physical engineering needs. This is why standard cheap AMD64 procs (like the ones you get in a desktop) can only run in a single socket: they don't have the channels for dual sockets. The next step up in Intel and AMD AMD64 processors is dual-socket capable; these are the ones in most servers, and they are cheaper than the ones that can do four- and eight-way sockets. Each increase in socket count makes the processors more expensive, which is why four- and eight-way AMD64 servers are not that common: the processor costs go up a lot and the efficiency goes down.

                                          Multicore is almost entirely a software problem to solve. Multisocket is both software and hardware together, and it has dramatically more overhead than just adding cores.
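                                          The point that multicore is mostly a software problem shows up in ordinary code: spreading work across the cores of one socket needs no special hardware support, only parallel software. A minimal Python sketch:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """CPU-bound toy workload: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # One worker per core the OS reports; the distribution of work is
    # handled entirely in software, regardless of the socket layout.
    cores = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(burn, [100_000] * cores))
    print(f"ran {len(results)} tasks across {cores} worker processes")
```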

                                            • thwr @scottalanmiller


                                             Should we fork that whole topic? It contains quite a bit of information about ARM-based servers and ARM vs. Intel now. The discussion started around https://mangolassi.it/topic/1022/what-are-you-doing-right-now/30339
