Review of ASRock Rack EPC621D6U-2T16R server motherboard for 30+ drives
Today we are reviewing an ASRock Rack motherboard designed for those building a Big Data server or a storage system holding hundreds of terabytes. The EPC621D6U-2T16R resembles a model we have reviewed previously.
Using just one board, you can connect every 3.5-inch front bay of a 4U case and still have channels left over for SSDs mounted inside or at the rear. On top of that, there is a SlimLine port for connecting NVMe U.2 drives, booting from SATA DOM, and two 10 Gigabit RJ45 ports for connecting to your infrastructure, all in the MicroATX form factor. With a spec sheet like that, the board is begging to be built into a system, but first let's find out what makes it what it is.
Processor - LGA3647 Xeon Scalable
As a rule, if a motherboard has many channels for connecting SSDs/HDDs, it is designed to store information rather than process it.
Today both Intel and AMD have special low-cost processors for storage applications, such as the Xeon-D or EPYC 3000. If you stay out of the eternal "Intel vs AMD" dispute, choosing a socketed Xeon Scalable processor gives you the following advantages:
- first, you get a 6-channel memory controller versus the Xeon-D's 4-channel one
- second, you get the Intel C621 southbridge with up to 14 SATA channels and software RAID support
- third, you can use the cheapest Xeon Scalable part, the 6-core Xeon Bronze 3104 running at 1.7 GHz without Turbo Boost, paired with the cheapest DDR4-2133 ECC Registered memory. Its performance is quite enough for a file server, and it costs less than a Xeon-D.
The idea of building a storage system on a 6-core Xeon Bronze is also attractive because this processor has very low power consumption, around 50 W, so a passive heatsink such as the Supermicro SNK-P0068PS lets you build a very quiet storage system that can live in an open rack in an open-plan office. When choosing a cooler, keep in mind that the ASRock Rack EPC621D6U-2T16R uses a Narrow ILM socket; if you don't yet know how it differs from Square ILM, I recommend reading our review of coolers for Xeon Scalable. The entry-level 6-core Intel processors have the same 48 PCI Express lanes as the top-end Xeon Gold and Platinum, so you lose nothing in terms of connectivity.
Of course, you can install even the most powerful 28-core CPU here, and the 6-phase VRM is ready for a high load, but such a configuration is hard to justify for storage. The manufacturer has not published a separate CPU compatibility list, which as a rule means the entire Xeon Scalable range is supported, up to the Cascade Lake Xeon Platinum 8280.
I would note right away that the board has a simpler memory power-delivery scheme than, say, the monstrous two-processor EP2C621D12-WS for workstations. This is understandable: the MicroATX format is small, and board space has to be saved. The processor gets a 6-phase power circuit and the memory modules a 2-phase one, and the manufacturer claims support for DDR4 ECC RDIMMs at frequencies from 2133 to 2933 MHz. The latter requires a Cascade Lake processor and the latest BIOS version.
Network - Intel X550-BT2
While high-speed optical interfaces see a revolution every year, the 10GBase-T standard has stood still for years (read our comparison of 10G SFP+ and 10GBase-T), and in my view the Intel X550 remains the default choice for copper 10G. The EPC621D6U series motherboard comes with either a 2-port 1 Gigabit Intel I350 controller (models without the "2T" index) or a 2-port 10 Gigabit Intel X550-BT2.
On some motherboards you will find the X550-AT2 controller; the difference is only in the bus interface: the "AT2" supports PCI Express 3.0, while the "BT2" runs PCI Express 2.1 at up to x8. In terms of bandwidth this does not matter, but I do want to warn you against flashing network controllers with firmware version 2.10 from the official Intel site. In our test lab, three different Intel X550s (one discrete and two integrated on motherboards) began losing their second port after flashing to firmware 2.10, and we had to roll back to version 1.08.
In theory, the controller supports the intermediate NBase-T speeds of 2.5 Gbps over Cat 5e cable and 5 Gbps over Cat 6, but only under Linux, and these speeds are not advertised by the motherboard manufacturer anyway. In any case, I do not think intermediate speeds matter for a 30-disk storage system. The built-in network card supports PXE boot as well as an out-of-band connection to the IPMI controller, and the 1-gigabit version of the board additionally supports iSCSI boot.
Data storage subsystem
The storage channels are distributed as follows: 12 SATA-600 ports come off the Intel C621 southbridge through 4-channel SFF8647 connectors in the lower-left corner of the board. Two more SATA-600 ports are regular connectors: the red one carries power for SATA DOM drives, while the white one shares lanes with the M.2 slot and is disabled as soon as you install an M.2 SSD with a SATA interface. So use the M.2 slot only for NVMe drives, especially since it is wired over the PCI-E bus directly to the processor.
The SlimLine U.2 x8 port, which attaches a backplane of PCI Express drives directly to the processor, deserves a special mention. Today the cost of PCI-E SSDs has caught up with SATA drives, and the only reason to install the latter is a shortage of lanes for connecting PCI-E devices. Meanwhile, according to Micron, PCI Express drives can in some cases cut access latency by a factor of 100.
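As a quick illustration of why that latency figure matters (the numbers below are hypothetical, chosen only to show the relationship, not Micron's measurements): at queue depth 1, throughput in IOPS is simply the reciprocal of access latency, so a 100x latency reduction translates directly into a 100x QD1 IOPS gain.

```python
def qd1_iops(latency_us: float) -> float:
    # At queue depth 1, each I/O must complete before the next is issued,
    # so IOPS is one second divided by the per-I/O latency.
    return 1_000_000 / latency_us

base_us = 1000            # hypothetical 1 ms access time for a slow drive
fast_us = base_us / 100   # the claimed 100x latency reduction -> 10 us
print(qd1_iops(base_us))  # 1000.0
print(qd1_iops(fast_us))  # 100000.0
```

At higher queue depths parallelism hides part of the latency, which is why real-world gains depend heavily on the workload.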
And, of course, the gem of this motherboard is the LSI3616 HBA controller, which supports SAS-12 and SATA-600 as well as PCI Express (NVMe) drives. This chip promises performance on the order of 1 million IOPS, the ability to run at SAS 12 Gb/s speeds with SAS-600 backplanes, and automatic allocation of PCI Express lanes in 4x4, 16x1 or 1x16 configurations.
The LSI3616's outputs are routed through 4 SFF8647 ports, giving you 16 SATA-600 channels, 12 SAS-12 channels, or 4 NVMe PCIe Gen3 drives. The controller's operating mode (PCI-E or SAS/SATA) is switched by two jumpers next to the SFF8647 ports, so you can, for example, use two ports for PCI-E devices and the other two for 8 SAS/SATA devices.
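To sanity-check those numbers, here is a back-of-the-envelope bandwidth budget (my own arithmetic, not the vendor's figures), assuming PCIe 3.0's 128b/130b encoding and the 8b/10b encoding that SATA and SAS links use:

```python
# Usable bytes/s per PCIe 3.0 lane: 8 GT/s with 128b/130b encoding.
pcie3_lane = 8e9 * 128 / 130 / 8      # ~0.985 GB/s per lane
uplink = 16 * pcie3_lane              # LSI3616's x16 link: ~15.75 GB/s

sata600 = 6e9 * 8 / 10 / 8            # 600 MB/s per SATA-600 port
sas12 = 12e9 * 8 / 10 / 8             # 1.2 GB/s per SAS-12 port

print(round(uplink / 1e9, 2))         # 15.75 GB/s of uplink to the CPU
print(16 * sata600 / 1e9)             # 9.6 GB/s: 16 SATA drives fit easily
print(12 * sas12 / 1e9)               # 14.4 GB/s: 12 SAS-12 drives still fit
```

In other words, even with every port saturated by sequential reads, the controller's x16 uplink is not the bottleneck in either the 16-SATA or the 12-SAS configuration.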
Naturally, the LSI3616 uses all 16 of its PCI Express 3.1 lanes, so on the board it shares them with the PCI-E4 slot through a splitter: if you install an expansion card in that slot, the controller stops working. Figuratively speaking, you get one expansion slot you can use without restrictions, while the other has been sacrificed to the SAS-12 controller and is effectively useless on this board.
Another unpleasant consequence of the LSI3616's placement is that its oversized heatsink covers the topmost PCI Express slot position in the case. It becomes a question of economy: a typical MicroATX case has 4 expansion slots, one of which it is logical to use for an FC controller and the rest for SSD sleds. You lose the bottom slot to the PCI Express splitter arrangement and the top one to the HBA heatsink, leaving you one slot bracket above the M.2 port and one full PCI Express x16 slot.
Fortunately, if you are not burdened by high moral principles and are on friendly terms with your motherboard supplier, the LSI3616 heatsink can be trimmed down a little: it is clearly oversized for a 10-watt chip (that is the maximum the RAID-less LSI3616 HBA consumes), and, warranty aside, you can then install 4 SSD drives in the freed-up PCI bays. The chip's maximum rated temperature is 85 degrees Celsius, but in normal operation the controller stays below 60 degrees even with the "modified" heatsink.
Clearly, nobody buys this motherboard to install some expansion card in the lower PCI-E4 slot and lose the built-in LSI 3616 controller in the process, but if you put SSD sleds there, keep in mind that they can block access to the COM port header, should you need it for anything.
Essentially, ASRock Rack has worked a miracle by fitting the huge LGA3647 socket, a network controller and a SAS HBA onto one MicroATX board while still leaving a SlimLine x8 port, so you can build a very fast storage system on PCI Express drives. Yet for some reason the developers ignored the empty space on the back of the board; using it, they could one day pull off the feat of putting Xeon Scalable on Mini-ITX.
BIOS, remote control and monitoring
The BIOS settings are fairly standard: operating modes for the PCI slots and SATA controllers, plus basic power options, including a limit on the processor's TDP. Note that the ASRock Rack EPC621D6U-2T16R can control the speed of PWM fans only; for 3-pin fans, only speed monitoring is available.
Like all modern server motherboards, this ASRock Rack has a dedicated 1 Gbps network port for remote management, based on the ASpeed AST2500 chip. It provides full HTML5 remote control with an interface optimized for smartphones, as well as an IPMI interface for connecting centralized monitoring systems. This is what the IPMI data export looks like in text form:
```
root@ubuntu:/tmp# ipmitool -H 192.168.1.206 -U admin -P admin sensor
3VSB             | 3.400    | Volts     | ok     | 2.880  | 3.060  | na      | na      | 3.740   | 3.900
5VSB             | 5.040    | Volts     | ok     | 4.260  | 4.500  | na      | na      | 5.490   | 5.760
CPU1_VCORE       | 1.790    | Volts     | ok     | 1.250  | 1.320  | na      | na      | 1.980   | 2.070
VCCM ABC         | 1.230    | Volts     | ok     | 1.020  | 1.080  | na      | na      | 1.320   | 1.380
VCCM DEF         | 1.210    | Volts     | ok     | 1.020  | 1.080  | na      | na      | 1.320   | 1.380
CORE_PCH         | 1.000    | Volts     | ok     | 0.720  | 0.760  | na      | na      | 1.100   | 1.150
1.05_PCH         | 1.050    | Volts     | ok     | 0.890  | 0.950  | na      | na      | 1.160   | 1.210
1.80_PCH         | 1.790    | Volts     | ok     | 1.530  | 1.620  | na      | na      | 1.980   | 2.070
BAT              | 3.140    | Volts     | ok     | 2.000  | 2.700  | na      | na      | 3.400   | 3.560
3V               | 3.320    | Volts     | ok     | 2.880  | 3.060  | na      | na      | 3.740   | 3.900
5V               | 4.980    | Volts     | ok     | 4.260  | 4.500  | na      | na      | 5.490   | 5.760
12V              | 12.100   | Volts     | ok     | 10.200 | 10.800 | na      | na      | 13.200  | 13.800
PSU1 VIN         | na       | Volts     | na     | na     | na     | na      | na      | na      | na
PSU2 VIN         | na       | Volts     | na     | na     | na     | na      | na      | na      | na
PSU1 IOUT        | na       | Amps      | na     | na     | na     | na      | na      | na      | na
PSU2 IOUT        | na       | Amps      | na     | na     | na     | na      | na      | na      | na
PSU1 PIN         | na       | Watts     | na     | na     | na     | na      | na      | na      | na
PSU2 PIN         | na       | Watts     | na     | na     | na     | na      | na      | na      | na
PSU1 POUT        | na       | Watts     | na     | na     | na     | na      | na      | na      | na
PSU2 POUT        | na       | Watts     | na     | na     | na     | na      | na      | na      | na
MB Temp          | 32.000   | degrees C | ok     | na     | na     | na      | 55.000  | na      | na
Card side Temp   | 54.000   | degrees C | ok     | na     | na     | na      | 68.000  | na      | na
TR1 Temp         | na       | degrees C | na     | na     | na     | na      | 65.000  | na      | na
CPU1 Temp        | 41.000   | degrees C | ok     | na     | na     | na      | 87.000  | 88.000  | na
PCH Temp         | 43.000   | degrees C | ok     | na     | na     | na      | 85.000  | 86.000  | na
DDR4_A Temp      | 49.000   | degrees C | ok     | na     | na     | na      | 84.000  | 85.000  | na
DDR4_B Temp      | 54.000   | degrees C | ok     | na     | na     | na      | 84.000  | 85.000  | na
DDR4_C Temp      | 50.000   | degrees C | ok     | na     | na     | na      | 84.000  | 85.000  | na
DDR4_D Temp      | 40.000   | degrees C | ok     | na     | na     | na      | 84.000  | 85.000  | na
DDR4_E Temp      | na       | degrees C | na     | na     | na     | na      | 84.000  | 85.000  | na
DDR4_F Temp      | na       | degrees C | na     | na     | na     | na      | 84.000  | 85.000  | na
Lan Temp         | 54.000   | degrees C | ok     | na     | na     | na      | 103.000 | 104.000 | na
SAS Temp         | 60.000   | degrees C | ok     | na     | na     | na      | 65.000  | na      | na
CPU1_FAN1_1      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN1_1      | 1900.000 | RPM       | ok     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN2_1      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN3_1      | 2400.000 | RPM       | ok     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN4_1      | 1900.000 | RPM       | ok     | na     | na     | 100.000 | na      | na      | na
REAR_FAN1_1      | 700.000  | RPM       | ok     | na     | na     | 100.000 | na      | na      | na
CPU1_FAN1_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN1_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN2_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN3_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
FRNT_FAN4_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
REAR_FAN1_2      | na       | RPM       | na     | na     | na     | 100.000 | na      | na      | na
ChassisIntr      | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
CPU1_PROCHOT     | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
CPU1_THERMTRIP   | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
PSU1 Status      | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
PSU2 Status      | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
PSU1 AC lost     | na       | discrete  | na     | na     | na     | na      | na      | na      | na
PSU2 AC lost     | na       | discrete  | na     | na     | na     | na      | na      | na      | na
CPU_CATERR       | 0x0      | discrete  | 0x0080 | na     | na     | na      | na      | na      | na
```
In addition to the processor, memory modules and chipset, IPMI monitoring covers the temperatures of the SAS controller and the network card; for the latter, the upper temperature limit is 103 degrees Celsius. A graphical dashboard is easy to assemble in Grafana.
In total, the board manages five fans and monitors 14 temperatures, the status of two power supplies and the server's main voltage rails. On the management side everything is very decent, so monitoring this storage system is nothing to worry about.
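If you want to feed these readings into Grafana without deploying a full IPMI exporter, the text output above is easy to parse. Here is a minimal sketch (the `parse_ipmi_sensors` helper and the sample rows are my own illustration, not part of any vendor tooling), assuming the standard pipe-delimited format of `ipmitool sensor`:

```python
def parse_ipmi_sensors(text: str) -> dict:
    """Parse `ipmitool sensor` output into {name: (reading, unit, status)}."""
    sensors = {}
    for line in text.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 4:
            continue  # skip anything that is not a sensor row
        name, value, unit, status = fields[:4]
        try:
            reading = float(value)
        except ValueError:
            reading = None  # "na" readings and discrete "0x0" states
        sensors[name] = (reading, unit, status)
    return sensors


sample = (
    "SAS Temp     | 60.000   | degrees C | ok | na | na | na | 65.000 | na | na\n"
    "FRNT_FAN1_1  | 1900.000 | RPM       | ok | na | na | 100.000 | na | na | na\n"
    "PSU1 VIN     | na       | Volts     | na | na | na | na | na | na | na"
)
print(parse_ipmi_sensors(sample)["SAS Temp"])  # (60.0, 'degrees C', 'ok')
```

From there, pushing the values to a time-series database on a cron schedule is a few more lines.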
Since the LGA3647 socket had become quite common by 2020, there are no problems with modern operating systems: Windows Server 2019 DE, CentOS 8.1, FreeNAS 11.3 and ESXi 6.7 all support the platform, and the HBA controller is detected and lets you read SMART data from connected drives. Once again, you retain full compatibility with ZFS, Btrfs and other tools that require raw access to drives.
Please note that the SAS controller requires a driver from the Fusion-MPT family, so the operating system must be no older than 2018.
Recommendations when ordering
Choosing the ASRock Rack EPC621D6U-2T16R as a storage platform, you can be sure no one will offer you an analogue at a tender: it is a one-of-a-kind board created for one highly specialized task. Note that no more than 12 SAS-12 drives can be connected, and if you use VMware ESXi 6.x or later as the host OS, you can pass through to a guest both the LSI 3616 controller itself and two SATA drives on the controller integrated into the southbridge.
Most rack server chassis today accept ATX and E-ATX motherboards, so I would not call MicroATX a necessity here. Tower cases, which can be crammed to the brim with SSD/HDD cages, are another matter for a powerful storage box with 20 gigabits of uplink. In that scenario, a Xeon Bronze processor on the ASRock Rack EPC621D6U-2T16R paired with cheap DDR4-2133 Registered memory looks harmonious and makes an excellent purchase!
Mikhail Degtyarev (aka LIKE OFF)