Review of the Wexler GPR109 server platform: a 2-way 1U machine with PCI Express Gold Fingers technology
These servers are built from motherboards and chassis made by AIC (Advanced Industrial Computer), a long-established OEM of server hardware, storage systems and various industrial electronics. One advantage of AIC equipment is an extended product life cycle of more than 5 years, while the major server vendors rarely go beyond 3 years; for professional deployments this matters a great deal. AIC's server line-up includes some quite unique solutions, such as a 2-node 4-processor 1U server with a redundant power supply, or motherboards supporting up to 72 PCI Express Gen 2 lanes. We will find out how this looks not on paper but in the metal, using the Wexler server built on the Wexler GPR109 server platform (hereinafter simply Wexler GPR109).
Wexler GPR109 Features
Wexler GPR109 is a 2-processor Intel-based barebone server platform for high-density computing. It is marketed as a low-cost solution for Web applications, data processing, email and digital content finishing. It will suit most workloads, except perhaps those that put a heavy load on the disk subsystem, since it accommodates only up to four 3.5" hot-swappable SATA hard drives.
Key Features of Wexler GPR109
- Form Factor - 1U
- Motherboard - AIC family CASTOR
- Supports up to two Intel Xeon X5xx0 Nehalem processors
- 12 DIMM slots for DDR3 800/1066/1333 Registered ECC SDRAM, up to 192 GB
- Chipset - Intel 5520 Tylersburg-EP + Intel ICH10R
- 1 PCI Express 16x slot
- 1 PCI Express 8x slot
- Possibility to install full-size, 2-slot expansion cards
- 1 internal low profile PCI Express 8x slot
- Compatible with the AIC Aquarius riser family. PCI-X, PCI 32, PCI 64 support
- Intel 82571EB Gigabit Ethernet Dual Port Controller
- Intel 82574L Single Port Gigabit Ethernet Controller
- Intel 82567LM Gigabit Ethernet Single Port PHY
- 4 RJ45 ports
- Intel ICH10R Controller
- 4 hot-swappable SATA2 drive bays
- RAID 0, 1, 10 support
- IPMI 2.0 Aspeed AST2050 Controller
- iKVM + Media redirection, SMC support
- Full remote hardware monitoring, with AES-128 encryption, with machine power on/off
- Power supply unit - EMACS, 500 W
- Supplied complete with rack mount rails
Worth noting from the specifications are the lack of RAID 5 support, the internal low-profile PCI Express 8x expansion slot, and the use of the Aspeed AST2050 processor for remote administration, which supports IPMI over LAN. But the most interesting thing is the Max IO technology, which allows converting the available thirty-six PCI Express Gen2 lanes into almost any combination of PCI Express Gen1 and Gen2 slots, including support for legacy PCI 32-bit, PCI 64 and PCI-X. The choice is made with jumpers on the motherboard and a large family of risers for the AIC Aquarius motherboard. This versatility greatly simplifies commissioning new hardware and reusing expansion cards you already own. Well, let's see what the server is like in practice.
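The AST2050's IPMI-over-LAN support means the machine can be powered on and off and its event log read remotely with a standard client such as ipmitool. A minimal sketch of how such calls are typically formed; the BMC address and credentials below are placeholders, not values from this review:

```python
# Build the argument list for a remote ipmitool invocation over the
# IPMI v2.0 "lanplus" interface, as supported by BMCs like the AST2050.
def ipmitool_cmd(host, user, password, *action):
    """Return the ipmitool argv for an IPMI-over-LAN command."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *action]

# Typical remote operations (hypothetical BMC address and credentials):
power_on = ipmitool_cmd("192.0.2.10", "admin", "secret",
                        "chassis", "power", "on")
sel_list = ipmitool_cmd("192.0.2.10", "admin", "secret",
                        "sel", "list")
```

On a live network, passing such a list to `subprocess.run()` would execute the command; `sel list` dumps the same event log whose cryptic codes are discussed later in this review.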
Wexler GPR109 construction
The first impression of the Wexler GPR109 platform is mixed: on the one hand, the rough-hewn design conveys solid reliability with none of the frills that nobody wants to pay for. On the other hand, everything seems a little too simple. We received a version without a built-in DVD drive for testing, but that won't hurt us.
The front panel is straightforward: trays for four SATA disks span the entire width, and sturdy handles for carrying the machine and pulling it out of the rack are installed along the edges of the case, each with two ears for mounting bolts. Solid, no complaints there.
The hard drive trays have ventilation grilles on the front panel, as they should, although the grilles cover only a small area and the holes themselves are rather small. More openings would mean better airflow, but for SATA drives this is more than enough. Larger ventilation grilles occupy the upper part of the front panel, but only on the front side. A DVD drive can be installed on the right, above two of the hard drives.
On the left, above the first HDD, is the information panel. It has one USB 2.0 port, Power and ID buttons, and LED indicators. Even though the server's motherboard has an ATI ES1000 video chip with output to two monitors in clone mode, there is no second VGA port on the front panel. And with only one USB 2.0 port at the front, trying to connect a KVM from the front of the server is pointless. To the right of the USB port are two recessed buttons that can only be pressed with something like a pen refill; their purpose is not labeled.
In general, the lack of labels is a strange trait of the Wexler GPR109 case: they are missing on the hard drive trays, which is forgivable (stickers can be used), and on the LED status indicators, which is more serious. We hope this is just a symptom of a brand-new product, since we tested the first Wexler GPR109 sample in Russia. Another distinctive feature of the chassis is the ability to lock the HDD trays with a key; in some environments this may well be required.
The server chassis has ventilation holes on the sides: one set for additional air intake and another, towards the back, for exhausting the hot stream. Moreover, the rear grille wraps around a corner, so air can escape both sideways and downwards - good for cramped 19-inch cabinets.
There are not many ventilation holes at the back of the case, and some of them are placed on the top cover. The server has four network ports - enough for modern environments using VMware, iSCSI and dedicated storage systems. The network ports, like the two USB 2.0 ports, the VGA port and RS-232, are not labeled, but we have already covered that. Above the RS-232 port is a slot for a PCI Express 8x board without external ports; on the right, one above the other, are two PCI Express 16x slots.
Let's see how the internal space of the server platform is distributed.
Inside, the Wexler GPR109 server platform is surprisingly empty for a 1U server, a form factor that is usually packed to the brim. Everything is different here: the free space is partly down to the design of the motherboard and partly to the budget positioning of the platform itself. For example, you will not find extra connectors for USB flash drives or SD cards holding an operating system. That said, installing not just a flash drive but even an additional 2.5-inch disk would not be difficult.
The cooling system consists of 6 fans: 5 of them 40x40x28 mm (12 V, 10 W, 15.3 CFM) and one double unit, 40x40x48 mm (12 V, 19.2 W, 32.5 CFM). The fans are made in the Philippines by the Japanese company Sanyo Denki. Using two fan types is a minus, since you will have to keep more spares on hand for your server fleet. Each fan is attached to a steel frame with silicone pins, so they are easily removable, and vibration does not reach the case in any cooling mode.
The fans are connected to the control board with long wires tied into a single bundle, so you will have to sweat a little when replacing any of them.
The Wexler GPR109 platform is equipped with an Emacs P1H-5500V power supply from the Taiwanese company Zippy Technology Corp, which has extensive experience in industrial and server power supplies. Two 4-pin Molex connectors are left free, although there is nothing to connect them to here.
Naturally, the most interesting part of the Wexler GPR109 platform is the AIC Castor motherboard. Originally designed for 2-node servers, it measures 431x160 mm. At the same time, the AIC Castor board is not inferior in functionality to many EATX motherboards, which allows it to be used in various configurations: servers, storage systems and workstations. For an enterprise with a large IT infrastructure, such versatility is a huge plus. Note that the board has 2 ATX power connectors, for easy connection in 2-node servers whether it sits to the left or to the right of the power supply.
Given its size, the board turned out very dense and tightly assembled, yet there was still room for stickers with the MAC addresses of the network controllers. It supports up to two Intel Xeon 5500 series processors and up to 192 GB of DDR3 SDRAM 800/1066/1333 MHz in 12 DIMM slots.
Network interfaces are provided by three Intel controllers: the dual-port 82571EB with a PCI Express 4x Gen 1.0a interface, the single-port 82574L, and the 82567LM physical-layer interface. None of them offers TOE or iSCSI hardware acceleration, but the 82567LM PHY supports Intel vPro technology, which exists to ease computer administration in large infrastructures.
In the middle of the motherboard there is one PCI Express 8x slot, which serves to install internal expansion cards through a special riser. A board installed here must not have a metal bracket, so the wisest use of this slot is a discrete RAID controller.
The AIC Castor motherboard implements Rackmount Technology Extension (RTX), which allows the full height of the server case to be used for expansion boards. Physically it looks like this: instead of PCI Express ports, there are gold-finger contacts on the edge of the board, like those on expansion cards. The riser connects from the side; it has two ports on one side for the motherboard and two on the other side for expansion cards. Thus, in the expansion card compartment the motherboard does not take up the precious height of the case, and the riser is screwed directly to the chassis from below. As mentioned at the beginning of this article, by taking advantage of the Max IO technology and the Gold Fingers connector, you can split 32 PCI Express Gen2 lanes into PCI Express Gen1, Gen2, PCI-X, PCI 32 and PCI 64 slots simply by switching jumpers on the board and buying the appropriate riser. For a 1U chassis, for example, you can get 2 PCI-X slots, or 1 PCI Express 8x plus 1 PCI-X. For 2U and 3U enclosures, the choice of expansion slots is even wider.
The PCI Express slots on this riser do not have rigidly dedicated lanes, and either the top or the bottom slot can be used as an x16 slot. PCI Express slot configuration is done with jumpers on the motherboard. Hardly any other server motherboard maker can boast such I/O flexibility.
The ability to install two PCI Express cards stacked on top of each other, plus PCI Express 16x support - what does this tell us? That you can install a dual-slot GP-GPU card or a modern video card into the Wexler GPR109 server platform. What is a video card for in a server? Let's figure it out.
The world is moving towards massive parallelization of program code. Already today, most professional and server applications use parallel threads, saturating physical and virtual processor cores. In the near future, we will see applications that harness the computing power of graphics processors from ATI/AMD and nVidia. With modern GPUs, around 1 TFLOPS per card is easily achieved, so in terms of price per teraflop and cost of ownership such a system is beyond competition compared with systems built solely on x86 processors. Some applications already use the power of graphics chips: Kaspersky Lab, for example, uses nVidia CUDA to compare files and respond quickly to new threats. Nvidia itself claims that its graphics chips can accelerate not only DCC tasks, but also databases, mathematical calculations, and so on. The demand for powerful video cards in servers is evidenced at least by the fact that the system in 5th place in the November 2009 Top500 supercomputer ranking combines Xeon E5540/E5450 processors with Radeon HD 4870 video cards.
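The pattern described above - splitting one job into chunks and fanning them out over many execution units - can be sketched in a few lines. This is only an illustration of the work-splitting idea, not GPU code; in CPython, CPU-bound work would normally go to processes rather than threads because of the GIL:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def busy_sum(n):
    """A CPU-bound task: sum of square roots below n."""
    return sum(math.sqrt(i) for i in range(n))

def run_serial(chunks):
    """Process every chunk one after another on a single thread."""
    return [busy_sum(n) for n in chunks]

def run_parallel(chunks, workers=4):
    """Split the same work across worker threads, the way server
    applications fan work out over a Xeon's physical and HT cores."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(busy_sum, chunks))

# Both paths produce identical results; only the scheduling differs.
chunks = [50_000] * 4
assert run_serial(chunks) == run_parallel(chunks)
```

A GPU takes the same idea further, replacing a handful of worker threads with thousands of lightweight ones, which is exactly what CUDA and FireStream expose to applications.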
Naturally, video cards must be adapted to meet server reliability requirements. Both ATI and nVidia have introduced boards designed specifically for computing - ATI FireStream and nVidia Tesla. In essence, these are top-end single-GPU cards, with one DVI port or none, increased memory capacity, a more reliable cooling system and special drivers.
But we digress. We tried to install a Zotac GTX260 graphics card into the Wexler GPR109 server platform, solely to check that it fits. It was not an easy task: one fastening element of the case kept catching on the card's steel bracket. After some tinkering, removing the riser and performing miracles of finger flexibility, the board was installed in the server after about 30 minutes. It looks unusual, of course.
But then came the disappointment: powering the GTX 260 requires two 6-pin PCI-E connectors, or four 4-pin Molex connectors with adapters. The Emacs P1H-5500V power supply has neither, and in any case 500 W would hardly be enough for a dual-processor system topped with a powerful video card. Perhaps another time.
As for ordinary single-slot expansion cards, all the conveniences are here: almost the entire length of the server case is available, and the support under the rear edge of the board is adjustable for both full-height and low-profile cards. And, of course, the PCI Express slot bay is cooled by two fans - even the hottest 10G controllers need not fear overheating.
The AIC Castor motherboard has a fairly basic AMI BIOS. Among the expected settings, IPMI over LAN configuration stands out, but simple functions such as choosing a power consumption profile are missing, and hardware monitoring is completely absent from the BIOS.
The only information about the hardware state and server errors is displayed as codes in the IPMI log, and what they mean is understandable only to specialists from the service department. Perhaps the manufacturer should pay close attention to this point.
The following configuration was used for testing:
- Processor - 2 x Xeon X5550 QC, HT, 2.66 GHz, 8 MB L3
- Memory - 12 GB PC3-10600 ECC Unbuffered
- Disk subsystem - Seagate 7200.12 SATA, Mirroring
- Operating system - MS Windows Server 2008
We will be comparing Wexler GPR109 with Dell PowerEdge R610 server, which will be reviewed shortly. But we will exclude everything related to the disk subsystem from the tests, since the comparison would be incorrect.
In CPU-bound tests the two servers show essentially the same speed. The slight differences fall within the margin of testing error and may also stem from different processor power-management settings in the motherboard BIOS: the Dell machine was configured for maximum performance, while the Castor board in the Wexler GPR109 was not.
Wexler GPR109 is not just a barebone platform, which by itself would already be enough today; it is a set of technologies that should make your infrastructure easier to maintain and, as a result, more reliable. First of all, there is the universal motherboard, usable in rack servers as well as in towers, network storage and workstations - an advantage for faster commissioning and maintenance that is hard to overestimate. In addition, the board's expansion capabilities match, or even surpass, many rivals in this price category. The Wexler GPR109 server platform confirms this once again: you can install almost any PCI Express expansion card, up to dual-slot GP-GPU modules. And with 4 network ports on board, no discrete network card is needed for virtualization applications or for connecting to different subnets.
Testing a pilot sample without power-management settings did not allow us to estimate the cost of ownership in terms of electricity, but the initial platform cost is around 66,000 rubles, which makes it possible to assemble configurations 30-40% cheaper than comparable Dell or HP machines. And if, by the time it goes on sale, the manufacturer fixes such teething problems as the modest BIOS and the lack of bundled software and case labels, this will be an excellent option both for simple web hosting and for applications that demand maximum unification of infrastructure hardware, or for computing tasks using GP-GPU modules.
Mikhail Degtyarev (aka LIKE OFF)