Synology FlashStation FS6400 review: testing speed and getting to know Synology-branded SSDs
Broadly speaking, it is time to stop talking about Synology's top models as NAS units for medium-sized businesses. A few years ago the company set a course for unification and hyperconvergence, betting that many customers would be able to move their local services onto the storage system using virtualization or Docker, while the ease with which applications are deployed and maintained would win over even the most conservative IT directors: any system administrator can handle these tasks, which means you can save on staff. Notably, Synology started down this road long before the IT community's fascination with edge computing, and today all of this vendor's products fit neatly into the concept of edge infrastructure.
Well, almost all of them: the FlashStation series consists mostly of devices meant for the data center of a mid-sized company. In its updated version, the FS6400 received a pair of 8-core Intel Xeon Silver processors running at 2.1 GHz and support for up to 512 GB of RAM. For storage there are 24 bays for 2.5-inch SATA SSDs; two 10-Gigabit twisted-pair ports serve as the front-end interfaces, and two SAS 12G ports serve as the back-end for connecting expansion shelves. In total, up to 72 drives can be attached to a single head unit. Synology has also recently started offering its own branded SSDs, with HDDs to follow. This is not yet full vendor lock-in, and the NAS remains compatible with third-party drives, but be prepared for Synology sooner or later tying drives and components to its business-class NAS line. Two and a half years have passed since our last test of a top-end FlashStation, the FS3017, and it is very interesting to see how Synology has kept up with trends in storage.
Today, Intel offers significant discounts on processors to its large customers, so it is no surprise that Synology uses the top-end Intel Skylake server platform for the top-end FlashStation series.
The NAS has two Xeon Silver 4110 processors with the following characteristics:
- 8 cores, 16 threads
- 2.1 GHz, up to 3.0 GHz in Turbo Boost mode for 1 core
- 11 MB L3 Cache
- TDP up to 85 W
Cooling is handled by heatsinks with heat pipes, blown through by hot-swappable system fans. In theory, nothing prevents replacing the CPUs with more powerful ones should Synology ever offer such an upgrade kit, but even in the stock configuration the processing power is more than sufficient for tasks such as databases, a VPN gateway, or log processing in an average organization with 200-500 seats.
Each processor has a 6-channel memory controller, but the Synology FS6400 ships in a starter configuration with 32 GB of RAM (two 16 GB DDR4-2400 ECC Registered modules). In total the system has 16 memory slots, each accepting modules of up to 32 GB (up to 512 GB overall). Although the Xeon Silver 4110 itself supports memory only up to 2400 MHz, Synology installs DDR4-2666 ECC RDIMM modules, which simply run at the reduced speed of 2400 MHz.
Memory prices are at their lows today, so if you plan to load your NAS with your own software, order an additional RAM kit right away. For built-in services such as Active Backup for Business, Mail Station, video surveillance, and iSCSI/NFS, the initial amount of RAM is more than enough and nothing needs to be expanded. And if you expect more RAM to improve performance through caching, note that with SSDs in the NAS this effect is barely noticeable; see our article "Studying SSD and RAM caching in Synology servers".
3. Disk controllers
Interestingly, the Intel C621 chipset provides 14 SATA-600 ports, which Synology could have used to connect the SSDs, but did not: the drives are attached via two SAS 12G adapters, Broadcom (LSI) SAS3216 chips. These also provide the back-end interface for connecting disk shelves, of which there may be two. It is a rare sight when a manufacturer could have cut costs but chose not to, and there are good reasons here: these are very high-performance interface controllers, each supporting more than 1 million IOPS with bandwidth of up to 6.5 GB/s.
At the same time, there is no built-in write caching in Synology NAS units, so there are no batteries or backup NVMe drives for flushing a cache. For this, Synology relies entirely on the drives' own cache and on your UPS. In normal operation you can enable use of the SSD write buffer; as soon as the UPS reports it is running on battery, this mode is disabled and the drive cache is no longer used, so that data will not be corrupted if power is cut abruptly. That said, modern enterprise-class SSDs have long included supercapacitors whose charge is sufficient to flush the cache to flash in case of a sudden blackout.
4. SSD stack
Power-loss protection is also implemented in Synology's SAT5200 drives. What is interesting about these disks? First of all, their warranty period is 5 years, matching the warranty on enterprise-class NAS units. The drives are write-tolerant, rated at 1.3 DWPD (the entire drive capacity may be overwritten 1.3 times per day throughout the warranty period), which works out to roughly 1.11 PB of total writes for the 480 GB model. Besides 480 GB, at the time of this review Synology also offered 960 and 3840 GB capacities. The maximum performance of a single SSD is rated at 67 thousand IOPS for random writes and 98 thousand IOPS for random reads.
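As a quick sanity check, the endurance figure follows directly from the DWPD rating. A naive calculation with 365-day years lands slightly above the quoted 1.11 PB; the difference is presumably down to rounding in the official spec:

```python
def endurance_tbw(capacity_gb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total bytes written (in TB) implied by a DWPD rating:
    capacity x drive-writes-per-day x days in the warranty period."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

# 480 GB drive, 1.3 DWPD, 5-year warranty -> about 1.14 PB of rated writes
print(round(endurance_tbw(480, 1.3), 1))  # -> 1138.8
```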
On paper these SSDs resemble the Samsung 860 EVO, but this is Synology's own development. First, the front of each SSD proudly bears the inscription "Made in Taiwan", making them the only SSDs I can recall that were produced outside of China. Second, they use a very capable Phison PS3112-S12DC controller. These controllers are known for extremely stable response times under random read and write loads, and can actually deliver even performance across the entire capacity of the drive.
The cache consists of two SK Hynix chips totaling 1024 MB, which is indecently large for 480 GB drives. However, hopes that this is an MLC drive were not fulfilled: storage is built from 3D TLC NA7AG64A0A chips of 128 GB each. In the event of a sudden power outage, enough electrical energy to flush data to flash is held in tantalum capacitors of 330 µF each; these are not supercapacitors, but they are considered to last practically forever.
So what can be said about these SSDs? This is not a case of a company buying an OEM SSD in China, slapping on a sticker, and declaring "here is our super-compatible drive"; this is a case of a company that knew in advance what it needed and built a drive to its own requirements. The declared endurance is huge, the speed is excellent for a SATA drive, and the operating system also estimates the drive's remaining service life based on actual load, warning in advance when a worn-out SSD should be replaced. But I would not get carried away: Synology did an excellent job supporting the Seagate IronWolf hard drive series with its proprietary IHM (IronWolf Health Management) monitoring, and it could have done something similar for its own SSDs, yet there is nothing of the kind. The user only gets ordinary SMART data, and whether any "SSD lifecycle analytics" exists under the hood is unclear.
Of course, with 96 PCI Express lanes available from the two Xeon Silver CPUs, the manufacturer could have put four M.2 slots on the motherboard and added Intel Optane support for caching, but NVMe connectivity is only available via expansion cards. Among the latter, by the way, is a very interesting board, the E10M20-T1, which combines a 10GBase-T port with two M.2 2280/22110 slots.
In general, it remains a mystery why Synology keeps supporting thick 15 mm SSDs: they are almost never found among SATA models, and dropping them in favor of 7 mm drives would allow more than 30 hot-swap bays on the front panel, which matters far more to the mass buyer.
5. Network stack
On the network side everything is fairly standard: the NAS has two 1-Gigabit and two 10-Gigabit RJ45 ports implemented with an Intel X550-T2 NIC, still the de facto standard for 10-Gigabit twisted pair. If you need optics, you can purchase an E10G17-F2 adapter (Mellanox ConnectX-3) or another card from the compatibility list, which currently includes Mellanox, Intel, and Marvell network adapters.
I would add that Synology is currently working on bringing Fibre Channel support to its NAS units in order to break into the SAN market, a segment the company has been fighting for ever since it entered the business space. Technically there are no obstacles here, and the company has already taken a first step by releasing the UC3200, a dual-controller IP-SAN iSCSI storage system, a review of which you can find on our website.
6. Power and cooling
As befits a top-end device, the Synology FS6400 has redundant power supplies made by the Taiwanese company Delta. Each unit is rated at 800 W and certified to the 80 Plus Platinum standard.
Four hot-swappable San Ace fans are responsible for cooling.
7. Software stack
On the software side, Synology has not taken any major steps recently: it is logical to assume the developers are busy with version 7 of DSM and with keeping compatibility with the updated models. In general, we constantly keep a finger on the pulse and cover new software features of Synology servers. I like watching the company grow and its products evolve with it, and as a NAS owner I know that tomorrow it will be able to do more than yesterday. It might have seemed that virtualization via Docker and Virtual Machine Manager would put an end to the development of Synology's own software, since you can now install any program on the NAS under Windows, Linux, or FreeBSD, but the developer went another way: Synology began integrating into the NAS the kinds of packages you pay royalties for in the enterprise world. They may not match the functionality of the market's top "luminaries", but everything is free and damn convenient. Here are some examples:
- Failover Virtualization Cluster
- Synology MailPlus - testing a failover mail cluster on NAS
- Video surveillance for large companies with branches
- Backing up the entire business PC
- Backing up Office 365/G Suite
As for the basic, root capabilities of the NAS, the functions traditional for modern devices are built on the Btrfs file system with its copy-on-write design. iSCSI is available with space reclamation and over-provisioning for target devices, which is useful when compression is enabled at the file-system level. That is, when you need to fit a large amount of compressible data (log files, for example) into a small space and trick the target operating system by presenting it a larger volume than physically exists :). And, of course, there are convenient scheduled snapshots, which are very handy to set up on a working folder with projects to keep a history of your work, even hourly, going back years. Unfortunately, Synology still has no "time machine"-style snapshot browser, but each snapshot can be restored as a separate folder without affecting the original.
When integrating with a VMware infrastructure, you can take advantage of the free Synology Storage Console add-on for vCenter. It lets you view the status of connected volumes and datastores from the hypervisor's web interface, and grow a Datastore when needed. With a large fleet of equipment this is probably a useful capability, but understand that, for the sake of safety, it allows no manipulation of the storage system other than growing storage. It is installed directly into vCenter by importing an .OVA template, after which the vCenter service must be restarted.
Our test methods do not stand still; they change along with the IT world and the demands that customers and integrators place on equipment. Specifically for the Synology FS6400, I was interested in the performance the built-in processors deliver for internal services in a hyperconverged setup, and in the minimum network-port-to-SSD latency the NAS adds to your infrastructure.
Test bench configuration:
Among desktop tests for evaluating CPU capability, our favorite is Cinebench, for which we hand all of the Synology FS6400's hardware resources to a single virtual machine.
The results are more indicative than practical: they matter for slow CPU-bound computation, for example running Python code or a Jupyter notebook. On my own NAS, for instance, a neural network strips the background from work photos; this is not a time-critical task, and it can be solved without a GPU. Likewise, in edge environments you can run some analytics directly on the CPU.
Testing the disk subsystem
We have three SSDs at our disposal, which is enough to estimate latencies, so we run the first random-access test in single-threaded mode.
The graphs show reference-grade stability for read access, but under a long random write the latency starts to climb after about 10 minutes. At first I thought it was related to write amplification on the SSDs, the unavoidable overhead by which flash management slows writes down, but the sequential write test did not confirm my guess.
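For reference, write amplification is just a ratio of physical to logical writes, which a trivial helper makes explicit (the counter values below are hypothetical, purely for illustration):

```python
def write_amplification_factor(nand_writes_gb: float, host_writes_gb: float) -> float:
    """WAF = data physically written to NAND / data the host asked to write.
    A WAF of 1.0 is ideal; garbage collection on a well-filled drive pushes it
    higher, which is what shows up as growing latency in long random-write runs."""
    if host_writes_gb <= 0:
        raise ValueError("host writes must be positive")
    return nand_writes_gb / host_writes_gb

# Hypothetical counters: 150 GB hit the NAND while the host wrote 100 GB
print(write_amplification_factor(150, 100))  # -> 1.5
```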
The latency rises after the same 500 seconds of writing, yet in sequential mode the SSD has absorbed disproportionately more data by that point. I am inclined to believe we are seeing throttling at work, although even if that is true, the delta is impressively small. To settle the question, we run a write-stability test: for 100 minutes we write data to the NAS sequentially in 2 MB blocks.
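The essence of such a stability run can be sketched in a few lines of Python; this is a simplified stand-in for a real fio-style job, and the demo path and sizes are arbitrary:

```python
import os
import tempfile
import time

def sequential_write_test(path, block_size=2 * 1024 * 1024, blocks=256):
    """Sequentially write `blocks` blocks of `block_size` bytes to `path`,
    fsync-ing each one; return (per-block latencies in seconds, MB/s)."""
    buf = os.urandom(block_size)              # incompressible payload
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        start = time.perf_counter()
        for _ in range(blocks):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)                      # push the block to the device
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return latencies, block_size * blocks / elapsed / 1e6

# Tiny demo against a temp file; on the real bench the block size is 2 MB
# and the target path sits on the NAS share under test.
demo_path = os.path.join(tempfile.gettempdir(), "seqwrite-demo.bin")
latencies, mb_s = sequential_write_test(demo_path, block_size=64 * 1024, blocks=8)
print(f"{len(latencies)} blocks written, {mb_s:.0f} MB/s")
```

Plotting the per-block latencies over the full run is what reveals throttling: a flat line means stable writes, while a step up partway through points at thermal or SLC-cache limits.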
Apart from a slight dip around the median, this test shows no other anomalies: the average write speed holds at 1 GB/s, and it only remains to note that this is a RAID 5 of just three 480 GB SSDs. Naturally, the speed will grow as drives are added.
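The capacity side of that configuration is plain RAID 5 arithmetic; nothing Synology-specific is assumed here:

```python
def raid5_usable_gb(drives: int, drive_gb: float) -> float:
    """RAID 5 spends one drive's worth of space on parity,
    so usable capacity is (n - 1) x drive capacity."""
    if drives < 3:
        raise ValueError("RAID 5 needs at least three drives")
    return (drives - 1) * drive_gb

# Three 480 GB SSDs -> 960 GB usable; adding drives grows both capacity
# and the number of devices sharing the write load.
print(raid5_usable_gb(3, 480))  # -> 960
```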
For server loads we use the same patterns as in our tests of enterprise-class SAN and NAS storage; you can read about them in our Synology and Huawei storage reviews. These patterns were developed by Pure Storage engineers from I/O traces of real applications, and we run each in single-threaded mode so as not to hit the limits of a three-SSD array.
In the Oracle pattern the storage system shows remarkable results, being only half as fast as servers' internal NVMe drives (see our Adata SX8200 Pro test). In the VDI and SQL patterns we simply get predictably stable latency, and VSI behaves very much like 4K random writes. To be fair, I find it hard to imagine conditions in which a software-defined storage layer (VSI = Virtual Storage Infrastructure) would be deployed on top of the Synology FS6400; it is far more likely that such a fast NAS would serve as consolidated storage in a modern digital studio.
Now let's look at workstation task patterns. Since in this mode all data processing happens on the desktop and the NAS merely stores and serves files, what matters here is CIFS/SMB v3 throughput, measured in the same single-threaded mode.
Fortunately or not, everything here is standard, and the Synology FS6400 behaves exactly as SSDs do: random writes in large blocks (the Capture One and Acronis Backup tests) drop performance to 30-50 MB/s, while importing 4K video files runs at around 250 MB/s. On one hand, it is good that there are no glitches or performance dips; on the other, it is a pity there is no special optimization either.
The fact that the Synology FS6400 head unit is designed for SSDs does not mean hard drives are out of the picture. You can connect two RX1217SAS 12-bay 3.5'' shelves or two FX2421 shelves with 24 2.5'' SSD bays each; shelf types can be mixed, allowing SSDs and HDDs to coexist in one storage system.
Among the components Synology itself sells are 16 and 32 GB memory modules (DDR4-2666 RDIMM), 10-40 Gbps Ethernet network controllers, and SSDs from 480 to 3840 GB. A full list of compatible third-party components is available on the company's website.
Recommendations when ordering
Top-of-the-line Synology devices are comparable in price to A-brand servers, so at first glance the storage system's benefits may not be visible on paper. The point is that many CIOs today know the principles of data consolidation and hyperconvergence, and Synology offers something else. I would call it "reverse hyperconvergence": not smearing storage functions across your cluster, but concentrating your cluster's functions in the storage system. In both cases you reduce the amount of hardware in your racks, but with Synology you also save on the enterprise software licenses you no longer buy and, frankly, on staff training, because ease of setup and operation is elevated to a cult here.
In addition, you shrink the attack surface available to an intruder by consolidating the infrastructure on a single vendor of software and hardware that, as its website shows, keeps products updated for ten years. On the hardware side, a pair of 8-core processors looked very impressive two years ago; today they sit at the level of mid-sized companies with a modest software ecosystem, including geographically distributed ones. It is precisely such companies that laid the foundation for Synology, allowing it to take a place among the top storage manufacturers in Gartner's Magic Quadrant.
Mikhail Degtyarev (aka LIKE OFF)
25.11.2020