Testing the Synology FlashStation FS3017 flash array in virtual environments

In our review of the Synology VMM hypervisor we saw how a customer can use a single device not only to access data, but also to run resource-intensive applications such as virtual desktops, databases and simulation systems. Support for replication, snapshots and clustering lets you build systems with "five nines" reliability, and build them cheaply: duplication of functions happens at the level of the device itself, roughly speaking, at the level of the storage box.

Three ideas from Synology that set their storage systems apart from the competition

  • Duplication at the level of High Availability systems. Synology's ideology has always differed somewhat from that of other storage manufacturers. When offering solutions for cloud systems, the company says: "You don't have to pay extra for duplicating everything installed inside one storage system: there is no need to duplicate cables, controllers, host adapters and so on. Our storage systems are affordable for any customer, and if you need maximum reliability, you simply use an N+1 scheme at the device level. Either way you save data-center space and reduce spending on electricity and network connections, while achieving reliability through our software rather than by buying idle components whose sole purpose is to keep the system running in the rare case something fails; even load balancing can be done in software." To experienced IT professionals this approach seems strange, but only until you compare the cost of two or three top-end Synology units with a single "fully hardware" module from Dell/EMC or HPE.
  • Using iSCSI as the most promising interface. In 2015 the head of Mellanox, a leader in host controllers and interconnects, said in an interview that the FC interface has no future, and on its corporate blog Mellanox lists 7 reasons why Fibre Channel should be forgotten like a bad dream in favor of iSCSI for block access.

    The advantages of iSCSI are obvious: storage can be used at any distance from the client over TCP; interconnect speeds today already reach 200 Gb/s per port; host controllers and switches are very cheap; and the existing network infrastructure can be reused for iSCSI. In practice, a customer who chooses iSCSI does not need to build a SAN or lay additional cables that inflate the budget: one network interface, one switch and one copper cable carry IPC, LAN and SAN traffic, and traffic priorities can be set at the host level.

    Yes, three years after Mellanox sent Fiber Channel to the dustbin of history, this interface is still in demand in the industry, mainly where there is already SAN infrastructure, although customers are already switching to iSCSI there.
  • Using software storage technologies. Today storage is defined not so much by hardware as by software, and Synology says: "We have no hardware RAID controllers, because modern Intel Xeon processors are so powerful that the CPU not only does an excellent job of calculating XOR, it barely notices it. It is much more important to support direct transfer of iSCSI packets, saving CPU resources both on the storage system itself and on the clients connected to it, so we support RDMA over Ethernet." That is, fast data transfer to the outside matters more than how things work inside the storage system, and it is hard to argue with that, although not every server today supports RDMA over Ethernet.
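The interconnect-speed argument above is easy to sanity-check. Here is a back-of-the-envelope sketch (my own illustration, not a vendor figure: it deliberately ignores TCP/IP and iSCSI protocol overhead, so real-world numbers are noticeably lower):

```python
# Raw 4 KiB IOPS ceiling of an Ethernet link at a given line rate.
# Protocol overhead (TCP/IP headers, iSCSI PDUs) is deliberately ignored.

def max_4k_iops(link_gbps: float, block_kib: int = 4) -> int:
    bytes_per_sec = link_gbps * 1e9 / 8          # bits/s -> bytes/s
    return int(bytes_per_sec // (block_kib * 1024))

print(max_4k_iops(10))    # one 10 Gb/s port
print(max_4k_iops(200))   # the 200 Gb/s per-port speed mentioned above
```

Even a single 10 Gb/s port has a raw ceiling of roughly 300K 4 KiB operations per second, which is why the bottleneck in iSCSI deployments is usually packet processing and latency rather than line rate.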

It goes without saying that Synology brings all the technologies it has developed over the years into its top storage systems: the Btrfs file system with snapshots, built-in tools for remote replication, including rsync-based copying, and replication between storage systems over the Internet, so you can put a desktop NAS in a remote office and make encrypted off-site backups to it on weekends, or copy them to the cloud. And all of this without additional licenses, in a single web interface that works even from a smartphone.

Synology FlashStation FS3017 Key Features

  • Format - 2U
  • CPU: 2x Intel Xeon E5-2620 v3 (6 cores, 12 threads, 2.4 GHz up to 3.2 GHz, 15 MB cache)
  • AES-NI Support
  • 64 GB DDR4 ECC RDIMM (up to 512 GB)

Storage subsystem:

  • 24 bays for 2.5" SSD/HDD with hot-swap
  • Drive interface: SAS 12 Gb/s / SATA 6 Gb/s
  • Expandability:
    • Connect 2 x 24-bay (RX2417SAS) or 12-bay (RX1217SAS) disk enclosures
    • The Synology FXC17 card is required to connect the enclosures
  • Disk enclosure connection interface: SAS 12 Gb/s
  • Maximum Raw internal storage capacity is 96 TB
  • Maximum Raw Capacity with Disk Shelves - 288 TB
  • Internal Array File Systems: Btrfs, EXT4, Btrfs Scheduled Defragmentation
  • SSD as cache
  • SSD Trim
  • RAID: F1, Basic, JBOD, 0, 1, 5, 6, 10
  • RAID Migration: Basic to RAID 1, Basic to RAID 5, RAID 1 to RAID 5, RAID 5 to RAID 6


  • 128 iSCSI targets
  • 512 iSCSI LUNs
  • File-level iSCSI LUNs with Thin Provisioning
  • Block-level iSCSI LUNs

Network connection:

  • 2 x RJ45 10Gbps ports
  • Link Aggregation/LACP support
  • RDMA/iWARP (iSER) support with expansion cards


  • 2x 800 W redundant PSUs
  • Power consumption: 321 W (access)
  • Power consumption: 156 W (HDD/SSD hibernation)

Synology VMM:

  • Maximum number of virtual machines on a native Synology VMM hypervisor: 24

Video surveillance server:

  • Up to 100 IP cameras
  • H.264 frame rate: up to 3000 FPS @ 1080p/720p, 1500 FPS @ 3M, 1000 FPS @ 5M
  • MJPEG frame rate: up to 2000 FPS @ 720p, 750 FPS @ 1080p, 540 FPS @ 3M, 425 FPS @ 5M
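These aggregate figures are easier to judge as a per-camera budget. A quick illustrative calculation (the function is mine, derived directly from the spec list above):

```python
# Per-camera frame-rate budget from the aggregate Surveillance Station
# throughput figures listed above (100 cameras maximum).

def per_camera_fps(total_fps: int, cameras: int = 100) -> float:
    return total_fps / cameras

print(per_camera_fps(3000))  # H.264 @ 1080p with all 100 cameras
print(per_camera_fps(1000))  # H.264 @ 5M with all 100 cameras
```

In other words, with a full complement of 100 cameras the FS3017 can sustain full-motion 30 fps at 1080p, dropping to 10 fps per camera at 5-megapixel resolution.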

Available expansion cards:

  • FXC17 (12 Gb/s SAS controller with 2 external ports for connecting one or two disk enclosures)
  • E10G15-F1 (10G Ethernet NIC with 1 SFP+ port)
  • E10G17-F2 (10G Ethernet NIC with 2 SFP+ ports and RDMA over Ethernet support)
  • DDR4 ECC Memory Modules
  • Warranty - 5 years

A few specifications deserve attention. First, the network connection: the built-in ports are 10-Gigabit RJ45, meaning the FS3017 can be installed into existing network infrastructure without laying additional cables. We will not debate which is more promising, copper or optics, since Synology offers two 10-Gigabit cards for optics; I recommend the 2-port E10G17-F2 in any case, as it supports RDMA over Ethernet. Synology has no 10-Gigabit cards with RJ45 ports, but you can always install a card from another manufacturer: the compatibility list on the Synology website includes all current HCAs from Intel, Emulex and Mellanox, including 40 Gb/s models.

As for the warranty, one of the most important criteria when choosing a device, it is 5 years by default, although Synology does not yet offer an extension package beyond that period.

Design of Synology FlashStation FS3017

The Synology FS3017 is a 2U chassis with the entire front section dedicated to drive bays.

Disks or SSDs fit into plastic sleds, each with a lock to prevent accidental removal of the drive.

The back of the Synology FS3017 looks almost empty: just two 10-Gigabit ports, two USB 3.0 ports and an RS-232 port for service needs. In the photo, the storage system is fitted with an E10G17-F2 card with two SFP+ slots.

Incidentally, the E10G17-F2 expansion board is a Mellanox ConnectX-3 Pro (see the ConnectX-3 series description), complete with native markings. The ConnectX-3 series has a very good hardware iSCSI offloading engine, so, jumping ahead, I will say that in our testing we hit the limits of the test bench while the load on the Synology FS3017 processors never rose above 20%.

One more interesting point: the chip on the ConnectX-3 Pro supports data transfer at up to 40 Gb/s per port, and since this series of network cards is already supported in Synology DiskStation Manager, the manufacturer may well release 40-Gigabit expansion cards for the FS3017.

Two Delta DPS-800AB-30A units power the storage system in fault-tolerant mode. They are rated at 800 W with 80Plus Platinum certification, but measurements showed a relatively low power factor of only 0.83 instead of the expected 0.9.

Cooling is handled by four easily replaceable Sanyo Denki 80x80x32 mm fans (9700 RPM, 86.5 CFM): if one fails, the repair takes only a few minutes, although the storage system still has to be powered off for it.

Structurally, the Synology FS3017 does not differ much from a typical two-socket server. The two Xeon E5-2620 v3 processors hide under large heat sinks cooled by a common airflow directed through a massive duct. The motherboard has 16 DDR4 memory slots, 4 of which are occupied by 16 GB Samsung modules.

Three LSI SAS 9300-8i host adapters connect the SSD/HDD drives, supporting data transfer rates of up to 12 Gb/s for SAS devices and up to 6 Gb/s for SATA.

By default, Synology FS3017 has 2 free slots:

  • PCI Express x16 for the FXC17 SAS controller required for connecting disk enclosures
  • PCI Express x8 used for network cards

If you need to expand your storage space, the Synology FS3017 offers two shelves to choose from: the RX2417SAS with 24 2.5" bays and the RX1217SAS with 12 3.5" bays. You can use two expansion shelves in total, of the same or different types, and install SSDs in any of them. Global Hot Spare is supported, so a hot-spare disk can reside either in the head unit or in a disk shelf.

Scheme for connecting disk shelves

The connection uses the 12 Gb/s SAS interface, which daisy-chains the two shelves and links them to the controller installed in the head unit. There is no cable-level duplication, but as discussed at the beginning of the article, fault tolerance comes from connecting two or more storage systems with the High Availability package built into the DiskStation Manager operating system.

About DiskStation Manager

We have talked about the features of Synology's DiskStation Manager many times: the same operating system runs on both entry-level and high-end storage systems, so anyone who has ever mastered even the simplest 2-bay NAS can easily set up a top-end FlashStation. To avoid overwhelming the reader, we will cover only the DSM features that matter in the world of fast flash arrays.

Synology DSM

First, a note on creating a RAID group. Within one storage system you can create several RAID arrays, for example on HDDs or SSDs, and create your own partitions on each of them. You can also skip creating a partition on a RAID array if you plan to use block-level iSCSI LUNs, but in that case Thin Provisioning will not be available. File-level iSCSI LUNs inside a shared partition look like a simpler solution, but according to the manufacturer they may be a little slower than block-level ones. We will check this during the test phase.

Built-in resource monitoring will show you the performance of not only each network interface, but also each iSCSI target, both in megabytes per second and in IOPS, which can be useful in assessing the need to scale storage systems.

Performance shortage alerts

Moreover, you can set up performance alerts for when network or disk-level latency exceeds a given threshold; when triggered, the notification can be sent to the administrator by e-mail or as a push message. This is invaluable information that no application writes to its log files, and there is nowhere else to get it but from the storage system itself.

Synology VMM

And of course, Virtual Machine Manager, now able to allocate twenty processor cores and 60 gigabytes of memory to virtual machines, looks fully grown up. This is where our testing begins.


For testing, we used a test bench with the following configuration:

Servers #1 and #2 - IBM System x3550, each with:

  • 2 x Xeon X5450
  • 16 GB RAM
  • VMware ESXi 6.0
  • 2x 15K SAS 146 GB HDD
  • Mellanox ConnectX-2

Synology FS3017:

  • 64 GB RAM
  • E10G17-F2
  • 14x SSD Samsung MZ-7KM480E, 480 Gb, SATA-600
  • RAID F1
  • Btrfs

The test servers were connected directly to the storage system with Intel XDACBL3M DirectAttach cables. On the VMware ESXi test bench, 4 to 16 virtual machines running Debian 9 x64 were deployed for the various tests; the virtual machines were managed from the command line over a 1-Gigabit network interface.

Intel X520-DA2

The Synology FlashStation FS3017 carries 14 Samsung SM863 (MZ-7KM480E) 480 GB SSDs combined in RAID F1. Each SSD promises up to 98,000 read IOPS and 19,000 write IOPS, sequential speeds of 510 MB/s reading and 485 MB/s writing, and power consumption from 1.3 W at idle to 2.8 W while writing.

The test was carried out with two types of iSCSI LUNs across 4 iSCSI targets: first, a partition with the Btrfs file system was created, inside which 16 iSCSI LUNs of 100 GB each were created with Thin Provisioning; then the partition was deleted and the same 16 100 GB iSCSI LUNs were created in the unallocated area of the RAID group. Each guest connected to its own iSCSI target, and although the network topology did not allow cross-traffic between the 10-Gigabit ports, iSCSI Multipath was enabled.

For testing we used the VDBench package developed by Sun (now Oracle). It is a scalable, Java-based benchmark that can batch-run tests across multiple virtual machines using block-level access, independent of the file system, so it measures storage speed at the block level. By running the benchmark workers on 15 virtual machines and the master on the 16th, we get aggregated storage performance from 16 clients, much as it would look in real conditions. From test to test, the number of virtual machines varies to extract the storage system's maximum potential.
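For illustration, a minimal VDBench parameter file for a single-host 4K random read run might look like this (a sketch only: the device path, names and durations are assumptions, not the exact files used in this test; the multi-host runs additionally declare `hd=` host definitions, one per worker virtual machine):

```
* Storage definition: raw block device, bypassing the page cache
sd=sd1,lun=/dev/sdb,openflags=o_direct

* Workload definition: 4 KiB blocks, 100% random, 100% read
wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100

* Run definition: unthrottled, 32 threads, 5-minute run, 5 s reporting
rd=run1,wd=wd1,iorate=max,threads=32,elapsed=300,interval=5
```

The three-layer structure (storage, workload, run definitions) is what makes it easy to sweep block sizes and read/write mixes across the same devices in one batch.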

Before the main test, the storage space is pre-filled to exclude the effect of "fresh, blank SSDs" on speed. There are no firm recommendations for this pre-fill procedure; the Storage Performance Council, for example, spends up to 1000 hours on it in some SPC-1 tests. We do not have 1000 hours; besides, at a write speed of 900 MB/s our entire array can theoretically be written in about 100 minutes, and given SSD firmware optimizations that steer each write to a fresh sector, we pre-fill for 120 minutes.
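That time estimate is easy to reproduce (a back-of-the-envelope sketch of my own, assuming raw capacity and a sustained 900 MB/s; RAID F1 parity overhead reduces the usable capacity and brings the figure closer to the 100 minutes mentioned above):

```python
# Rough time to overwrite the full array once during pre-fill.

def fill_minutes(drives: int, drive_gb: int, write_mb_s: float) -> float:
    total_mb = drives * drive_gb * 1000   # decimal GB -> MB
    return total_mb / write_mb_s / 60

# 14 x 480 GB SSDs at ~900 MB/s sustained sequential write
print(round(fill_minutes(14, 480, 900)))
```

Raw capacity works out to roughly two hours of sequential writing, which is why a 120-minute pre-fill is a reasonable compromise.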

The first test is the traditional 4K Random Read 100% with 32 threads per virtual machine. Synology claims 500K IOPS for the FS3017. We understand up front that this is the theoretical maximum across the four 10-Gigabit interfaces possible in the FS3017, two copper (RJ45) and two optical (SFP+). Our test bench has only the optics, the processors are far from the newest, and the Intel X520-DA2 lacks the hardware iSCSI offloading and iSER support with which Synology reached that speed, so we are not expecting 500K IOPS.

4K Random

In this test the Synology FS3017 processors showed a load of less than 40%, which is typical when iSCSI offload happens on the network expansion card, so there is no real reason to doubt that 500K IOPS is a realistic figure for this storage system.

4K Random 70/30

The next tests are 4K Random at 70% read/30% write, the traditional 70/30 mix, and 100% write; they show that latency is practically unchanged, meaning we are still very far from the limits of the FlashStation FS3017.

4K Random Write

The write test shows latency rising to 4 ms and is limited by the capabilities of the test equipment. Let's see what a similar run with a 64 KB transfer size shows.

Test with 64 KB transfer size

Test 64kb 70/30

64k random write

The write speed apparently depends on the performance of the installed SSDs.

Real World Problem Patterns

From synthetic tests, let's move on to emulating real workloads. The VDBench package can replay patterns captured from real tasks by I/O tracing programs: special software records how an application, whether a database or something else, works with the file system, including the percentage of reads and writes, the mix of random and sequential operations, and the block sizes used. We used patterns captured by Pure Storage for three cases: VSI (Virtual Server Infrastructure), VDI (Virtual Desktop Infrastructure) and SQL Database. Each test ran with 16 threads per virtual machine, creating a queue depth of approximately 64.
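The queue-depth figure follows directly from the thread count, and Little's law ties it to the latency you should see at a given IOPS level. A small sketch (the 100K IOPS figure is illustrative, not a measured result from this test):

```python
# Aggregate queue depth is VM count x threads per VM. Little's law
# (QD = IOPS x latency) then predicts the mean latency at a given load.

def aggregate_qd(vms: int, threads_per_vm: int) -> int:
    return vms * threads_per_vm

def mean_latency_ms(iops: float, queue_depth: int) -> float:
    return queue_depth / iops * 1000   # seconds -> milliseconds

print(aggregate_qd(4, 16))             # 64, as in the patterns above
print(mean_latency_ms(100_000, 64))    # latency implied by 100K IOPS
```

This is why a flat latency curve under growing load is such a good sign: it means the array is absorbing the queue rather than letting it stack up.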

Virtual Desktop Environment pattern

VDI test

Judging by the maximum latency our test bench could generate, performance is very high: roughly speaking, 64 single-threaded VDI connections go entirely unnoticed by the storage system.

VSI Pattern

VSI test

The situation is similar when the storage system serves as virtual infrastructure storage.

SQL Pattern

SQL test

When working with databases, the latency curve of file-level iSCSI LUNs soars vertically, so for these tasks you should definitely use only block-level iSCSI.

Oracle pattern

Oracle test

Performance in real workloads differs greatly from the values storage vendors measure with 4 KB random reads. The response time in all three tests stayed under 4 ms, roughly five times below the recommended threshold beyond which you should start thinking about a storage upgrade.

Sequential access

We tested sequential access with standard read/write ratios of 100/0, 70/30 and 0/100, with 16 threads creating a queue depth of about 64 and varying block sizes.

Sequential reading

Sequential access 70/30

Sequential write

Block-level writes are much faster. This does not appear to be a caching effect, but rather a property of the Btrfs file system.

The speed measurements support the following conclusions: the FS3017's performance is sufficient for serving virtual machine hosts and for storing database files and logs. Under our tests the FS3017 processors barely notice the load, so I have no reason to doubt that the storage system can deliver performance in the region of 500K 4K IOPS and 6.4 GB/s.

Energy efficiency and environmental friendliness

The Synology FS3017 makes a lot of noise in operation, so it should not be installed in the same room as the staff. The head unit clearly lacks a super-quiet mode, because in pure storage mode, when no virtual machines run on Synology VMM, only the expansion cards get warm.

Typical power consumption of FS3017 in tested configuration is shown in diagram below.

Power consumption

The power efficiency of the Synology FS3017 is impressive: the storage system consumes less than 180 W during typical file operations. Power consumption is of course higher when virtual machines run on the storage system, but in its primary role the FS3017 does not even require a particularly powerful UPS.

Ownership and economic performance

The retail price of the Synology FS3017 is 880 thousand rubles, plus another 6 thousand rubles for the rack-mount rails. By all-flash storage standards, that is next to nothing. The purchase and ownership cost, including electricity over the 5-year warranty period, is as follows:

Cost of ownership for 5 years

Let's consider the device's relative economic indicators based on the real-world workloads in the tested configuration.

Relative efficiency

According to the declared specifications, a Synology FS3017 with 24 Intel DC S3710 SSDSC2BA800G4 SSDs delivers 541,157 random read IOPS in 4 KB blocks. In that configuration the storage system cost 2,341,600 rubles, giving a relative read efficiency of 13.2 IOPS/$ at a 4 KB transaction size.
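The efficiency figure can be reproduced as follows (a sketch: the ruble price and IOPS come from the text, while the ruble/dollar rate is my assumption for the review period):

```python
# Relative read efficiency in IOPS per dollar. The ruble price is from
# the text; the RUB/USD rate is an assumed value for the review period.

RUB_PER_USD = 57.1  # assumed exchange rate

def iops_per_dollar(iops: int, price_rub: float) -> float:
    price_usd = price_rub / RUB_PER_USD
    return iops / price_usd

print(round(iops_per_dollar(541_157, 2_341_600), 1))
```

The same function can be applied to competing arrays to compare price/performance on an equal footing, as long as the same exchange rate and block size are used.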

Purchase process

Although the Synology FS3017 is a top-end product, it is run-rate equipment, which means the company's distributors keep it in stock and the price is fixed and publicly available. If you need flash storage, you simply contact a company that sells Synology storage systems and buy it with delivery in 1-2 days. You do not need to negotiate with a sales manager who first quotes triple the price out of thin air and then grants discounts. Transparent pricing and availability let you buy a storage system of this class "right now" and put it into production within 2 hours. For this class of storage, that is a rarity and a huge plus for the customer.


The FlashStation FS3017 is Synology's first all-flash enterprise array, unleashing the full potential of SSDs and of the Virtual Machine Manager virtualization technology. The storage system shows excellent results both in speed and in purchase and operating costs. The declared performance of 500 thousand IOPS means the FS3017 can serve as the head unit of your own virtual infrastructure for dozens of virtual applications.

I consider the mere two network interfaces in the basic configuration a disadvantage, but it is easily fixed by buying expansion cards, which are not proprietary and therefore cost about the same as typical host interfaces. And, of course, I still do not understand the strange habit of not bundling rack rails with rack-mount devices.

At the same time, in economic terms the FlashStation FS3017 is one of the leaders among the offerings on the market, even accounting for head-unit duplication under High Availability. Economic efficiency grows with every modern technology the customer adopts: iSER for iSCSI, virtualization inside the storage system itself, off-site copying of important data to the cloud, and so on. We have seen all of this in Synology's small-business products, and now it is available to larger customers.

Mikhail Degtyarev (aka LIKE OFF)
