QNAP TS-h1886XU-RP Review: meet the ZFS-based QuTS hero operating system

In the 2020s, QNAP could no longer avoid a file system with copy-on-write support. Three years ago the company already had a powerful dual-controller machine based on the FreeBSD kernel in its lineup, but its incompatibility with the legacy software packages available for the main QNAP line significantly narrowed its usefulness. Meanwhile, the Linux file-system world saw notable changes: attention shifted from Btrfs to the ZoL (ZFS on Linux) project. Originally created for Solaris and later ported to FreeBSD, ZFS was first rewritten as a Linux kernel module and then included in such popular distributions as Ubuntu and Alt Linux, while development of both branches, the FreeBSD one and the Linux one, was merged into the common OpenZFS project. Naturally, integrating ZFS into QNAP's business models was only a matter of time: in September 2020 the QuTS hero operating system was released, and with it a whole range of NASes with ZFS support.

I should say right away that QuTS hero is available only for QNAP's business line of NASes, including our current test subject, the TS-h1886XU-RP. QNAP has split the ZFS direction into a separate operating system, QuTS hero, which drops support for the traditional EXT4. But nobody forces you to use it: when you first set up the NAS, you choose which OS to install, the traditional QTS or QuTS hero, and then keep using your choice. QNAP itself will now maintain one OS with EXT4 and no fewer than two with ZFS: QuTS hero based on Linux, and QES based on FreeBSD for the top-end models. So why is ZFS so important in the first place?

ZFS is faster when working with lots of small files

For systems related to machine learning, what really matters is fast handling of small files. When reading or writing tens of thousands of 1 MB files, ZFS is claimed to be 9-10 times faster than EXT4; where we once had to pack thousands of files into an archive for faster migration between devices, that should no longer be necessary, which in turn would speed up every office application using the NAS as file storage. In our own tests, however, I could not reproduce a situation in which copying a 9 GB folder containing 45 thousand files showed different speeds on ZFS and EXT4: the results were always within the margin of error.
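A test of this kind is easy to script. Below is a minimal Python sketch (my own illustration, not the tool actually used in the review) that writes a batch of small files and reports the rate; to compare file systems, point it at an EXT4 share and a ZFS share in turn, and scale the count and size up to the review's 45,000 x 1 MB:

```python
import os
import tempfile
import time

def write_small_files(root: str, count: int, size: int = 1024 * 1024) -> float:
    """Write `count` files of `size` bytes each into `root`; return seconds taken."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(root, f"file_{i:05d}.bin"), "wb") as f:
            f.write(payload)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:  # replace with a path on the NAS share
    # Scaled down; use count=45_000 and the default 1 MiB size to match the review.
    elapsed = write_small_files(d, 1000, size=4096)
    print(f"1000 files in {elapsed:.2f} s ({1000 / elapsed:.0f} files/s)")
```

Reading the same tree back (or copying it with `shutil.copytree`) gives the read side of the comparison.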

RAID TP: surviving the failure of three HDDs, plus a triple mirror

For archive systems, file repositories, and video-surveillance systems, which often use a single large-volume disk array, RAID TP (QNAP's analog of RAID-Z3 in ZFS) will be a gift: it can withstand the simultaneous failure of three hard drives. It is especially nice that the QNAP developers have, as usual, simplified the native ZFS hierarchy of zpools and vdevs down to familiar, understandable RAID levels. For example, to create an analog of RAID 50 in plain ZFS, you first have to create the necessary number of RAID-Z1 groups (the analog of RAID 5) and then stripe them together, which can be confusing even in the best GUI. On a QNAP you simply create the RAID 50 or RAID 60 you already know, without thinking about vdevs and zpools; the hierarchy still exists in the depths of the operating system, but the user never encounters it. There is an opinion that performance drops as the cost of parity calculation grows. Let's test this hypothesis without using SSD caching.
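As a quick aside, the capacity cost of each parity level is easy to estimate. The sketch below is back-of-envelope arithmetic in Python; it ignores ZFS metadata and padding overhead, and the drive counts and sizes are illustrative:

```python
def usable_tb(disks: int, drive_tb: float, parity: int) -> float:
    """Usable capacity of one parity group: RAID 5/6/TP ~ RAID-Z1/Z2/Z3."""
    if disks <= parity:
        raise ValueError("need more disks than parity drives")
    return (disks - parity) * drive_tb

# Twelve 18 TB drives in a single group:
for name, p in [("RAID 5 ", 1), ("RAID 6 ", 2), ("RAID TP", 3)]:
    print(f"{name}: {usable_tb(12, 18, p):.0f} TB usable, survives {p} drive failure(s)")

# The RAID 50 analog in plain ZFS: two 6-disk RAID-Z1 groups striped together.
print(f"RAID 50: {2 * usable_tb(6, 18, 1):.0f} TB usable")
```

With 12 drives, going from one to three parity drives costs 36 TB of usable space, the price of surviving two extra failures.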

Interestingly, under a transactional load RAID TP in some cases even beats RAID 5 and RAID 6 on reads, while on writes the balance generally stays in favor of RAID 5. ZFS is often criticized for the fact that on linear writes, when you transfer large files to the NAS, array speed is limited by the performance of the slowest drive in the pool.

On reads you can see that within the in-RAM cache (ARC), the more complex the array, the higher its speed; I believe this is because the total capacity of the array decreases as the number of parity blocks grows. Beyond the cache, RAID 5 performs roughly on par with RAID TP, leaving RAID 6 trailing. But as soon as it comes to writing, all three arrays perform equally poorly, at the level of a single HDD.

It would be reasonable to assume that an SSD cache will change the situation for the better, but no. The thing is, in QuTS hero SSD caching can be used either for read operations, which the server's RAM already handles perfectly well on small working sets, or for the write log, the so-called ZIL, which is generally not a bottleneck. There is no direct write caching of the kind we saw in the QNAP TDS-16489U.

And sequential access:

QNAP also supports a "triple mirror" array in which, as the name suggests, three hard drives duplicate one another. In terms of space efficiency such an array loses even to RAID 6, since its capacity is limited to the smallest hard drive in the trio. But even after the physical destruction of the entire NAS, recovering data from any single drive of the triple mirror is not difficult. In short, for a NAS used as an archive device of hundreds of terabytes, where you need confidence in data safety over 10 or even 20 years, ZFS minimizes the possible avenues of data loss.

ZFS Deduplication

In virtualization systems, where disk space is always scarce, ZFS offers compression and deduplication, which under VDS conditions can save tens or even hundreds of terabytes on identical virtual-machine images. Both compression and deduplication are performed inline, at the moment a block is written to disk, using the NAS processor, which is why QuTS hero is currently available only in the business class, with powerful server CPUs and at least 16 GB of memory. It should be understood that even today deduplication remains a very expensive process for the CPU, so expensive that the ZFS developers do not recommend it for volumes with an intense transactional load, that is, for databases, real-time applications, and virtualization. Even in 4-processor storage systems costing hundreds of thousands of dollars, manufacturers try to push this resource-intensive process into the background. The deduplication tables are stored in the pool itself and cached in RAM, which is another reason ZFS dedup is so hungry for memory.


In terms of performance, inline compression and deduplication can even increase effective throughput on highly compressible data such as databases, logs, and office documents, since less physical I/O reaches the disks.

Yes, you can enable this feature on the busiest volumes given over to databases and VDS, but in most cases plain compression will deliver roughly the same disk-space savings as deduplication, especially since QNAP QuTS hero uses the fast and efficient LZ4 compression algorithm. I will add that both compression and deduplication can be enabled or disabled at any time for any folder or iSCSI LUN.
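The principle behind inline dedup plus compression fits in a few lines of Python. This is an illustrative model, not ZFS's implementation: blocks are hashed, only the first copy of each unique block is counted as stored, and every stored block is compressed (zlib stands in for LZ4, which is not in the standard library):

```python
import hashlib
import zlib

def stored_bytes(data: bytes, block_size: int = 4096) -> int:
    """Physical bytes left after block-level dedup plus per-block compression."""
    seen = set()
    physical = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).digest()
        if digest not in seen:          # store each unique block only once
            seen.add(digest)
            physical += len(zlib.compress(block))  # zlib as a stand-in for LZ4
    return physical

# Ten identical 64 KiB "VM images" collapse to a single compressed block.
image = b"OS-IMAGE" * 8192
logical = image * 10
print(f"logical {len(logical)} B -> physical ~{stored_bytes(logical)} B")
```

On identical VM images the savings are dramatic; on already-compressed or unique data both mechanisms save next to nothing, which is why compression alone is usually the better default.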

Speeding up writes by reserving space

To speed up writes on nearly full pools, there is a very interesting option of reserving disk space. The optimization works because large blocks of data are always written to unfragmented areas of the hard drives and SSDs, and are later migrated, on schedule, to the working part of the volume, freeing the reserved area for the next write. This feature is useful both for HDDs, where it helps fight fragmentation, and for SSDs, where it reduces the effect of the cell-cleanup process known as write amplification.

When choosing drives, we recommend taking a closer look at the Seagate Exos X16 series. These are helium-filled hard drives with capacities of 10 TB and up, designed for round-the-clock operation in data centers. They have very low idle power consumption, only 5 W, which yields a record-low specific figure of 0.31 W per terabyte of capacity. Their reliability is backed by a 5-year warranty.

There is no such thing as defragmentation in ZFS, so that is one less headache for video-surveillance systems.

What can I say about the reliability of ZFS?

Today ZFS enjoys the image of the most reliable file system, but where NAS devices are concerned I would not sing its praises too loudly: it earned its reputation thanks to background volume scrubbing and SMART disk checks, and these functions have long been implemented in modern storage systems anyway. Even on the device in question, if at initialization you choose the classic QTS operating system without ZFS support, you get comparable reliability, the only difference being the lack of a triple-parity RAID array. ZFS adepts value the ability to move the write log (ZIL) to a separate device, but here I disagree: this mode will not save you from a sudden power cut or from the failure of that very media. From my point of view, the value of a NAS lies not in how many tricks it has for protecting the file system, but in how many ways it can protect the data stored on that file system, because every FS has weaknesses, and any of them can leave you with nothing but empty disk space.

It is far more likely, though, that data will be destroyed by the user, a virus, or the application itself. And here QNAP has everything covered.

QSAL: preventing multiple SSDs from failing at the same time

All major storage manufacturers now have their own technologies for de-synchronizing SSD wear to extend the life of a solid-state array. At QNAP this technology is called QSAL (QNAP SSD Antiwear Leveling), and it works as follows: a small amount of extra space is reserved on the SSDs, and once the health of the disks drops to 50%, the system automatically redistributes this space so as to prevent several disks in the array from failing simultaneously. QSAL can be enabled at any time on parity arrays (RAID 5/6/TP), but it is best done before SSD endurance falls to 50%. The technology does not affect performance and takes up no noticeable share of space, so it makes sense to use it.
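The idea behind such redistribution is easiest to grasp with a toy model. The Python sketch below is purely illustrative and is not QNAP's actual algorithm: it assigns each array member a different slice of reserved (over-provisioned) space, so that write amplification, and therefore wear, diverges and the drives reach end-of-life at different times instead of all at once:

```python
def staggered_reserves(ssd_count: int, base_pct: int = 2, step_pct: int = 2) -> list:
    """Reserve a different over-provisioning percentage on each SSD in the array."""
    return [base_pct + i * step_pct for i in range(ssd_count)]

reserves = staggered_reserves(4)
# Toy wear model: more spare area -> lower write amplification -> slower wear.
wear_rates = [1.0 / (1.0 + 0.1 * r) for r in reserves]
print(reserves)                            # per-drive reserve, in percent
print([round(w, 2) for w in wear_rates])   # strictly decreasing wear rates
```

With identical workloads but staggered reserves, the drive with the smallest reserve fails first, giving you time to replace it before its neighbors follow.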

Ability to cache cloud drives

First, cloud-storage support has evolved to the point where the NAS can act as a caching device between you and Amazon S3 or any other service. For shared file storage this is a godsend: you can scale your disk space not only by swapping disks in the NAS, but also by buying space in the cloud, and it may well turn out that data protection from a third-party provider is cheaper than keeping it on a local device. This feature is configured in the HybridMount package; the basic version of the software supports up to two cloud-storage connections, with more links activated by a license.

There are not many initial settings: synchronization frequency, the remote and local folders, plus the size of the local cache. But after connecting the shares you can configure the number of simultaneous downloads/uploads and the synchronization priority, and you can even switch the mode of operation from caching a remote resource to mounting it as an ordinary network folder. To restore order in your "clouds", there is the built-in CloudFuze service, which transfers data from one cloud to another; in its current form the QNAP NAS can move information, for example, from Amazon S3 to Google.


Second, in QNAP QuTS hero snapshots are a "native" ZFS feature, taken without interrupting access to the file system. The QNAP developers have added the ability to reserve guaranteed space for snapshots, and if you accumulate so many that local disks are no longer enough, you can export them to another NAS or use snapshots to synchronize data between NASes.

The snapshot situation is interesting overall: QNAP has done a tremendous amount of work to get snapshots created on schedule, replicated, and synchronized, but has not built a single interface for managing them centrally. As a result, you set the schedule and parameters for each shared folder in its own separate window; for iSCSI volumes this lives in the block-access manager, where replication is configured as well. Figuratively speaking, snapshot settings are scattered all over the interface, and wherever you look there will be a mention of snapshots. You can restore snapshots to existing or new folders, and configure guaranteed snapshot storage space. Should your network folders be hit by ransomware, you can always recover.

By the way, I did not mention block access for nothing: QNAP business devices recently gained support for the Fibre Channel protocol when the corresponding interface card is installed. FC is managed in the same manager as iSCSI volumes, because LUNs for iSCSI and FC access are compatible with each other. So if you face the task of migrating from an FC SAN to iSCSI, QNAP QuTS hero lets you do it without data loss: import the LUN from a third-party device, connect via Fibre Channel, and then switch the LUN to the iSCSI interface.

HBS 3: a unified interface for 3-2-1 backups

And of course I could not ignore the completely redesigned Hybrid Backup Sync. Let me say right away that HBS 3 deals specifically with backing up files and folders; for virtual machines there is still the separate, powerful Hyper Data Protector package, which we reviewed earlier (https://hwp.media/articles/storage_and_backup_of_virtual_machines_on_qnap_nas/). What makes the HBS 3 concept interesting is that in one window you set up backups to a local folder, to a remote NAS, and to the cloud (cloud drives are supported here too). So from a single built-in application you control the backup time window, copy-consistency checks, and disk space.

But the functionality is not limited to backup alone: HBS 3 supports synchronization between any file resources, namely:

  • SMB/CIFS
  • FTP
  • RSync
  • remote NAS folder
  • local folder
  • cloud drive

Why am I so delighted with HBS 3? Because I can work with the "cloud" without ever visiting that cloud or learning its interfaces, managing everything from a single window. If, say, I need to store data on different continents, or better yet in countries with different political systems, then in a couple of minutes I can set up backup and sync between Amazon S3, Alibaba, and Yandex accounts and some humble VPS on Hetzner whose address only I know. And there you have it: my backups now live in the USA, Asia, Europe, and Russia, protected against war, blockade, tsunami, or a meteorite strike.

HBS 3 has its own deduplication, independent of ZFS. The QNAP TS-h1886XU-RP is powerful enough to perform this procedure itself before sending data to remote storage, thereby saving bandwidth. Of course, deduplication would be even more useful if HBS 3 could back up iSCSI LUNs, but protection of block volumes is perfectly handled here through snapshots.

QNAP TS-h1886XU-RP Interior

In terms of design, the QNAP TS-h1886XU-RP does not stand out among its peers. The machine is built on an Intel Xeon D-1622 processor with 4 cores and Hyper-Threading support. By default the NAS ships with two 16 GB ECC DDR4 modules, expandable up to 128 GB.

To keep the base cost down, the standard configuration has only four 1-gigabit RJ45 ports; high-speed interfaces are added via expansion cards. QNAP makes very successful boards that combine 10-gigabit "copper" and a pair of M.2 NVMe slots on the same card. Given that QuTS hero does not support SSD caching of write operations and has no tiered-storage features, you may even skip SSDs for accelerating HDD arrays: with ZFS this matters less than on traditional NASes. Far more important is the ability to create a small "fast" RAID on SSDs for demanding applications, running either locally on the device or on third-party hosts. And here, yes, you can build a fast RAID on 2.5-inch SATA SSDs or a super-fast one on M.2 NVMe.

More interesting still, for 10/25/40-gigabit optical networking QNAP now uses cards with Mellanox ConnectX-4 chips instead of Intel controllers, the most advanced solution, with NFS and iSCSI offloading.

The case layout provides twelve 3.5-inch SATA bays on the front panel and six 2.5-inch SATA SSD bays on the rear. The airflow cooling the SSD cages is isolated from the motherboard, so the drives should not overheat. I would note, by the way, that the fans in this NAS are not hot-swappable: the unpleasant trend of cost-cutting everywhere has reached storage systems, a segment with traditionally high added value. It is also unpleasant that the QNAP TS-h1886XU-RP runs very noisily even with the Smart Fan function enabled, so it belongs only in an isolated data-center room.

But the manufacturer did not skimp on power supplies: redundant 550 W modules with 80 Plus Platinum certification are the standard solution for such devices.

Scalability options

The vertical scaling capabilities are impressive: up to 16 disk shelves with 16 bays (model TL-R1620Sep-RP) or 12 bays (model TL-R1220Sep-RP) each can be connected to a single TS-h1886XU-RP head unit. The connection uses a 12 Gbit/s SAS interface, so you will first need to install a 2-port QXP-820S-B3408 or a 4-port QXP-1620S-B3616W adapter in the head unit.

The resulting numbers are staggering: one head unit can hold as much as 4.6 petabytes of disk space (using 18 TB hard drives). With a multipath connection, the NAS lets you take any connected disk shelf offline for maintenance without interrupting the others.
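The 4.6 PB figure is simple arithmetic over the expansion shelves (head-unit bays not counted, decimal terabytes assumed):

```python
shelves = 16          # maximum number of TL-R1620Sep-RP shelves per head unit
bays_per_shelf = 16   # 16-bay shelf model
drive_tb = 18

total_tb = shelves * bays_per_shelf * drive_tb
print(f"{total_tb} TB = {total_tb / 1000:.1f} PB of raw space")  # 4608 TB = 4.6 PB
```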

Bonus: Chia Mining

We received the device for testing when the crypto fever was in full swing and system administrators joked that whatever your organization does, mining cryptocurrency on its servers and storage is more profitable than conducting normal business. On storage systems you can mine cryptocurrencies with a Proof of Capacity (PoC) algorithm, which uses disk space to confirm transactions. Such currencies include Filecoin, BitTorrent coin, and SiaCoin, and the "hottest" was Chia Coin, which appeared in the spring of 2021 and caused explosive demand for HDDs and SSDs. Proof-of-Capacity mining is divided into two stages: first comes so-called "seeding", in which space on the hard drives is filled with 128 GB hash tables. In the second stage, the resulting table files are used for mining proper, or as it is called here, "farming". Moreover, you can create the tables (seed the disk) and actually "farm" on different machines. The first process is quite resource-intensive for both CPU and RAM, while the second can run even on a Raspberry Pi. Naturally, a NAS with a 4-core Xeon, and with cool Exos-series hard drives at that, looks like an ideal tool for side income, so I decided to check how ready QNAP is for Chia mining.

The Chia mining client is available on GitHub for Windows and Linux. In the case of the QNAP TS-h1886XU-RP, you have three ways to run it:

  • using the hardware virtualization system Virtualization Station,
  • using the container virtualization system (Container Station)
  • using the built-in Linux Station package, where several versions of Ubuntu are available to you

Using Virtualization Station makes sense when you want to hide the mining process, or if you want to use Windows. In practice, Windows gives absolutely no advantage here in either speed or stability: in both cases the client runs slowly, glitches, and clearly suffers from a lack of optimization. The best option is Linux Station, which deploys Ubuntu 20.04 LTS inside QNAP with VNC access from the browser. Technically this is container virtualization too, but without extra configuration you get access to all NAS resources: every shared folder, every processor core, and all the memory.

After booting Linux from the GUI, 13-14 GB of the system's 32 GB of RAM remains available, and even with Linux's optimizations the system starts swapping once only 3-4 GB of memory remains free. Therefore the heaviest and longest stage, "plotting", or filling disk space with tables, has to be run in no more than two threads, unless you add RAM. The beauty of the QNAP TS-h1886XU-RP from a mining standpoint is its centralized SSD-cache management: the temporary folder that the Chia developers recommend placing on an NVMe SSD can be kept right on the HDD pool here with no difference in speed, since the ZFS array copes perfectly with the multithreaded transactional load of table generation.

However, beyond speed ZFS offers no other optimization for mining: Chia tables neither compress nor deduplicate at all, so the file-system optimizations might as well stay off. If you plot in two threads on RAID 5, seeding proceeds at roughly 1 TB per day, much slower than the disk subsystem allows. So, all in all, at a moderate load you can plot and farm Chia on the QNAP TS-h1886XU-RP without compromising business processes. Why not? Perhaps when Chia grows 100x, you will realize it was the QNAP NAS that bought you that Lambo. :)
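Those plotting numbers translate into simple arithmetic, sketched below with the article's figures (128 GB per plot file, about 1 TB of seeding per day with two threads; the 100 TB farm size is a hypothetical example of my own):

```python
plot_gb = 128            # size of one Chia plot file, per the article
rate_tb_per_day = 1.0    # seeding rate observed with two plotting threads

plots_per_day = rate_tb_per_day * 1000 / plot_gb
farm_tb = 100            # hypothetical target farm size
print(f"{plots_per_day:.1f} plots/day; "
      f"{farm_tb / rate_tb_per_day:.0f} days to fill {farm_tb} TB")
```

In other words, filling even a modest farm takes months at this rate, so seeding quietly in the background of normal NAS duty is a reasonable strategy.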


Structurally, the QNAP TS-h1886XU-RP follows all the canons of a modern entry-level enterprise storage system: it is based on a simple Intel Xeon D SoC platform, with everything else implemented via expansion cards. This approach lets you build FC SAN, iSCSI, and NAS services on a single platform, separately or simultaneously. Mounting SSDs on the rear of the case is something far from all manufacturers do even today, and that is a pity, because this design very often lets you start with the right mix of HDDs and SSDs without resorting to expansion shelves. And when it is time to grow, the options are nearly limitless, bounded by the customer's budget rather than by any inability to attach yet another shelf of HDDs.

It is also a great relief that QuTS hero is not the only way to use this NAS: you can reset the device at any time and switch to QTS with LVM and EXT4, for example to get higher write speeds on HDD arrays. In terms of software-package support, the two operating systems are identical.

Michael Degtjarev (aka LIKE OFF)
