Adata Falcon plus Goodram PX500 - testing RAID built from cheap NVMe drives
When it comes to hard drives, performance is simple and predictable: even slow "green" 5400 RPM models do not behave fundamentally differently from 10,000 and 15,000 RPM server drives, unless of course the manufacturer gets clever with the firmware. In both cases the speed decreases linearly from the beginning of the disk to the end, the controller buffers homogeneous operations, and in general we are used to the idea that a RAID array can be assembled from almost any mix of drives. Even combining SAS and SATA in one array was never considered shameful, and putting disks from different vendors in the same array to reduce the probability of simultaneous failure is actually a case described in "best practice" guides.
SSDs are a different story: here the controller plays first fiddle, the firmware can completely change a drive's behavior, and even the number of NAND chips on the board strongly affects speed. So if you already have an old, slow NVMe drive and you want more speed and more capacity, will buying a second 512 GB drive and building a RAID array for the sake of speed help? Well, let's find out.
And how do things stand with NVMe RAID in the M.2 format?
Things are grim: compared to PCI Express add-in-card drives, the industry does not see M.2 as a format for accumulating storage capacity. The market wants drives of 10 TB and more, and a small 2280-format board simply cannot fit that many chips, so M.2 drives lose in capacity to SAS SSDs and 2.5-inch NVMe SSDs. As for speed, the accepted principle for M.2 SSDs is simple: if you need more speed, buy a new SSD with a faster controller and don't bother with anything else.
M.2 SSDs take up a lot of space on the motherboard, while SAS SSDs sit somewhere off in the case, and you can install several M.2 drives through adapters if the motherboard supports bifurcation, that is, splitting one PCI Express x16 slot into four x4 links. In that case a board for installing four SSDs in a single PCI Express x16 slot is relatively cheap; if the motherboard has no such capability, you either have to dedicate PCI Express slots on a "1 slot = 1 M.2 drive" basis or buy an adapter board with an active PCI Express switch, the price of which is around $500, which is no longer practical.
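As a rough back-of-the-envelope illustration of what bifurcation gives each drive, here is a small sketch assuming PCIe 3.0, where one lane carries roughly 985 MB/s of usable bandwidth after 128b/130b encoding (the figures are generic PCIe math, not measurements from this test):

```python
# Per-drive bandwidth when a PCIe 3.0 x16 slot is bifurcated into
# four x4 links (one link per M.2 NVMe drive).
LANE_MBPS = 985  # ~usable MB/s per PCIe 3.0 lane (8 GT/s, 128b/130b)

def bifurcated_bandwidth(total_lanes: int = 16, drives: int = 4) -> int:
    """Usable MB/s available to each drive after an even lane split."""
    lanes_per_drive = total_lanes // drives
    return lanes_per_drive * LANE_MBPS

print(bifurcated_bandwidth())       # x4 per drive -> 3940 MB/s each
print(bifurcated_bandwidth(16, 2))  # x8 per drive -> 7880 MB/s each
```

Even a four-way split leaves each drive with far more bus bandwidth than the budget SSDs tested here can actually use.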
Based on all of the above, the fate of M.2 RAID has turned out as follows: these drives are treated as system or cache drives, which either need no redundancy at all or only simple arrays such as a "mirror" or a "stripe", or at most RAID 10. More complex RAID 5 and RAID 6 arrays can be implemented in software on top of TrueNAS or Windows Storage Spaces.
But even in hardware, through the BIOS, how hard can it be to build a "mirror" of M.2 drives? Yet this technology is only available from AMD on the Threadripper platform (NVMe RAID) and from Intel on servers (VROC), and with the "blues" you have to buy a hardware key to enable the option: $150 for RAID 0/1 and $250 for RAID 5/6. AMD's implementation lacks RAID 5/6 entirely, but at least it is free.
The reason for these problems is quite simple: NVMe drives are so fast that the processor cannot keep up with computing the XOR parity sums of complex arrays. RAID 0, 1 and 10 involve no such calculations and are almost free in terms of speed, while even expensive 16-core processors buckle under RAID 5/6.
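To show what that parity work actually is, here is a minimal sketch of RAID 5-style XOR parity over data stripes (a simplification: real implementations rotate parity across drives and operate on much larger blocks):

```python
from functools import reduce

def xor_parity(stripes: list[bytes]) -> bytes:
    """XOR parity block for a set of equal-sized data stripes."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*stripes))

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover a lost stripe: XOR the parity with all surviving stripes."""
    return xor_parity(surviving + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # data stripes on three drives
p = xor_parity([d0, d1, d2])            # parity stripe on the fourth
assert rebuild([d0, d2], p) == d1       # the lost stripe comes back
```

The math itself is trivial; the problem is that it must run over every written byte at NVMe speeds, which is where the CPU becomes the bottleneck.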
For the average user, the easiest way to create a RAID array is the Disk Management console among the Windows administrative tools: it is less buggy than Windows Storage Spaces and works on any motherboard.
Well, it seems we have figured out the reasons, so it's time for the practical part: we have all seen plenty of tests of arrays built from identical SSDs, but different drives have different SLC caches, different firmware and a different number of channels.
The budget AData Falcon 512 GB drive is built around the Realtek RTS5762 controller, supporting read speeds of up to 3100 MB/s and write speeds of up to 1500 MB/s. Interestingly, the SSD has no onboard DRAM chip, and caching is carried out in the computer's own RAM (HMB, Host Memory Buffer technology). At the same time, an SLC cache is present here, as in most other SSDs with TLC memory. The Realtek RTS5762DL controller has four memory channels, so the drive carries four chips of 96-layer 3D TLC from Micron.
There are no components on the back side of the board, and a thin, non-removable heat-spreader plate covers the top of the drive. In our test this is a typical representative of the ultra-budget class: no dedicated DRAM cache, clearly intended for simple office and gaming tasks.
Perhaps the last thing I would want to see in my computer is an SSD based on the Silicon Motion SM2263XT controller, as in the GoodRam PX500 drive. Back in 2018 we studied the behavior of the Transcend MT110S based on this controller in detail, and it looked more or less acceptable then, but by now it is hopelessly outdated.
The claimed sequential speeds of the GoodRam PX500 are 2000 and 1600 MB/s; it likewise has no built-in DRAM cache (Host Memory Buffer technology is used here too), and the SLC cache occupies a fixed 10% of the drive's capacity. The memory consists of four 3D NAND TLC chips mounted on the front side and covered with an aluminum plate.
The first tests show that the Adata Falcon is much faster than its Goodram PX500 counterpart, although on paper there should be no objective reason for such a gap. That makes it all the more interesting to see how RAID arrays built from such dissimilar drives behave.
It can be seen that the software RAID layer introduces its own latency and does almost nothing to correct the situation with the sluggish Goodram. Writes are a different story: as we remember, both drives use HMB technology and cache data in the computer's RAM. So the "mirror" array is slower than the slowest drive, while the "striped" array apparently leans on its cache in RAM; otherwise I cannot explain such a smooth and fast response to random writes. By the way, note how few data points there are for the Goodram PX500: everything that does not fit on the graph lies well above 0.7 ms, so the drive can fairly be called very slow.
With sequential reads, for example when copying large files somewhere else such as a network drive, both the "mirror" and the "stripe" noticeably outperform the single drives, so this is a case where simply adding one good SSD to one slow one gives a real speed boost.
But when writing to the array, this trick does not work: the raw NAND write speed of the Goodram PX500 and the Adata Falcon is about the same, only the Falcon's caching works better, giving a narrower spread of speeds. RAID 1 in this case only manages to slow the system down badly, and RAID 0 behaves non-linearly: sometimes it helps, and sometimes it hurts.
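A rough rule-of-thumb model of why mixed-drive arrays behave this way (my own simplification with hypothetical MB/s figures; real behavior also depends on caching, as the graphs show):

```python
# Back-of-the-envelope throughput bounds for a two-drive array
# built from mismatched members. Numbers are hypothetical.

def raid0_bounds(a: float, b: float) -> tuple[float, float]:
    """Stripe: both drives serve every request, so the slower one
    paces the pair; best case is 2x the slower member, worst case
    is no better than the slower member alone."""
    return (min(a, b), 2 * min(a, b))

def raid1_write(a: float, b: float) -> float:
    """Mirror: every write goes to both drives and completes only
    when the slower one finishes."""
    return min(a, b)

fast_w, slow_w = 1500, 1000  # hypothetical single-drive write speeds
print(raid0_bounds(fast_w, slow_w))  # (1000, 2000)
print(raid1_write(fast_w, slow_w))   # 1000
```

This is why the stripe sometimes helps and sometimes hurts, while the mirror can never write faster than its weakest member.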
Let's move on to office workload patterns. The first graph, disk access during batch photo processing, shows that RAID 1 noticeably smooths out the typical problem of cheap SSDs like the Goodram PX500, namely early exhaustion of the SLC cache.
The mirror here works in the user's favor only until the cache runs out, after which it only does harm.
In the anti-virus scan tests, the situation persists.
And again, much depends on the size of the blocks being read: with large blocks the Stripe array speeds the system up nicely, and the mirror also comes out ahead, though only slightly.
On complex patterns, a Stripe array speeds up writes significantly, severalfold even, probably thanks to software optimization. Look: in this test the Goodram PX500 writes at about 24 MB/s and the AData Falcon at about 120 MB/s, while the array of these two drives peaks at 850 MB/s with an average speed around 500 MB/s.
The test of importing video into an editing application only confirms the arguments above, although here RAID 1 doesn't get in the way much.
Our testing paints a clear picture: if you already have one slow SSD, you can buy a second, faster one such as the Adata Falcon and combine them into a Stripe-type RAID (interleaved array). You will roughly halve the reliability of the setup, since the failure of either SSD means losing your data, but you will get a significant speed boost even relative to the fastest SSD in the system. This can be done with built-in Windows tools on a non-system volume, for example for storing work projects or games.
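The "halve the reliability" point can be made concrete with a simple independent-failure model (the per-drive failure probability below is made up purely for illustration):

```python
# Data-loss probability for two-drive arrays, assuming independent
# drive failures with probability p each over some fixed period.
p = 0.03  # hypothetical per-drive failure probability

stripe_loss = 1 - (1 - p) ** 2  # RAID 0: either member failing kills the array
mirror_loss = p ** 2            # RAID 1: both members must fail

print(round(stripe_loss, 4))  # 0.0591 -- roughly 2x a single drive's risk
print(round(mirror_loss, 6))  # 0.0009 -- far safer than a single drive
```

For small p, the stripe's risk is close to 2p, which is exactly the "reliability cut in half" the conclusion refers to.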
Combining the two drives into a "mirror" makes sense only for reliability: at best you get the same speed as before, and at worst you make the situation even worse. And the most important takeaway is how much SSDs built on similar budget controllers can differ in speed, and the Adata Falcon is a good example of how a budget SSD should be made.
Michael Degtjarev (aka LIKE OFF)