Adata XPG SX8200 Pro 512GB review in server and professional workloads

What distinguishes a home or gaming drive from a professional one? When developing a Pro device, the manufacturer can optimize it for specific tasks. When choosing an SSD for a server, you will find yourself offered models for write-intensive or read-intensive use, and beyond that, some SSDs ship with firmware optimized for databases, logs, or even cold data. In other words, you are expected to know in advance how your application accesses the disk.

The Adata XPG SX8200 Pro is positioned as an economical alternative to the Samsung 970 Pro or WD Black SN750, and the savings come primarily from the controller. While Samsung's and Western Digital's Pro lines use proprietary controllers, Adata installs Silicon Motion's SM2262EN. For the buffer, Adata uses DDR3-1600 memory, made up of two 256 MB chips, where other manufacturers implement DDR4 for this purpose. It should be said that, all other things being equal, DDR3 has lower latency, and with the right approach such an SSD can be more "responsive" on operations with cached data. The main capacity is provided by four 64-layer 3D TLC NAND chips manufactured by Micron. The chips are mounted on both sides of the PCB, and the kit includes a thin heat-spreader plate for cooling the controller, which already has its own built-in heat-dissipation plate.

I recommend gluing the bundled heatsink to the back of the drive, and here is why: if your workstation is built on a desktop motherboard, it most likely already provides a more serious heatsink for the SSD, and if you want to install the drive in a server, the intense airflow there will handle heat dissipation without any heatsink at all. And if the bundled heatsink peels off over time, it will not fall onto the GPU or onto some controller, but will simply lie on the surface of the board, which is usually completely bare under the NVMe M.2 slot.

Compatibility with ESXi 6.7 U2+ and ESXi 7

If you are looking for an SSD for caching in an ESXi hypervisor, be aware that starting with the 6.7 U2 update, VMware removed support for "desktop" NVMe drives. The aforementioned Samsung 970 Pro and WD Black SN750 are still detected without problems, but for SSDs with Silicon Motion controllers you have to resort to a little trickery.

You need the nvme.v00 file from the VMware ESXi 6.5 installation, which can be found inside the old distribution. Upload it to the /bootbank folder over SSH, then reboot the host. Interestingly, the same replacement works for the latest ESXi 7, with the only difference that the file needs to be renamed to nvme_pci.v00.
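The procedure above can be sketched as a few commands; this is a hedged outline, not an exact recipe: the host name `esxi-host` is a placeholder, and the nvme.v00 file is assumed to have already been extracted from the ESXi 6.5 distribution.

```shell
# Sketch of the driver swap described above (host name is an example).
# SSH access must be enabled on the ESXi host first.

# ESXi 6.7 U2+: upload the legacy NVMe module into /bootbank
scp nvme.v00 root@esxi-host:/bootbank/nvme.v00

# ESXi 7: the module file must be named nvme_pci.v00 instead
scp nvme.v00 root@esxi-host:/bootbank/nvme_pci.v00

# Reboot the host so the replaced module is picked up on boot
ssh root@esxi-host reboot
```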

According to user reviews, there are no problems with this replacement, except that the next update may overwrite your file and you will have to put it back, so keep a copy close at hand.

How and why will we test?

For workstations the situation is somewhat different: as a rule, when developing projects, working with graphics or video, compiling software, or running virtual machines, most activity falls on a small area of the drive that easily fits into the device cache. Therefore, when choosing an SSD for real work we never look at CrystalDiskMark, a benchmark every manufacturer tunes its drives for. What matters to us are the latency and stability of access under access patterns taken from real tasks. It is these patterns that can drop performance from the mythical 3 GB/s to a real 80 MB/s, the speed at which your software will actually run. And since writing tens or hundreds of gigabytes in one go is extremely rare, we are interested in the average disk access latency, or rather its stability over time, and that depends entirely on the drive's controller. Using the IOmeter test suite, we fill the disk to 80% with random data and overwrite it several times to get away from the "fresh SSD" state.

All our tests run over intervals of 600 and 800 seconds; this is enough to see the controller and the cache at work.

  • Motherboard - ASRock EPYCD8-2T
  • CPU - AMD EPYC 7351P
  • RAM - 64 GB
  • VMware ESXi 6.7 U2
    • Windows Server 2016
    • NTFS, sector size 512 B
  • IOmeter, Pseudo Random


We will run synthetic tests in a single thread, on a drive prepared by overwriting its full capacity twice with random data. These are the most representative results, showing how the SSD performs in a workstation with a single application. Before each test run, the SSD idled for 15 minutes to let the Garbage Collection (GC) mechanisms do their work.
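The review itself uses IOmeter under Windows Server; as a rough, hedged analogue of the same preconditioning and single-thread random test under Linux, the sequence could look like this with fio. The device path and job parameters are illustrative assumptions, and the commands are destructive to the target device.

```shell
# Hedged fio analogue of the methodology above (NOT the review's actual
# IOmeter setup). WARNING: overwrites the target device.
DEV=/dev/nvme0n1   # example device, double-check before running

# Overwrite ~80% of the capacity twice with (pseudo-)random data
# to leave the "fresh SSD" state
for pass in 1 2; do
  fio --name=precondition-$pass --filename=$DEV --rw=write \
      --bs=1M --size=80% --direct=1
done

# Idle ~15 minutes so garbage collection can run
sleep 900

# Single-thread 4K random read for 600 s, logging average latency
# once per second to observe stability over time
fio --name=randread --filename=$DEV --rw=randread --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --time_based --runtime=600 \
    --log_avg_msec=1000 --write_lat_log=randread
```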

Let's start our testing by examining the speed of random access:

Random read is almost flat, no surprises. The average access time is 0.15 ms, which is normal for a drive of this class. On random write we see that over time, after literally 10 minutes of writing, the SLC cache is evidently exhausted and latency doubles. At 70% reads the drive manages to keep up with placing new data, and the cache does not overflow. Let us move on to sequential access in 1-megabyte blocks.

Typical sequential read speed is 880 MB/s, which is generally on par with other 480-512 GB SSDs. Sequential write into the cache reaches 1.4 GB/s, but over time the speed drops to 150-170 MB/s. For comparison, the SATA SSD Transcend 230S is also built on a Silicon Motion controller, albeit the SM2258. In our test lab we use that SSD for storing virtual machines, and experience has shown that on writes it honestly delivers its 147 MB/s, and it is this speed you should count on, not the readings of CrystalDiskMark, which paints performance several times higher.

However, unless you are processing Big Data or migrating virtual machines, an SSD is an SSD, and you may never encounter its low sequential speed at all. Let us look at access times under server loads.

Server workloads

For testing server loads we use the same patterns as for testing enterprise SAN- and NAS-class storage systems; you can read about them in our reviews of Synology and Huawei storage. These patterns were developed by Pure Storage specialists based on I/O traces of real applications. Here we run each test in 1-thread and 8-thread modes. There is no point in using more threads, since at that point the SSD already steps over the 10 ms mark, which by modern market standards I consider the limit.
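The 1-thread versus 8-thread comparison can be sketched with fio; since the Pure Storage-derived patterns are not public, a generic 70/30 random read/write mix stands in for them here, and the device path is an assumption.

```shell
# Hedged sketch of the 1-thread vs 8-thread runs; the actual trace-based
# patterns from the review are replaced by a generic OLTP-like 70/30 mix.
DEV=/dev/nvme0n1   # example device
for jobs in 1 8; do
  fio --name=server-$jobs --filename=$DEV --rw=randrw --rwmixread=70 \
      --bs=8k --iodepth=1 --numjobs=$jobs --direct=1 \
      --time_based --runtime=600 --group_reporting
done
```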

In all our server-task tests, access time was highest during the first 5 minutes. This is probably a warm-up period for the internal cache, after which latency levels off and drops almost threefold. With 8 threads the situation is fundamentally different: the cache is full, the drive cannot keep up with optimizing freshly written data, and access time grows continuously.

The more writes a workload produces, the faster latency grows over time under multithreaded load.

In transactional workloads such as databases, on the other hand, the SSD performs well.

So if you develop and test DBMSs, then, given that a single database writes to disk in one thread, you will get very low access times from the Adata XPG SX8200 Pro: under 1 ms. In principle, this drive will also handle 6-8 databases without problems. But I would not risk installing it in a storage system, whether as a cache or as the main tier.

Office workloads

We also simulate office workloads using patterns, but ones we captured ourselves. We run all these tests in a single thread, and since in real office and multimedia tasks performance never bottlenecks on the disk subsystem but always depends on the CPU and GPU, we use throughput in megabytes per second as the metric. Each test here lasts 600 seconds.

The Adata XPG SX8200 Pro copes best with tests that mostly read large blocks: verifying backup archives or importing 4K video into an editor. Any writing mixed into the reads drops the speed to 60-70 MB/s.

Office workloads in a limited area (closer to real data)

And now back to where we started: when working on office tasks, do you really touch the entire disk volume? You open and close projects, pull up data, import and export, launch software, and all of this happens on only a small part of your drive. Even databases mostly work intensively with only a portion of their data, caching the most important parts to offload the disk subsystem. Let us limit the SSD's working area to twenty gigabytes, increase the test time to 800 seconds, and see whether the drive's cache can pull the results up.
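Restricting the working area to 20 GB can be expressed in fio through the `--size` option; this is again a hedged sketch of the methodology rather than the review's IOmeter configuration, with an assumed device path and a generic read-heavy mix.

```shell
# Hedged sketch: confine the active region to 20 GB and extend the run
# to 800 s, mirroring the limited-area methodology described above.
DEV=/dev/nvme0n1   # example device
fio --name=office-hot20g --filename=$DEV --size=20g \
    --rw=randrw --rwmixread=70 --bs=64k --iodepth=1 --numjobs=1 \
    --direct=1 --time_based --runtime=800 --group_reporting
```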

In server tasks, which are mostly transactional, that is, characterized by random access, there is little sense in limiting the benchmark's working area.

In office tasks, on the contrary, caching shows a huge gain in speed, bringing the drive up to figures worthy of an NVMe disk. The caching mechanism is probably tuned for sequential-block operations; storage systems, at least, have long differentiated what to cache in this way.


Adata XPG SX8200 Pro is suitable for the following tasks:

  • Working with databases
  • Project development, programming
  • Windows / Linux system drive in desktop or laptop
  • WORM (write once, read many)
  • Game development and testing
  • SQL databases, Oracle
  • Regular office work (Word / Excel / SAP / photo and video processing)

Adata XPG SX8200 Pro is not suitable for the following tasks:

  • Virtualization
  • Backup storage
  • Use in storage systems (SAN/NAS)
  • Data Science, Big Data

Mikhail Degtyarev (aka LIKE OFF)
