What factors should be used to choose the server processor?

Things used to be simple: for servers there was one Pentium Pro processor, and there was nothing to choose. With the advent of multi-core processors it became clear that the more compute blocks, the better, but only until marketers stepped in and split a single CPU into ten socket variants with ten cache options and ten frequency bins, so that today choosing a server CPU is hell even for a trained technician. In this article we will compare the comparison coefficients themselves, which marketers are so fond of quoting in their documentation.

1. Price per Core

The simplest and most intuitive coefficient, obtained by simply dividing the processor's price by the number of physical cores. Because it ignores parameters such as processor architecture, cache size, and memory controller type, it is also the most useless metric, and currently benefits only AMD, which flaunts its 64-core processors before Intel.

Of course, it is worth remembering the Intel Xeon Phi series: x86-64 processors with up to 72 cores at an extremely low frequency of 1 to 1.7 GHz. In general, Intel likes such low frequencies; even in the entry-level line of general-purpose Xeon Bronze processors you can buy 6-core models running at 1.6 to 1.8 GHz, depending on the generation, where the price per core is also very low. And if you look at a random selection of server processors, the price per core tells you nothing at all!



[Chart: price per core, $ — Intel Xeon Phi 7290 (72C, 1.5-1.7 GHz); Intel Xeon Gold 5320H (20C, 2.4-4.2 GHz); Intel Xeon Bronze 3204 (6C, 1.9 GHz); AMD EPYC 7662 (64C, 2.0-3.3 GHz); AMD EPYC 7272 (12C, 2.9 GHz); AMD EPYC 7401P (24C, 2.0-3.0 GHz); AMD EPYC 7251 (8C, 2.1-2.9 GHz)]



In general, when you are considering a processor for a server, you should already know the load the machine will carry. If a 64-core processor offers the optimal price per core, but a 16-core machine is more than enough for your cloud, you will hardly want to pay extra for unnecessary cores.
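To make the metric concrete, here is a minimal sketch of the price-per-core calculation. The list prices below are rough assumed ballpark figures for illustration, not current quotes:

```python
def price_per_core(price_usd: float, cores: int) -> float:
    """Price-per-core metric: list price divided by physical core count."""
    return price_usd / cores

# Assumed ballpark list prices in USD -- substitute real quotes.
cpus = [
    ("Intel Xeon Bronze 3204, 6C", 215, 6),
    ("AMD EPYC 7662, 64C", 6150, 64),
    ("AMD EPYC 7272, 12C", 625, 12),
]
for name, price, cores in cpus:
    print(f"{name}: ${price_per_core(price, cores):.0f} per core")
```

Whatever ranking comes out, it says nothing about whether any of these cores actually fits your workload.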

2. Price per Megahertz

A better method is to multiply the number of cores by their base frequency, since modern operating systems have long been able to move tasks between cores without losing performance. The idea of evaluating your entire server, or entire cluster, by its total number of megahertz is actively promoted by VMware, and it looks very sensible, especially when you compare the load of virtual machines against the frequency capacity of the server or cluster. For example, if a virtual machine consumes 500 MHz on average with peaks up to 1.7 GHz, we can roughly say that an 8-core processor at 3 GHz will handle about 30 virtual machines, depending on how synchronously their consumption changes.
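The back-of-the-envelope sizing above can be sketched as follows. The 25% simultaneous-peak share is an assumption you would tune to your own workload:

```python
def mhz_capacity(cores: int, base_ghz: float) -> int:
    """Total frequency capacity in MHz: physical cores x base clock."""
    return int(cores * base_ghz * 1000)

def vm_estimate(capacity_mhz: int, avg_mhz: int, peak_mhz: int,
                peak_share: float = 0.25) -> int:
    """Rough VM count, assuming `peak_share` of VMs peak simultaneously."""
    per_vm = (1 - peak_share) * avg_mhz + peak_share * peak_mhz
    return int(capacity_mhz // per_vm)

cap = mhz_capacity(8, 3.0)          # 24000 MHz for an 8-core 3 GHz CPU
print(vm_estimate(cap, 500, 1700))  # roughly 30 VMs under this assumption
```

Raising `peak_share` models VMs that spike together and shrinks the estimate accordingly.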

This method has plenty of disadvantages too. First, there is the question of Hyper-Threading, which presents the operating system with 1 to 4 logical cores for each physical one. As a rule, they are useless in general-purpose tasks, and VMware even recommends disabling this technology in the BIOS; but if you count the frequency of logical cores, the comparison becomes unfair.

[Chart: processor capacity]

The second drawback is Turbo Boost, which can raise the frequency of just 1 core out of 32 or of all 32, while the processor documentation may not specify boost restrictions. Comparing two generations of EPYC and one Threadripper, we found that one processor boosts only 2 cores, another 4, and a third all 32. With such a balance of power it is logical to count the nominal core frequency in a system with good cooling, assuming the processor will not go into throttling.
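The effect of uneven boost behaviour on total capacity is easy to illustrate. The 2.0 GHz base and 3.3 GHz boost figures below are hypothetical, and real boost tables are more granular than this two-level model:

```python
def capacity_with_turbo(cores: int, base_ghz: float,
                        turbo_ghz: float, turbo_cores: int) -> float:
    """Total MHz when only `turbo_cores` hold the boost clock and the
    rest stay at base (a simplification of real boost tables)."""
    return (turbo_cores * turbo_ghz + (cores - turbo_cores) * base_ghz) * 1000

# The same 32-core part under three hypothetical boost behaviours:
for boosted in (2, 4, 32):
    mhz = capacity_with_turbo(32, 2.0, 3.3, boosted)
    print(f"{boosted:2d} boosted cores -> {mhz:.0f} MHz")
```

The gap between "2 cores boost" and "all cores boost" is large enough to change a purchasing decision, which is why counting only the nominal frequency is the safer comparison.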

And yet, with all its disadvantages, this is today the best coefficient for choosing a processor for a private cloud and virtualization.

3. Core-to-Watt power ratio

In European countries with high electricity prices, IT companies sometimes use the ratio of total TDP to the number of cores. This metric is not suitable for comparing AMD vs Intel, because AMD's processor may be a SoC while Intel's also requires you to account for the chipset's power consumption. Nor is it suitable for comparing 1 socket vs 2 sockets, because an additional processor socket usually demands more serious cooling with powerful fans, which kills all the benefit.
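A sketch of the watts-per-core comparison, including the platform adjustment mentioned above. The 15 W chipset figure and both TDP/core combinations are assumed numbers for illustration:

```python
def watts_per_core(tdp_w: float, cores: int, platform_w: float = 0.0) -> float:
    """TDP per core; `platform_w` adds chipset power for non-SoC platforms.
    Real draw depends on load, so treat this as an upper-bound estimate."""
    return (tdp_w + platform_w) / cores

print(f"SoC, 225 W, 64C:             {watts_per_core(225, 64):.2f} W/core")
print(f"CPU 205 W, 28C + 15 W chipset: {watts_per_core(205, 28, 15.0):.2f} W/core")
```

Even this simple adjustment shows why comparing a SoC against a CPU-plus-chipset platform on raw TDP alone is unfair.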

This metric suits only one rare case with AMD processors, when CPUs with different architectures can be installed in the same server: EPYC Naples with 32 cores or EPYC Rome with 64 cores, with all other components unchanged. Intel does not spoil us with such gifts and, on the contrary, likes to change sockets with or without reason, so I recommend not bothering with power-consumption-per-core ratios.

4. Benchmark points per dollar

It would seem the most logical ratio: the performance a benchmark delivers for every dollar invested. But there are drawbacks here too. First, you need to clearly understand how your loads parallelize: does it make sense to test a 64-core CPU on a task that cannot run more than 8 threads?
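The thread-limit point is easy to quantify: a workload capped at N threads simply cannot occupy the remaining cores (memory and I/O effects are ignored in this sketch):

```python
def usable_cores(physical_cores: int, max_threads: int) -> int:
    """Cores a workload can occupy when it scales to at most `max_threads`."""
    return min(physical_cores, max_threads)

idle = 64 - usable_cores(64, 8)
print(f"An 8-thread task on a 64-core CPU leaves {idle} cores idle")
```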

Second, the more diverse the load in the cloud, the harder it is to choose a benchmark that simultaneously accounts for CPU load, memory read intensity, and disk or network load. In the end you will be comparing servers, not processors.

Third, there are typical cases where cheap processors occupying the bottom lines of the performance table win such comparisons purely on their low price.
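This "cheap chip wins" effect is easy to reproduce. All scores and prices below are made up for illustration:

```python
def points_per_dollar(score: float, price_usd: float) -> float:
    """Benchmark score per dollar of CPU list price."""
    return score / price_usd

# Hypothetical benchmark scores and list prices:
cpus = [
    ("Budget 6-core",     4500,  215),
    ("Midrange 24-core", 16000, 1100),
    ("Flagship 64-core", 39000, 6150),
]
ranked = sorted(cpus, key=lambda c: points_per_dollar(c[1], c[2]), reverse=True)
for name, score, price in ranked:
    print(f"{name}: {points_per_dollar(score, price):.1f} pts/$")
```

Here the slowest part tops the table, even though it would be hopeless for the actual workload the fast parts were shortlisted for.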

Fourth, different processors may carry different technologies for accelerating typical tasks: an interconnect built directly into the CPU, hardware encryption offload, or various security-hardening features. These may be entirely unsupported by today's version of your software, appear in the next update, or, conversely, be switched off, which is also not uncommon.

Of course, if your main task is rendering on the CPU, then Cinebench points per dollar is your main metric; but again, a change of renderer can let it down.

Why don't complex coefficients like "W / core / memory channels / cache" work?

Basically, if you are required to justify a contract price by comparing processors, no one will be offended if you start introducing your own ratios of frequencies, caches, core counts and memory channel counts. Any parameter with a quantitative characteristic, even the pin count, can be placed in either the numerator or the denominator. The main thing is to present the importance of this ratio with a straight face.

In fact, neither the number of memory channels, nor the number of cores, nor their frequency or cache size will tell you how well a particular processor fits your needs.


For a company's private cloud, use the megahertz-per-dollar ratio, but count only physical cores and be sure to account for how Turbo Boost actually behaves: how many cores can run at what frequency. This information can be gleaned from reviews and tests. For render farms and similar well-defined tasks with dedicated servers, you can use the benchmark-points-per-price ratio.

All the other coefficients, such as "price per core", "core-to-TDP ratio" or "frequency per core", are so specific and apply so rarely that you can invent any of them yourself and flaunt hitherto unknown numbers in your presentations and requirement specifications.

Michael Degtjarev(aka LIKE OFF)
