Testing the latest x86 rack servers and low power server CPUs
by Johan De Gelas on July 22, 2009 2:00 AM EST, posted in IT Computing
Making Sense of AMD ACP/TDP and Intel TDP
The only thing clear about the TDP numbers of AMD and Intel is that they are confusing and not comparable. No power consumption or thermal design number is going to make much sense to the server buyer unless the method of determining power consumption (or dissipation) is precisely defined by an independent third party. From that point of view, AMD's Average CPU Power (ACP) only blurs the picture, even though it offers interesting information to those who are well informed about its purpose.
The idea behind ACP is good: the TDP number is for system builders and the people who design heatsinks, so you need another metric for the data center people. ACP should "reflect the CPU power consumption running typical data center workloads" according to AMD. That sounds like a reasonable goal, but we are less enthusiastic about the methodology. ACP is the geometric mean of what a CPU consumes running TPC-C, SPECcpu2006, SPECjbb2005, and Stream. Stream hardly stresses the CPU at all (it focuses on stressing memory bandwidth) and a geometric mean is always lower than an arithmetic mean. It seems to us that when it comes to power consumption it is better to use a slightly higher estimate than to introduce clever tricks to lower the number. Moreover, if you are running floating-point applications you should expect much higher power draw numbers than ACP suggests.
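As a rough numerical illustration of why this matters, here is a minimal Python sketch; the wattage figures are hypothetical, not AMD's actual measurements, and Stream's low CPU draw is what drags the ACP-style number down:

```python
from statistics import fmean, geometric_mean

# Hypothetical average power draw (watts) per benchmark; Stream mainly
# stresses memory bandwidth, so the CPU draws noticeably less there.
watts = {"TPC-C": 72, "SPECcpu2006": 78, "SPECjbb2005": 74, "Stream": 40}

arith = fmean(watts.values())          # plain average: 66.0 W
geo = geometric_mean(watts.values())   # ACP-style "round down": ~63.9 W

print(f"arithmetic mean: {arith:.1f} W")
print(f"geometric mean:  {geo:.1f} W")
```

The geometric mean is never higher than the arithmetic mean, and the gap widens as one benchmark's reading (here Stream's) falls further below the rest.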
TDP or Thermal Design Power is defined by AMD as (from AMD Family 10h Server and Workstation Processor Power and Thermal Data Sheet, Rev 3.04 - June 2009):
"The maximum power a processor draws for a thermally significant period while running commercially useful software. The constraining conditions for TDP are specified in the notes in the thermal and power tables."
That seems very similar to Intel's TDP, but in practice AMD's TDP is (almost) equal to the maximum electrical power a CPU can draw (current times voltage). Therefore, AMD's own definition is not accurate for the published TDP results. The irony is that the published Intel TDP numbers are more accurately described by AMD's definition. Intel's engineers measure the power draw of hundreds of commercially available software packages and ignore the "not thermally significant" peaks. All those power measurements are averaged and a small percentage (a buffer) is added. Thus, Intel's TDP is lower than the maximum power draw.
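To make the contrast concrete, here is a minimal sketch of the two approaches; all of the sample readings, the spike threshold, and the buffer percentage are illustrative assumptions, not Intel's or AMD's actual parameters:

```python
from statistics import fmean

# Hypothetical per-second power samples (watts) across many workloads.
samples = [68, 71, 95, 70, 73, 69, 94, 72, 70, 74]

# Intel-style "round up" average: ignore brief, thermally insignificant
# spikes, average what remains, then add a safety buffer.
SPIKE_CUTOFF = 90   # illustrative cutoff for "not thermally significant"
BUFFER = 1.10       # illustrative ~10% safety margin
sustained = [w for w in samples if w < SPIKE_CUTOFF]
intel_style_tdp = fmean(sustained) * BUFFER   # ~78 W

# AMD-style TDP: (almost) the electrical ceiling, max current times max
# voltage, independent of what real software actually draws.
V_MAX, I_MAX = 1.35, 72.0   # illustrative volts and amps
amd_style_tdp = V_MAX * I_MAX                 # ~97 W

print(f"Intel-style TDP: {intel_style_tdp:.0f} W")
print(f"AMD-style TDP:   {amd_style_tdp:.0f} W")
```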
In a nutshell:
- AMD's ACP uses a "round down" average of power measurements performed with industry standard benchmarks (usually running at 100% CPU load, with the exception of Stream).
- AMD's TDP is close to the electrical maximum a CPU can draw (when it is operating at its maximum voltage).
- Intel's TDP is a "round up" average of power measurements of processor intensive benchmarks.
If AMD applied Intel's methodology to determine TDP, they would end up somewhere between ACP and the current "AMD TDP". "There is no substitute for your own power measurements" is correct but an incredibly lame and unrealistic conclusion. Few IT professionals are going to perform power measurements on all the potentially interesting CPUs systematically. It would be nice if the SPEC committee would step up and define a realistic power measurement for both integer and floating-point loads based upon several industry standard benchmarks (with only basic tuning allowed), and not a flawed SPECjbb-based one. Until then, this report should offer you a second opinion. It is worth noting that SPEC has launched its new SPECweb2009, which includes a SPECpower methodology. This is certainly a step in the right direction, as SPECweb2009 is a lot more realistic than SPECjbb.
After so much theory, it is time to go hands-on again. Let's meet the server hardware we gathered for this test.
12 Comments
Doby - Thursday, July 23, 2009 - link
I don't understand why virtualization benchmarking is done with 16 or fewer VMs. With the CPU power of the newer CPUs you can consolidate far more on there. Why aren't the benchmarks done with VMs with varying workloads, around 5% or less utilization, and then see how many VMs a particular server can handle? It would be far more real world. I have customers running over 150 VMs on a 4 CPU box; the performance comparison of which CPU can handle 16 VMs better is completely bogus. It's all about how many VMs I can get without overloading the server (60-80% utilization).
JohanAnandtech - Thursday, July 23, 2009 - link
As explained in the article, we were limited by the amount of DDR-3 we had available. We had a total of 48 GB of DDR-3 and had to test up to four servers. It should not be too hard to figure out what the power consumption could have been with twice or even four times more memory: just add 5 Watts per DIMM. BTW, 150 VMs on one box is not extremely rare in the real world. Are those VDI VMs?
"the performance comparison of which CPU can handle 16 VMs better is completely bogus"
On a dual socket machine it is not. Why would it be "bogus"? I agree that in a perfect world we would have loaded each machine up to 48 GB per server (that is a fortune of 192 GB of RAM) and have run something like 20-30 VMs per server. A little bit of understanding of the limitations we have to face would make my day...
uf - Thursday, July 23, 2009 - link
What is the power consumption for a lightly loaded server (not idle!), say at 10% and 30% average CPU utilization per core?
MODEL3 - Wednesday, July 22, 2009 - link
In your comment you say: "If AMD applied Intel's methodology to determine TDP, they would end up somewhere between ACP and the current AMD TDP."
Are you referring exclusively to the server CPUs?
Because if not, the above statement is false and unprofessional.
I don't have access to server CPUs, but my experience with mainstream consumer CPUs tells me the exact opposite:
65nm dual core (same performance level), 65W max TDP:
both the 6420 (2.13GHz) & the 4600 (2.4GHz) have lower* actual TDP than the 5600 (2.9GHz)
45nm dual core (same performance level), 65W max TDP:
both the 7200 (2.53GHz) & the 6300 (2.8GHz) have lower* actual TDP than the Athlon 250 (3.0GHz)
45nm quad core (same performance level), 65W max TDP:
the Q8200S (2.33GHz) has lower* actual TDP than the Phenom II 905e (2.5GHz)
I don't even need to give details of the system configurations; everyone knows these facts.
* Not by much, but nevertheless lower (so from that point to the claim that "AMD's actual TDP is somewhere between AMD's ACP and Intel's TDP" there is a huge gap).
JohanAnandtech - Wednesday, July 22, 2009 - link
Correct. I only checked for server CPUs (see the pdf I linked).
JarredWalton - Wednesday, July 22, 2009 - link
There are several issues at work, particularly with desktop processors. For one, AMD and Intel both have a range of voltages on desktop parts, so (just throwing out numbers) one CPU might run at 1.2V and another sample of the same part might run at 1.225V - it's a small difference but it can show up. Next, Intel and AMD both seem to put out numbers that are a theoretical worst case, and the clock speed and voltage of a given chip help determine where the CPUs actually fall. The stated TDP on a part might be 65W, and with some 65W chips you can get very close to that while with others you might never get above 50W, but they'll both still state 65W.
The main point is that AMD's ACP ends up lower than what is realistic and their TDP ends up as essentially the worst-case scenario. (AMD parts are marketed with the ACP number, not TDP.) Meanwhile, Intel's TDP is higher than AMD's ACP but isn't quite the worst-case scenario of AMD's TDP.
I believe that's the way it all works out: Intel reports TDP that is lower than the absolute maximum but is typically higher than most users will see. AMD reports ACP that is more like an "average power" instead of a realistic maximum, but their TDP is pretty accurate. Even with this being the general case, processors are still released in families and individual chips can have much lower power requirements than the stated ACP/TDP - basically they should always come in equal to or lower than the ACP/TDP, but one might be 2W lower and another might be 15W lower and there's no easy way to say which it is without testing.
MODEL3 - Wednesday, July 22, 2009 - link
I mostly agree with what you're saying, except for 2 things:
1. AMD's TDP ends up as essentially the worst-case scenario: not true in all cases, e.g. the Phenom X4 9350e has an actual TDP higher than 65W.
2. In all the examples I gave, Intel & AMD had the same "official" TDP (also more or less the same performance & the same manufacturing process), so by your logic AMD should have a lower actual TDP than Intel, which is not true.
I live in Greece; here we pay €0.13 (inc. VAT) per kWh, so...
On another topic: did you see the new prices for the AMD Athlon II X2 245 ($66) & 240 ($60)? (while the Intel 5300 costs $64 & the 5400 $74)
They should have priced them at $69 & $78.
No wonder AMD is losing so much money; they have to immediately fire the idiots who did it (it reminds me of the days before K8 when AMD used these methods).
JPForums - Wednesday, July 22, 2009 - link
I'm having a hard time correlating your chart and your assessment.
"Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
Found that one. However, on the previous page you make this statement:
"So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
This implies to me that just adding the 3 DIMMs should have raised the power 15W.
"Add an Opteron EE to our AMD server and you add 22W."
Check. Did you add the 3 DIMMs here as well?
"The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
Can't find this one. There is a 3W difference between the Xeon L5520 and the Opteron 2377 EE. There is a 16W difference for the dual-CPU counterparts (closer). All the other comparisons leave the Intel platform consuming more power than the AMD counterpart. Is this supposed to be a comparison of the platform without the CPU? It is unclear to me given the words chosen. I was under the impression that the CPU is generally considered part of the platform.
"Intel's power gating is the decisive advantage here: it can turn the inactive cores completely off. Another indication is that the dual Opteron 2435 consumes about 156W when we turn off dynamic power management, which is higher than the Xeon X5570 (150W)."
An explanation of dynamic power management would be helpful. It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. The only way your statements make sense is if the dynamic power management you are talking about isn't a CPU level feature like clock gating. In any case, power management techniques are worthless if you can't use them.
As a side question, when the power management support issue with the Xeon X5570 is addressed and AMD has a new lower power platform, where do you predict the power numbers will end up? I'd still expect the "Nehalem" Xeons to win in performance/power, though.
JohanAnandtech - Wednesday, July 22, 2009 - link
Part 2 :-)
quote:
""The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
Can't find this one."
135W - 119W = 16W. I made a small error there (spreadsheet error).
"It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. "
More or less. There are two ways the CPU can save power: 1) lower the voltage and clock speed, or 2) shut down the cores you don't need. In the case of the Intel part, it is better at shutting down the cores it doesn't need: they are simply shut off completely and consume close to 0 W. In the case of AMD, each idle core still consumes a few watts.
So if you turn SpeedStep and PowerNow! off, you can see the effect of the second way to save power. It confirms our suspicion of why the Opteron EE is not able to beat the L5520 when running idle.
JohanAnandtech - Wednesday, July 22, 2009 - link
I'll chop my answers up to keep these comments readable.
quote:
"Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
Found that one. However, on the previous page you make this statement:
"So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
This implies to me that just adding the 3 DIMMs should have raised the power 15W. "
No. Because the 9W is measured at idle. It is too small to measure accurately, but DIMMs do not consume 5W per DIMM in idle. Probably more like 1W or so.