Intel’s Dual-Core Xeon First Look
by Jason Clark & Ross Whitehead on December 16, 2005 12:05 AM EST
Posted in IT Computing
How does power consumption affect the bottom line?
We've illustrated each system's power consumption at different load levels. For the technical folks, those numbers are important on their own. But for the platform decision maker, the CTO, the Director of Technology, or anyone else in a role that looks at the bigger picture, it's important to show how power consumption affects the bottom line. Obviously, the more power you use, the more money it costs you. To illustrate this, we used a rate of 14 cents per kWh, taken from a residential power bill in the New York/Connecticut area, and then extrapolated the cost of each platform at the various load levels.
At 40-60% load, it costs $42 a month to run a Bensley system, while an Opteron system would cost $25. Factor that over a year, and a Bensley system would cost $504 to run while an Opteron would cost $300. Now, if that doesn't pique your interest, say that you have 40 servers in a datacenter with the same power characteristics. Over a year, it would cost you $8,160 more to run the Bensley systems.
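For readers who want to run the numbers themselves, here is a minimal sketch of the arithmetic. The average wattages are illustrative assumptions back-calculated from the monthly figures above, not measured values from our testing.

```python
# Rough sketch of the power-cost arithmetic used above.
# The average wattages are illustrative assumptions back-calculated from
# the $42 and $25 monthly figures, not measured values.

RATE_PER_KWH = 0.14      # $/kWh, from a residential power bill
HOURS_PER_MONTH = 730    # ~24 * 365 / 12

def monthly_cost(avg_watts: float) -> float:
    """Electricity cost per month for a box drawing avg_watts continuously."""
    kwh = avg_watts / 1000 * HOURS_PER_MONTH
    return kwh * RATE_PER_KWH

bensley_monthly = monthly_cost(410)   # ~$42
opteron_monthly = monthly_cost(245)   # ~$25

yearly_delta_per_server = (bensley_monthly - opteron_monthly) * 12
fleet_delta = yearly_delta_per_server * 40   # ~$8,100; the article's $8,160 uses the rounded $42/$25 figures

print(f"Bensley: ${bensley_monthly:.0f}/month, ${bensley_monthly * 12:.0f}/year")
print(f"Opteron: ${opteron_monthly:.0f}/month, ${opteron_monthly * 12:.0f}/year")
print(f"40-server delta: ${fleet_delta:,.0f}/year")
```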
Conclusion
So, you've read through the article and are waiting for us to tell you which platform is better. That depends entirely on what matters to you: performance, power, or both. If all you care about is performance, then Bensley is that platform. If you care about how much power you consume, then Opteron is that platform. And if you want the best performance per Watt, then Opteron is that platform as well. At least, that is what the database test results we've shown here indicate. Given Intel's roadmap, it is clearly developing platforms to address performance per Watt: Woodcrest, due in the second half of 2006, will be the first product focused on it. Until then, the choice is yours.
67 Comments
Viditor - Friday, December 16, 2005 - link
1. Double the number (probably more, but for the sake of argument make it double) for the difference in air conditioning.
2. The PSU draws power at about 75-80% efficiency (on average), so increased power demand increases the loss from PSU inefficiency.
I can't say that the AT number is right, but I can't say it's wrong either...
Furen - Friday, December 16, 2005 - link
Like I said, cooling normally matches the system's power consumption, so the difference is actually 2x $8,160. To this you apply a 20% increase due to PSU inefficiency and you get $19,584. This is why everyone out there is complaining about power consumption and this is why performance per watt is the wave of the future (this is what we all said circa 2001... when Transmeta was big) for data centers. I think Intel will have a slight advantage in this regard when it releases its Sossaman (or whatever the hell that's called) unless AMD can push its Opteron a bit more (a 40W version would suffice, considering that it has many benefits over the somewhat-crippled Sossaman).
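A quick sketch of this back-of-the-envelope adjustment; the 1:1 cooling overhead and 20% PSU-loss factor are the commenter's rules of thumb, not measured values.

```python
# Sketch of the back-of-the-envelope total-cost adjustment from the comment.
# The 1:1 cooling overhead and 20% PSU-loss factor are rules of thumb,
# not measured values.

annual_it_delta = 8160   # $/year, the article's 40-server difference
cooling_factor = 2.0     # cooling assumed to match IT power draw 1:1
psu_loss_factor = 1.2    # ~20% extra drawn due to PSU inefficiency

total_delta = annual_it_delta * cooling_factor * psu_loss_factor
print(f"Estimated total annual cost difference: ${total_delta:,.0f}")  # ~$19,584
```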
Viditor - Friday, December 16, 2005 - link
Sossaman being only 32-bit will be a fairly big disadvantage, but it might do well in some blade environments...
The HE line of dual-core Opterons has a TDP of 55W, which means that their actual power draw is substantially less. 40W?...I don't know. If they are able to implement the new strained-silicon process when they go 65nm, then probably at least that low...
Furen - Friday, December 16, 2005 - link
Yes, that's what I meant by "...considering it has many benefits over the somewhat-crippled Sossaman"... that, and the significantly inferior floating-point performance. I've never worked with an Opteron HE, so I can't say what their power consumption is like. The problem is not just the hardware itself, though, but also the fact that AMD does not PUSH the damn chip. It's hard to find a non-blade system with HEs, so AMD probably needs to drop prices a bit on these to get people to adopt them in regular rack servers.
Viditor - Friday, December 16, 2005 - link
Me neither...and judging it by the TDP isn't really a good idea either (but it does give a ceiling). Another point we should probably look at is the FB-DIMMs... I've noticed that they get hot enough to require active cooling (at least on the systems I've seen). I know that Opteron is supposedly going to FBDs with the Socket F platform (though that's unconfirmed so far). This brings up 2 important questions...
1. How well do the Dempseys do using standard DIMMs?
2. How much of the Dempsey test system power draw is for the FBDs?
Heinz - Saturday, December 17, 2005 - link
Go to:
http://www.amdcompare.com/techoutlook/
There, FB-DIMM is mentioned for 2007 (platform overview).
Thus, either the whole Socket F platform is pushed to 2007, or Socket F simply uses DDR2, or maybe both :)
But I guess it will be DDR2. Historically, AMD has used older, proven RAM technologies at higher speeds, like PC133 SDRAM and DDR400... and now it would be DDR2-667/800.
Just wondering if AMD will then introduce another "Socket G" as early as 2007?
byebye
Heinz
Furen - Friday, December 16, 2005 - link
Dempseys should perform about as badly as current Paxvilles if they use normal DDR2. I seriously doubt anyone (basically, Intel) is going to make a quad-channel DDR2 controller, since it requires lots and lots of traces and there's better technology out there (FB-DIMM IS better, whatever everyone says; it's just having its introduction quirks). Remember that Netbursts are insanely bandwidth-starved, so having quad-channel DDR2 (which is enough to basically saturate the two 1066MHz front-side buses) is extremely useful in a two-way system. Once you get to 4-way, however, the bandwidth starvation comes back again.
Intel is being smart by sending out preview systems with the optimal configuration, and I'm somewhat disappointed that reviewers don't point out that this IS the best these chips can get. Normally people don't even mention that the quad-channel DDR2 is what gives these systems their performance benefit, but let it be assumed that it's the 65nm chips that perform better than the 90nm parts (for some miraculous reason) under the same conditions. That's why I'm curious about how these perform on a 667MHz FSB. Having quad-channel DDR2-533 is certainly not very useful with such a low FSB, I think. Remember how Intel didn't send Paxvilles out to reviewers? I'd guess that the 667FSB Dempseys will perform even worse than those.
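For context, a rough sketch of the theoretical peak-bandwidth numbers behind the FSB-saturation argument above (64-bit-wide channels and buses assumed; these are peaks, not measured throughput):

```python
# Rough sketch of the peak-bandwidth math behind the FSB-saturation argument.
# Assumes 8-byte (64-bit) wide memory channels and front-side buses;
# these are theoretical peaks, not measured throughput.

def peak_gbps(mega_transfers: float, bytes_wide: int = 8) -> float:
    """Peak bandwidth in GB/s for a bus at mega_transfers MT/s."""
    return mega_transfers * bytes_wide / 1000

ddr2_533_channel = peak_gbps(533)        # ~4.3 GB/s per channel
four_channels = 4 * ddr2_533_channel     # ~17.1 GB/s total memory bandwidth

fsb_1066 = peak_gbps(1066)               # ~8.5 GB/s per FSB
fsb_667 = peak_gbps(667)                 # ~5.3 GB/s per FSB

print(f"4x DDR2-533 channels: {four_channels:.1f} GB/s")
print(f"2x 1066MHz FSBs:      {2 * fsb_1066:.1f} GB/s")  # roughly matched by the memory
print(f"2x 667MHz FSBs:       {2 * fsb_667:.1f} GB/s")   # much of the memory bandwidth goes unused
```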
IntelUser2000 - Friday, December 16, 2005 - link
What do you mean?? Bensley uses 4-channel FB-DIMM, if you didn't know. And the people who research this stuff/have looked into the technical details say that Intel's memory controller design rivals the best, if not beats them.
Also, Bensley uses DDR2-533, while Lindenhurst uses DDR2-400. We all know that DDR2-533 is faster than DDR2-400.
Yes, but that's certainly better than dual-channel DDR2-400 with an 800MHz FSB, since Bensley will have two FSBs anyway.
Paxville and Dempsey have the same amount of cache, the only difference being that Dempsey is clocked higher.
Furen - Friday, December 16, 2005 - link
If you read what you quoted again, I said that Dempseys should perform the same as current Paxvilles if they use NORMAL DDR2 (as in not FB-DIMMs). I know there are 4 FB channels on Bensley, but Viditor wondered how Dempseys would perform on regular DDR2. Yes, Intel's memory controllers are among the best (nVidia's DDR2 memory controller is slightly better, I think), but I said that I don't believe they will make a quad-channel DDR2 northbridge, since the amount of traces coming out of a single chip would be insane. You're correct about Lindenhurst sucking.
Consider having two FBD-533 channels: basically, the two CPUs (4 cores) would share the equivalent of a 1066MHz FSB in memory bandwidth. Having an insanely wide FSB doesn't help if you don't have any use for it, and memory bandwidth limitations always hurt Netbursts significantly. The same could be said for having 4 FB channels and running at dual 667MHz FSBs. I don't even want to think about Intel's quad-core stuff, though the P6 architecture is much less reliant on memory bandwidth than Netburst.
Viditor - Friday, December 16, 2005 - link
Some good points, Furen...it seems to me that with Dempsey on Bensley, Intel will finally be competitive in the dual dual-core market. I agree that quad dual-core will still be a problem for them...in the dual dual-core sector, though, while I doubt Intel will regain any market share, they may at least slow down the bleeding.