The RV770 Story: Documenting ATI's Road to Success
by Anand Lal Shimpi on December 2, 2008 12:00 AM EST - Posted in GPUs
Depression Sets in but the Team Goes On
The entire RV770 design took around three years, which means that while we were beating ATI up over the failure that was R600, those very engineers had to go into work and stay positive about RV770. And it was tough; after all, ATI had just completely lost the crown with R600, and Carrell, Rick Bergman and others were asking the team to ignore what had happened with R600, ignore the fact that they had lost the halo, and try to build a GPU aimed at a lower market segment.
Through all of my interviews, the one thing that kept coming up was how impressed ATI was with the RV770 team. Never once did the team fall apart; despite disagreements and a shaky direction, it powered through.
The decision not to go for the king-of-the-hill part made a lot of sense to ATI, but there was so much history about what would happen if you didn't get the halo part; it took very strong discipline to cast that history aside and do what the leads felt was right. The team did it without question.
The discipline required wasn't just to ignore history, but also to fight the natural tendency for chips to grow without limits during their design phase. What ATI achieved with RV770 reminded me a lot of Intel's Atom design team: each member of that team had strict limits on how big their blocks could be, and those limits didn't waver.
Adversity tends to bring out the best in people. In the best stories I've been told in this industry, the Intel folks who made Banias and the ATIers responsible for RV770 put their hearts and souls into their work despite being beaten down. Passion has a funny way of being a person's strongest ally.
The Power Paradigm
We were all guilty of partaking in the free lunch. Intel designed nearly five years' worth of processors without any concern for power consumption, and the GPU guys were no different.
In the R300 and R420 days ATI almost entirely ignored power; estimates of how much power the parts would use were so far off from the final product that the engineers simply didn't care. It was such a non-issue in those days that ATI didn't even have a good way to estimate power had it wanted to; designing for a specific TDP was impossible. Today ATI's tools are a lot better: targeting a specific TDP is now no different from aiming for a specific clock speed or die size. It's just another variable that can be controlled.
These days power budgets don't change much; the thermal envelopes carved out over the past couple of years are pretty much stationary (ever wonder why high-end CPUs always fall around 130W?). Everyone designs up to their power envelope and stays there. What matters now is increasing performance every year or two while staying within the same power budget. Our processors, both CPUs and GPUs, are getting more athletic rather than just putting on pounds to lift more weight.
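The design-to-a-fixed-envelope idea can be sketched with the classic first-order dynamic power relation, P ≈ C·V²·f. This is a simplified illustration, not ATI's actual tooling, and every constant below is a made-up placeholder:

```python
# Illustrative sketch of designing to a fixed power envelope using the
# first-order dynamic-power model P ~ C * V^2 * f.
# C_EFF is a placeholder "effective switched capacitance"; none of
# these numbers come from a real ATI or Intel design.

def dynamic_power(c_eff, voltage, freq_ghz):
    """Approximate dynamic switching power, in watts."""
    return c_eff * voltage ** 2 * freq_ghz

def max_freq_within_budget(c_eff, voltage, budget_w):
    """Highest clock (GHz) that stays inside the power budget."""
    return budget_w / (c_eff * voltage ** 2)

C_EFF = 25.0      # assumed effective capacitance (arbitrary units)
BUDGET_W = 130.0  # the ~130W high-end envelope mentioned above

for v in (1.0, 1.1, 1.2):
    f = max_freq_within_budget(C_EFF, v, BUDGET_W)
    print(f"V = {v:.1f} V -> max clock within budget ~ {f:.2f} GHz")
```

The point of the exercise: once the 130W ceiling is fixed, clock speed and voltage stop being free variables; raising one forces the other down, which is exactly why TDP can be treated like any other design target.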
One of the more interesting things about architecting for power is that simply moving data around these ~1 billion transistor chips takes up a lot of power. Carrell told me that by the time ATI is at 45nm and 32nm, it will take as much power to move the data to the FPU as it does to do the multiply.
Given that data movement is an increasingly power-hungry task, a big focus going forward will be keeping data local when possible and minimizing how far it has to travel to registers and on-chip caches. We may see more local register files and more multi-tiered memory hierarchies; as chips get more complex, keeping the register file in one central location becomes a problem.
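Carrell's point about data movement catching up with the multiply itself can be worked through as a back-of-envelope calculation. The picojoule figures below are illustrative assumptions I've invented for the sketch, not measured ATI numbers; only the trend (movement energy shrinking more slowly than compute energy) reflects what he described:

```python
# Back-of-envelope sketch: energy to move an operand across the chip
# vs. energy for the FP multiply itself, by process node.
# All picojoule values are assumptions for illustration only.

ENERGY_PJ = {
    # node: (assumed pJ to move operand to the FPU, assumed pJ for the multiply)
    "65nm": (6.0, 12.0),
    "45nm": (5.0, 6.0),
    "32nm": (4.5, 4.0),
}

for node, (move_pj, mul_pj) in ENERGY_PJ.items():
    ratio = move_pj / mul_pj
    print(f"{node}: move/multiply energy ratio ~ {ratio:.2f}")
```

With numbers like these, the movement cost crosses the compute cost somewhere around 45nm-32nm, which is why keeping operands in local register files starts to pay off more than adding raw ALUs.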
ATI admitted to making a key manufacturing mistake with R600. The transistor technology selected for R600 was performance focused, designed to reach high clock speeds, and it yielded a part with poor performance per watt - something we noticed in our review. ATI has since backed away somewhat from the bleeding edge and now opts for more power efficiency within a given transistor node. With leakage a growing problem at smaller geometries, it's not worth being super leaky just to gain a few picoseconds. If you've got a 100W GPU, do you want to waste 40W of that budget on leakage? Or would you rather do 80W of real work and waste only 20W? It's the same realization Intel came to during the Pentium 4's tenure, and it's the mentality that gave us the Core microarchitecture. It's an approach that just makes sense.
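The 100W example above can be made concrete. The leakage fractions here are the ones from the paragraph (40% for a leaky performance-tuned process, 20% for an efficiency-tuned one); everything else is just arithmetic:

```python
# Working through the leakage example from the text: a fixed 100W
# budget split between leakage (wasted) and switching (real work).

def useful_work(budget_w, leakage_fraction):
    """Return (watts of real work, watts lost to leakage)."""
    leakage = budget_w * leakage_fraction
    return budget_w - leakage, leakage

for label, frac in (("performance-tuned process", 0.40),
                    ("efficiency-tuned process", 0.20)):
    work, wasted = useful_work(100.0, frac)
    print(f"{label}: {work:.0f}W of real work, {wasted:.0f}W leaked")
```

Same 100W out of the wall either way; the efficiency-tuned process simply converts 20W more of it into computation, which is the whole argument for not chasing the last few picoseconds of switching speed.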
116 Comments
pastyface - Wednesday, December 3, 2008 - link
Great job on the article! Generally today's reviews consist of me quickly going to the benchmarks portion and seeing if a new game was used or if any screwy results came out. This article however was much different. You had my attention from the get-go and I didn't take a break in my reading until the whole article was finished. It is a real shame that so much of the work in reviews is overlooked in favor of simple graphs, but this article was different and I thank you for that.
MarchTheMonth - Wednesday, December 3, 2008 - link
I really enjoyed the read, and it really gives me an appreciation for the card I just happened to get (HD 4850). I may not speak for others, but these are the kind of articles I like to read, the kind that really explain in detail what's really happening. Anand, you did an excellent job of giving perspective (being in ATI's shoes in 2007 when nvidia was doing this... etc.), showing the difference between the "so obvious" hindsight we have now and the "this is suicide!" view it must have seemed like to be there in 2005.
Now, for my own counter-perspective, I can understand why AMD, Intel, and nVidia may not do this very often. On the flip side of the coin, I'm not a mainstream user, and I don't exactly build 1000s of computers that ATI can sell. Bottom line: a story that's interesting to me doesn't bring them $$$$. And on top of that, this story is also giving a lot of info to the competition, which can be at best a double-edged sword, and at worst too much information to be self-destructive.
belladog - Wednesday, December 3, 2008 - link
Excellent article, we love this stuff. Benchmarks get a bit boring after a while. I wonder what effect, if any, the revelations about price fixing (between ATI and Nvidia) had on the pricing of the RV770 GPUs?
I mean, if the lawyers hadn't broken up the party, maybe the 4870 could have been $80-$100 dearer?
Anyway, interesting article.
Griswold - Wednesday, December 3, 2008 - link
The price fixing took place before AMD bought ATI. And it would be safe to assume that it stopped at that time at the latest, but it probably stopped well before that (the earliest evidence is an e-mail from 2002). AMD should know better than to point the finger at Intel while doing something equally wrong in another segment of its business.
feelingshorter - Wednesday, December 3, 2008 - link
Keep up the good work and never let the haters get you down! There are always people b!tching when they don't know how hard it is to write well (any moron can "write"). But it's good stories like this that have been the bread and butter of Anandtech.
The pressure of deadlines, writer's block, not having enough to write - I appreciate what you do and I know it's stressful at times. Others can sympathize, but I can empathize, having been an amateur journalist myself (in high school and at the university newspaper).
san1s - Wednesday, December 3, 2008 - link
If this article were about an nvidia GPU then the ati fanboys would proclaim it reeks of bias. Good article though, Anandtech.
Adul - Wednesday, December 3, 2008 - link
That was one of the best articles I've read in a while. It was very enjoyable to get an idea of how things are worked out years in advance of when the product launches. This was a huge gamble on AMD/ATI's part. My hat's off to them for having the balls to do something different.
dani31 - Wednesday, December 3, 2008 - link
Speaking of AMD, it would have been nice to have more insight into how the acquisition of ATI fit into the design process. But this topic seems to have been deliberately omitted from this article.
JonnyDough - Wednesday, December 3, 2008 - link
Maybe that's because they interviewed the ATI chip designers and not the AMD head honchos? Just a thought.
lifeobry - Wednesday, December 3, 2008 - link
Really fascinating article. The amount of work put into creating these cards and the competition between the two companies is compelling stuff.