AMD and GlobalFoundries, CES 2011
by Jarred Walton on January 7, 2011 3:30 AM EST

(Belatedly) Examining AMD's Mobility 6000M
Last but not least, we have AMD's new mobile GPUs. We already discussed NVIDIA's new 500M lineup, but somehow AMD's 6000M lineup slipped through the cracks and we didn't get briefed in advance of the Tuesday unveiling. There was a bit of miscommunication between us and AMD: we thought we were being briefed in person today on products that would be announced post-CES, while AMD thought we already had the basic information and would just get some additional detail and hands-on time at the show. Well, that didn't quite happen. We don't have the depth of information we had with the 500M, but we did get the important details: shader counts, clock speeds, and so on. As with the GeForce 500M launch, the Radeon 6000M series includes some rebranding, but there are some completely new chips as well. Here's the rundown.
**AMD Radeon 6000M Specifications**

| | 6900M | 6800M | 6700M/6600M | 6500M | 6400M | 6300M |
|---|---|---|---|---|---|---|
| Target Market | Ultra Enthusiast | Enthusiast | Performance | Performance Thin | Mainstream | Value |
| Stream Processors | 960 | 800 | 480 | 400 | 160 | 80 |
| Transistors | 1.7 Billion | 1.04 Billion | 715M | 626M | 370M | 242M |
| Core Clock (MHz) | 560-680 | 575-675 | 500-725 | 500-650 | 480-800 | 500-750 |
| RAM Clock (MHz) | 900 (3.6GHz) | 900-1000 (3.6-4.0GHz) | 800-900 (3.2-3.6GHz) | 900 (3.6GHz) | 800-900 (3.2-3.6GHz) | 800-900 (1.6-1.8GHz) |
| RAM Type | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | GDDR5 / DDR3 | DDR3 |
| Bus Width | 256-bit | 128-bit | 128-bit | 128-bit | 64-bit | 64-bit |
| Compute Performance | ~1.31 TFLOPS | ~1.12 TFLOPS | 696 GFLOPS | 520 GFLOPS | 256 GFLOPS | 120 GFLOPS |
| Bandwidth (GB/s) | 115.2 | 57.6-64 | 51.2-57.6 GDDR5 or 25.6-28.8 DDR3 | 57.6 GDDR5 or 28.8 DDR3 | 25.6 GDDR5 or 12.8-14.4 DDR3 | 12.8-14.4 DDR3 |
| ROPs | 32 | 16 | 8 | 8 | 4 | 4 |
| UVD Version | UVD3 | UVD2 | UVD3 | UVD2 | UVD3 | UVD2 |
| Eyefinity | Up to 6 | Up to 6 | Up to 6 | Up to 6 | Up to 4 | Up to 4 |
| HDMI 1.4a | Yes | Via Software | Yes | Via Software | Yes | Via Software |
| DisplayPort 1.2 | Yes | No | Yes | No | Yes | No |
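The compute and bandwidth figures in the table follow from the usual peak-throughput formulas, so they're easy to sanity-check. Here's a quick sketch in Python using the table's numbers; the 2 FLOPs per stream processor per clock (multiply-add) and GDDR5's 4x effective data rate are standard assumptions for this generation, not something AMD spelled out in the briefing materials:

```python
def peak_gflops(stream_processors, core_mhz):
    # Each stream processor can issue a multiply-add (2 FLOPs) per clock.
    return stream_processors * 2 * core_mhz / 1000.0

def bandwidth_gbs(bus_width_bits, effective_gts):
    # Bytes per transfer (bus width / 8) times effective transfer rate in GT/s.
    return bus_width_bits / 8.0 * effective_gts

# 6900M: 960 SPs at the top 680MHz clock -> ~1.31 TFLOPS
print(peak_gflops(960, 680))    # 1305.6 (GFLOPS)
# 6700M: 480 SPs at the top 725MHz clock -> 696 GFLOPS
print(peak_gflops(480, 725))    # 696.0
# 6900M: 256-bit bus, 900MHz GDDR5 = 3.6GT/s effective
print(bandwidth_gbs(256, 3.6))  # 115.2 (GB/s)
```

The same math reproduces the rest of the table; where a range of clocks is listed, the quoted compute figure corresponds to the top of the range.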
All of the chips are still built on 40nm, but the 6900M, 6700M, and 6400M are new designs based on the Barts architecture, and all three include UVD3, HDMI 1.4a, and DisplayPort 1.2. On the rebranding side of things, the 6800M, 6500M, and 6300M are clock speed bumps of the existing 5000M series, which means they're still mobile variants of the previous Evergreen generation. AMD has apparently enabled a software "hack" that lets the rebrands do HDMI 1.4a, but they don't support DP1.2, and they also don't support Blu-ray 3D. (The HD 6430M also lacks 3D Blu-ray support.) We've previously covered the architectural enhancements in the Barts chips, so we won't dwell on them here. Clock for clock, Barts should be slightly faster than the previous generation, it's more power efficient, and it has a better video processing engine. One thing that sadly isn't showing up in mobile GPUs just yet is Cayman's PowerTune technology; we'll probably have to wait for the next generation of mobile chips to get PowerTune as an option, and we're hopeful it can do for mobile GPUs what Intel's Turbo Boost is doing for Sandy Bridge.
As with the NVIDIA hardware, the jury is still out on performance of the various solutions, but on paper everything looks reasonable. Starting at the bottom we have the 6300M, which looks to be a higher-clocked HD 5470. That's not going to win many awards for raw computational prowess, but as with NVIDIA's 410M/520M it provides an inexpensive option that comes with AMD's Catalyst drivers, so until Intel can get their Sandy Bridge IGP drivers to the same level we like having alternatives. Of course, we wouldn't want switchable graphics with something as slow as the 6300M, as the goal should be noticeably better performance. The new 6400M should handle that role nicely: sporting twice as many stream processors, it should offer a marked improvement over the 6300M/HD 5470. Any configurations that get GDDR5 should reach the point where the GPU core is the sole limiting factor on performance, and while we're not too fond of the 64-bit interface it should still be a good match for this "mainstream" offering.
Moving up to the next tier, we have the 6500M replacing the HD 5650, with the 6700M using the new architecture. The previous generation HD 5650 at 550MHz generally outperforms the NVIDIA GT 425M, so increasing the bandwidth and clock speeds (i.e. 6500M) should keep the series competitive with (or ahead of) the 525M/535M. The 6700M takes things a step further with 20% more stream processors, and provided the manufacturer uses GDDR5 you’ll get more than enough bandwidth—the 57.6GB/s figure makes the typical DDR3 configurations look archaic, but we worry there will be plenty of slower/cheaper DDR3 models on the market.
Finally, at the top we have the enthusiast and ultra-enthusiast offerings. The 6800M is once more a higher-clocked version of the existing Mobility HD 5850/5870. The 6900M is the potential killer product. Total compute performance is up 17% over the 6800M, which is nothing special, but the memory interface is specced at 256-bit and 900MHz, yielding a whopping 115.2GB/s of bandwidth. We've seen quite a few games in the past where memory bandwidth appears to be a limiting factor, and the 6900M addresses this in a big way. Bandwidth is 80% higher than the previous generation Mobility 5870 and the 6800M, and it's also 20% higher than what NVIDIA is offering with the GTX 485M. Of course, if the games/applications you're running aren't bandwidth limited, all that extra headroom might go to waste.
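Those percentage figures are easy to verify. A quick check in Python, where the GTX 485M's 96GB/s (256-bit at 3.0GT/s effective) comes from NVIDIA's published specs rather than the table above:

```python
hd6900m = 115.2  # GB/s, from the spec table
hd6800m = 64.0   # GB/s, top GDDR5 config from the spec table
gtx485m = 96.0   # GB/s, assumed from NVIDIA's GTX 485M specs (256-bit, 3.0GT/s)

print(f"6900M vs 6800M/5870: +{hd6900m / hd6800m - 1:.0%}")  # +80%
print(f"6900M vs GTX 485M: +{hd6900m / gtx485m - 1:.0%}")    # +20%
```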
As we stated in the NVIDIA 500M announcement, NVIDIA has a very compelling platform: Optimus Technology lets their GPUs work seamlessly with integrated graphics, delivering either full performance or power savings as appropriate. Okay, so there are occasional bugs to work out with Optimus, but I'd put it at roughly the same level of teething pain as current SLI support. Since NVIDIA lets you create custom profiles—for SLI as well as Optimus—most of the time things work out fine. The alternatives both involve compromises: a lack of regular driver updates in the case of switchable graphics, and lowered battery life with discrete-only.
AMD did inform us that they’re working on some updates to their switchable graphics design, which will involve putting a driver between the OS and the IGP/GPU drivers. They say it will allow users to update drivers for Intel’s IGP separate from the AMD GPU, and that it will address the concerns we’ve mentioned here and provide some needed competition to Optimus. When exactly will this new technology arrive and how will it work? That remains to be seen.
While I still think a good Optimus-enabled GPU with a quad-core Sandy Bridge processor is the best option for a balanced notebook, we need to see what AMD can do in terms of performance and battery life. Idle GPU power draw has been getting better with each generation, and we might not have to give up too much battery life. Certainly it's less complex to only deal with a single GPU inside a system. There will also be plenty of AMD IGP + GPU designs that can use switchable graphics with AMD drivers, and since both sets of hardware use the same driver you don't have to worry about a lack of support. We should see such configurations with Llano APUs later this year, but it's hard to imagine Llano keeping up with Sandy Bridge on the CPU side. That means Trinity in 2012 will be the real alternative to the current "fast CPU + fast GPU + IGP" ecosystem NVIDIA and Intel are pushing.
Wrapping things up, there are a lot of laptops at CES using Brazos, plenty of AMD and Intel CPUs paired with AMD 6000M GPUs, and of course the Intel CPU + NVIDIA GPU combinations we mentioned earlier in the week. The mobile market just keeps growing, and we look forward to seeing how these new NVIDIA and AMD GPUs stack up. The proof will be in the pudding as usual.
72 Comments
Ethaniel - Friday, January 7, 2011 - link
... and that's pretty much it. Fusion looks great but we need it in the market pronto (dropping prices as part of the process), and Bulldozer is, well... not there. Meanwhile, Sandy Bridge is laying waste to every single benchmark it touches. Clock's ticking...

medi01 - Friday, January 7, 2011 - link
Well, and what if you don't want to pay $200 for the CPU plus $130-150 for a new motherboard? (A brilliant marketing move by Intel: nobody bothers to note how much the "new CPU => new motherboard" concept costs.) Sandy Bridge is a nice line of CPUs, but the pricing and the required new mobo make it "oh well" if you are a typical gamer. Investing those hundreds of bucks into a GPU is likely to give much greater performance improvements.
ellarpc - Friday, January 7, 2011 - link
Medi01, lots of us are sitting around with cash in hand waiting to upgrade. I've been hobbling my X4 955 along since it came out waiting for the BD bomb. Bulldozer was supposed to be out before 2010. Then they promised to send out samples before the end of 2010. It's 2011 and they can't even show a tiny sample of it at CES. That looks bad for the chip being out this year. I don't think I can wait a whole other year for Bulldozer. The i7-2600k looks pretty tasty from where I'm standing. Too bad AMD doesn't see that they may lose potential upgraders if they wait too long.

vol7ron - Friday, January 7, 2011 - link
agreed

azguy90 - Wednesday, January 12, 2011 - link
Double agreed! I am going to be building a new computer when I get home from Afghanistan, and right now I am planning on a 2600k build, because BD is nowhere in sight.

medi01 - Friday, January 7, 2011 - link
I don't get the point of "upgrade for the sake of upgrade," especially considering you already have a modern 4-core CPU. What do you do on your PC that would justify giving out hundreds of bucks for the upgrade?

ellarpc - Friday, January 7, 2011 - link
I have a computer and like to be able to give my customers the best advice on the latest hardware. Anandtech helps me out a ton on the items I've never used or sold (i7-980x as an example), but using hardware myself is a big bonus. My wife, kids, and I are all gamers, so as I upgrade my computer I trickle down my hardware to my wife's and kids' computers, so they essentially get upgrades as well. It's a win-win for all of us. I have been holding off for longer than usual waiting for word on Bulldozer, but it doesn't look like it will happen any time soon.

ellarpc - Friday, January 7, 2011 - link
"computer shop"nofumble62 - Friday, January 7, 2011 - link
Anyone upgrading their system will have to buy a new motherboard nowadays. No difference whether AMD or Intel.
At least with AMD you get more features if not the highest speed (more PCIe lanes, more SATA 3 ports, and an actual "budget" range too).