The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2
by Ryan Smith on September 18, 2014 10:30 PM EST

Thief
The latest addition to our benchmark suite is Eidos Montreal's stealth action game, Thief. Set amidst a Victorian-era fantasy environment, Thief is an Unreal Engine 3 based title which makes use of a number of supplementary Direct3D 11 effects, including tessellation and advanced lighting. Adding further quality to the game on its highest settings is support for SSAA, which can eliminate most forms of aliasing while bringing even the most powerful video cards to their knees.
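Since SSAA comes up repeatedly here, a quick illustration of why it is so expensive: the game renders internally at a multiple of the output resolution, then filters the extra samples down to the final image. Below is a minimal NumPy sketch of a box-filter resolve; the 2x2 factor and the box filter are assumptions for illustration, not Thief's actual resolve.

```python
import numpy as np

def resolve_ssaa(hires, factor):
    """Average factor x factor blocks of supersampled pixels down to the
    output resolution (a simple box-filter resolve)."""
    h, w, c = hires.shape
    H, W = h // factor, w // factor
    # Each output pixel is the mean of a factor x factor block of samples;
    # this averaging smooths edge, shader, and texture aliasing alike.
    return hires.reshape(H, factor, W, factor, c).mean(axis=(1, 3))

# 2x2 SSAA shades 4x the samples per output pixel, hence the huge cost.
hires = np.random.rand(1440 * 2, 2560 * 2, 3).astype(np.float32)  # 1440p at 2x2
frame = resolve_ssaa(hires, 2)                                    # (1440, 2560, 3)
```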
Thief is another solid win for the GTX 980. The closest any other card gets is within 10%, and the lead only widens from there. Against the GTX 780 Ti this is a lead of anywhere between 10% and 16%, and against the R9 290XU it's 15-22%, with Mantle doing the card no favors for average framerates above 1080p.
The performance advantage over the GTX 780 and GTX 680 is also above average. The GTX 980 outruns the previous x80 card by 33% or more, and the GTX 680 by at least 80%.
On an absolute basis the GTX 980 won't quite crack 60fps at 1440p, but it comes very close at 56fps. And since Thief is running an internal form of SSAA, turning up the resolution to 4K and dropping the SSAA still yields playable framerates, though at 48fps it's closer to 45 than 60. 60fps is going to require a bit more horsepower than a single GTX 980 can deliver today.
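To put rough numbers on that tradeoff, here's a back-of-the-envelope comparison of shaded sample counts. The 2x2 SSAA factor is purely an assumption, since the game doesn't document the supersample ratio of its top preset:

```python
# Illustrative per-frame sample counts (assumed 2x2 SSAA; Thief's actual
# supersample factor at its highest preset is not documented).
ssaa_1440p = 2560 * 1440 * 4   # ~14.7M shaded samples
native_4k  = 3840 * 2160 * 1   # ~8.3M shaded samples, SSAA off

print(f"1440p + 2x2 SSAA: {ssaa_1440p / 1e6:.1f}M samples/frame")
print(f"4K, SSAA off:     {native_4k / 1e6:.1f}M samples/frame")
# That the 4K result (48fps) still trails 1440p+SSAA (56fps) despite fewer
# samples under this assumption suggests the assumption is conservative,
# or that shading isn't the only bottleneck at 4K.
```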
The GTX 980's performance advantage generally holds up when it comes to minimum framerates as well. Though it is interesting to note that until we get to 4K, the GTX 980 holds a larger minimum framerate advantage over the GTX 780 Ti than it does an average framerate advantage: 20% versus about 10%. On the other hand the use of Mantle begins to close the gap for the R9 290XU a bit, but it's still not enough to make up for the GTX 980's strong overall performance advantage, especially at 1080p.
Our delta percentages are once more unremarkable, with all cards consistently below 3% here.
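For those unfamiliar with the metric, delta percentages summarize frame time consistency: the average frame-to-frame swing in frame times, expressed as a percentage of the average frame time. A minimal sketch of such a calculation follows; this illustrates the idea, not necessarily the exact weighting our internal tooling uses.

```python
def delta_percentage(frame_times_ms):
    """Mean absolute frame-to-frame time delta as a percentage of the mean
    frame time. Lower is smoother; a few percent is effectively
    indistinguishable from a perfectly even frame cadence."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_delta = sum(deltas) / len(deltas)
    mean_time = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * mean_delta / mean_time

# A steady ~18ms cadence with one small hitch:
print(delta_percentage([18.0, 18.0, 18.1, 18.9, 18.0, 18.0]))  # ~2.0%
```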
274 Comments
Sttm - Thursday, September 18, 2014
"How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market?"My suggestion is they send their CEOs over to Intel to beg on their knees for access to their 14nm process. This is getting silly, GPUs shouldn't be 4 years behind CPUs on process node. Someone cut Intel a big fat check and get this done already.
joepaxxx - Thursday, September 18, 2014
It's not just about having access to the process technology and fab. The cost of actually designing and verifying an SoC at nodes past 28nm is approaching the breaking point for most markets; that's why companies aren't jumping on to them. I saw one estimate of $500 million for development of a 16/14nm device. You better have a pretty good lock on the market to spend that kind of money.
extide - Friday, September 19, 2014
Yeah, but the GPU market is not one of those markets where the verification cost will break the bank, dude.
Samus - Friday, September 19, 2014
Seriously, nVidia's market cap is $10 billion; they can spend a tiny fortune moving to 20nm and beyond... if they want to. I just don't think they want to saturate their previous products with such leaps and bounds in performance while also absolutely destroying their competition.
Moving to a smaller process isn't out of nVidia's reach; I just don't think they have a competitive incentive to spend the money on it. They've already been accused of becoming a monopoly after purchasing 3Dfx, and it'd be painful if AMD/ATI exited the PC graphics market because nVidia's Maxwell parts, being twice as efficient as GCN, were priced identically.
bernstein - Friday, September 19, 2014
atm. it is out of reach to them, at least from a financial perspective. while it would be awesome to have maxwell designed for & produced on intel's 14nm process, intel doesn't even have the capacity to produce all of their own cpus... until fall 2015 (broadwell xeon-ep release)...
kron123456789 - Friday, September 19, 2014
"it also marks the end of support for NVIDIA’s D3D10 GPUs: the 8, 9, 100, 200, and 300 series. Beginning with R343 these products are no longer supported in new driver branches and have been moved to legacy status." - This is it. The time has come to buy a new card to replace my GeForce 9800GT :)bobwya - Friday, September 19, 2014 - link
Such a modern card - why bother :-) The 980 will finally replace my 8800 GTX. Now that's a genuinely old card!! Actually, I mainly need to do the upgrade because the power bills are so ridiculous for the 8800 GTX! For pity's sake, the card only has one power profile (high power usage).
djscrew - Friday, September 19, 2014
Like +1
kron123456789 - Saturday, September 20, 2014
Oh yeah, modern :) It's only 6 years old :) But it can handle even Tomb Raider at 1080p with 30-40fps at medium settings :)
SkyBill40 - Saturday, September 20, 2014
I've got an 8800 GTS 640MB still running in my mom's rig that's far more than what she'd ever need. Despite getting great performance from my MSI 660Ti OC 2GB Power Edition, it might be time to consider moving up the ladder since finding another identical card at a decent price for SLI likely wouldn't be worth the effort.
So, either I sell off this 660Ti, give it to her, or hold onto it for a HTPC build at some point down the line. Decisions, decisions. :)