The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2
by Ryan Smith on September 18, 2014 10:30 PM EST

Total War: Rome 2
The second strategy game in our benchmark suite, Total War: Rome 2 is the latest entry in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. The game comes with a built-in benchmark that plays out over a forested area with a large number of units, which stresses the GPU in particular.
For this game in particular we’ve also turned down the shadows to medium. Rome’s shadows are extremely CPU intensive (as opposed to GPU intensive), so this keeps us from becoming CPU bottlenecked nearly as easily.
Of all of our games, there is no better benchmark for the GTX 980 than Total War: Rome II. It never beats AMD’s and NVIDIA’s last-generation cards by as much as it does here.
Compared to the GTX 780 Ti the GTX 980 is a consistent 16-17% ahead at all resolutions. Meanwhile against the R9 290XU this is an 18% lead at 1080p and 1440p. The R9 290XU only begins to catch up at 4K Very High quality, where the GTX 980 still leads by a respectable 8%.
This is also a very strong showing compared to the GTX 680. The overall lead is 80-95% depending on the resolution. The GTX 980 was not necessarily meant to double the GTX 680’s performance, but it comes very close to doing so here at 1440p.
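To put those percentages in perspective, a 95% lead is the same thing as a 1.95x frame rate multiplier, which is why this counts as "very close" to doubling. Below is a minimal sketch of that conversion in Python; the frame rates in it are hypothetical placeholders for illustration, not our actual benchmark numbers.

    # Minimal sketch: how a "% lead" maps to a frame rate multiplier.
    # The fps values below are hypothetical placeholders, not benchmark data.

    def lead_percent(new_fps: float, old_fps: float) -> float:
        """Percentage by which new_fps leads old_fps."""
        return (new_fps / old_fps - 1.0) * 100.0

    gtx_680_fps = 30.0   # hypothetical GTX 680 result at 1440p
    gtx_980_fps = 58.5   # hypothetical GTX 980 result at 1440p

    print(f"{lead_percent(gtx_980_fps, gtx_680_fps):.0f}% lead")  # -> 95% lead
    # A 95% lead means 1.95x the frame rate, i.e. just shy of outright doubling.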
Given what happens to the GK104 cards in this game, I suspect we’re looking at the results of the ROP advantage, a very good case of CUDA core occupancy improvements, or both. The fact that the lead over the GTX 780 Ti is so consistent across all resolutions does point to the CUDA core theory, but we can’t really rule out the ROPs with the information we have.
As for results on an absolute basis, not even the mighty GTX 980 is going to crack 30fps at 4K with Extreme settings. In lieu of that, Very High quality comes off quite well at 49fps, and we’re just shy of hitting 60fps at 1440p with Extreme.
274 Comments
Sttm - Thursday, September 18, 2014 - link
"How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market?"My suggestion is they send their CEOs over to Intel to beg on their knees for access to their 14nm process. This is getting silly, GPUs shouldn't be 4 years behind CPUs on process node. Someone cut Intel a big fat check and get this done already.
joepaxxx - Thursday, September 18, 2014 - link
It's not just about having access to the process technology and fab. The cost of actually designing and verifying an SoC at nodes past 28nm is approaching the breaking point for most markets; that's why companies aren't jumping onto them. I saw one estimate of $500 million for development of a 16/14nm device. You better have a pretty good lock on the market to spend that kind of money.

extide - Friday, September 19, 2014 - link
Yeah, but the GPU market is not one of those markets where the verification cost will break the bank, dude.

Samus - Friday, September 19, 2014 - link
Seriously, nVidia's market cap is $10 billion; they can spend a tiny fortune moving to 20nm and beyond... if they want to.

I just don't think they want to saturate their previous products with such leaps and bounds in performance while also absolutely destroying their competition.
Moving to a smaller process isn't out of nVidia's reach; I just don't think they have a competitive incentive to spend the money on it. They've already been accused of becoming a monopoly after purchasing 3Dfx, and it'd be painful if AMD/ATI exited the PC graphics market because nVidia's Maxwell cards, being twice as efficient as GCN, were priced identically.
bernstein - Friday, September 19, 2014 - link
atm. it is out of reach to them, at least from a financial perspective.

while it would be awesome to have maxwell designed for & produced on intel's 14nm process, intel doesn't even have the capacity to produce all of their own cpus... until fall 2015 (broadwell xeon-ep release)...
kron123456789 - Friday, September 19, 2014 - link
"it also marks the end of support for NVIDIA’s D3D10 GPUs: the 8, 9, 100, 200, and 300 series. Beginning with R343 these products are no longer supported in new driver branches and have been moved to legacy status." - This is it. The time has come to buy a new card to replace my GeForce 9800GT :)bobwya - Friday, September 19, 2014 - link
Such a modern card - why bother :-) The 980 will finally replace my 8800 GTX. Now that's a genuinely old card!!

Actually I mainly need to do the upgrade because the power bills are so ridiculous for the 8800 GTX! For pity's sake, the card only has one power profile (high power usage).
djscrew - Friday, September 19, 2014 - link
Like +1

kron123456789 - Saturday, September 20, 2014 - link
Oh yeah, modern :) It's only 6 years old :) But it can handle even Tomb Raider at 1080p with 30-40fps at medium settings :)

SkyBill40 - Saturday, September 20, 2014 - link
I've got an 8800 GTS 640MB still running in my mom's rig, and it's far more than what she'd ever need. Despite getting great performance from my MSI 660Ti OC 2GB Power Edition, it might be time to consider moving up the ladder since finding another identical card at a decent price for SLI likely wouldn't be worth the effort.

So, either I sell off this 660Ti, give it to her, or hold onto it for an HTPC build at some point down the line. Decisions, decisions. :)