CrossFireX Arrives: First Look at 3 and 4 GPUs in 2 Card Setups
by Derek Wilson on March 7, 2008 12:05 PM EST
Posted in: GPUs
World in Conflict Performance
Version: 1.005
Settings: Medium quality plus Heat Haze, Debris Physics, and DX10
We tested this game using the built-in benchmark feature of the game. In our experience, this does a good job of testing the different graphical scenarios that can be encountered in the game.
From the data, World in Conflict looks as though we had Vsync enabled, the system was severely CPU limited, or the in-game framerate cap was on. None of these was the case: minimum and maximum framerates ranged from roughly 30 to 160 fps, so some other factor is behind the averages looking so flat across resolutions.
The 9600 GT SLI setup was able to break past this barrier and post average framerates higher than 60 fps. The only major difference is that we had to use a different driver for the 9600 GT. Given what we experienced in our recent Dell XPS M1730 article, the 170 series drivers help significantly in World in Conflict and Crysis; unfortunately, no official beta or other 170 series driver with 8800 Ultra support is available. We are investigating further and waiting for driver updates from NVIDIA.
It seems clear that there is some limit on performance scaling here with our test platform. We expect future driver updates to significantly help both SLI and CrossFireX.
Pushing resolution higher is the way to get more value out of multi-GPU here. Increasing settings may help, and we will go back and look at higher settings with these configurations in the future. Running at Very High is still not a viable option, but there is room for customization to end up with a workable stress test for current high-end systems.
World in Conflict Performance (average frames per second)

| Card | 1280x1024 | 1600x1200 | 1920x1200 | 2560x1600 |
|---|---|---|---|---|
| NVIDIA GeForce 9600 GT SLI | 73 | 68 | 63 | 51 |
| NVIDIA GeForce 8800 Ultra SLI | 58 | 56 | 54 | 52 |
| NVIDIA GeForce 8800 Ultra | 58 | 56 | 53 | 44 |
| NVIDIA GeForce 9600 GT | 60 | 50 | 44 | 30 |
| AMD Radeon HD 3870X2 (x 2) | 63 | 60 | 60 | 58 |
| AMD Radeon HD 3870X2 + 3870 | 63 | 60 | 60 | 57 |
| AMD Radeon HD 3870X2 | 63 | 60 | 59 | 52 |
| AMD Radeon HD 3870 | 57 | 51 | 47 | 34 |
With most of the data compressed under 60 fps, it is hard to get a clear understanding of what's going on. In the interest of reporting what we actually saw, the above chart shows our results.
36 Comments
DerekWilson - Saturday, March 8, 2008 - link
that is key ... as is what ViRGE said above.

in addition, people who want to run 4 GPUs in a system are not going to be the average gamer. this technology does not offer the return on investment anyone with a midrange system would want. people who want to make use of this will also want to eliminate any other bottlenecks to get the most out of it in their systems.
not only does skulltrail help us eliminate bottlenecks and look at the potential of the graphics subsystem, in this case i would even make the argument that the system is a good match for the technology.
Sind - Saturday, March 8, 2008 - link
I agree, I don't think Skulltrail is doing anyone favours in judging how these MGPU solutions would behave in the "average" system that an Anand reader would be using. X38 seems very popular, as is 780i, and I really don't think even 1% of your traffic would ever use the system you used to do this review. I've read the other CrossfireX reviews from around the net, and most had no problems at all; in fact most noted that it worked straight out, with no messing around with the lengthy directions that were indicated in the article to get it to work.

ViRGE - Saturday, March 8, 2008 - link
Something very, very important to keep in mind is that Skulltrail is the only board out right now that supports Crossfire and SLI. If AT wants to benchmark both technologies without switching the boards and compromising the results, this is the only board they can use.

Cookie Monster - Saturday, March 8, 2008 - link
No 8800 Ultra or GTX Tri-SLI for comparison?

DerekWilson - Saturday, March 8, 2008 - link
we were looking at 2 card configurations here ... i'll check out three and four card configs later

JarredWalton - Saturday, March 8, 2008 - link
Unfortunately, Tri-SLI requires a 780i motherboard. That's fine for Tri-SLI, but CrossFire (and CrossFireX) won't work on 780i AFAIK. I also think Skulltrail may have its own set of issues that prevent things from working optimally - but that's conjecture rather than actual testing. Derek and Anand have Skulltrail; I don't.

Slash3 - Saturday, March 8, 2008 - link
...graphs are both using the same image. The Oblivion Performance and 4xAA/16AF Performance line graphs (oblivionscale.png) are just duplicates and link to the same file. :)

JarredWalton - Saturday, March 8, 2008 - link
Fixed, thanks.

slashbinslashbash - Saturday, March 8, 2008 - link
Graphics really are fairly unique in the computing world in that they are easily parallelized. While we're pretty quickly reaching a point of diminishing returns in number of cores in a general-purpose CPU (8 is more than enough for any current desktop type of usage), the same point has not been reached for graphics. That is why we continue to see increasing numbers of pipelines in individual GPU's, and why we continue to see effective scaling to multiple cards and multiple GPU's per card. As long as there is memory bandwidth to support the GPU power, the GPU looks like it is capable of taking advantage of much more parallelization. I expect 1000+ pipes on a 2-billion-transistor+ GPU by 2011.

So, I expect multi-GPU to remain with us, but any high-end multi-GPU setup will always be surpassed by a single-GPU solution within a generation or two.
DerekWilson - Saturday, March 8, 2008 - link
that's not the issue ... graphics is infinitely parallelizable ... the problems are die size and power.
beyond a certain die size there is a huge drop off in the amount of money an IHV can make on their silicon. despite the fact that every chip could have been made larger, we are working with engineers, not scientists -- they have a budget.
multiGPU allows IHVs to improve performance nearly linearly in some cases without the non-linear increase in cost they would see from (nearly) doubling the size of their GPU.
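To put rough numbers behind that cost argument, here is a minimal Python sketch using a textbook Poisson yield model; the wafer cost, usable wafer area, defect density, and die sizes are assumptions chosen purely for illustration, not actual foundry or IHV figures.

```python
import math

# Made-up numbers purely for illustration; none of these are real foundry figures.
WAFER_AREA_MM2 = 70_000      # roughly a 300 mm wafer, ignoring edge loss
WAFER_COST_USD = 5_000       # assumed cost of one processed wafer
DEFECT_DENSITY = 0.002       # assumed defects per mm^2

def cost_per_good_die(die_area_mm2: float) -> float:
    """Cost of one working die under a simple Poisson yield model."""
    dies_per_wafer = WAFER_AREA_MM2 // die_area_mm2
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return WAFER_COST_USD / (dies_per_wafer * yield_rate)

one_big   = cost_per_good_die(400)        # a single large GPU
two_small = 2 * cost_per_good_die(200)    # two GPUs of half the area
print(f"one 400 mm^2 die : ${one_big:6.2f}")
print(f"two 200 mm^2 dies: ${two_small:6.2f}")
```

With these assumed numbers, two half-size dies come out cheaper per working chip than one large die, which is the non-linear cost increase being described.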
...
then there is power. as dies shrink and we can fit more into a smaller space, will GPU makers still be able to make chips as big as R600 was? power density goes way up as die size goes down. power requirements are already crazy and it could get very difficult to properly dissipate the heat from a chip with a small enough surface area and a huge enough power output ...
but spreading the heat out over two less powerful cards would help handle that.
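As a quick back-of-the-envelope illustration of the power density point; the wattages and die areas below are guesses for the sketch, not measured figures for any real GPU or board.

```python
# Rough power-density arithmetic: same class of chip before and after a shrink,
# versus splitting the load across two smaller GPUs on two cards.
def w_per_mm2(watts: float, area_mm2: float) -> float:
    """Power density in watts per square millimetre."""
    return watts / area_mm2

print(f"large die  : {w_per_mm2(160, 420):.2f} W/mm^2")     # ~0.38
print(f"shrunk die : {w_per_mm2(140, 200):.2f} W/mm^2")     # ~0.70, much harder to cool
print(f"two dies   : {w_per_mm2(70, 200):.2f} W/mm^2 each") # ~0.35, with twice the cooler area
```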
...
in short, multigpu isn't about performance ... it's about engineering, flexibility and profitability. we could always get better performance from a single GPU if it could be built to match the specs of a multiGPU config.