frag85

CPU and SLI scaling (v 0.5.102571)


This post will show my findings on CPU and SLI scaling in the A3 Beta, version 0.5.102571, on my system.

i7 3570k

Z77x-UD3H

GTX275 SLI (314.xx drivers)

16GB 1866MHz CL10 DDR3

Intel 520 SSD

I went through and did some benchmarks using the first Showcase mission. I started the benchmark after "they're outflanking Bravo, over" and briskly continued through the left side of the valley, taking out enemies as fast as possible with my team, until the village save point. Fraps reported each run at around 5 minutes +/- 30 seconds (~300,000ms). Across multiple run-throughs I found the average FPS to be within 2 FPS from one run to the next. I used RivaTuner to monitor CPU/disk usage and MSI Afterburner to monitor GPU usage. I was going to include graphs of CPU usage, but they were almost identical and hovered around 50% with no real difference from one clock speed to another. I had done something similar for ArmA2 and ArmA1 when they came out, as well as a comparison that I need to update to reflect recent patches and SSDs.
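For anyone who wants to reproduce this methodology, Fraps can log per-frame timestamps, and the Avg/Min/Max numbers below can be derived from such a log. A minimal sketch in plain Python (the synthetic input data is illustrative, not from my runs):

```python
def fps_stats(frame_times_ms):
    """Compute average, min, and max FPS from a list of cumulative
    frame timestamps in milliseconds (as in a Fraps frametimes log)."""
    # Per-frame durations from consecutive timestamps.
    deltas = [b - a for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    avg = len(deltas) / (sum(deltas) / 1000.0)   # frames over total seconds
    fps = [1000.0 / d for d in deltas if d > 0]  # instantaneous FPS per frame
    return avg, min(fps), max(fps)

# Synthetic example: 10 frames at 20ms each (50 FPS), then 10 at 40ms (25 FPS).
times = [0.0]
for d in [20.0] * 10 + [40.0] * 10:
    times.append(times[-1] + d)

avg, lo, hi = fps_stats(times)
print(round(avg, 2), round(lo, 2), round(hi, 2))  # 33.33 25.0 50.0
```

Note the overall average (33.33) is not the midpoint of min and max; it's weighted by how long each frame took, which is why a few long frames drag the average down more than the max suggests.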

Keep in mind that even though I am running SLI, these are GTX275 cards, which were cutting edge 4 years ago. In most games they perform between a 480 and a 570, pretty close to a single 560 Ti.

CPU Scaling

I lowered my resolution to 1280x1024 to check CPU scaling with SLI on and off. I could have gone down to 1024x768, or dropped the rendering resolution below 100%, but I wanted a more real-world performance comparison for myself. To get this done quickly I tested 3 CPU frequencies, each 50% higher than the last: 2GHz, 3GHz, and 4.5GHz. I ran the Standard detail preset because I felt this would give the best real-world results and is the most balanced level of detail vs. performance (at least for me).

2.0GHz single

Avg: 35.834 - Min: 25 - Max: 56

2.0GHz SLI

Avg: 33.681 - Min: 22 - Max: 55

3.0GHz single (50% clock increase)

Avg: 40.513 - Min: 27 - Max: 88

3.0GHz SLI

Avg: 59.562 - Min: 37 - Max: 115

4.5GHz single

Avg: 43.543 - Min: 29 - Max: 138

4.5GHz SLI

Avg: 69.792 - Min: 42 - Max: 14

4.5GHz triplehead

Avg: 42.352 - Min: 25 - Max: 95

As you can see, I doubled my framerate (average 34 to 70, minimum 22 to 42) going from 2GHz to 4.5GHz, where I was GPU bound (see the graphs below). When a powerful CPU is used, SLI should scale very well: I saw around a 47% increase at 3GHz and a 60% increase at 4.5GHz when running a resolution that fits these older video cards.
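To make the scaling claims concrete, here's a quick check (plain Python, numbers taken straight from the averages above) of the relative gains:

```python
# Average FPS from the runs above, keyed by (clock, GPU config).
runs = {
    ("2.0GHz", "single"): 35.834, ("2.0GHz", "SLI"): 33.681,
    ("3.0GHz", "single"): 40.513, ("3.0GHz", "SLI"): 59.562,
    ("4.5GHz", "single"): 43.543, ("4.5GHz", "SLI"): 69.792,
}

def gain(fast, slow):
    """Percent increase of `fast` over `slow`."""
    return round((fast / slow - 1) * 100, 1)

sli_3ghz  = gain(runs[("3.0GHz", "SLI")], runs[("3.0GHz", "single")])
sli_45ghz = gain(runs[("4.5GHz", "SLI")], runs[("4.5GHz", "single")])
cpu_sli   = gain(runs[("4.5GHz", "SLI")], runs[("2.0GHz", "SLI")])

print(sli_3ghz, sli_45ghz, cpu_sli)  # 47.0 60.3 107.2
```

So the second card adds ~47% at 3GHz and ~60% at 4.5GHz, while going from 2GHz to 4.5GHz with SLI slightly more than doubles the average FPS. Note also that at 2GHz, SLI was actually a hair slower than a single card, which is the signature of a CPU bottleneck.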

GPU Usage

The first half of each graph is single GPU; the second half, on the right, is SLI.

2ghz

Dh2z89r.jpg

3ghz

Fr26ZnV.jpg

4.5ghz

Glxc3ZM.jpg

and triplehead 3840x1024

jYst64a.jpg

------------------------------------------------------------------------------------------------

In another test, comparing my GTX275 performance to a rig with CF 7970s, I found this:

2560x1600, Visibility: 3800 Overall, 3200 Object, 100 Shadow

Vsync, AA, PPAA, ATOC, Post Processing disabled; HDR Standard, Anisotropic Ultra, PIP High, Dynamic Lights Ultra

Textures, Objects, Terrain, Clouds Ultra; Shadows High; Particles Very High

Mack is running a CPU with similar performance, so CPU power should be comparable (his 2600K @ 4.4GHz vs. my 3570K @ 4.2GHz).

CF did not scale for this user, so a single 7970 was used for the results.

Mack runs 2560x1600 = 4.096 million pixels in 16:10 (1.6:1)

I run 3840x1024 = 3.93 million pixels in 15:4 (3.75:1)

Variables: pixel count differs by ~4%, aspect ratio (should still be very close though), and possibly CPU performance, but a Sandy Bridge at 4.4GHz should be about equal to an Ivy Bridge at 4.2GHz.
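The pixel-count and aspect-ratio figures above are easy to verify:

```python
# Pixel counts and aspect ratios for the two setups.
mack = 2560 * 1600        # single 16:10 panel
mine = 3840 * 1024        # three 1280x1024 screens side by side
diff_pct = (mack / mine - 1) * 100

print(mack)                # 4096000
print(mine)                # 3932160
print(round(diff_pct, 1))  # 4.2  -> Mack pushes ~4% more pixels
print(2560 / 1600)         # 1.6  (16:10)
print(3840 / 1024)         # 3.75 (15:4)
```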

7970+2600k@4.4 vs. SLI275+3570k@4.2

Standard: ~56 FPS vs. Avg: 39.677 - Min: 20 - Max: 49

High: ~52 FPS vs. Avg: 28.904 - Min: 18 - Max: 58

Ultra: ~44 FPS vs. Avg: 22.653 - Min: 18 - Max: 38

% Performance Difference:

Standard: 33.3%

High: 56.8%

Ultra: 62.7%

Here you can see that the newer architecture, a GPU built for high pixel counts, really pulls ahead as more objects, shaders, particles, and effects are rendered.



Yeah, if you could get a 6GHz dual-core CPU, this game would run amazingly. Anything beyond 2 cores seems more or less useless, though.


Yeah, too bad there are no cheap, highly overclockable dual-cores anymore.


If A3 runs anything like A2 and A1, then you want a quad. Even if you are only seeing 50% CPU usage on a quad (the equivalent of 2 full cores), you are processing 4 threads at a time. I had done some extensive tests locking ArmA2 to 2 and 3 cores (trying to get the best performance while recording with Fraps/Dxtory) on my i7 920. When running the game on just 2 physical cores you lose some performance compared to the full quad. Trust me, you want a slower quad (or more cores) over a dual core. I lost those benchmarks when I had multiple hard drive failures at the same time, but it was somewhere in the range of needing a 5+GHz dual to match the performance of a 4GHz quad on a Bloomfield chip. At today's prices (I got my 3570K for $169, and 4 years ago I bought my 920 for $199) it's practically nothing to get a quad compared to a dual that would need to be HEAVILY overclocked to come near its performance. I see no reason to ever get a dual for gaming.
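If anyone wants to repeat the core-locking test, the affinity value is just a bitmask over logical CPUs. A small sketch of building that mask (the helper and the executable names are illustrative, not from my original tests):

```python
def affinity_mask(n_cores):
    """Hex bitmask selecting the first n_cores logical CPUs,
    e.g. 2 cores -> 0b11 -> 0x3."""
    return hex((1 << n_cores) - 1)

# Launching a game pinned to 2 cores with that mask:
#   Windows:  start /affinity 3 arma2.exe
#   Linux:    taskset 0x3 ./game
print(affinity_mask(2))  # 0x3
print(affinity_mask(3))  # 0x7
print(affinity_mask(4))  # 0xf
```

One caveat: on a Hyper-Threaded CPU the first N bits may select two logical threads on the same physical core, so pick the bits to match the physical-core layout if you want a true "2 physical cores" test.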

If I had an X79 board with a 3930K I would compare what a hex core would do. Scaling probably isn't much with 2 more cores, but I bet it offers slightly better performance. Somewhere there is a breakdown showing how ArmA distributes work among the processors.

