Exposing tests
The VIA Nano CPU on a VIA VB8001 motherboard, based on the old CN896 chipset, made its way from Copenhagen, Denmark to Saint Petersburg, Russia in just 2 months (so much for our postal service). A PCIe x16 slot allowed us to install a graphics card from our regular testbed. So, except for the RAM capacity (just 4 GB in 2 slots), we were all set to test VIA Nano like a real 64-bit x86 processor. Besides, our new 2010 test method had been almost completed by that time. First of all, we were interested in how the processor would perform with the native and a changed vendor string, as this would let us check Agner Fog's conclusions in practice.
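To recap what's at stake: Agner Fog's conclusions concern CPU dispatchers that select optimized code paths based on the CPUID vendor string rather than on the actual feature flags the CPU reports. The sketch below is purely illustrative -- the function and path names are ours, not from any real compiler runtime -- but it shows why merely reporting "GenuineIntel" can change which code path a program takes:

```python
def pick_code_path(vendor: str, has_sse2: bool, has_sse3: bool) -> str:
    """Illustrative model of an 'unfair' CPU dispatcher: the fast paths
    are only taken when the vendor string reads GenuineIntel, even if a
    non-Intel CPU reports the very same feature flags via CPUID."""
    if vendor == "GenuineIntel":
        if has_sse3:
            return "sse3-optimized"
        if has_sse2:
            return "sse2-optimized"
    return "generic-x86"  # everyone else falls through to the slow path

# VIA Nano supports SSE2/SSE3, but with its native vendor string...
print(pick_code_path("CentaurHauls", True, True))   # generic-x86
# ...and with the vendor string changed to Intel's:
print(pick_code_path("GenuineIntel", True, True))   # sse3-optimized
```

A fair dispatcher would branch on the feature flags alone, so changing the vendor string would change nothing -- which is exactly what our tests are designed to check.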
But testing was truly painful. We didn't know whom to blame -- negligent software/compiler developers or motherboard/CPU makers. But we had so many crashes, recoverable only by cutting the power or pushing the Reset button, that we had to extend the Reset button cable to the tester's desk so he wouldn't have to walk up to the testbed every time.
Below is the complete list of benchmarks we ran, with markings showing whether VIA Nano completed each successfully or failed. A minus means that the test crashed or caused a reboot. Note that we only assigned minuses to tests that never managed to run to completion; situations like "run-crashed-rerun-crashed-rerun-completed" were quite ordinary.
Test | Native | Core 2
Visualization: 3ds max | - | -
Visualization: Maya | - | -
Visualization: Lightwave | + | +
Visualization: Pro/ENGINEER | - | -
Visualization: SolidWorks | - | -
Visualization: UGS NX 6 | + | +
Rendering: 3ds max | + | +
Rendering: Maya | - | -
Rendering: Lightwave | + | +
Calculations: Maya | - | -
Calculations: Pro/ENGINEER | - | -
Calculations: SolidWorks | - | -
Calculations: UGS NX 6 | + | +
Calculations: MAPLE | + | +
Calculations: Mathematica (int) | + | +
Calculations: Mathematica (MMA) | + | +
Calculations: MATLAB | + | +
Image editing: ACDSee | + | +
Image editing: PaintShop | + | +
Image editing: PhotoImpact | + | +
Image editing: Photoshop | +* | +
Video: Premiere | + | +
Video: DivX | + | +
Video: XviD | + | +
Video: Mainconcept | + | +
Video: Vegas | + | -
Video: x264 | + | +

Test | Native | Core 2
Archiving: 7-Zip | + | +
Archiving: WinRAR | + | +
Archiving: unpacking | + | +
Audio: Apple Lossless | + | +
Audio: FLAC | + | +
Audio: Monkey's Audio | + | +
Audio: MP3 (LAME) | + | +
Audio: Nero AAC | + | +
Audio: OGG Vorbis | + | +
Compilation | + | +
Java | + | +
Web browser: SunSpider/Chrome | + | +
Web browser: SunSpider/Firefox | + | +
Web browser: SunSpider/IE | + | +
Web browser: SunSpider/Opera | + | +
Web browser: SunSpider/Safari | + | +
Games: Batman | - | -
Games: Borderlands | + | +
Games: DiRT 2 | - | -
Games: GTA IV | - | -
Games: S.T.A.L.K.E.R. | - | -
Games: UT3 | - | -
Games: World in Conflict | - | -
Games: Crysis Warhead | - | -
Games: Resident Evil 5 | + | +
Games: Fritz Chess | + | +
Games: Far Cry 2 | + | +
* Consider it "relatively completed". Our test method implies running every test 4 times: one "warm-up" run and three regular runs. In the native mode VIA Nano never completed the Adobe Photoshop test entirely: the best it could do was two runs, with the third ending in a crash.
In total, VIA Nano failed 15 of 54 tests, which is over a quarter. That's more than surprising for a processor positioned as x86/x64-compatible. As we have already stated, we can't blame the processor alone. Moreover, the motherboard we tested is now used in a home NAS, and it has been working 24x7 under 32-bit Windows XP Professional with NAS-related software (media server, FTP/HTTP server, antivirus) without a single issue for two months. Food for thought.
All these possible processor bugs may be fixed in the updated Nano 3xxx series (CNB revision). But let's make it clear: we don't have any information on actual fixes or on the list of bugs VIA is aware of. Besides, the failures may be caused by something else entirely. Of the 15 failed tests, 11 actively used the graphics engine, hardware acceleration included. So anything from the graphics card drivers to the PCIe controller driver (or the controller itself) could have been failing. But it was probably VIA's fault anyway.
Native/Intel Core 2 modes comparison
Let's see the percentage difference between the two modes. Only tests completed in both modes are considered; a positive Δ% means the fake Core 2 mode performed better than the native mode.
Test | Native | Core 2 | Δ%
Java | 10.55 | 10.67 | 1%
Archiving: 7-Zip | 0:18:25 | 0:18:20 | 0%
Archiving: WinRAR | 0:08:09 | 0:08:01 | 2%
Archiving: unpacking | 0:02:23 | 0:02:23 | 0%
Audio: Apple Lossless | 20 | 21 | 5%
Audio: FLAC | 25 | 25 | 0%
Audio: Monkey's Audio | 19 | 19 | 0%
Audio: MP3 (LAME) | 9 | 10 | 11%
Audio: Nero AAC | 10 | 10 | 0%
Audio: OGG Vorbis | 7 | 7 | 0%
Web browser: SunSpider/Chrome | 1357 | 1322 | 3%
Web browser: SunSpider/Firefox | 2259 | 2237 | 1%
Web browser: SunSpider/IE | 14875 | 15121 | -2%
Web browser: SunSpider/Opera | 863 | 859 | 0%
Web browser: SunSpider/Safari | 1245 | 1231 | 1%
Video: DivX | 0:18:51 | 0:18:42 | 1%
Video: Mainconcept | 0:56:32 | 0:56:31 | 0%
Video: Premiere | 1:04:31 | 1:04:33 | 0%
Video: x264 | 1:58:46 | 1:52:01 | 6%
Video: XviD | 0:17:43 | 0:17:17 | 3%
Visualization: Lightwave | 75.56 | 78.31 | -4%
Visualization: UGS NX 6 | 0.74 | 0.73 | -1%
Image editing: ACDSee | 0:21:59 | 0:21:43 | 1%
Image editing: PaintShop | 0:38:01 | 0:37:58 | 0%
Image editing: PhotoImpact | 0:33:10 | 0:32:53 | 1%
Image editing: Photoshop | 0:31:47 | 0:31:27 | 1%
Games: Borderlands | 11 | 11 | 0%
Games: Far Cry 2 | 5.6 | 5.6 | 0%
Games: Fritz Chess | 740 | 748 | 1%
Games: Resident Evil 5 | 18.2 | 18.5 | 2%
Compilation | 0:52:52 | 0:52:01 | 2%
Calculations: MAPLE | 0.1211 | 0.1216 | 0%
Calculations: Mathematica (int) | 0.804 | 0.891 | 11%
Calculations: Mathematica (MMA) | 0.5269 | 0.4849 | -8%
Calculations: MATLAB | 0.3421 | 0.2677 | 28%
Calculations: UGS NX 6 | 1.47 | 1.47 | 0%
Rendering: 3ds max | 4:17:04 | 4:16:31 | 0%
Rendering: Lightwave | 1370.05 | 1428.26 | -4%
First, let's get rid of the obvious and regular. As we have repeatedly stated, a 1-2% difference doesn't mean anything unless it repeats regularly: it's within measurement error. Well, it does repeat regularly in our case: there are 11 occurrences of a +1/+2% difference and just 2 occurrences of -1/-2%. On the one hand, this confirms the unfair determination of instruction set support. On the other hand, it means that unfairness is pretty harmless.
Now let's take a look at the other tests. In terms of checking Agner Fog's assumptions, the most important ones are Lightwave, Apple Lossless, MP3 (LAME), SunSpider/Chrome, Mathematica, MATLAB, x264 and XviD. Let's observe the presumption of innocence and remove SunSpider/Chrome and XviD from this list: where there's 1-2% of error, there may easily be 3%. It's harder to explain why we also have to remove Apple Lossless and MP3 (LAME).
The matter is that our test method, like any solution tailored to specific needs, can't be fully universal. In particular, it's not well suited to very slow processors. One of the reasons is dBpoweramp, which outputs integer results. When very small values are involved, a difference of one integer step may mean as much as 5-10%. E.g. the original values could have been 9.4 and 9.6 rather than 9 and 10, but we wouldn't know, because the rounding is done by the benchmark itself.
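A quick illustration of how much this rounding can distort the ratio at small values. The 9.4/9.6 pair is our hypothetical example from above, mirroring the published 9-vs-10 LAME result:

```python
def apparent_gain_pct(native: float, core2: float) -> float:
    """Relative gain of the Core 2 mode for a higher-is-better score."""
    return (core2 - native) / native * 100

# Values the benchmark might have measured internally:
true_gain = apparent_gain_pct(9.4, 9.6)
# What we actually see after dBpoweramp rounds to integers:
rounded_gain = apparent_gain_pct(round(9.4), round(9.6))

print(f"true: {true_gain:.1f}%")        # true: 2.1%
print(f"rounded: {rounded_gain:.1f}%")  # rounded: 11.1%
```

A real 2% difference, well within measurement error, inflates to an apparent 11% purely through rounding, which is why these audio results have to be discarded.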
That leaves Lightwave, Mathematica, MATLAB and x264. There could be more, you might say, but we believe it's still enough for our needs, because these cases can only be explained by Agner Fog's assumptions. Well, MATLAB developers have never hidden that they optimize their software; we even did a special study of that two years ago. But we can't say the same about the other tests. Besides, the very unstable Adobe Photoshop performance in the native mode was "fixed" by the fake mode, which serves as a much better confirmation than a minor speed boost. Sony Vegas, on the other hand, refused to work in the fake mode at all.
So, of 38 tests we have 6 cases of significant performance difference between the native and fake modes. In 3 of those 6, the fake mode resulted in an average performance boost of 15%; in the other 3 it caused an average performance drop of 5%. Below are the conclusions we can draw.
- The very existence of a performance difference caused by changing CPUID means that the CPU vendor string does affect performance. This is a proven fact.
- We can't say the problem is definitely harmful to non-Intel processors: most tests didn't react, or reacted insignificantly, to the change of CPUID.
- We don't claim that all the suspicious applications were built with the Intel compiler (though that is exactly what would cause the behavior we witnessed) while the other programs were built with different compilers. Tools provided by other companies may have similar flaws affecting application performance.
The latter conclusion is indirectly confirmed by x264, which is open-source software compiled with an open-source compiler and libraries. In theory, no Intel tools could be involved. Still, it demonstrates a considerable performance boost in the fake mode.
Summing it all up, we can say the problem does exist, but one shouldn't overestimate its effect on actual software.