Synthetic tests (Everest, Sandra, RMMA)
Each of these tests uses its own method of measuring bandwidth, but the maximum read/write rates reported by Everest and RMMA do not differ much. The same goes for Sandra (we used this benchmark to compare read rates through the processor's integer registers and through the FPU -- the results are practically identical). Random access latency values in Sandra and RMMA are higher than in Everest. However, despite the differences in absolute values, these benchmarks agree on relative results. In other words, you can pick the winner in any pair of memory modules using the results of any of these benchmarks. Only the tests of the AENEON modules in Windows XP 64-bit upset this harmony: both tests run in that operating system (Sandra and RMMA) unanimously demonstrated low results, which did not agree with the results of the other tests. Among the synthetic benchmarks, RMMA leads in detail (the diversity of tests and modes for memory modules). We'd like to keep using this program in the future, but we need Vista support (an RT driver).
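To illustrate in principle what this class of benchmarks measures, here is a minimal C sketch of the two classic techniques: a sequential read loop for bandwidth and a pointer chase over a shuffled cycle for random-access latency. This is not the code of Everest, Sandra, or RMMA, just an outline of the idea; the buffer size and the coarse clock()-based timing are our own arbitrary assumptions.

/* Minimal illustration of what synthetic memory tests measure:
 * sequential read bandwidth and random-access (pointer-chase) latency.
 * Not the code of Everest, Sandra, or RMMA -- just the general idea. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES   (64u * 1024 * 1024)   /* 64 MB working set, far beyond caches */
#define CHASE_STEPS (16 * 1024 * 1024)    /* number of dependent loads to time */

static double seconds(void)
{
    return (double)clock() / CLOCKS_PER_SEC;   /* coarse but portable timer */
}

static size_t rnd_below(size_t bound)          /* xorshift64: crude PRNG, enough here */
{
    static unsigned long long s = 88172645463325252ULL;
    s ^= s << 13; s ^= s >> 7; s ^= s << 17;
    return (size_t)(s % bound);
}

int main(void)
{
    size_t n = BUF_BYTES / sizeof(size_t);
    size_t *buf = malloc(n * sizeof(size_t));
    size_t i, sum = 0;
    if (!buf) return 1;

    /* Sequential read bandwidth: touch the buffer once, then time a full pass. */
    for (i = 0; i < n; i++) buf[i] = i;
    double t0 = seconds();
    for (i = 0; i < n; i++) sum += buf[i];
    double t1 = seconds();
    printf("sequential read: %.0f MB/s (checksum %zu)\n",
           BUF_BYTES / (t1 - t0) / 1e6, sum);

    /* Random-access latency: build one big cycle (Sattolo's shuffle), so each
     * load depends on the previous one and the whole 64 MB is visited. */
    for (i = 0; i < n; i++) buf[i] = i;
    for (i = n - 1; i > 0; i--) {
        size_t j = rnd_below(i);               /* j < i guarantees a single cycle */
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }
    size_t p = 0;
    double t2 = seconds();
    for (long k = 0; k < CHASE_STEPS; k++) p = buf[p];
    double t3 = seconds();
    printf("random access latency: %.1f ns (final index %zu)\n",
           (t3 - t2) / CHASE_STEPS * 1e9, p);

    free(buf);
    return 0;
}

Real benchmarks do much more (write and copy patterns, different block sizes, prefetch control and so on), which is exactly why RMMA's diversity of modes is valuable.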
PCMark Vantage
From pure synthetic tests we turn to programs that imitate computations in certain classes of software. We are biased against such programs, because no one really knows what they actually test. On the other hand, PCMark Vantage is an industry-recognized test, and it is not purely synthetic (unlike previous versions), as it imitates algorithms used in real applications. If we look at memory tests from a purely utilitarian point of view, there is nothing wrong with using such programs, as long as they do not contradict real applications. However, we have a serious grudge against PCMark here: at the very least, the total score won't do for memory comparisons. A clear-cut victory for the weakest memory kit of the day is too extravagant a result to be well-grounded. This strange outcome is produced by the last three subtests, which apparently responded to something other than memory characteristics. The gaming subtest will hardly interest users either, considering that Futuremark offers a more authoritative benchmark for analyzing performance in this popular category of applications. We can use the remaining three results, but it's better to take them as a whole, because some of these tests are more sensitive to memory frequency, while others respond to memory capacity. In fact, we haven't noticed any apparent disagreement with the appetites of real applications. For example, memory speed is more important than capacity for editing photos (Memories Suite).
3DMark Vantage
This test is also criticized for showing results that differ from those of real games. We do not claim that it reflects real performance perfectly, for example that of graphics cards. But we'll risk the provocative statement that this test does not disagree with the average results of a more or less representative selection of games. In other words, the best graphics card in games almost always gets the highest score in this test as well. Witch hunts have been started over less provocative ideas, but you can see for yourself that it's true: just take the results from any of our 3D Speed issues. We understand, of course, that it's not quite correct to use this benchmark instead of real games to evaluate graphics cards, at least because people are interested not only in how much faster one card is than another in abstract units, but also in real fps values (and the CPU rating and the approach to testing processors in this benchmark are far from reality).
Memory tests are a different matter, as it's not difficult to emulate the memory load of games. So we decided to check how the total score in this test correlates with results in real games when we change memory modules. These results do not look bad at all. Just like 3DMark, the games vote for higher frequencies and dislike that particular memory kit from Kingston (which was totally unexpected, but confirmed by the other tests, except for the synthetic ones).
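The check itself is easy to formalize: for each memory kit, take the 3DMark Vantage total score and an averaged game frame rate, and see how well the two sets of numbers track each other. Below is a tiny C sketch that computes the Pearson correlation coefficient; the values in the arrays are placeholders for illustration only, not our measurements.

/* Sanity check described above: given a 3DMark Vantage total score and an
 * average game fps for each memory kit, compute how well the two agree.
 * The numbers below are PLACEHOLDERS, not measured results. */
#include <stdio.h>
#include <math.h>

#define KITS 4

int main(void)
{
    /* hypothetical per-kit results: index = memory kit */
    const double mark[KITS] = { 100.0, 103.0, 101.0, 97.0 };  /* 3DMark score (placeholder) */
    const double fps [KITS] = {  60.0,  62.0,  61.0, 58.5 };  /* mean game fps (placeholder) */

    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < KITS; i++) {
        sx  += mark[i];          sy  += fps[i];
        sxx += mark[i] * mark[i]; syy += fps[i] * fps[i];
        sxy += mark[i] * fps[i];
    }
    double n = KITS;
    double r = (n * sxy - sx * sy) /
               sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));  /* Pearson r */
    printf("correlation between 3DMark score and game fps: %.3f\n", r);
    return 0;
}

A coefficient close to +1 means the benchmark and the games order the kits in essentially the same way.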
Games
As we already noted, games prefer higher frequencies. However, we cannot say that the performance gain (in fps) is significant. The graphics card apparently acts as the bottleneck for modern games, and any graphics card upgrade guarantees more noticeable progress. This conclusion does not mean we can abandon this category of tests, though: they still manage to reveal the difference, even under such conditions. Perhaps it makes sense to bring our testbed up to date with a better graphics card and a higher resolution. But it would be a mistake to install the most powerful card and then test at a low resolution: 'real' tests must not be only half real. When real games or programs are run at resolutions and settings that are rarely used in practice, they are no better than synthetic tests.
Archivers
There is little to comment on here: higher frequencies are welcomed with applause. The second DDR2-800 kit is again outperformed by the first one, which is very strange from the theoretical (and synthetic) point of view.
Multitasking environment
All games demonstrated a noticeable performance drop with 2-GB memory kits, while the 8-GB kits returned the results to their original values (as if no processes had been started in the background). Well, the quad-core processor finally got a chance to show its worth! In the case of UT3, however, raising the memory frequency yielded almost identical performance gains. So we cannot say this group of tests is boring to analyze. It goes without saying that the recommendation to install more memory guarantees higher performance here. But judging by the manufacturers' intentions, we'll soon have to choose only between high-capacity memory kits to get high memory performance anyway.
Conclusions
The main conclusion is the most trivial one: memory must be tested :). Even in our small selection of memory kits we found a pair that demonstrated strange relative results, even though the first kit had better specifications than the second. Such situations happen very rarely. But most tests of other PC components (be it motherboards or graphics cards) also come down to registering deviations, for better or worse, from certain reference characteristics among equally priced devices.
As for the selection of tests, you can take any synthetic test that registers memory bandwidth and latency, because we revealed no principal differences between their results. It stands to reason that measurement methods differ between tests, so it's not correct to compare absolute results obtained in Sandra and RMMA, for example. But if one memory module is faster than another according to one benchmark, the other tests confirm it. The only exception is the case where results differed between operating systems, and here we have nothing to add. The other tests were run in Vista only, and their results confirm the opinion of Lavalys Everest that AENEON kit performance is closer to Transcend than to Kingston. Either the results obtained in 64-bit XP are genuinely worse, or XP failed to cooperate with the benchmarking process; further investigation is needed. But first of all we'd like to know whether our readers are still interested in 64-bit XP for memory performance testing.
As for the other tests, we have no doubts about the archivers (memory upgrades have an effect similar to CPU upgrades here). Frankly speaking, it's quite possible to let 3DMark Vantage represent games in our memory tests; at least, the opinion of this benchmark is really close to that of the games. It must be taken into account, though, that the difference is really small, so even such a sensitive benchmark as 3DMark may register parity between memory modules with the same specifications from different manufacturers; in that case games will also fail to detect performance differences. Some PCMark subtests that illustrate performance in several popular software groups do not disagree with the results of other benchmarks either; that is, their response to memory changes is predictable. However, there are some 'wrong' subtests, so it's not advisable to use the total score from this benchmark.
And finally, the tests in a multitasking environment demonstrated, first of all, the importance of large memory capacity if you really want to get maximum performance from a powerful multi-core processor. So performance differences in this mode should only be analyzed for memory kits of high capacity (4 GB and more). We have no doubts about the promising nature of these tests (a quad-core processor can be bought for $150 now, and a triple-core model costs just $100; the share of such processors will inevitably grow, and users will want to know what new opportunities they get). As for how to imitate a multitasking environment (which antivirus, firewall, media encoders, or other programs to run in the background), the question remains open for discussion.
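As a starting point for that discussion, here is a rough sketch of how background load could be imitated synthetically: several worker threads churn memory in the background while a foreground task is timed. Our tests used real applications in the background, so this is only an illustration of the principle; the POSIX-threads API, the thread count, the buffer sizes, and the dummy foreground loop are all our own assumptions, not a proposal to replace real programs.

/* Rough sketch of a synthetic "multitasking environment": several background
 * workers keep memory and CPU cores busy while a foreground task is timed.
 * The real tests used actual applications in the background; this is only an
 * illustration of the principle (POSIX threads, arbitrary sizes). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BG_WORKERS 3                        /* background load threads */
#define BG_BYTES   (128u * 1024 * 1024)     /* each worker copies 128 MB buffers */

static volatile int stop = 0;               /* crude stop flag for the sketch */

static double wall_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *background_load(void *arg)
{
    (void)arg;
    char *src = malloc(BG_BYTES), *dst = malloc(BG_BYTES);
    if (!src || !dst) return NULL;
    memset(src, 1, BG_BYTES);
    while (!stop)                           /* keep a core and the memory bus busy */
        memcpy(dst, src, BG_BYTES);
    free(src); free(dst);
    return NULL;
}

static void foreground_task(void)
{
    /* stand-in for the benchmark we actually care about */
    volatile double x = 0;
    for (long i = 1; i <= 200000000L; i++) x += 1.0 / i;
}

int main(void)
{
    pthread_t tid[BG_WORKERS];
    for (int i = 0; i < BG_WORKERS; i++)
        pthread_create(&tid[i], NULL, background_load, NULL);

    double t0 = wall_seconds();
    foreground_task();
    double t1 = wall_seconds();

    stop = 1;
    for (int i = 0; i < BG_WORKERS; i++)
        pthread_join(tid[i], NULL);

    printf("foreground task under background load: %.2f s\n", t1 - t0);
    return 0;
}

With three workers on a quad-core CPU, one core remains free for the foreground task, which loosely mirrors the situation described above.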