iXBT Labs - Computer Hardware in Detail






Gigabit Network Adapters on 32-bit PCI Bus Roundup

January 11, 2003

We keep on testing Gigabit network adapters, and today we have both cards meant for a 32-bit PCI bus and boards with a 64-bit PCI interface that are backward compatible with the 32-bit bus. The latter is exactly the PCI bus that mainboards of ordinary computers are equipped with; a 64-bit PCI bus is still a privilege of server mainboards, which are meant for other tasks, often have several processors, and are much more expensive. 

Gigabit Ethernet theory was covered last time. Today's article extends that one; besides, we have altered the testing technique. 


Five out of the seven adapters are built on microcontrollers from National Semiconductor Corporation. These are two-chip designs, one chip being a physical-layer transceiver. The DP83861VQM-3 transceiver is identical on all five boards; some of them carry a heatsink on top. 

It's up to the manufacturer to decide whether a heatsink is necessary, but I must say that the transceivers heat up a lot: when swapping cards right after switching off the computer, the chip or heatsink can burn your fingers. The transceiver works at 10/100/1000 Mbit/s in full- or half-duplex mode and supports auto-negotiation of the speeds and modes listed above (IEEE 802.3u Auto-Negotiation). 

The microcontrollers installed on these boards (10/100/1000 Mbit Ethernet controllers that connect the network card to the PCI bus) differ in the width of the PCI bus they support: the 64-bit adapters use the DP83820BVUW controller, while the 32-bit ones use the DP83821BVM.


So the only key difference is the width of the PCI bus they work with. The other characteristics are identical: 

  • MDI-X: auto-detection of straight-through and crossover cables; 
  • integrated FIFO queues: 8 KB for transmit and 32 KB for receive; 
  • Jumbo Frames support; 
  • VLAN (Virtual LAN) support with automatic detection and stripping of VLAN tags on received packets and automatic insertion of VLAN tags on outgoing packets; 
  • 802.1D and 802.1Q QoS support, including several queues in each direction (transmit and receive); 
  • accelerated checksum processing for IPv4 frames: calculation and verification of IP, TCP, and UDP header checksums. 
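The checksum offload in the last item refers to the standard Internet checksum (RFC 1071): the 16-bit one's-complement sum over the header words. A minimal Python sketch of what the hardware computes, using a sample IPv4 header for illustration:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum (RFC 1071), the kind the NIC offloads."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # big-endian 16-bit words
    while total >> 16:                          # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A sample 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed.
header = bytearray.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
csum = internet_checksum(bytes(header))
header[10:12] = csum.to_bytes(2, "big")
# Verification: the checksum over a correct header must come out to 0.
assert internet_checksum(bytes(header)) == 0
```

The receive side performs the same folding sum over the arriving frame; a non-zero result flags a corrupted header without the CPU touching it.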

During installation under Windows, all five cards showed a funny bug: after rebooting and searching for new devices, the computer found an additional unknown PCI device that couldn't be identified and rejected the driver I offered. Nevertheless, the adapters worked smoothly, and no other bugs were noticed. 

The first card is the TRENDnet TEG-PCITX from TRENDware.

The card has a 32-bit PCI interface. There are six LEDs at the rear: three indicate the connection speed (10/100/1000 Mbit), and the rest show collisions, full duplex, and data transfer. The transceiver comes with a heatsink. 

To install the network adapter under Windows 2000 we used the latest driver version downloaded from the company's site, 5.01.24. 

The driver sports rich settings for adjusting and tuning the card; the Jumbo Frame size can be set manually in 1-byte steps, and the maximum size is not documented. The adapter worked flawlessly with 16128-byte frames. 

For Linux there are drivers only for 2.2.x kernels; none exist for 2.4.x kernels, so we used the "National Semiconductor DP83820" driver v0.18 integrated into the OS. Its maximum Jumbo Frame size is specified as 8192 bytes. Note that without patching the driver source code it's impossible to set a packet size over 1500. 

The second card is the TRENDnet TEG-PCITX2 from the same firm. It has a 64-bit PCI interface and thus the other microcontroller revision, the DP83820BVUW. The transceiver is also equipped with a heatsink. 

The Windows and Linux drivers are the same as those of the 32-bit model. 

The next two cards come from SMC Networks. Both are two-chip designs based on the above-mentioned microcontrollers. There are no heatsinks. 

The SMC9452TX card has a 32-bit PCI interface and five LEDs: three indicate the operating speed, and the other two show link and activity. Both adapters have a curious PCB design: next to the Ethernet connector the card has a corner cut off, while on the opposite side it juts out. I have no idea what this is for.

For Windows we used the latest drivers from the company's site, 1.2.905.2001. There are fewer options and settings than last time, and the Jumbo Frame size can take the following values: 1514, 4088, 9014, 10000, and 16128. 

Although SMC offers Linux drivers for download, we used the same driver integrated into the OS, because SMC's driver is quite old (dated mid-2001). 

The second adapter from SMC Networks, the SMC9462TX, has almost the same characteristics but uses a 64-bit PCI bus (and, thus, the other Gigabit Ethernet microcontroller). The driver situation is exactly the same. 

The last two-chip card is the LNIC-1000T(64) from LG Electronics.

The card has a 64-bit PCI interface. The transceiver is capped with a heatsink. There are also six LEDs with functions identical to those of the TEG-PCITX adapter. 


For Windows we used the drivers bundled with the card, as no newer ones were available on the site. They appear to be the reference drivers (identical to the TEG-PCITX ones) with the same functions; their version is 5.01.24. 

Linux drivers were available only for 2.2.x kernels, so in the tests we used the same OS-integrated driver. 

The next adapter, the Hardlink HA-64G from MAS Elektronik AG, has a 64-bit interface. 

The card has three LEDs that show the 10/100/1000 Mbit speed and the connection status; an LED blinks when data are transferred. 

This card has only one chip, the new AC1001KPB controller from Altima, covered with a heatsink. 

In the tests under Windows we used the drivers bundled with the card, because the site doesn't offer a newer version. The range of options and settings is narrow; the Jumbo Frame size can take the following values: 1500, 2000, 3000, and 4000. 

Linux drivers are also supplied with the card; they installed successfully and the system detected the adapter. But the card then refused to change the MTU to more than 1500. A quick search of the driver source code for a hard-coded limit gave no results. It's probably possible to enable Jumbo frames somehow (the source contains a lot of Jumbo-related variables), but there is not a single word of documentation about it. That is why under Linux this adapter was tested only with 1500-byte packets, i.e. without Jumbo frames. 

And the last card is the Intel PRO/1000 MT Desktop from Intel Corporation. It was already tested last time, but today we ran the tests with a newer driver version. 

If you remember, this is a one-chip solution based on the Intel 82540EPBp microcontroller. The card has two LEDs, one displaying the connection status and data transfer, and the other (two-color) showing the 10/100/1000 Mbit speed. Here are some parameters of the microcontroller: 

  • support of 10/100/1000 Mbit in half- and full-duplex modes; 
  • MDI-X: auto-detection of straight-through and crossover cables; 
  • auto-detection of cable length; 
  • configurable 64 KB FIFO queues for receive and transmit; 
  • low-latency receive/transmit queues; 
  • transfer of up to 64 packet descriptors over the bus in a single operation; 
  • programmable receive buffer, from 256 bytes to 64 KB; 
  • accelerated checksum calculation for the IP, TCP, and UDP protocols; 
  • IEEE 802.1Q VLAN support with insertion and stripping of VLAN tags and packet filtering for up to 4096 VLAN tags; 
  • interrupt moderation to reduce the number of interrupts during operation; 
  • Jumbo Frames of up to 16 KB. 

The latest driver versions for both OSes were taken from the company's site: v6.4.16 for Windows 2000 (it provides a wide range of adapter configuration options; the Jumbo Frame can be 4088, 9014, or 16128 bytes) and v4.4.19 for Linux (it works only as a module). 

Testing technique

Test computers: 

  • Pentium 4 1.8 GHz and 2.2 GHz; 
  • 512 MB memory; 
  • Maxtor 20 GB hard drive; 
  • Windows 2000 with Service Pack 3, and Red Hat Linux 7.3 with kernel 2.4.19 

The computers were connected directly (without a switch) with a 5 m Category 5e cable (nearly ideal conditions). 

Under Windows 2000 we used Iperf 1.2 and the NTttcp program from the Windows 2000 DDK to generate TCP traffic and take measurements. The programs measured data rates and CPU utilization at the following Jumbo Frame sizes: 

  • 1514 bytes (no Jumbo frames); 
  • 3014 bytes; 
  • 6014 bytes; 
  • 9014 bytes; 
  • 16128 bytes. 

Cards that didn't support a given Jumbo Frame size were excluded from the corresponding tests. 

Besides, Iperf was also run in UDP traffic generation mode. The UDP stream rate started at 200 Mbit/s and was increased in a loop up to 800 Mbit/s in 10 Mbit/s steps; the maximum rate reached was recorded as the test result. 
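The rate sweep described above is easy to script. A minimal sketch that builds the Iperf client command line for each trial rate, using the same options as in the test description ("testbox" is a made-up placeholder host name):

```python
def udp_sweep_commands(host: str, start: int = 200, stop: int = 800, step: int = 10):
    """Build the Iperf UDP client commands for a rate sweep in Mbit/s."""
    cmds = []
    for rate in range(start, stop + step, step):
        # -u selects UDP mode; -b sets the target bandwidth for this trial.
        cmds.append(f"iperf -c {host} -M 100000 -w 64K -l 24K -u -b {rate}M")
    return cmds

cmds = udp_sweep_commands("testbox")   # "testbox" is a placeholder host name
print(len(cmds))       # 61 trials: 200, 210, ..., 800 Mbit/s
print(cmds[0])
print(cmds[-1])
```

Each command would be run against a listening `iperf -s ... -u` on the other machine, and the last rate that the link sustains without loss becomes the recorded result.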

The OS was also slightly tuned. The program startup parameters and registry settings were as follows: 

  • The maximum packet size is 1514 bytes (no Jumbo Frame)

  • Hkey_Local_Machine\System\CurrentControlSet\Services\Tcpip\Parameters 
    TcpWindowSize = ffff

    Startup options of the Iperf in TCP mode:
    client: iperf -c -M 100000 -w 64K -l 24K
    server: iperf -s -m -M 100000 -w 64K -l 24K 

    Startup options of the Iperf in UDP mode:
    client: iperf -c -M 100000 -w 64K -l 24K -u -b 200M
    server: iperf -s -m -M 100000 -w 64K -l 24K -u 

    Startup options of the NTttcp:
    transmitter: ntttcps -m 1,0, -a 4 256K -n 10000
    receiver: ntttcpr -m 1,0, -a 4 -l 256K -n 10000 

  • Packet size of 3014, 6014, 9014 and 16128 bytes (Jumbo Frame used)

  • Hkey_Local_Machine\System\CurrentControlSet\Services\Tcpip\Parameters 
    TcpWindowSize = 20971520 (20 MB)
    Tcp1323Opts = 3

    Startup options of the Iperf in TCP mode:
    client: iperf -c -M 100000 -w 1M -l 24K
    server: iperf -s -m -M 100000 -w 1M -l 24K 

    Startup options of the Iperf in UDP mode:
    client: iperf -c -M 100000 -w 1M -l 24K -u -b 200M
    server: iperf -s -m -M 100000 -w 1M -l 24K -u 

    Startup options of the NTttcp:
    transmitter: ntttcps -m 1,0, -a 4 256K -n 10000
    receiver: ntttcpr -m 1,0, -a 4 -l 256K -rb 20000000 -n 10000 
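The registry settings above can be applied as a .reg file. A sketch of the Jumbo Frame variant, under the assumption that both values are DWORDs (20971520 = 0x01400000):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpWindowSize"=dword:01400000
"Tcp1323Opts"=dword:00000003
```

A reboot is needed for the TCP/IP stack to pick up the new parameters.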

Each TCP test was run 15 times, and the best speed result was taken. With NTttcp the CPU load was measured by the program's own means; with Iperf it was measured with the Windows 2000 system monitor. 

Under Linux we used NetPIPE 2.4 for traffic generation and measurement. The program generates traffic with a gradually growing data packet size (a packet of size N is transferred several times; the number of transmissions is inversely proportional to the size, but not less than 7). This method shows channel utilization as a function of the amount of data transferred. 
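The repetition rule just described can be sketched as follows; the byte budget per measurement is an assumed constant, since NetPIPE's actual internals aren't given here:

```python
def transfers_for_size(size: int, budget: int = 1 << 20, minimum: int = 7) -> int:
    """Number of times a packet of `size` bytes is sent: inversely
    proportional to the size, but never fewer than `minimum` times.
    `budget` (total bytes per measurement point) is an assumption."""
    return max(minimum, budget // size)

# Small packets are repeated many times to get a stable average;
# large packets are sent at least 7 times.
print(transfers_for_size(64))        # many repetitions
print(transfers_for_size(1 << 20))   # floor of 7 repetitions
```

Averaging over many small transfers smooths out per-packet overhead, which is why the curve reflects channel utilization rather than one-shot latency.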

The Jumbo Frame size was changed by setting the MTU of the network interface with the command: ifconfig eth0 mtu $size up

In the tests the following MTU sizes were used: 

  • 1500 bytes (no Jumbo Frames); 
  • 3000 bytes; 
  • 6000 bytes; 
  • 9000 bytes; 
  • 16128 bytes. 

Startup options of NetPIPE:
receiver: NTtcp -b 65535 -o logfile -P -r
transmitter: NTtcp -b 65535 -o logfile -P -t 

Test results

The SMC adapters were tested both with their own drivers and with the reference one (originally developed for the TEG-PCITX); these adapters are built on the same microcontrollers, so the drivers are compatible. However, SMC's own driver demonstrates lower performance and a higher CPU load. The other adapters were tested with their own drivers. 

1. Windows 2000, transfer speed. 

2. Windows 2000, CPU load. 

The Intel PRO/1000 MT Desktop leads in almost all the tests; the developers at Intel have obviously done their best. But with the 16128-byte Jumbo Frame the Intel adapter lost its advantage and showed the lowest speed. Second place goes to the Hardlink HA-64G thanks to its new microcontroller (a one-chip card). 

The other five cards show approximately identical scores. When running on its own drivers, the SMC demonstrates a higher CPU load at a speed comparable to the rest; this is clearly visible with Jumbo frames disabled. With the reference drivers the situation improves. The problem will probably be solved in a new driver version; for now it's worth using the reference drivers. 

The UDP test must have a bottleneck somewhere, because the participants' scores are very close. The problem probably lies in how this test is implemented in Iperf. 

3. Linux, MTU size. 

All the cards except the Intel PRO/1000 MT Desktop show the expected results. With the Intel card, the data transfer rate grows with the Jumbo Frame size only up to a certain level; at a Jumbo Frame size of 16000 the speed drops sharply. Besides, the difference in speed between identical cards with different PCI interfaces is minimal. 

4. Linux, performance comparison with the equal MTU size. 

The Intel PRO/1000 MT takes the lead at small frame sizes, but at 6000 all the cards perform almost equally. And at 3000 the TRENDnet TEG-PCITX slows down for an unknown reason. 

The last diagram compares the peak speeds of all adapters in NetPIPE. Note that the speeds under Linux are a bit higher than under Windows. Frankly speaking, though, this isn't an entirely fair comparison, as it involves peak speeds. 


Jumbo Frames are certainly a useful thing, but the speed still depends too much on the driver; just look at the Intel PRO/1000 MT, which performs better than in the previous tests. Still, the maximum (1 Gbit) can't be reached because of the 32-bit PCI bus. Nevertheless, the server (64-bit) adapters were tested on the 32-bit PCI bus to compare both versions of the cards; as the diagrams show, on a 32-bit bus there are almost no differences. 

Evgeniy Zaycev (eightn@ixbt.com)


Copyright © Byrds Research & Publishing, Ltd., 1997–2011. All rights reserved.