Original Link: https://www.anandtech.com/show/4695/handson-powerline-networking-how-well-or-not-are-latestgeneration-devices-working



Call me a Luddite, but I've always found the whole idea of setting up a dedicated wired connection just to get a gadget on the network to be a superfluous hassle. At least with Wi-Fi, as both Brian Klug and Jarred Walton have demonstrated in recent days, all that's normally involved is twiddling a few software settings to bring a widget online. The approach is particularly attractive for mobile devices, which by their very nature are incompatible with wired tethers. But, as wireless networking veterans already intimately realize, the process is rarely that simple. First off, there's interference to consider: from Bluetooth transmitters, cordless phones, microwave ovens, and neighbors' access points. Don't forget about reflection and attenuation: glass, metal and tile, chicken-wire mesh in walls, and the like. Finally, consider the fundamental physics-induced range limitations, which no amount of antenna array augmentation and DSP signal boosting can ultimately surmount. All other factors being equal, for example, you're not going to be able to successfully bridge as lengthy a span at 5 GHz as you can at 2.4 GHz.
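That last frequency-dependent point can be put in rough numbers with a free-space path-loss estimate. Real indoor environments attenuate far more than free space, and the 30-meter span below is an arbitrary choice, but the 2.4-versus-5 GHz gap holds either way:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Over the same 30 m span, the 5 GHz signal arrives ~6.4 dB weaker than
# the 2.4 GHz one, because path loss grows with the carrier frequency.
loss_24 = fspl_db(30, 2.4e9)
loss_50 = fspl_db(30, 5.0e9)
print(f"2.4 GHz: {loss_24:.1f} dB, 5 GHz: {loss_50:.1f} dB, "
      f"delta: {loss_50 - loss_24:.1f} dB")
```

The delta depends only on the frequency ratio (20·log10(5/2.4) ≈ 6.4 dB), which is why it shows up at any distance.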

AC-powered devices aren't portable, of course; they're permanently mated to a nearby wall socket. Here's where hooking up a network-dedicated Ethernet, coax, phone line or other connection has always annoyed me. I've already hooked up one (thick) wire, the AC power cord. Why can't I just use it for network packet-shuttling purposes, too? In fact, I can; that's the whole premise of powerline networking, although few devices (save the occasional router) currently integrate power-and-packets within them. Instead, indicative of the still-embryonic state of this particular market, you're forced to externally connect a dedicated Ethernet-to-powerline bridge adapter, which you then connect to a different AC socket.

Conceptually, however, the single-connection vision remains valid. And I've noticed encouraging signs of market maturation in recent months. Now-conventional '200 Mbps' powerline adapters are advertised on sale for around $50 for a two-pack; that's less than half the price that manufacturers and retail partners were promoting them at not so very long ago. And latest-generation '500 Mbps' adapter two-packs are selling for not much more moola: $75 or so. I've been daily using as well as periodically evaluating various powerline networking technologies since the early portion of the last decade, back in the '14 Mbps' HomePlug 1.0 days (say hi if you ever see me at a show, and I'll show you my scars ;-) ). Given recent trends, I figured it was high time for an evaluation revisit. How well do latest-generation adapters fulfill their marketing promises? Is it finally time to dispense with burrowing through dirty, spider- and snake-infested crawlspaces and drilling holes in walls and floors in order to route Cat5e cable around?



This is, I think, the first time that AnandTech has published in-depth hands-on testing of powerline networking equipment, although staffers such as Ganesh T S have covered the technology both conceptually and via hands-on overviews in the past. As such, I thought a short upfront tutorial might be in order. Powerline networking transceivers employ the 50 or 60 Hz AC sine wave as a carrier, superimposing the higher-frequency data packets on it. Sounds simple, right? While it may be elementary in concept, the implementation is quite complex. Take this excerpt from the technical white paper for HomePlug 1.0 (PDF), which touted up-to-14 Mbps PHY rates and dates from mid-2001:


Orthogonal Frequency Division Multiplexing (OFDM) is the basic transmission technique used by the HomePlug. OFDM is well known in the literature and in industry. It is currently used in DSL technology, terrestrial wireless distribution of television signals, and has also been adapted for IEEE’s high rate wireless LAN Standards (802.11a and 802.11g). The basic idea of OFDM is to divide the available spectrum into several narrowband, low data rate subcarriers. To obtain high spectral efficiency the frequency response of the subcarriers are overlapping and orthogonal, hence the name OFDM. Each narrowband subcarrier can be modulated using various modulation formats. By choosing the subcarrier spacing to be small the channel transfer function reduces to a simple constant within the bandwidth of each subcarrier. In this way, a frequency selective channel is divided into many flat fading subchannels, which eliminates the need for sophisticated equalizers.

The OFDM used by HomePlug is specially tailored for powerline environments. It uses 84 equally spaced subcarriers in the frequency band between 4.5MHz and 21MHz. Cyclic prefix and differential modulation techniques (DBPSK, DQPSK) are used to completely eliminate the need for any equalization. Impulsive noise events are overcome by means of forward error correction and data interleaving. HomePlug payload uses a concatenation of Viterbi and Reed-Solomon FEC. Sensitive frame control data is encoded using turbo product codes.

The powerline channel between any two links has a different amplitude and phase response. Furthermore, noise on the powerline is local to the receiver. HomePlug technology optimizes the data rate on each link by means of an adaptive approach. Channel adaptation is achieved by Tone Allocation, modulation and FEC choice. Tone allocation is the process by which certain heavily impaired carriers are turned off. This significantly reduces the bit error rates and helps in targeting the power of FEC and Modulation choices on the good carriers. HomePlug allows for choosing from DBPSK 1/2, DQPSK 1/2 and DQPSK 3/4 on all the carriers. The end result of this adaptation is a highly optimized link throughput.

Certain types of information, such as broadcast packets, cannot make use of channel adaptation techniques. HomePlug uses an innovative modulation called ROBO, so that information is reliably transmitted. ROBO modulation uses a DBPSK with heavy error correction with bit repetition in time and frequency to enable highly reliable communication. ROBO frames are also used for channel adaptation.
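The subcarrier layout and tone-allocation scheme the excerpt describes can be sketched in a few lines. The per-tone SNR figures and the 10 dB keep/drop threshold below are invented purely for illustration; only the 84-tone, 4.5-to-21 MHz layout comes from the spec:

```python
# Sketch of HomePlug 1.0-style tone allocation: 84 equally spaced
# subcarriers between 4.5 and 21 MHz, with heavily impaired tones
# masked off so modulation and FEC effort goes to the good ones.

def subcarrier_freqs(n=84, f_lo=4.5e6, f_hi=21e6):
    spacing = (f_hi - f_lo) / (n - 1)
    return [f_lo + i * spacing for i in range(n)]

def tone_mask(snr_db_per_tone, threshold_db=10.0):
    """Return True for tones kept, False for tones turned off."""
    return [snr >= threshold_db for snr in snr_db_per_tone]

freqs = subcarrier_freqs()
print(f"{len(freqs)} tones, spacing {freqs[1] - freqs[0]:.0f} Hz")
```

A real modem derives the per-tone SNR estimates from channel-adaptation frames (the ROBO frames mentioned above) rather than receiving them ready-made.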


HomePlug 1.0 was an industry standard, at least in concept, although Intellon (now a part of Atheros, which was subsequently acquired by Qualcomm) supplied the bulk of the transceiver silicon used in HomePlug 1.0 products. Follow-on HomePlug 1.0 Turbo, first unveiled in product form at the January 2005 Consumer Electronics Show, was a more overt Intellon-proprietary offering, backwards compatible with HomePlug 1.0 (at HomePlug 1.0 speeds) but delivering up to 85 Mbps PHY rates in Turbo-only adapter topologies. Marketing claims aside, however, HomePlug 1.0 Turbo products delivered little to no performance improvement over their HomePlug 1.0 predecessors in most real-life configurations.

Next up was HomePlug AV, which represented a return to the consortium-inclusive (albeit still Intellon-led) approach and was spec-wise first unveiled in August 2005. Like HomePlug 1.0 Turbo, it focused the bulk of its performance-improvement attention on UDP (User Datagram Protocol), typically employed by high-bitrate streaming multimedia applications (hence the AV within its name), versus TCP (Transmission Control Protocol). And how did it accomplish its 200 Mbps peak PHY rate claims? Here's a quote from its corresponding technical white paper (PDF):


The Physical Layer (PHY) operates in the frequency range of 2 - 28 MHz and provides a 200 Mbps PHY channel rate and a 150 Mbps information rate. It uses windowed OFDM and a powerful Turbo Convolutional Code (TCC), which provides robust performance within 0.5 dB of Shannon Capacity. Windowed OFDM provides flexible spectrum notching capability where the notches can exceed 30 dB in depth without losing significant useful spectrum outside of the notch. Long OFDM symbols with 917 usable carriers (tones) are used in conjunction with a flexible guard interval. Modulation densities from BPSK (which carries 1 bit of information per carrier per symbol) to 1024 QAM (which carries 10 bits of information per carrier per symbol) are independently applied to each carrier based on the channel characteristics between the transmitter and the receiver

On the transmitter side, the PHY layer receives its inputs from the Medium Access Control (MAC) layer. There are separate inputs for HPAV data, HPAV control information, and HomePlug 1.0 control information (the latter in order to support HomePlug 1.0 compatibility). HPAV control information is processed by the Frame Control Encoder block, which has an embedded Frame Control FEC block and Diversity Interleaver. The HPAV data stream passes through a Scrambler, a Turbo FEC Encoder and an Interleaver. The outputs of the three streams lead into a common OFDM Modulation structure, consisting of a Mapper, an IFFT processor, Preamble and Cyclic prefix insertion and a Peak Limiter. This output eventually feeds the Analog Front End (AFE) module which couples the signal to the Powerline medium.

At the receiver, an AFE operates in conjunction with an Automatic Gain Controller (AGC) and a time synchronization module to feed separate data information and data recovery circuits. The HPAV Frame Control is recovered by processing the received stream through a 3072-point FFT, a Frame Control Demodulator and a Frame Control Decoder. The HomePlug 1.0 Frame Control, if present, is recovered by a 384-point FFT. In parallel, the data stream is retrieved after processing through a 3072-point FFT for HPAV, a demodulator with SNR estimation, a De-mapper, De-interleaver, Turbo FEC decoder, and a De-scrambler for HPAV data.

The HPAV PHY provides for the implementation of flexible spectrum policy mechanisms to allow for adaptation in varying geographic, network and regulatory environments. Frequency notches can be applied easily and dynamically, even in deployed devices. Region-specific keep-out regions can be set under software control. The ability to make soft changes to alter the device’s tone mask (enabled tones) allows for implementations that can dynamically adapt their keep-out regions.
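The per-carrier bit loading the excerpt describes (BPSK through 1024 QAM, chosen per carrier from channel conditions) can be approximated by flooring the Shannon bound on each carrier. Real modems work from BER targets and coding-gain tables rather than this simplification, and the SNR profile below is randomly generated, not measured:

```python
import math
import random

def bits_for_snr(snr_db: float, max_bits: int = 10) -> int:
    """Map a carrier's SNR to a modulation density: 1 bit (BPSK)
    through 10 bits (1024 QAM) per carrier per symbol."""
    snr_linear = 10 ** (snr_db / 10)
    bits = int(math.log2(1 + snr_linear))  # Shannon-bound floor
    return max(0, min(bits, max_bits))     # 0 bits = carrier notched off

random.seed(42)
snrs = [random.uniform(-5, 35) for _ in range(917)]  # invented channel
bits_per_symbol = sum(bits_for_snr(s) for s in snrs)
print(f"{bits_per_symbol} bits per OFDM symbol across 917 carriers")
```

Multiply the bits-per-symbol total by the OFDM symbol rate and you get the raw PHY rate; a clean channel with most carriers near the 10-bit ceiling is what the 200 Mbps headline figure assumes.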


HomePlug AV can coexist with HomePlug 1.0 and HomePlug 1.0 Turbo, although it doesn't interoperate with either predecessor technology. It also (at least initially) competed against two other '200 Mbps' powerline networking approaches. UPA (the Universal Powerline Association) was largely controlled by DS2 (Design of Systems on Silicon), much as the HomePlug Powerline Alliance was Intellon-dominated in its early days. UPA was, like HomePlug AV, OFDM-based, but the two technologies' implementation specifics were incompatible (UPA, for example, used 1,536 carriers across a 3-to-34 MHz frequency range). Nor did they even coexist, in fact; they'd notably degrade each other if you tried to simultaneously run both approaches on the same power grid. Spain-based DS2 achieved some success, especially in Europe, but declared bankruptcy in 2010 and was subsequently acquired by Marvell.

The same non-coexistence situation occurred with the Panasonic-championed HD-PLC approach, which was also somewhat popular in its day, especially in Asia. Here, the versus-HomePlug AV outcome was somewhat different, although HD-PLC has also largely faded from the market. The IEEE 1901 standard supports HomePlug AV, HD-PLC, or both technologies; the latter implementation resolves technical issues that had previously precluded coexistence, but it requires a costly dual-MAC design, since HomePlug AV is an FFT-based approach whereas HD-PLC harnesses wavelet transforms. IEEE 1901 also optionally expands the employed spectrum swath beyond 28 MHz all the way up to 50 MHz, with a corresponding peak PHY rate increase from 200 to 500 Mbps. Note, however, that just as with 5 GHz versus 2.4 GHz Wi-Fi, higher frequency powerline channels travel shorter distances (before attenuation leads to insufficient signal strength) than do their lower frequency, longer-distance peers.

Then there's G.hn, an ITU-sanctioned standard whose participants include many past representatives of DS2, now with various new employers. Chano Gómez, DS2's former VP of Marketing, is now Director of Business Development at Lantiq (the former networking division of Infineon), for example. Sigma Designs, specifically the corporate division formed by the late-2009 acquisition of CopperGate Communications (which had itself obtained powerline networking expertise via purchase from Conexant), is developing G.hn chipsets, too, although in this particular case the company is hedging its bets by also (and, in fact, initially) designing HomePlug AV transceivers. G.hn is an attempt to unify powerline, phone line, and coaxial cable-based networking with a single protocol stack that can run on multiple physical media backbones. As such, it competes not only against IEEE 1901 but also with technologies such as HomePNA and MoCA. And the IEEE is also developing a unified approach, the 1905 standard.



If you made it through the italicized technical protocol descriptions of the prior page, you encountered numerous references to bit errors and their mitigation; forward error correction approaches, retransmit schemes, hundreds or thousands of discrete transmission channels with independently configured modulation densities, etc. Such workarounds exist because the power grid was never intended for networking purposes and, as such, is a quite unfriendly environment for reliable high-speed data transfers.

Consider, first and foremost, that every AC-fed device creates a momentary dip or surge (however slight) when it powers on or off. Such situations are usually occasional, and as such can be dealt with via packet retransmission (in the case of TCP) or tolerated as brief loss (with UDP). More egregious, on the other hand, are devices that inject a constant stream of high frequency noise onto the power grid, such as:

  • Switching power supplies (including AC-to-DC converters used in cellphone chargers and the like)
  • Motors in devices such as fans, hair dryers, vacuum cleaners, washers and dryers, furnaces and air conditioners, and refrigerator compressors
  • Illuminated CFLs (compact fluorescent lamps)

Such devices' noise patterns can destructively interfere with one or multiple channels' worth of powerline networking data. And at this point, I should also point out that the active powerline network can itself be a destructive interference source, specifically for shortwave radios, by virtue of the fact that current passing through a wire creates a magnetic field surrounding that wire, thereby turning it into an antenna. Powerline technologies are a longstanding sworn enemy of many 'ham' radio operators, although LAN-based powerline approaches are far less egregious in this regard than are WAN BPL (broadband over powerline) approaches spanning a large region. Powerline adapters are also intentionally designed with notch filters that, when activated, disable channels that might interfere with other transmitters and receivers in a particular geography, at the tradeoff of reduced peak bandwidth.
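A toy simulation illustrates why the heavy error-correction machinery quoted on the previous page matters: with noise randomly flipping bits in transit, even a simple ROBO-style repetition code with majority-vote decoding slashes the error rate. The 3x repetition factor and 10% flip probability below are invented for illustration:

```python
import random

random.seed(7)

def transmit(bits, flip_prob=0.10, repeat=3):
    """Repeat each bit, flip some copies in transit (simulated noise),
    then decode each bit by majority vote over its copies."""
    decoded = []
    for b in bits:
        copies = [b ^ (random.random() < flip_prob) for _ in range(repeat)]
        decoded.append(1 if sum(copies) > repeat // 2 else 0)
    return decoded

payload = [random.randint(0, 1) for _ in range(10_000)]
errors = sum(d != b for d, b in zip(transmit(payload), payload))
print(f"residual error rate: {errors / len(payload):.3%}")
```

With 10% raw bit flips, 3x repetition drops the residual error rate to roughly 2.8% (the probability of two or three of the three copies flipping); real HomePlug silicon layers Viterbi and Reed-Solomon coding on top to push errors far lower still.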

Next is the issue of networking signal attenuation, first and foremost caused by old or otherwise low-quality electrical wiring. Other potential problems include narrow-gauge wiring with excessively high impedance, along with poor intra-span connections and variable-gauge wiring across the span, both of which produce unwanted reflections. Powerline packet 'jumps' across circuit breakers are performance-problematic; even more so are source-to-destination paths that involve a transition from one 110V (U.S.) phase of the incoming 220V source to the other phase. Even within a particular circuit-breaker wiring spur, the presence of GFCI (Ground Fault Circuit Interrupter) outlets can cause problems, even if a powerline adapter isn't directly connected to them.

Don't try to connect a powerline adapter to a surge protector, which will filter out the high frequency data modulated on the 50 or 60 Hz carrier, unless the adapter is three-prong and implements Sigma Designs' ClearPath approach. ClearPath, according to Sigma Designs, instead routes packets over the earth-ground connection, which is normally not filtered. (Atheros also eventually plans to implement a similar approach, called Smart Link.) Keep in mind that surge protection circuitry is increasingly not just included in standalone power strips but also embedded within wall outlets. And a UPS (uninterruptible power supply) also acts as an effective deterrent to powerline packet propagation.

Speaking of circuit breakers, now's as good a time as any to discuss security. Don't worry about your next-door neighbor accessing your LAN if a transformer is in between your respective street-side power connections. On the other hand, there's a tangible possibility that multiple powerline networking users sharing a common transformer feed (in the same multi-apartment building, for example) could tap into each other's equipment. That's where encryption comes in. HomePlug 1.0 and 1.0 Turbo harness 56-bit DES encryption, while HomePlug AV leverages even more robust 128-bit AES. And altering an adapter's network password requires access to a unique 16-digit password stamped on the unit. Just change your equipment's network password from the 'HomePlug' or 'HomePlugAV' default, and other folks on the same transformer feed won't subsequently have access to your network.
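As a sketch of how passphrase-based network security works: the adapters derive their shared encryption key from the passphrase, so changing it from the default yields an entirely different key. Note that this is not the actual HomePlug key-derivation function (which the specification defines); the PBKDF2 call, salt, and iteration count below are purely illustrative:

```python
import hashlib

def derive_network_key(passphrase: str, salt: bytes = b"example-salt") -> bytes:
    """Illustrative passphrase-to-key derivation: 128 bits (16 bytes),
    matching the AES-128 key size HomePlug AV employs."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 1000)[:16]

default_key = derive_network_key("HomePlugAV")  # the factory default: change it!
custom_key = derive_network_key("correct horse battery staple")
print(default_key.hex())
print(custom_key.hex())
```

The point of the sketch: anyone on your transformer feed can derive the key for the default passphrase, which is exactly why leaving 'HomePlugAV' in place is equivalent to no encryption at all against a knowledgeable neighbor.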



My approximately 1300-square-foot geodesic-dome residence dating from the mid-1980s serves as my test bed.

Three of the powerline-network nodes I tested were in the approximately 25-foot-diameter downstairs main room:

  • Against one wall in a dining-room nook (node 1)
  • In the middle of the room on a stairwell near the entertainment system (node 2)
  • Against the opposing wall near the router (a 'fall 2007' Apple Airport Extreme N, model MB053LL/A, with GbE LAN ports).

Next door to the router is the "mud room", containing the fourth power outlet I employed in the testing (node 4), as well as the circuit-breaker box. I used another entertainment system in the upstairs bedroom as the fifth powerline node (node 3). Each AC outlet connects to a phase of the premises' 220V power feed, but I didn't know (and intentionally didn't try to figure out, thereby mimicking a typical consumer's approach) which phase or which circuit breaker each network node employed.

Per the discussion in the prior section of this writeup, I employed various measures to minimize the effect of attenuation for these tests. I ensured that surge protectors and UPSs weren't between any of the powerline adapters and their associated power outlets. As a long-time powerline networking veteran, I have noise filters permanently installed between the power grid and both my refrigerator's compressor and my home's furnace fan; neither the refrigerator, furnace, nor any other motors were operating when I was logging benchmark results. I also kept fluorescent bulbs extinguished, and I unplugged all of my AC adapters, battery chargers, and other wall-wart-based and otherwise AC-to-DC-fueled devices.

With respect to benchmarking utilities, AnandTech's Brian Klug leveraged iPerf (an open-source package I've also extensively used in the past) for his recent testing of Apple's latest-generation Airport Extreme router and Time Capsule. And Jarred Walton harnessed a suite of software in his more recent evaluation of Bigfoot's Killer-N 1102 Wireless Half-Mini PCIe 1.1 add-in module. In my particular case, I went with another program I'd used before, Ixia's IxChariot. IxChariot's Console utility, a Microsoft Windows application, sets up, manages, and periodically collects data from various tests.

The companion network-node-resident utility, Ixia’s Endpoint, runs on Linux, Mac OS X, and Windows OSes. Console comes with more than 100 company-created scripts, and users can also customize them and create brand-new scripts. Because Console also bundles Endpoint, you can theoretically run Console from the same machine that acts as one of the tested network nodes. However, as I'd learned in the past, such a setup is not optimal for accurate testing. Using the same system CPU resources for both Console and Endpoint means that you may be unable to run either of them at full speed. Console and Endpoint functions also contend for limited networking-transceiver bandwidth.

Alternatively, as I did in these particular tests, you can run Endpoint software on two systems, with the Console utility executing on a third computer that communicates with the other two. In that case, the Console-installed PC can also get by with lower network performance than the others. According to Michael Githens, lab-programming manager at Ixia, "It doesn't need a high-speed connection, [as the test links do]. It is passing less data; the results come only from the test links, not the management links." And in fact, the Console communication with each Endpoint in my particular setup occurs over Wi-Fi, so as to not impede the Endpoint-to-Endpoint traffic flow over the power grid (which would otherwise under-report the performance potential of any particular powerline networking span).

However, I chose to disregard one other recommendation regarding the Ixia setup, this one offered by Intellon-then-Atheros-now-Qualcomm, which sells the silicon inside two of the three powerline adapter sets tested in this study. Qualcomm suggested that I give the Endpoint systems' wired Ethernet transceivers (and therefore the powerline adapters connected to them) static IP assignments, thereby enabling them to communicate directly with each other with no periodic router overhead. While Qualcomm's proposal is likely accurate in its prediction, it doesn't match the DHCP-assigned way that the bulk of LAN clients in both corporate and consumer settings obtain their IP addresses. As such, I left the Endpoints at their DHCP-configured defaults, thereby explaining the fifth powerline adapter in my topology, connected to the router for DHCP assignments (and renewals) and other like functions.

My Console-running system is a Dell XPS M1330 laptop running Windows Vista Ultimate. As previously mentioned, it interacts with the two Endpoints over 802.11n Wi-Fi connections. The Endpoint systems are both Macs: a first-generation May 2006 13" MacBook running Mac OS X 10.5 Leopard, and an April 2010 13" MacBook Pro running Mac OS X 10.6 Snow Leopard. Both of the latter two systems contain GbE transceivers, a critical requirement for matching up with the GbE ports in the '500 Mbps' IEEE 1901 powerline adapters I evaluated.



Speaking of adapters, which ones received scrutiny in this study? The five 'base case' adapters were Netgear's XAV2001 HomePlug AV 200 units, which I made sure beforehand were upgraded to the latest-available v0.2.0.5NA firmware release, and are based on the Qualcomm third-generation INT6400 chipset:

Sigma Designs' CG2110 chipset was the foundation of the Motorola-branded (and Sigma Designs-supplied) second HomePlug AV adapter suite that I tested. These particular units offer three-prong power plugs, thereby at least conceptually implementing Sigma Designs' earlier-mentioned ClearPath technology for optionally routing powerline networking traffic over the earth ground connection:

Note that in this particular case, I was unfortunately only able to obtain three adapters from Sigma Designs, two of which I had connected to the powerline network nodes under test at any particular time, and the third mated to the router. The reason for mentioning this discrepancy will become clear in the next section of this article.

And at the performance high end, at least on paper, are Netgear's five XAV5001 IEEE 1901 adapters, derived from Qualcomm's latest-generation AR7400 chipset. As with the XAV2001s, I made sure that they were updated to the latest-available public firmware (v0.2.0.9NA in this particular case) before subjecting them to my barrage of tests:

Although at any particular point in time I was only measuring bandwidth between two adapters, I had all of the adapters of any particular technology (five each for Netgear HomePlug AV and IEEE 1901, three for Sigma Designs/Motorola HomePlug AV) connected to the power grid at all times. As with my earlier description of static-vs-DHCP IP address assignments, this decision might not have enabled any particular adapter to operate at its highest possible performance, because it was periodically (albeit briefly) interacting with all of its peers on the premises' power grid. However, such a setup more accurately mimics the way that powerline adapters will operate in real-life usage settings, which is why I went this route (poor networking pun intended).

I harnessed the Ixia-supplied 'Throughput.scr' script, albeit customizing it to extend the test time by increasing the test-data payload size from 100 KBytes to 1 MByte. And, having learned through past testing situations that powerline adapters (and, more generally, all types of networking equipment) sometimes deliver higher aggregate throughput when more than one stream's worth of data is simultaneously flowing through them, I ran both one- and four-stream testing scenarios:
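A self-contained loopback sketch shows roughly what the one- versus four-stream methodology measures: push the same total payload through one or several concurrent TCP connections and compute the aggregate rate. On loopback the stream count barely matters, but on a real powerline span extra streams can keep the link's pipeline fuller. All sizes and addresses here are arbitrary, and this is my approximation of the harness shape, not IxChariot's actual implementation:

```python
import socket
import threading
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = b"x" * 65536  # 64 KB send unit

def serve(server, counts, idx):
    """Accept one connection and count every byte it delivers."""
    conn, _ = server.accept()
    with conn:
        while chunk := conn.recv(65536):
            counts[idx] += len(chunk)

def run(streams: int, mb_total: int = 16) -> float:
    """Move mb_total MB split across `streams` TCP connections;
    return aggregate throughput in Mbps."""
    servers = [socket.create_server(("127.0.0.1", 0)) for _ in range(streams)]
    counts = [0] * streams
    threads = [threading.Thread(target=serve, args=(s, counts, i))
               for i, s in enumerate(servers)]
    for t in threads:
        t.start()

    def send(port, nbytes):
        with socket.create_connection(("127.0.0.1", port)) as c:
            for _ in range(nbytes // len(CHUNK)):
                c.sendall(CHUNK)

    per_stream = mb_total * (1 << 20) // streams
    start = time.perf_counter()
    with ThreadPoolExecutor(streams) as pool:
        for s in servers:
            pool.submit(send, s.getsockname()[1], per_stream)
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    for s in servers:
        s.close()
    return sum(counts) * 8 / elapsed / 1e6  # aggregate Mbps

print(f"1 stream: {run(1):.0f} Mbps, 4 streams: {run(4):.0f} Mbps")
```

The one deliberate parallel with my real setup: the receiver counts bytes and the clock stops only once every stream has drained, so the figure is end-to-end aggregate throughput rather than the sender's burst rate.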



(Cautious) good news, first; powerline networking technology has matured to an impressive degree from a reliability standpoint, at least with respect to my particular test bed, and across my limited test time. I've actually been running a mix of XAV5001s and four-port-hub-inclusive XAV5004s (in network node positions 2 and 3) since this spring, and they've handled power losses and other hiccups with aplomb, never requiring occasional (or not-so-occasional) manual unplug-and-replug sequences to revive their network connections as was the case with early offerings. The INT6400-based devices that they replaced were equally robust. And Sigma Designs' adapters worked without a hitch, too, at least in the several-day span that I used them.

Maybe it's a nitpick, and maybe under normal operating scenarios it might even be seen as a plus, but at least for testing purposes I wish that the XAV5001's Ethernet LED (the one on the right, with the adapter right-side up) would blink during active data transfers. As for the Powerline LED (the one in the middle), it's sometimes green:

sometimes yellow:

and sometimes red...even when installed in the exact same power plug!

Here's what the XAV5001 user guide says: "The Pick A Plug feature lets you pick the electrical outlet with the strongest link rate, indicated by the color displayed by the LED. Green: Link rate > 80 Mbps (Best). Amber: Link rate > 50 and < 80 Mbps (Better). Red: Link rate < 50 Mbps (Good)." But don't be alarmed (as I initially was) if an adapter's Power LED (the one on the left), normally green, turns amber:

That color change simply signifies that the adapter hasn't seen any network traffic passing through it for a while, and has therefore automatically transitioned into a power savings mode. Trust me, it'll wake up again once you 'ping' it.
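The Pick A Plug thresholds quoted from the user guide reduce to a simple lookup (how the guide intends the exact 50 and 80 Mbps boundary values to map is ambiguous; the boundary handling below is my choice):

```python
# Netgear's Pick A Plug scheme: the Powerline LED color encodes the
# negotiated link rate, so you can try a few outlets and keep the one
# that lights green.

def powerline_led_color(link_rate_mbps: float) -> str:
    if link_rate_mbps > 80:
        return "green"   # Best: link rate > 80 Mbps
    if link_rate_mbps > 50:
        return "amber"   # Better: link rate between 50 and 80 Mbps
    return "red"         # Good: link rate < 50 Mbps

print(powerline_led_color(95), powerline_led_color(60), powerline_led_color(30))
```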

The Sigma Designs/Motorola adapters exhibited similar behavior to their Qualcomm/Netgear peers, with the PLC Link LED sometimes green:

and sometimes red:

Again, this occurs even when used with the exact same power outlet plug, at different points in time. My email to Sigma Designs' PR contact requesting an explanation of the difference between 'green' and 'red' went unanswered, but I suspect 'red' also signifies a somehow-degraded powerline connection in this particular case.



The following table summarizes the abundant data I gathered across the 72 (and more...keep reading) tests, from one node combination to another, from one data-flow direction to another within a given node combination, and across the three adapter technologies that comprised this study:

Average Bandwidth Testing Results (Mbps; all tests TCP)

         NETGEAR HomePlug AV [1]   Sigma Designs HomePlug AV [2]   NETGEAR IEEE 1901 [3]
Nodes    1-stream   4-stream       1-stream   4-stream             1-stream   4-stream
1->2     36.97      36.881         26.554     37.048               27.037     26.231
2->1     36.314     37.636         28.5       31.77                25.772     26.487
1->3     32.548     32.857         31.72      39.69                14.335     12.106
3->1     32.419     33.676         29.815     36.546               15.09      16.486
1->4     30.488     31.976         33.778     43.918               13.621     14.57
4->1     35.801     36.795         33.952     39.076               21.544     14.225
2->3     48.379     52.413         40.515     47.587               29.931     31.376
3->2     50.665     54.364         38.632     45.338               31.929     33.602
2->4     52.083     56.974         40.29      48.872               53.199     42.231
4->2     53.717     56.584         41.068     48.879               37.215     36.953
3->4     48.054     53.037         34.547     50.343               35.632     33.381
4->3     51.793     51.576         41.762     50.695               30.846     30.662

Nodes

  1. Dining room
  2. Living room
  3. Bedroom
  4. "Mud room"

Notes

  1. Firmware v0.2.0.5NA
  2. Firmware v1.2.15
  3. Firmware v0.2.0.9NA

Keep in mind that the table shows only the average throughput across each test's duration. Each single-number summary omits the per-stream average results for four-stream tests; the minimum and maximum per-stream and aggregate transfer rates across the test runtime; the overall transfer-rate spread; and the measured minimum, maximum, average, and spread of latency. The IxChariot reports embed all of this information and more. You can download a folder-delimited ZIP archive of all 72 report suites, in both native Ixia TST and exported TXT, PDF, and HTML-plus-GIF formats, here.

In scanning over the data, you'll perhaps first notice that powerline network node 1 is the obvious bandwidth-compromised location. It's near the Dell XPS M1330, which acts as the Media Center server for my Xbox 360s (which in turn act as Media Center Extenders). Right now, I use a dedicated wide-channel 5 GHz 802.11n link to stream data from the laptop to my router. I'd originally thought that I could perhaps replace it with a powerline connection, thereby enabling me to (for example) simultaneously send multiple TV channels' worth of data to different game consoles. Guess not.

The Netgear XAV2001 (Qualcomm HomePlug AV) results closely align with those I've obtained in past studies using a number of different measurement schemes, giving me confidence both in this study's numbers and in my overall testing methodology. Note that the four-stream variant of each node-to-node TCP test delivered little to no incremental measured bandwidth versus the single-stream base case alternative.

That's not the case with the Sigma Designs-based HomePlug AV adapters. Although the single-stream results were respectable, they exceeded those of the INT6400-based adapters in only one of twelve cases. However, unlike with the INT6400-derived devices, the Motorola-branded HomePlug AV adapters showed consistent and significant speed improvements when I simultaneously sent four TCP streams through them, beating out the XAV2001s in five of twelve testing scenarios. Keep in mind, in comparing the Sigma Designs results with the others, that you're matching up a three-adapter Sigma Designs topology against a five-adapter Netgear alternative...although in all three cases, only three adapters were in active use at any point in time (two Endpoint units, plus one at the router).

Finally, turn your attention to the AR7400-based Netgear XAV5001 IEEE 1901 adapters...and to the most baffling aspect of this study. In past testing, admittedly using different benchmark utilities and hardware configurations, the XAV5001s had exhibited consistent node combination-to-combination performance improvements over the XAV2001s, although the magnitude of the improvement varied depending on which specific node combination was being measured at the time. This time, on the other hand, the XAV5001s consistently undershot both adapter alternatives, and the XAV5001 four-stream testing setup even undershot the single-stream variant in a couple of cases.

I was so baffled by the outcome that I re-ran the XAV5001 tests several days later, obtaining a near-identical results set. I frankly don't know for sure what's going on, although I've seen similar discrepancies at other sites that have compared IEEE 1901 adapters to their HomePlug AV precursors. The last time I substantially tested the XAV5001 adapters, they were running the same firmware release as now, but it was springtime. Perhaps the overall power grid noise level is higher in late summer, or perhaps the higher ambient temperature caused a performance degradation; I've encountered similar scenarios in testing older-generation powerline technologies. Keep in mind, too, that the AR7400 is Qualcomm's first-generation IEEE 1901 chipset, versus the INT6400 third-generation HomePlug AV chipset, and that the associated AR7400 firmware is also comparatively immature.

One other Qualcomm-vs-Sigma Designs discrepancy bears mentioning. Check out these representative one- and four-stream testing bandwidth profiles from the Qualcomm AR7400 testing:

[Screenshots: Qualcomm AR7400 one- and four-stream bandwidth profiles]

Now check out their Sigma Designs CG2110 counterparts:

[Screenshots: Sigma Designs CG2110 one- and four-stream bandwidth profiles]
Note that in the Qualcomm cases, bandwidth initially spiked high and then quickly tailed off to a steady-state value; the converse was the case with the Sigma Designs-based adapters.



I also ran 72 tests' worth of UDP measurements, leveraging the Ixia-supplied UDP_Throughput.scr script as-is in the four-stream case. For single-stream UDP tests, I modified UDP_Throughput.scr, increasing the per-stream data-payload size from 730 KBytes to 7.3 MBytes in order to lengthen the tests' runtimes. After all, as I previously mentioned, newer powerline technologies have focused their performance-improvement attention on UDP, which finds use in streaming large-payload multimedia content from one network client to another. You can find my UDP testing results reports in the earlier-mentioned ZIP archive.

But I've decided not to publish the average bandwidth measurements in a table, as I did one page back with the TCP results. That's because, in every testing case, only a few packets succeeded in making their way from one Endpoint to the other, translating to a near-100% packet loss scenario. To be clear, this underwhelming outcome did not occur due to any inherent HomePlug AV or IEEE 1901 UDP incompatibility; as I type these words, for example, I'm watching a video streamed from a computer to my television over an AR7400-implemented powerline spur and using UDP. But after a few moments' reflection, I came up with a seemingly plausible explanation, which a subsequent chat conversation with Brian Klug sanity-checked and an email back-and-forth with Ixia's Michael Githens confirmed.

The issue, as networking wizards among you out there may have already figured out, is that by definition, UDP is (in the words of Brian Klug) 'connectionless'. This means less protocol overhead, but with UDP being a best-effort approach, it's possible for an abundance of packets to get dropped due to transmission channel bandwidth limitations...that is, unless the transmitter is somehow (at the application layer) able to monitor the receiver's success-or-failure statistics and (if necessary) back off its packet-output rate. Windows Media Center, for example, does this; if necessary, it'll more aggressively compress video (for a lower average per-frame bitrate, at the cost of reduced image quality) and/or drop frames in order to match its transmission speed to the channel and receiver's capabilities.
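To make the application-layer backoff idea concrete, here's a minimal, purely hypothetical sketch (not drawn from Windows Media Center or any shipping product) of an AIMD-style rate controller: the sender halves its offered rate whenever the receiver's feedback reports meaningful loss, and otherwise probes gently upward:

```python
def adjust_rate(rate_mbps, loss_fraction, floor=1.0, ceiling=100.0):
    """AIMD-style control: back off multiplicatively on loss, probe additively otherwise."""
    if loss_fraction > 0.01:                  # receiver reported >1% loss: halve the rate
        return max(floor, rate_mbps / 2)
    return min(ceiling, rate_mbps + 1.0)      # clean interval: creep upward by 1 Mbps

def simulate(start_rate, channel_cap, rounds):
    """Toy feedback loop: the channel silently drops everything above channel_cap."""
    rate = start_rate
    for _ in range(rounds):
        loss = max(0.0, (rate - channel_cap) / rate)  # fraction the receiver never saw
        rate = adjust_rate(rate, loss)
    return rate
```

The multiplicative decrease reacts quickly when the channel saturates, while the additive increase keeps the sender oscillating just below the channel's true capacity instead of blindly blasting at the advertised PHY rate.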

IxChariot unfortunately doesn't seem to include destination-cognizance features such as these. All it seemingly knows is that, for example, the two HomePlug AV adapters I tested are reporting 100 Mbps PHY rate capabilities, with the XAV5001 adapters touting GbE speeds, and it subsequently sends out packets at much faster rates than the powerline spur is capable of reliably routing to their destination. Here's what Ixia's Michael Githens said when I explained my predicament and theory as to its root cause:

Unfortunately there isn't anything built into IxChariot to automatically help you figure out that no-drop rate. You could, however, script it to do a test, check results and change an input value to have it do multiple runs for you so that you can zero in on the no-drop rate. That would be the best I could suggest.

Such an iterative approach is actually something I'd considered, but its tedious and time-consuming nature makes it unattractive for the involved testing suite that I tackled in this project. And unfortunately, Brian Klug reports that iPerf is similarly destination-unaware, making its UDP testing approach equally 'blind'. I welcome reader suggestions on utilities or other techniques that will measure the peak no-packet-drop UDP bitrate without excessively burdening the source-to-destination channel (and thereby under-representing the channel's UDP potential). Mac OS X-compatible applications are particularly welcome, since the only two GbE-capable laptops in my possession are Macs, neither of which has Windows installed via Boot Camp. Please leave a comment on this writeup, or drop me an email.
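For what it's worth, the zero-in procedure Githens describes is straightforward to automate around any tool that reports per-run loss. The sketch below is purely illustrative (the `probe` function is a stand-in for an actual scripted IxChariot or iPerf run, not a real API): a binary search over offered rates that homes in on the highest loss-free rate:

```python
def find_no_drop_rate(probe, low=1.0, high=1000.0, tolerance=0.5):
    """Binary-search the highest offered UDP rate (in Mbps) that probe() reports as loss-free.
    probe(rate) is assumed to run one timed test at `rate` and return the loss fraction."""
    best = low
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if probe(mid) == 0.0:
            best = low = mid   # no drops at mid: the answer lies at or above it
        else:
            high = mid         # drops at mid: the answer lies below it
    return best

# Example with a fake probe standing in for a real test run: a channel that
# delivers everything at or below 62 Mbps and drops packets above that.
cap = 62.0
rate = find_no_drop_rate(lambda r: 0.0 if r <= cap else 0.1)
```

Each probe is a full timed test run, so the search trades wall-clock time for precision; halving the interval each iteration at least keeps the number of runs logarithmic in the rate range.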

For whatever it's worth, I'll wrap up this particular page by passing along four screenshots of the 'raw' measured bandwidth potential that Netgear's Powerline Utility (available in both Windows and Mac editions; I used the Windows version) reports for my AR7400-based XAV5001-plus-XAV5004 powerline network, from the router-based adapter to each of the four other network nodes.

Node 1 [screenshot]

Node 2 [screenshot]

Node 3 [screenshot]

Node 4 [screenshot]

Having tested the IEEE 1901-implementing XAV5001 earlier this year, with much better results (both absolute and relative to prior-generation technologies), I'm admittedly baffled by its notable under-performance this time around. I'm sure I'll hear back from Netgear and/or Qualcomm soon after this writeup is published, and I'll post any notable follow-up findings.

Cautious congratulations go out to Sigma Designs, which has proven itself to me as a credible alternative supplier of HomePlug AV silicon. While the CG2110 chipset under-performed its INT6400 competitor in most of my tests, particularly in single-stream configurations, the fact remains that for many applications, its delivered bandwidth meets the 'good enough' bar.

Speaking of Sigma Designs, I continue to await adapter-based samples of the company's follow-on CG5110 chipset, which claims to support both HomePlug AV and G.hn. AnandTech's Ganesh T S and I both saw demonstrations of functional silicon at January's CES; the latest word from Sigma Designs' PR contact is that "G.hn looks like Q3/4." With Q3 more than half over, the company has only around four months left to make good on its (schedule-slipped, originally targeted for March delivery) promises. I'll extend a similar invitation to Lantiq, or for that matter any other G.hn silicon supplier: you've been highly critical of the HomePlug Powerline Alliance over the past year-plus, but it's time to put some 'steak' behind your 'sizzle' and show me what your chips can comparatively do.

Bottom line, though, I'll reiterate something I said earlier in this writeup: powerline networking technology has matured to an impressive degree. A few routers already build powerline networking transceivers directly into their power supplies, thereby enabling voltage/current and packet transfers via a unified AC cord and outlet connection. Standalone adapter vendors should strive to further drop their prices, cultivating additional demand volume in the process, and systems suppliers should begin to eliminate the need for standalone adapters altogether by integrating powerline networking transceivers into their products.

What was previously a confusing muddle of competing, incompatible pseudo-standards has finally been whittled down to two...and only one of them is shipping meaningful product volume at the moment. Pick and proceed, folks. It's time to simplify.

p.s. Fellow AnandTech staffer Ganesh just gave me a heads-up that he has a Netgear XAVB5501 two-adapter kit in-hand, with a review slated to appear in a few weeks. The XAV5501 is a three-prong powerline networking adapter which reportedly supports Qualcomm's Smart Link technology, the company's conceptual equivalent to the Sigma Designs ClearPath approach discussed in this article. Keep an eye out for Ganesh's writeup; I know I will.
