
  • therealnickdanger - Thursday, January 5, 2012 - link

    Forgive my ignorance here, but with wired connections Jumbo Frame support is often vital to streaming full bit rate 1080p files. While 802.11n is often fast enough in terms of throughput, most wireless solutions still stutter when fed such content. Will 802.11ac put an end to this problem?
  • quiksilvr - Thursday, January 5, 2012 - link

    That's a wait and see question.
  • dagamer34 - Thursday, January 5, 2012 - link

    Are you streaming full Blu-ray rips or compressed .mkv versions? I'd say a lot has to do with the fact that unless you're in ideal conditions, there is just barely enough bandwidth to stream full HD, and stutters come from lost frames due to any number of causes (dropped packets, wireless interference, a microwave turning on, solar flare, etc.). I'd look at the number of networks in your area, switch to 802.11n 5GHz, provide a clearer path for the signal (if possible), but if all that fails, you may have to go wired.
  • AnnihilatorX - Thursday, January 5, 2012 - link

    That depends a lot on the compression. With H.264, I can stream 1080p over my 802.11n wireless without any stutter. A 1080p video stream with H.264 video and 5.1 m4a audio has a variable bit rate of about 3-12Mbps.

    This is handled easily by even 802.11g. However, latency is inconsistent on wireless networks, which means video playback apps need some level of buffering to prevent sudden stuttering; not all players do this well over home networks, as they assume the content is local.
  • xdrol - Thursday, January 5, 2012 - link

    You already have that.

    802.11a/g has a maximum frame size of 2,272 bytes; 802.11n increased that to 7,935 bytes. Well yeah, not 9000 like the "usual" Ethernet jumbo frame.

    The main problem is that Jumbo Frames are NOT in the Ethernet standards (many devices support them, but there is absolutely no guarantee they will work across different vendors), so AP implementations usually don't support them (you cannot transmit packets larger than 1500 bytes on the uplink interface anyway...).
  • relativity1 - Thursday, January 5, 2012 - link

    Someone correctly stated that the current Ethernet implementation (layer 2 of the stack) maxes out at 1500B per frame before fragmentation. Ethernet (802.3 and 802.11) doesn't really limit packet/frame size to 1500B; rather, 1500B was an accepted sweet spot between throughput and error rate. Those two factors are why the maximum frame size (MTU - maximum transmission unit) is currently, and dominantly, set at 1500B.

    802.11ac's wireless link (PHY) performance may yet shift that sweet spot between throughput and error rate, so chipset producers could enhance the MAC to support frames larger than the norm, up to 9000B or even 16,384B (super jumbo), as I can with my traffic generators.

    Remember, the wireless PHY link is independent of Ethernet (layer 2) in the IP stack. 802.11ac could implement a super-jumbo frame while Ethernet stays limited to 1500B or 9000B (if that is called for). Also, access points and wired routers are still fixed at this 1500B limit, so the performance gain between the node and AP on the wireless link is negated by the LAN ports connecting your wired devices such as your TV, AppleTV, Roku, BD player, or computer.

    Just my 2 cents...
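    One practical consequence of MTU size is per-frame header overhead. A quick comparison of 1500B vs. 9000B MTUs (illustrative sketch only; a flat 40 bytes of TCP/IPv4 headers per packet is assumed, ignoring Ethernet framing and ACK traffic):

```python
FILE_BYTES = 100 * 1024 * 1024   # a 100 MiB transfer
HEADERS = 40                     # assumed TCP/IPv4 header bytes per packet (no options)

def overhead(mtu):
    # Payload per packet is the MTU minus the assumed headers.
    payload = mtu - HEADERS
    packets = -(-FILE_BYTES // payload)   # ceiling division
    return packets, packets * HEADERS     # packet count, total header bytes

for mtu in (1500, 9000):
    pkts, hdr_bytes = overhead(mtu)
    print(mtu, pkts, hdr_bytes)
```

    The jumbo MTU needs roughly 6x fewer packets, which is where the per-packet processing and overhead savings come from.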
  • icrf - Thursday, January 5, 2012 - link

    The problem with 802.11n being 2.4 GHz or 5 GHz is that many devices don't actually have 5 GHz support. If 802.11ac required support for both, then it could use the better propagating 2.4 GHz if it were available, or 5 GHz if it were clearer (or maybe even both if you live in the sticks like me). Using 5 GHz only seems odd. What's the benefit in avoiding 2.4 GHz? Is it significantly cheaper or simpler than offering both?
  • ViRGE - Thursday, January 5, 2012 - link

    If nothing else, avoiding 2.4GHz means you get to use wider channels. The 2.4GHz band isn't even 80MHz wide to begin with, never mind having multiple networks in the same space.
  • DanNeely - Friday, January 6, 2012 - link

    There's not enough 2.4GHz spectrum for wider channels, and the interference levels apparently made 256QAM too difficult to achieve. Thus the main benefits of 802.11ac weren't available; and apparently whatever other miscellaneous improvements the standard included weren't considered beneficial enough to justify a new generation of 2.4GHz hardware.

    However, I think it's safe to assume that all 802.11ac devices will also support 802.11n 2.4GHz operation.
  • xdrol - Thursday, January 5, 2012 - link

    Kinda disappointed to see the "feature" of 256QAM. Why not 1024QAM? Or 4096QAM..?

    256QAM might be usable if you are very, very lucky (sitting in a Faraday cage next to the AP...), but saying 802.11ac can do 6.93 Gbps (8x8:8 mode, 160 MHz channel) is like saying this car can go 450 km/h / 300 mph. Yeah, free-falling. With two jet engines attached.

    However, there is an über-awesome new feature (and I find it very curious that the article says nothing about it, mentioning only the previous evolutionary step, beamforming): MU-MIMO. Current MIMO is single-user (SU). That means that at any one time, the AP can send to only 1 user, regardless of how many spatial streams the clients can handle.

    For example, if you have a 3x3:3 AP, and a 2x2:2 and a 1x1:1 STA (a laptop and a cellphone, for instance), you get 300 Mbps to the first and 150 Mbps to the second - but not simultaneously. If you want to use both at the same time, the first packet will be sent to the first STA at 300 Mbps (2 streams), the second packet to the second STA at 150 Mbps (1 stream). If the stations each transmit 50% of the time, you get a total of ~225 Mbps.

    But with MU-MIMO, the AP can send 2 streams to the first STA and 1 stream to the second STA at the same time. In the previous example that would bring the total throughput to 450 Mbps (without the further modulation / channel width enhancements).
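    The arithmetic in this example can be sanity-checked with a quick script (the 150 Mbps per-stream rate is the illustrative figure from the example, not a spec value):

```python
# Time-shared SU-MIMO vs. simultaneous MU-MIMO, using the example above.
PER_STREAM = 150  # Mbps per spatial stream (assumed, as in the example)

laptop = 2 * PER_STREAM   # 2x2:2 STA -> 300 Mbps link rate
phone = 1 * PER_STREAM    # 1x1:1 STA -> 150 Mbps link rate

# SU-MIMO: the AP talks to one STA at a time; with a 50/50 time split
# each STA gets half of its link rate.
su_total = 0.5 * laptop + 0.5 * phone   # aggregate Mbps

# MU-MIMO: the 3x3:3 AP sends 2 streams to the laptop and 1 stream
# to the phone simultaneously.
mu_total = laptop + phone               # aggregate Mbps

print(su_total, mu_total)  # 225.0 450
```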
  • name99 - Thursday, January 5, 2012 - link

    "Kinda disappointed to see the "feature" of 256QAM. Why not 1024QAM? Or 4096QAM..?"

    This is a silly comment. The ultimate limitation on how aggressively you can run your QAM is the SINR. The noise floor is fixed; the interference level depends on your situation. Given the power limits on WiFi and the noise floor, this determines how aggressively we can run QAM. I don't know the relevant numbers, but obviously if the power level/noise floor ratio were such that QAM256 were not feasible, it would not be part of the spec. Recall also that better decoding (e.g. using turbo codes, or even convolutional codes with turbo-inspired soft-value multi-pass algorithms rather than conventional Viterbi decoding) allows one to approach closer to the theoretical noise limits.

    Now perhaps you are in an environment where interference is high and constant, and so you're not going to be able to take advantage of such codes. But that is no reason to prevent others, who might be in quieter environments, from being able to utilize them.

    A more interesting point, and I was upset the article did not discuss this, is whether the godawful 802.11 MAC has been improved in this spec. It is the MAC more than anything else that is responsible for the factor of a half or so that we always see in WiFi, even under the best conditions, between real world throughput and nominal throughput. The 802.11 committee seems always to be concerned with the extreme situation of a conference with two hundred people sharing a network, but never seems interested in the opposite (but much more common) situation of a home with a base station and usually one, occasionally two, devices trying to talk to the access point. This sort of home environment could utilize a managed MAC (controlled by the base station) and could practically double its throughput under most actual conditions.
  • MGSsancho - Thursday, January 5, 2012 - link

    First, thank you for pointing out that 802.11n already has all the mentioned features except 256QAM. Thank you again for mentioning that the big changes from a/b/g -> n were about 200 people in conferences, as well as the need for a base-station-managed MAC.

    A few devices over a large area is more important to me, honestly. If I need to cover a large area, there are many vendors who sell amazing APs for that market ($$$ each). What option is there for home users who need to cover a large area and must use more than one AP? Disable DHCP on the extra APs, use the same SSID, and put them on channels 1, 6 and 11? (Many of my devices are limited to 2.4GHz.) What about dual-radio APs under $200 so I can set this all up? Some of us are forced to set up our own mesh network for home use. It's honestly very complicated if you want to stream video or have many people playing on handheld devices while walking about the place. Personally I use a MikroTik router with MikroTik software APs with a radio for each band, then let the software handle all the negotiations - but this is NOT for even the "I built my own PC so I am 1337" individuals.

    Will 802.11ac be able to deliver those amazing speeds with WPA or RADIUS or even WEP? Currently, turn on WEP or TKIP encryption and you are limited to g speeds. What if I bring in a slower device, say a Nintendo DS - will the entire network fall back to 12 Mbps or slower? Some of us will spend the cash (but not $700 USD per AP for enterprise stuff).

    Basically the OP and parent hit the nail on the head. Current speeds are fine for now. Give us a reasonably affordable way to actually get them with a dozen devices in the home.
  • akbo - Thursday, January 5, 2012 - link

    Most routers nowadays are only 1x1:1. Only the more expensive ones have more antennas. Hoping that 2x2:2 or 3x3:3 becomes more common.

    But will this fry my brain?
  • DanNeely - Friday, January 6, 2012 - link

    Checking on Newegg, 2x2:2 routers (technically, ones listed with 300Mbps support) start at $25, only marginally above basic 1x1 models. 3x3:3 is still significantly more expensive; I wonder if that reflects actual implementation costs or the desire to keep the top-level products high-margin as long as possible.
  • douglaswilliams - Thursday, January 5, 2012 - link

    "...something we'll obviously test once we have the first ac devices in house."

    That's funny, everything in my house runs on AC. Okay, okay, just making a pun here.

    But really, the majority of my data is streaming video. I would expect an increase in power draw while streaming video. Web page to web page, yes, maybe there would be a decrease in power draw.

    I believe Hulu's highest quality stream is only 3.2 Mbps, which doesn't near cap out an 802.11g connection - so this is definitely not needed now (for me). Perhaps in 10 years when I have kids who also want to stream video, and the video is way higher resolution than it is now, this might come in handy at that time.

    What are y'all's thoughts?
  • A5 - Thursday, January 5, 2012 - link

    If you're streaming real HD video (20+ Mbps, not 3 Mbps "HD" video) around your house, this is a good thing (though the people who do that probably already have a wired gigabit network...), but if all of your content is streamed from low-quality internet sources, this isn't useful.
  • name99 - Thursday, January 5, 2012 - link

    This is not a technology that is trying to connect you to the internet faster. If all you want is "fast" internet, the crappy 802.11g in the box your internet provider gives you will do the job.

    This is for things like
    - providing a large aggregate bandwidth for many users (hotels, conferences, offices, schools) OR
    - fast large file transfers or backups in the home
  • jwilliams4200 - Thursday, January 5, 2012 - link

    Except for very rare cases (extremely clean wireless spectrum), people will be doing VERY well if they double their bandwidth with 802.11ac as compared to 5GHz 802.11n.

    All of the numbers you quote are best-case scenarios, and then you say real world will likely be 1/3 to 1/2 of that. But even that is optimistic. Current testing with 3-stream 802.11n in real-world situations is lucky to consistently exceed 100Mbps, which is 22% of the theoretical 450Mbps bandwidth available. With 802.11ac, it is likely that the real-world factor will be significantly lower than 22%; I'd bet you will be lucky to see 10% of best-case scenarios. So 802.11ac with a best case of 1.3Gbps will translate into 130Mbps real-world speeds. Maybe 150-200Mbps if you have a remarkably clean wireless spectrum available. And short, obstruction-free link distances. The 5GHz spectrum attenuates rapidly with distance and obstructions.
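    The percentages in this comment work out as follows (a rough sketch; the 100 Mbps measured figure and the 10% efficiency guess are the commenter's estimates, not benchmarks):

```python
# Real-world vs. theoretical throughput, per the estimates above.
n_theoretical = 450    # Mbps, 3-stream 802.11n best case
n_realworld = 100      # Mbps, typical measured figure cited above
efficiency_n = n_realworld / n_theoretical   # ~0.22

ac_theoretical = 1300  # Mbps, 802.11ac 3-stream 80 MHz best case
efficiency_ac = 0.10   # the commenter's pessimistic estimate (assumed)
ac_realworld = ac_theoretical * efficiency_ac

print(round(efficiency_n, 2), ac_realworld)  # 0.22 130.0
```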

    Bottom line: do not get your hopes up too high. Wireless technology has a history of over-promising and under-delivering.
  • xdrol - Friday, January 6, 2012 - link

    Yup, exactly my point above; beamforming / MU-MIMO will make quite the difference, though.
  • tpurves - Thursday, January 5, 2012 - link

    You missed the part where you were going to explain what NFC has to do with the new wifi standard... as it says in your graphic.
  • DanNeely - Friday, January 6, 2012 - link

    The article also didn't say a word about 4G cellular data. The graphic showed all the wireless connections that a phone will likely have in a few years, but only the new WiFi was covered. I suspect it was created by a 3rd party, not by an AT graphic artist for this story.
  • gevorg - Thursday, January 5, 2012 - link

    Looks like I'll just skip 802.11n and upgrade straight to 802.11ac when it's widely available and ready. Glad I didn't spend hundreds last year to upgrade the whole house to 802.11n.
  • Mr Perfect - Thursday, January 5, 2012 - link

    So, will they be iterating the security in AC too, or will they be sticking with WPA2 with AES encryption? I'm not even sure how well that's stood up over the last couple years.
  • dcollins - Thursday, January 5, 2012 - link

    WPA2 with AES is still considered secure as long as the PSK is sufficiently long and rotated with some regularity.
  • xdrol - Friday, January 6, 2012 - link

    What do you mean? It would still take more time than the age of the Universe to crack WPA2-PSK with AES - it is orders of magnitude easier to hack into the system some other way.
  • Mr Perfect - Friday, January 6, 2012 - link

    Ah, OK, so WPA2 with AES is still decent. From what I'd read, every other security protocol (WEP, WPA, WPA2 with TKIP) isn't recommended due to some weakness. I was half expecting some flaw to have been found in WPA2 with AES since the last time I read up on this stuff.
  • mckirkus - Thursday, January 5, 2012 - link

    Looks like we might finally crack the 100Mbps barrier with wireless. I have Blu-rays that hit upwards of 35Mbps and need a lot more than that to stream effectively, so this will be a major deal for home entertainment.
  • JarredWalton - Thursday, January 5, 2012 - link

    A good 2x2:2 802.11n solution on 5GHz can easily crack 100Mbps, at least provided you're in the same room. That said, I'm super excited to see WiFi get a needed boost in performance. I still run cables all over my house because 802.11n simply isn't fast enough for file transfers, even within the same room. However, even on Gigabit Ethernet I rarely transfer at more than 50MB/s (hard drive speeds for large data files), so I really only "need" 400Mbps or so of usable bandwidth. A 2x2:2 802.11ac solution would be just about perfect--at least until SSDs reach the point where I can conceivably store hundreds of gigabytes of data and archives on SSDs instead of HDDs.
  • jwilliams4200 - Thursday, January 5, 2012 - link

    Problem is that there is no way you are going to actually achieve 400Mbps real-world with 802.11ac. I'd lay long odds that 95% of users don't consistently achieve anything higher than 200Mbps.
  • RF_Guru - Thursday, January 5, 2012 - link

    In general this is a well written article - comprehensive, clear, and concise - but unfortunately, not entirely correct.

    For an 80MHz-wide 256QAM signal to be successfully received (without too many errors), the signal-to-noise ratio required at the receiver input is going to have to be on the order of 35dB (per stream). This is significantly more than is required for an 802.11n 64QAM signal. This means that in order to support .11ac's higher data rates, the signal level at the receiver is going to have to be pretty high (say above -40dBm... I haven't done the exact math on it yet - this is a back-of-envelope calculation!). Since the noise level isn't going down any (I wish it would though... :), this means that the higher data rates of .11ac are only going to be achieved over fairly short range - around 60' maximum IMO. For distances beyond 60', the rates will drop off significantly. I suggest we ask the WLAN chip vendors to show us their "waterfall" curves of rate vs. distance for .11n vs .11ac...

    That's not the end of the issue. The 60' distance number above assumes a transmit power output of +20dBm. This is not going to be easy to achieve. In order to get the required signal-to-noise ratio, an 802.11ac transmitter is going to have to be MUCH more linear than an .11n one. For .11n, transmit power amplifier added EVM can be up to about 3% (although I like to limit it to 2.5% myself), but for .11ac, the PA's added EVM cannot be much higher than about 1.5%. This means the PA needs to be much more linear than before or run at a lower power output level. In order to achieve better linearity, two things can be done. Either a) throw more DC power at the problem - make the PA more "class A", or b) employ a pre-distortion scheme of some kind - either analog or digital (I have seen some very clever digital pre-distortion techniques from some very clever startups lately). Both of these techniques require MORE DC power per stream (compared to a .11n transmitter), so the statement about transmit efficiency is incorrect.
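    A free-space path-loss calculation gives a feel for how fast this link budget is consumed (a sketch only: the +20dBm transmit power and -40dBm receive threshold are the figures assumed above, isotropic antennas are assumed, and real indoor propagation is worse than free space while antenna gains push the other way):

```python
import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss: FSPL(dB) = 20log10(d) + 20log10(f) + 20log10(4*pi/c)
    c = 3e8
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

tx_dbm = 20       # +20 dBm transmit power (assumption from the comment)
rx_min_dbm = -40  # receive level needed for top 256QAM rates (assumption)
budget_db = tx_dbm - rx_min_dbm  # 60 dB of allowable path loss

# Solve FSPL(d) = budget for d at 5.5 GHz (mid 5 GHz band).
freq = 5.5e9
d = 10 ** ((budget_db
            - 20 * math.log10(freq)
            - 20 * math.log10(4 * math.pi / 3e8)) / 20)
print(round(d, 1), "m =", round(d * 3.281), "ft")
```

    With these assumptions the 60dB budget is exhausted after only about 4 m of ideal free space (antenna gain extends that), which underlines the point that the top .11ac rates will be a short-range affair.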

    Questions? Discussion?

    RF_Guru
  • xdrol - Friday, January 6, 2012 - link

    The .11n features are still a subset of the .11ac feature list, so as long as the software doesn't blindly force the new features, the waterfall curve cannot be worse. If you want to use the new features, you need better hardware and/or a friendlier environment - I don't think that is a surprise to anyone.
  • iwodo - Thursday, January 5, 2012 - link

    I could never understand the channel interference argument. We all say 2.4GHz is polluted because we have too many WiFi devices within range. But once we "all" move to 802.11ac, we will have the same number of devices within 5GHz range as we had within 2.4GHz range anyway. So isn't the argument for less interference a little self-imposed?

    Then there is this "Draft" status, which isn't really a draft anymore - it's becoming like a Release Candidate in the software world. (Draft was supposed to mean alpha or pre-alpha stage.) Once Broadcom releases their chipset, the standard will have to maintain backward compatibility with the draft and cannot add any more features that are not in it, to protect the products already on the market...
  • DanNeely - Friday, January 6, 2012 - link

    There are a lot more channels in the 5GHz range than in 2.4GHz. With 2.4GHz you only have 3 or 4 non-overlapping 20MHz channels (national regulators don't grant identical spectrum rights in all parts of the world). In the US, 21 non-overlapping 20MHz 5GHz channels are currently allowed. Eventually, when 160MHz channels become common, the overlap situation will become about as bad; but sharing a 160MHz channel 8 ways still gets you 8x the capacity of sharing a 20MHz channel 8 ways.
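    The 8x figure follows from simple division (illustrative only; this assumes capacity scales linearly with channel width, which real modulation and overhead only approximate):

```python
# Per-network share of spectrum when 8 networks contend for one channel.
networks = 8
width_24ghz = 20    # MHz: a 2.4 GHz network stuck on a 20 MHz channel
width_ac = 160      # MHz: an 802.11ac network on a 160 MHz channel

share_24 = width_24ghz / networks   # MHz-equivalent per network
share_ac = width_ac / networks      # MHz-equivalent per network

print(share_24, share_ac, share_ac / share_24)  # 2.5 20.0 8.0
```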

    There are two other factors that will be beneficial in spite of all of the above. First, 5GHz signals are attenuated much more rapidly by walls (and air), so in crowded areas, even if the number of 5GHz devices equals the number of 2.4GHz devices, fewer of them are close enough to cause interference for you. Second, there's a lot less other stuff on 5GHz than on 2.4GHz. Microwaves cook at 2.4GHz and leak enough that in many ways they function as jammers. Other major 2.4GHz users include cordless phones and Bluetooth, and less commonly used wireless devices (e.g. baby monitors) are increasingly on 2.4GHz instead of 900MHz now too. Moving WiFi to 5GHz leaves all of them behind, improving QoS for both groups.

    http://en.wikipedia.org/wiki/List_of_WLAN_channels...
  • iwod - Sunday, January 8, 2012 - link

    Thanks for the explanations. Cleared up a few things for me.
  • xdrol - Friday, January 6, 2012 - link

    Beamforming can do quite a good job of "showing" the AP only to the STAs that are connected to it, effectively reducing the interference to everything else, be that "else" another STA in the same network or an interfering network/AP on the same channel.
  • Freyzz - Friday, January 6, 2012 - link

    1.3 Gb/s? That is 162.5 MB/s... faster than the read rate of most mechanical hard drives. That's crazy fast. You could basically have a WiFi external hard drive and get NATIVE speeds. No more USB cable, etc.
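    The conversion here is just bits to bytes: divide the link rate by 8 (and this is the PHY rate; real-world throughput, as discussed above, will be a fraction of it):

```python
# Gigabits per second to megabytes per second: 8 bits per byte.
rate_gbps = 1.3                    # 802.11ac 3-stream 80 MHz PHY rate
rate_mb_s = rate_gbps * 1000 / 8   # decimal megabytes per second

print(rate_mb_s)  # 162.5
```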
  • WiWavelength - Saturday, January 7, 2012 - link

    "The 4x increase in data encoded on a carrier..."

    In the article, the statement above is in error.

    True, a 256-QAM constellation does contain four times as many symbols as a 64-QAM constellation. But modulation complexity does not translate linearly to bit rate. No, it translates logarithmically. Hence, 256-QAM conveys 8 bits (log₂ 256 = 8) per symbol, while 64-QAM conveys 6 bits (log₂ 64 = 6) per symbol. The increase is thus only 33 percent.

    As a result, increased modulation complexity quickly reaches a point of diminishing returns, since each doubling of symbol density advances symbol payload by only one bit. And each increase in modulation complexity requires greater CNIR for successful demodulation.
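    AJ's point can be checked directly: bits per symbol grow as log2 of the constellation size, so each doubling of constellation points buys exactly one more bit.

```python
import math

def bits_per_symbol(constellation_size):
    # A QAM constellation with M points conveys log2(M) bits per symbol.
    return math.log2(constellation_size)

b64 = bits_per_symbol(64)     # 6 bits per symbol (802.11n)
b256 = bits_per_symbol(256)   # 8 bits per symbol (802.11ac)
gain = (b256 - b64) / b64     # the "33 percent" increase above

# Diminishing returns: each doubling of symbol density adds only one bit.
for m in (64, 128, 256, 512, 1024):
    print(m, bits_per_symbol(m))
```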

    AJ
  • jing0201 - Tuesday, February 7, 2012 - link

    11ac gets higher throughput through wider bandwidth compared with 11n.
    BW=160 is optional in the US region.
    But how about in Europe and Japan?
    Referring to IEEE 802.11-09/0992r21, I can't figure out why the channelization (BW=80 and BW=160) is colored red. What does it mean?
