Original Link: https://www.anandtech.com/show/4289/verizon-4g-lte-two-datacards-wifi-hotspot-massively-reviewed
Verizon 4G LTE: Two Datacards and a WiFi Hotspot Massively Reviewed
by Brian Klug on April 27, 2011 12:11 AM EST - Posted in
- Samsung
- Verizon
- LTE
- Smartphones
- 4G
- Pantech UML290
- USB551L
- Mobile
- MDM9600
So I have a confession to make—I’m late on this one. Way late. I managed to catch strep throat, came down with a high fever, then a sinus infection, and as a result missed my goal of having everything Verizon 4G LTE wrapped up and published a few weekends ago. One thing led to another, and I promised a number of readers both in emails and on Twitter that it would be done long before it ended up coming to fruition. I think I’m going to add a week to all my time estimates from now on, just to be safe. Apologies if I made you refresh obsessively a few times there.
That said, it isn’t entirely a loss. Over the past month, I somehow have found myself getting slowly buried in literally every single Verizon 4G LTE device (with the exception of the LG VL600 data card) and that’s a good position to be in.
The story of our LTE testing actually started before MWC with the Pantech UML290, and since then, each time a new device has shown up, I’ve hopped in my car, driven two hours to Phoenix (the nearest LTE market), and spent a sleepless 48 hours testing battery life, speeds, and stability. It’s been a lot of testing, driving, and collecting data. I’ve recorded 542 speed test runs on 4G LTE as a result, and many more on EVDO for comparison. There’s a ton of stuff to go over, so to keep things manageable, I’ve split the review down the middle. This half is everything about Verizon 4G LTE from a cellular perspective, including two data cards and a WiFi hotspot. The other half is just the HTC Thunderbolt.
Introduction to Cellular Network Evolution
Before you dive into our review of the Pantech UML290, Novatel Wireless USB551L, and Samsung SCH-LC11, it’s worth having a discussion about what “4G,” and more specifically LTE, really is. To that end, I think it’s also worth taking a look back at the evolution of wireless network tech from a historical perspective. It’s usually odd to start a story out this way, but it really does give perspective on how far the mobile network story has come since its inception. Crack open any mobile broadband book, and you’ll read a narrative something like this one.
In the 1G days, all we cared about was enabling some very basic things we take completely for granted now: basic voice telephony (analog), mobile billing, roaming from carrier to carrier. Simple problems like multiplexing and managing handover were tackled, and capacity wasn’t a huge concern because of relatively limited adoption due to price. Then came 2G in the form of CDMAone/IS-95 and GSM, which brought more voice capacity (digital), and very basic data connectivity. As adoption increased, more and more capacity was necessary, prompting 3G planning.
Each of the two camps then formed their own 3G projects for improved data speeds (3GPP for GSM family technologies, 3GPP2 for CDMA family technologies), the results of which were WCDMA and CDMA2000, respectively. The W in WCDMA stood for “wide,” since 3GPP settled on relatively wide 5MHz channels compared to CDMA2000’s 1.25MHz channels. The original suite of “3G” technologies didn’t meet ITU-R goals set at IMT-2000 which put a 3G throughput target at 2Mbps, and both the 3GPP and 3GPP2 camps went back to the drawing board with a new focus on data.
3GPP2’s solution was HRPD (High Rate Packet Data) which we now know as EVDO (EVolution Data Optimized), and 3GPP came up with HSPA (High Speed Packet Access). The big differentiator between the two historically has been that HSPA offered simultaneous voice and data multiplexed on the same 5MHz carrier, while the 3GPP2 solution required a separate 1.25MHz CDMA2000-1x carrier for voice. 3GPP2 went on to mitigate the lack of simultaneous voice and data with a VoIP solution in EVDV and SVDO, but it hasn’t seen adoption. The 3GPP camp improved on GPRS data rates with EDGE as well. In modern cellular data networks, HSPA and HRPD (EVDO) have become the dominant 3G players we’re used to seeing.
That’s a hugely oversimplified look at the evolution of the two most popular families of cellular access technologies, but there’s a fairly obvious trend which emerges. Focus has gradually shifted away from delivering more and more voice capacity, and settled on delivering faster and faster data. Voice’s place in the broader picture is just another service atop a data connection, or on a legacy network technology, at least for the time being.
A Verizon 4G LTE eNodeB—the LTE antennas are the bigger ones on the outside
If you’ve been paying attention at all, chances are that you’re pretty familiar with the data scenario everywhere—3G networks based on tech from both the 3GPP and 3GPP2 camps are strained to capacity. The short term solution is to deploy more and more carriers (channels) and linearly scale capacity, but that requires more and more spectrum. The long term solution is even more spectrally efficient multiplexing schemes, smart antennas, and spatial multiplexing, which offer more efficient use of the same spectrum.
The story of 4G thus far has been unfortunately dominated by semantics surrounding what suite of network tech qualifies as being truly fourth generation. Remember how I mentioned that ITU-R set some guidelines way back for what should be considered the bar for 3G? Back then it was 2Mbps while moving. The ITU-R did a similar thing for 4G, and that original guideline was an optimistic 1Gbps stationary and 100Mbps with mobility. It helps sometimes to have actual goals. The exact quote gives a bit more leeway:
“The goals for the capability of systems beyond IMT-2000 are up to approximately 100Mbps for high mobility such as mobile access and up to approximately 1Gbps for low mobility such as nomadic/local wireless access around the year 2010. These goals are targets for research and investigation and may be further developed in other ITU Recommendations, and may be revised in the light of future studies.”
In October, ITU-R recognized LTE-Advanced and WiMAX equivalent (P802.16m) as true 4G technologies that met the 1Gbps stationary and 100Mbps mobility requirements, in addition to a number of other guidelines. In December, however, ITU-R relented and declared that both LTE and WiMAX (as they’re deployed right now) can be called 4G. However, part of this hedging was one more statement—“[in addition,] evolved 3G technologies providing a substantial level of improvement in performance and capabilities” also qualify to be considered 4G.
This essentially is leeway to allow HSPA+ which offers some of the same evolutionary enhancements and features such as higher order modulation, MIMO, and multicarrier to also qualify as 4G. Without them, I think it’s fair to argue that it isn’t really quite the same level of advancement.
In reality, the ITU doesn’t have any ability to police what marketers or carriers bill as 4G. Heck, they could start calling things 5G or 6G tomorrow. One friend of mine sarcastically has his N900 set to show “6G” when connected to 3G. Ultimately though, the ITU should still be considered an authority for setting the bar somewhere.
LTE Network Tech Explained
If you look at the evolution of wireless networks from generation to generation, one of the clear generational delimiters that sticks out is how multiplexing schemes have changed. Multiplexing of course defines how multiple users share a slice of spectrum, probably the most important core function of a cellular network. Early 2G networks divided access into time slots (TDMA—Time Division Multiple Access); GSM is the most notable for being entirely TDMA.
3G saw the march onwards to CDMA (Code Division Multiple Access), where each user transmits across the entire 5 or 1.25MHz channel, but encodes data atop the spectrum with a unique pseudorandom code. The receiving end also has this pseudorandom code and decodes the signal with it; all other signals just look like noise. Decode the signal with each user’s pseudorandom code, and you can share the slice of spectrum with many users. As an aside, Qualcomm initially faced strong criticism and disagreement from GSM proponents when CDMA was first proposed because of how counterintuitive sharing spectrum this way seemed. Well, here we are with both 3GPP and 3GPP2 using CDMA in their 3G tech.
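If the idea of everyone transmitting at once on the same frequencies sounds impossible, a toy example helps. Below is a minimal sketch in Python/NumPy (purely illustrative: a made-up spreading factor and random codes, not anything from a real CDMA standard) showing two users sharing the exact same spectrum simultaneously:

```python
import numpy as np

np.random.seed(0)
CHIPS_PER_BIT = 64  # illustrative spreading factor, not a standard's value

def make_code():
    # Pseudorandom +/-1 chip sequence standing in for a real spreading code.
    # Random codes are only *nearly* orthogonal, which is enough here.
    return np.random.choice([-1, 1], CHIPS_PER_BIT)

def spread(bits, code):
    # Map bits {0,1} to symbols {-1,+1}, then multiply by the chip sequence
    symbols = 2 * np.array(bits) - 1
    return np.concatenate([s * code for s in symbols])

def despread(signal, code):
    # Correlate each chip-length window against our code; the other user's
    # contribution correlates to roughly zero and just looks like noise
    chunks = signal.reshape(-1, CHIPS_PER_BIT)
    return (chunks @ code > 0).astype(int)

code_a, code_b = make_code(), make_code()
bits_a, bits_b = [1, 0, 1, 1], [0, 1, 1, 0]

# Both users transmit over the entire channel at the same time
channel = spread(bits_a, code_a) + spread(bits_b, code_b)

print(despread(channel, code_a))  # [1 0 1 1] -> user A recovered
print(despread(channel, code_b))  # [0 1 1 0] -> user B recovered
```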
Regardless, virtually all the real 4G options move to yet another multiplexing scheme called OFDMA (Orthogonal Frequency Division Multiple Access). LTE, WiMAX, and the now-defunct UMB (the 3GPP2 4G offering) all use OFDMA on the downlink (or forward link). That’s not to say it’s something super new; 802.11a/g/n use OFDM right now. What OFDMA offers over the other multiplexing schemes is slightly higher spectral efficiency, but more importantly a much easier way to use larger slices of spectrum and different size slices of spectrum—from the 5MHz in WCDMA or 1.25MHz in CDMA2000, to 10, 15, and 20MHz channels.
We could spend a lot of time talking about OFDMA alone, but essentially what you need to know is that OFDMA makes using larger channels much easier from an RF perspective. Engineering similarly large channel size CDMA hardware is much more difficult.
In traditional FDMA, carriers are spaced apart with large enough guard bands to guarantee no inter-carrier interference occurs, and then band-pass filtered. In OFDM, the subcarriers are generated so that inter-carrier interference doesn’t happen—that’s done by picking a symbol duration and creating subcarrier frequencies that each fit an integer number of cycles into that duration, with adjacent subcarriers differing by exactly one cycle. This relationship guarantees that the overlapping sidebands from other subcarriers have nulls at every other subcarrier’s center frequency. The result is the interference-free OFDM symbol we’re after, and efficient packing of subcarriers. What makes OFDMA awesome is that at the end of the day, all of this can be generated using an IFFT.
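To show how little machinery that actually takes, here’s a minimal Python/NumPy sketch (a 64-point FFT chosen for convenience; 802.11a/g happens to use the same size, and none of LTE’s actual numerology is modeled) of an OFDM symbol being built with one IFFT and taken apart with one FFT:

```python
import numpy as np

np.random.seed(1)
N, CP = 64, 16  # subcarrier count and cyclic prefix length (illustrative)

# One QPSK constellation point per subcarrier; in OFDMA, different users
# would own different subsets of these subcarriers
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
tx_symbols = qpsk[np.random.randint(0, 4, N)]

# A single IFFT generates all N overlapping-but-orthogonal subcarriers
time_signal = np.fft.ifft(tx_symbols)

# Prepend a cyclic prefix (a copy of the tail) to absorb multipath echoes
ofdm_symbol = np.concatenate([time_signal[-CP:], time_signal])

# Receiver: drop the prefix, FFT back to the frequency domain
rx_symbols = np.fft.fft(ofdm_symbol[CP:])

# Orthogonality means every subcarrier is recovered untouched by neighbors
print(np.allclose(rx_symbols, tx_symbols))  # True
```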
If that’s a confusing mess, just take away that OFDMA enables very dense packing of subcarriers that data can then be modulated on top of. Each client in the network talks on a specific set of OFDM subcarriers, which are shared among all users on the channel through some pre-arranged hopping pattern. This is opposed to the CDMA schema where users encode data across the entire slice of spectrum.
The advantages that OFDMA brings are numerous. If part of the channel suddenly fades or is subject to interference, subcarriers on either side are unaffected and can carry on. User equipment can opportunistically move between subcarriers depending on which have better local propagation characteristics. Even better, each subcarrier can be modulated appropriately for faster performance close to the cell center, and greater link quality at cell edge. That said, there are disadvantages as well—subcarriers need to remain orthogonal at all times; if frequency offsets aren’t carefully preserved, orthogonality is lost and the link fails due to inter-carrier interference.
Again, the real differentiator between evolutionary 3G and true 4G can be boiled down to whether the air interface uses OFDMA as its multiplexing scheme, and thus supports beefy 10 or 20MHz channels—LTE, WiMAX, and UMB all use it. Upstream on LTE uses SC-FDMA, which can be thought of as a sort of precoded OFDMA. One area where WiMAX is technically superior to LTE is OFDMA on the uplink, where it in theory offers faster throughput.
There are other important differentiators like MIMO and 64QAM support. HSPA+ also adds optional MIMO (spatial multiplexing) and 64QAM modulation support, but even the fastest HSPA+ incarnation should still be differentiated somehow from true 4G.
Again, OFDMA doesn’t inherently mean better spectral efficiency. In fact, with the same higher order modulations, channel size, and MIMO support, the two are relatively similar. The difference is that OFDMA in LTE enables variable and much larger channel sizes. This table from Qualcomm says as much:
Keep in mind, the LTE device category here is category 4.
Launch LTE devices with MDM9600 are category 3.
LTE heavily leverages MIMO for spatial multiplexing on the downlink, and three different modulation schemes—QPSK, 16QAM, and 64QAM. There are a number of different device categories supported with different maximum bitrates. The differences are outlined in the table on the following page, but essentially all the Verizon launch LTE devices are category 2 or 3 per Verizon specifications. Differences between device categories boil down to the multi-antenna scheme supported and internal buffer size. Again, the table shown corresponds to 20MHz channels—Verizon uses 10MHz channels right now.
One of the limitations of WCDMA UMTS was its requirement of 5MHz channels for operation. LTE mitigates this by allowing a number of different channel sizes—1.4, 3, 5, 10, 15, and 20MHz channels are all supported. In addition, the LTE specification supports both time division duplexing (TDD) and frequency division duplexing (FDD) for uplink and downlink. Verizon right now has licenses to the 700MHz upper C block (band 13) in the US, which is 22MHz of FDD paired spectrum. That works out to 10MHz slices for upstream and downstream, with an odd 1MHz on each side, presumably left as guard band.
All of what I’ve described so far is part of LTE’s new air interface—EUTRAN (evolved UMTS Terrestrial Radio Access Network). The other half of the picture is the evolved packet core (EPC). The combination of these two forms LTE’s evolved packet system. There are a lot of e-for-evolved prefixes floating around inside LTE, and a host of changes.
More about LTE and Implementation Details
First off, LTE is entirely IP-switched. Gone are circuit-switched components, and in their place is an all-IP network. There’s one radio component, the eNodeB (e again for evolved) that user equipment talks with. Likewise, there are really only three main components in the evolved packet core. First is the PDN Gateway, which faces the internet, outside networks, and serves as the anchor point between 3GPP and 3GPP2 (1xRTT and EVDO) networks. The second component is the serving gateway (SGW), which essentially is a router facing all the eNodeBs on the network. The final part of the network is the mobility management entity (MME), which provides the control interface for mobility, authentication, and plays a critical role in handover.
LTE Network Architecture with 3GPP2 Interworking, or very similar to how Verizon's
current LTE/EVDO network is architected. Source: Alcatel Lucent LTE Poster
Every LTE device gets a persistent IP address from the PDN Gateway at power on, which it keeps for the duration of its time attached to the network. Contrast that to UMTS, where equipment is assigned a per-data-session IP address, though in practice it ends up being the same one for the duration of attach. Every component inside the evolved packet core has an internal IP address. To reduce idle-active time and signaling, there are fewer signaling states in LTE than even HSPA+, and much faster state changing. The result is a much simplified architecture that lets devices start transacting data faster and get to sleep quicker, which means lower latency. It’s a network architecture completely designed around delivering data throughout the network efficiently.
I mentioned earlier that there’s an obvious emphasis in almost all the new 4G networks on data first, voice second. That’s primarily a result of the fact that data use has passed an inflection point and continues to explode, whereas voice use has remained relatively constant. As a result, there’s no actual voice support in 3GPP Release 8 (the current LTE deployment). Voice support only comes in Release 9. As an aside, only 3GPP Release 10 enables LTE Advanced which fulfills those ITU rules that originally defined “real” 4G (100Mbps mobile, 1Gbps stationary).
Further, voice on LTE doesn’t really exist right now even in VoIP form because handover in practice isn’t entirely perfect. There’s an odd second or so long pause that happens sometimes as the data session context is transferred from eNodeB to eNodeB (or back to the serving gateway, then across), though in practice handovers should be around 50ms. Remember that in LTE, all handovers are hard handovers. As network deployment continues, base station and equipment manufacturers will no doubt make handover more and more refined and networks will begin to be robust enough for VoLTE to make practical sense. For the immediate future, however, voice still will require a hard handover to 3G UMTS in the 3GPP camp, or 1x voice on the 3GPP2 camp.
In the current deployment, incoming call signaling happens on LTE to the handset, the phone does a hard handover to 3G for the call to take place, and after the call is finished the handset hands back up to 4G. Verizon’s 4G LTE deployment uses good ol’ 1xRTT for voice, and 3GPP camp UMTS network operators (like AT&T) will have devices fall back to 3G UMTS or even 2G GSM for voice. The HTC Thunderbolt is a special case since it has an interesting dual modem architecture that enables simultaneous 1x voice on one modem (the baseband on the MSM8655) and 3G or 4G data on another baseband (MDM9600).
I guess that also serves as a decent segue into how data coexists with 3G networks in a 4G LTE network. Since LTE is a 3GPP camp technology, the situation is relatively simple for legacy 3GPP networks. In such a mixed environment, UTRAN (from WCDMA) coexists with EUTRAN from LTE through some emulation at the mobility management entity. Access to the internet happens through the PDN gateway for both the legacy network and LTE. This is how AT&T and most of the world’s carriers running GSM/UMTS/LTE will initially deploy. As things evolve and the 3G side of 3GPP also gets updated to Release 8, there’s better native support for UTRAN and EUTRAN to both maintain their own respective connections without a PDN gateway.
LTE Network Architecture with 3GPP Interworking, but pre-release 8, or very similar to
how AT&T's initial LTE architecture might look. Source: Alcatel Lucent LTE Poster
On the other side of the spectrum is how Verizon and other 3GPP2 camp networks will deploy LTE. This is what we tested and what’s already out in the wild right now. The situation here is a bit more interesting, since the network architecture has to be able to hand data sessions from LTE back to EVDO/1x and maintain IP addresses and data sessions across both. To do so, it uses eHRPD (evolved HRPD, as opposed to the plain High Rate Packet Data used in EVDO). Again, all this really means is that traffic from both LTE’s EUTRAN and 3G eHRPD is anchored to the internet through the PDN Gateway.
To make 3GPP and 3GPP2 coexist, publicly routable IP addresses are no longer assigned directly to clients. The result is that clients don’t get their own truly public IP address. The downside then becomes that clients can’t run servers of their own. That’s great for carriers trying to enforce their subscriber contracts, but terrible for end users trying to use LTE for applications that require them—VoIP, VPN, and matchmaking-based multiplayer games are easy examples. You can make a tradeoff and still use HRPD with routable IP addresses on most modems, but LTE and the default coexistence 3G/4G modes on user equipment mandate eHRPD.
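If you’re curious whether a given connection is affected, the check is straightforward. Here’s a hedged sketch in Python (the example addresses are made up for illustration) that tests whether an assigned address falls in RFC 1918 private or RFC 6598 carrier-grade NAT space:

```python
import ipaddress

def is_publicly_routable(addr: str) -> bool:
    # RFC 1918 private and RFC 6598 carrier-grade NAT ranges aren't
    # reachable from the internet, so no hosting servers behind them
    ip = ipaddress.ip_address(addr)
    return not (ip.is_private or ip in ipaddress.ip_network("100.64.0.0/10"))

print(is_publicly_routable("10.184.22.7"))  # False: NATed, no inbound connections
print(is_publicly_routable("8.8.8.8"))      # True: a publicly routable address
```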
The last part of the puzzle is what speeds are like, in theory. This is a bit more complicated since LTE includes support for a variety of different channel sizes (1.4, 3, 5, 10, 15, and 20MHz) in both TDD and FDD modes and user equipment categories (with different MIMO and modulation scheme) support. As a result, what maximum theoretical speed is truly the right one depends on how much spectrum carriers support in each market, and how fancy of a device you have. LTE only really shows speeds way beyond 3G when given wider channels, if you assume the same set of features (higher order modulation schemes, MIMO, etc.) right now.
Verizon, as mentioned earlier, is currently running 10MHz FDD channels, AT&T is trying to acquire enough AWS spectrum to enable 20MHz channels, and Clearwire has enough spectrum between 2.5 and 2.6GHz for 20MHz FDD, which they’re already running trials of in Phoenix. I’d love to test that out.
All of Verizon’s launch LTE devices we’ve tested are built around Qualcomm’s MDM9600, which is a category 3 device with 2x2 MIMO and downlink beamforming.
Only the LG VL600 is based on a different cellular baseband: LG’s own L2000 chipset for LTE and the MSM6800A for CDMA2000/EVDO. Verizon’s open access specifications note that all 4G LTE devices are category 2 or 3. In addition, Verizon’s LTE deployment sits on 3GPP band 13, which again uses 22MHz of FDD paired spectrum in the upper C block. That works out to two 10MHz channels allocated for uplink and downlink. For comparison, AT&T currently has mostly lower B block licenses, with some markets having both lower B and C.
As a result, the correct maximum theoretical speeds on Verizon’s 4G LTE are really 73Mbps down and 36Mbps up for category 4 devices, and 50Mbps down and 36Mbps up for category 3 devices.
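It’s worth showing where numbers of that magnitude come from. Below is a back-of-envelope sketch in Python using Release 8 numerology; the raw PHY arithmetic is standard LTE, but the ~27% overhead figure for reference signals, control channels, and coding is my own rough assumption rather than anything Verizon publishes:

```python
# Back-of-envelope downlink rate for Verizon's 10MHz FDD carrier
# (band 13: 22MHz paired = 10MHz each way, plus those odd 1MHz leftovers)

resource_blocks  = 50   # a 10MHz LTE carrier carries 50 resource blocks
subcarriers_rb   = 12   # each RB spans 12 subcarriers at 15kHz spacing
symbols_subframe = 14   # OFDM symbols per 1ms subframe (normal cyclic prefix)
bits_per_symbol  = 6    # 64QAM
spatial_streams  = 2    # 2x2 MIMO, two spatial streams

raw_bps = (resource_blocks * subcarriers_rb * symbols_subframe
           * bits_per_symbol * spatial_streams) / 1e-3  # per 1ms subframe

print(f"raw PHY rate: {raw_bps / 1e6:.1f}Mbps")  # 100.8Mbps

# Reference signals, control channels, and channel coding eat roughly a
# quarter of that, landing near the 73Mbps category 4 figure; category 3
# hardware additionally caps out around 50Mbps at 10MHz.
overhead = 0.27  # assumed, not an official figure
print(f"usable estimate: {raw_bps * (1 - overhead) / 1e6:.1f}Mbps")  # ~73.6Mbps
```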
Of course, those two speeds are achievable only in best case signal environments with two spatial streams. In reality, real world speeds will be lower as we’ll show shortly. That said, the speed gains still are an order of magnitude above the current state of 3G speeds. In addition, it seems highly likely that carriers will initially shape traffic to simulate loaded cell performance and not create unrealistic performance expectations.
LTE also supports both IPv4 and IPv6. I was always assigned an IPv6 address on 4G LTE, but sadly didn’t test this enough to come to any definitive conclusions about it. On EVDO, only an IPv4 address is assigned. The IPv6 address that I was assigned did seem to be public.
Finally and importantly, Verizon’s 4G LTE implementation stores both USIM and CSIM data on their UICC. As a result, there’s no longer a need to call support, hand them an ESN, and wait to change between devices. I successfully took Verizon's preprovisioned SIM from the UML290 and stuck it in the Samsung SCH-LC11 hotspot, and immediately got online without any issue. I'd expect full device portability to be a non-issue between devices that attach to similar service plans. This is a huge feature if you intend to swap between phones a lot or are in a situation where sometimes a USB modem makes most sense, sometimes a hotspot.
Pantech UML290
So enough about that, how about the devices we’ve talked about? First up are the two data cards I’ve already mentioned, the Pantech UML290 and Verizon/Novatel USB551L. There’s a third card as well, the LG VL600, but we haven’t had any time to get hands on with it.
The Pantech UML290 has a flip out design to preserve its compact size when being transported, and allow an orthogonal antenna. Swing the black portion 90 degrees clockwise and it’ll flip open, revealing the USB connector which pops out. There’s a sticker inside which mentions that the device should be left flipped open for best reception. There’s an entertaining typo on the sticker as well that gave me a moment’s pause after a friend pointed it out.
This swivel-out black area does conceal an orthogonal antenna which I’ll show later. The whole data card can rotate and swivel about the USB connector as well.
Flipped open, at the top of the UML290 is the chiclet-shaped LED status indicator. It blinks a fast blue when attached to the cellular network, and red when connected to the computer but with no network attach. In all honesty, because the status LED doesn’t blink on data activity it isn’t entirely useful. It’s more of a binary connected yes/no status indicator than a real activity indicator.
On the bottom of the device are two black removable covers that hide optional external GPS and cellular antenna connectors. There’s a test junction above the one for cellular connectivity. The other is clearly marked for GPS and you can hook an antenna up to it. While we’re talking about GPS, the UML290 as it originally shipped does not have GPS support.
The official feature status last I checked was “not at launch,” and although the latest firmware update did enable a COM port, I still have never successfully gotten GPS working. In Windows, this COM port is even labeled NMEA for GPS, so I assume support is coming at a later date.
On the side of the device is the SIM card slot. Verizon uses full size USIMs that come in a big card just like you’d expect them to. Punch them out from the larger card, and you’re good to go. Each LTE device we’ve tested has come with the exact same Verizon 4G SIM card with some literature and the punch card. Other contents in the UML290 box include an extension USB cable with laptop clip, and all the requisite paperwork which you can check out in the gallery.
This wouldn’t be AnandTech without me disassembling something. After I performed all my required testing on the UML290, I decided to bust it open.
Inside you can clearly see just how many antennas are involved in making LTE work. There are two clearly visible on the front and back side of the PCB (though this is likely for GPS), along with two U.FL connectors and pigtails which snake through a port leading to the black swivel antenna. There’s a small black thermal pad that makes contact with the back side of one EMI can.
Further disassembly proved too challenging, and I was unable to get the EMI cans off the PCB. That said, were we to remove them, we’d likely see the MDM9600 and some adjacent NAND, as well as a bunch of power management ICs. It’s impressive, however, that something as simple as a data card has four antennas inside.
Build quality on the Pantech UML290 is surprisingly good. Short of the connector swivel being a bit loose, there’s nothing really noteworthy about the device’s physical construction. It’s certainly a bit larger than other 3G data cards from the previous generation, but again this is a first generation 4G modem.
Verizon USB551L
The second card we tested is a much newer arrival, the Verizon USB551L, made by Novatel Wireless. The USB551L is the first of Verizon’s LTE data cards to come with support for OS X out of the box (though the Pantech UML290 and LG card now also do so with a firmware update), and it displays that support proudly on the box. More on this in a second.
The USB551L is considerably lighter and cheaper feeling than the UML290. Something about the whole device just lends it a less durable feel.
There’s a simple plastic cover over the SIM card slot.
Unlike most devices, the USB551L holds the SIM in with a plastic clip, and requires a small stick to eject it—you press the plastic part down, then slide it out. It’s definitely the first time I’ve seen this mechanism used to hold a SIM in place.
The USB connector rotates out and seems to be held at a particular location using some spring loaded mechanism. It’s a half height USB connector that has exposed gold contacts without the full surrounding USB plug.
Considering that most of the time it’s shielded in the closed position, that’s not a big deal. The problem with this construction is that there’s some other mechanism pressing against a metal contact on the connector which rotates. If you push the data card back far enough, it’ll lose the data connection entirely when the mechanism stops pressing against metal. It also always wants to sit at one orientation due to the spring loading.
If your notebook has USB ports on the back, pushing the laptop display back too far could rotate the card far enough to cause disconnect. It’s a potentially frustrating configuration. I ran into this when testing on my uber-old Inspiron 8500, for example.
On one side of the USB551L is an external antenna jack hidden under a rubber plug. This is most likely cellular only since the USB551L advertises no GPS support unlike the Pantech.
The majority of the USB551L is a black soft touch material, though the top and bottom are slippery glossy plastic. There’s a small vertical window to the right of the Verizon logo where the status LEDs are. Blue indicates LTE, green indicates EVDO, and blinking of either indicates activity.
Side by side, the USB551L and UML290 are roughly the same outline, though the UML290 is thicker and feels more dense. Both also come with appropriate USB extension cables.
I opened the USB551L after I was done with testing to investigate how it manages connectivity with a simpler, smaller package.
The USB551L seems to be held together using some one-way plastic tabs and adhesive. I pried it open and got it back together without problems, though it’s a bit more challenging than the UML290. Inside, you can see one antenna runs along the length of the device; the other is orthogonal and up at the top.
The one that runs the length of the device is held in contact with a pogo pin, the other with a gold pad.
The EMI cans on the USB551L pop on and off easily, allowing us an awesome opportunity to grab a shot of the MDM9600 running the show. You can also easily see its adjacent NAND, though I couldn’t make out any of the markings.
That USB connector situation is more visible now as well. I’m not entirely sure what’s going on here, but you can see two pads on both sides and a metal strip that clearly runs across the USB connector. Again this flip open system is spring loaded and likes to sit in one position. More pics in the gallery below:
Data Card Software: VZAccess Manager
The other part of the picture is software. Both modems use Verizon’s VZAccess Manager program on OS X and Windows. I guess that brings me to my primary complaint about both of these data cards—they don’t have internal storage partitions with the software stored on them.
In the past, data cards I’ve used seem to always have at very least a read only partition, sometimes even a microSD card drive letter, where drivers and management software were stored. The obvious reason for this was that you could then have a guaranteed way of getting online even if you didn’t have the driver CD with you. Without this, you can potentially be stuck in a maddening Catch-22 situation where you could conceivably get online, but only if you had the foresight to install drivers beforehand. I literally have a small pile of various modems for EVDO and UMTS that all have this simple feature.
So for some reason, all of Verizon’s launch 4G LTE modems lack an internal storage partition with the drivers, which is incredibly confusing. I sincerely hope newer modems have the feature, and that the reason this is lacking is purely due to the current architecture of these cards and not some legal preclusion.
I started using the UML290 before its first firmware update, and likewise before it had official OS X support. The UML290 needed a firmware flash from the Windows version of the VZAccess Manager; then I could download the OS X version of VZAccess Manager and use the UML290 without issue.
My initial testing results from the UML290 were taken on a Latitude XT running Windows 7, and later on an Inspiron 8500. However, I spent equal amounts of time just testing the two on a 2010 MacBook Pro and 2011 MacBook Pro.
VZAccess Manager on OS X 10.6.7
In Windows, VZAccess Manager has a bit more configuration detail and offers different options than on OS X, but overall the organization and functionality are the same. There are tabs for connectivity management, statistics, SMS/text messaging, and WiFi locations. At the bottom is a persistent bar displaying connectivity status, duration, activity, current cellular number (the data card still has a phone number for SMS), and signal strength. Hovering over the signal strength bars creates a bubble with more details about the current cell environment. When connected to 1x or EVDO, you get the current RSSI (Received Signal Strength Indication) for both 1x and EVDO. When connected to 4G LTE, you get both RSSI and SINR (Signal to Interference plus Noise Ratio).
I’m glad to see that SINR is being reported, since on LTE it’s very important to have a measure of noise and interference. Interference can come from reflections that exceed the cyclic prefix length (inter-symbol interference) and, more importantly, from Doppler shift, which causes loss of carrier orthogonality when the user equipment is moving quickly (inter-carrier interference). Thankfully, OFDM is engineered to cope with this.
In LTE, SINR is probably the most important link quality metric; however, I feel like SINR is being reported incorrectly in VZAccess Manager. Based on the definition of SINR and the data I collected, it seems that there’s either a missing negative sign, they report the inverse, or something else is wrong. Higher SINR is better and should result in better throughput.
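For reference, the textbook definition is just signal power divided by the sum of interference and noise power, expressed in dB. A minimal sketch (Python, with made-up power values) shows why a correctly reported SINR should climb as conditions improve:

```python
import math

def sinr_db(signal_mw, interference_mw, noise_mw):
    # SINR = S / (I + N), in dB; higher is unambiguously better
    return 10 * math.log10(signal_mw / (interference_mw + noise_mw))

print(sinr_db(1.0, 0.01, 0.01))   # ~17dB: cell center, best throughput
print(sinr_db(0.02, 0.01, 0.01))  # 0dB: signal equals interference + noise
```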
VZAccess Manager by default pulls in your current usage statistics over SMS when you first connect. This is somewhat of a problem because it can force you to drop back to EVDO while the SMS is received. I disabled this in the client and never experienced any problems afterward. On Windows, you can also send AT commands by hitting control-T, which is pretty much what you have to do to reconfigure the proper APNs if you used the UML290 on OS X before the official firmware flash and update went out.
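Since that’s the standard 3GPP AT interface, you can also drive it from a script over the modem’s command port. A hedged sketch with pyserial follows; the COM port name is hypothetical, and AT+CGDCONT? is the standard TS 27.007 query for listing configured PDP contexts (APNs), not anything Verizon-specific:

```python
import serial  # pyserial

# Hypothetical port name; the UML290's command port enumerates as some
# COM port on Windows (the same interface control-T talks to)
with serial.Serial("COM4", 115200, timeout=2) as modem:
    modem.write(b"AT+CGDCONT?\r")  # list configured PDP contexts/APNs
    print(modem.read(512).decode(errors="replace"))
```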
Inside the speeds tab is a graph of upstream and downstream throughput. There are fields for average and maximum over the sampling window down below, and running totals for the current data session. To the right are the local IPv4 address and IPv6 address. Back before an update, the graph for speeds used to have some strange sampling that would show a very odd throughput profile with periodic dips. This is fixed in the newer version and is much smoother, reflecting the actual throughput profile.
One thing that’s a bit frustrating is that inserting a new card requires you to click detect before the card can be connected, which can sometimes take a while unless you always use the same datacard. I’ve put together a full gallery with screenshots from the Windows and OS X versions of the access manager suite. The majority of the interface is virtually identical between the two versions.
Probably the most frustrating thing right now however is that there’s a disparity in what software versions the cards use on OS X. As of this writing, all the cards use 7.6.3 on Windows. However, on OS X, the USB551L uses 7.2.3 instead of the newer and more stable 7.2.5 that the older VL600 and UML290 datacards use. As a result, the newer USB551L (which is the only one with OS X support noted on the box) actually has worse OS X support than the two older cards that gained it from a firmware flash.
Further, I experienced a substantial number of kernel panics while using the USB551L on both a 2010 and 2011 MacBook Pro. Both computers had no kernel panics running 7.2.5 and the UML290. I experienced no instability or dropped connections with either card on Windows XP or 7; there, everything is perfectly stable. I have no doubt that all of these problems with the USB551L will be worked out with firmware updates and another software release that will bring everything up to version parity, but right now the situation is frustrating on OS X.
The Pantech UML290 as of this writing is going for $49.99 on a two year contract ordered online, or $249.99 without. The Novatel USB551L is selling for $99.99 on a two year contract ordered online, or $249.99 without. It’s odd to me that the UML290 is being discounted to $50 cheaper than the USB551L and VL600, especially considering that the UML290 in my opinion has vastly better build quality and current OS X support. It’s entirely possible however that it isn’t selling as well due to its rather large flip-out design.
Samsung SCH-LC11 WiFi Hotspot
The next item on the list is the first LTE-enabled portable WiFi hotspot, Samsung’s SCH-LC11, with which Samsung managed to beat Novatel to market.
The SCH-LC11 comes in a small box matching the style of virtually all of Verizon’s LTE gear. Inside is the hotspot, a smartphone-sized 5.55Wh battery, and a 0.7A USB charger which appears to be the same as I’ve seen shipped with the Galaxy S phones. It’s a pretty big battery in the SCH-LC11, especially compared to the ubiquitous MiFi 2200, which comes with a 4.25Wh battery.
The SCH-LC11 is almost the exact same outline shape and size as the MiFi 2200; it’s clear that Samsung had sights set on at least emulating some of this design.
Thickness-wise, the Samsung hotspot is about 2mm thicker due to the larger battery that runs the length of the device. Along those lines, it’s a bit heavier as well, at 81.5g compared to the original MiFi’s 58.2g.
Removing the battery cover is easy thanks to a thumb slot at one side. Peeling that off reveals a sticker with the device’s default SSID and password. It’s subtle, but putting the defaults under the battery cover makes a huge amount of sense, and I’m glad Samsung did this. I purchased a Virgin Mobile MiFi 2200 when it first came out to take advantage of its originally unlimited data plan; we’ve used it at conferences a few times, and I originally intended to review it before getting so backed up. One of the things that’s always bugged me about that device is that its default SSID and password are on a sticker right on the back, which seems like an incredibly shortsighted and awful way of safeguarding defaults that I wager a lot of users just stick with. It’s a subtle difference, but a welcome one. With the battery removed, you can see the slide-in SIM slot and two test points for the cellular RF chains.
On the top are the device’s status indicator LEDs; there’s one each for 4G, 3G, WiFi status and activity, and of course power. What’s really nice about these is that they actually change color based on signal strength: green means strong signal, yellow means weak, and red means no signal.
Clockwise from top left: Green, Yellow, Red. Yellow is hard to tell apart from green on the SCH-LC11.
WiFi glows green when no clients are connected, turns blue when clients are associated, and blinks on activity. Though newer MiFi designs use an e-ink display with signal bars, the Samsung hotspot’s use of LEDs that change colors gets the job done. This is a huge improvement from the original MiFi where checking status required a trip into the internal config pages.
The only major problem with these LEDs is the power LED, which is green between 20-100%, yellow between 6-20%, and red between 1-5%. As a result, it’s impossible to tell when the hotspot is fully charged, since it shows green over so much of the dynamic range. This definitely created problems for me multiple times, and sadly the only way to get a battery percentage is to head into the config pages and check.
The SCH-LC11’s design and build quality are both excellent. The majority of the device is a light soft touch material, with the exception of the glossy black strip where the status LEDs are. On the side is the device’s microUSB port which has a door similar to Galaxy S.
There’s not too much more to talk about regarding the Samsung hotspot’s physical appearance. It feels solid, isn’t too heavy, and maintains roughly the same outline as what people now expect portable WiFi hotspots to have. Like all of the LTE gear I’ve touched thus far, the SCH-LC11 is also built around Qualcomm’s MDM9600 for EVDO and LTE.
SCH-LC11 Continued
The next important thing is how the SCH-LC11 fares from the network side, along with other details about connectivity. One of the things the original MiFi was really criticized for was that by default, when connected over USB, it would stop working as a wireless access point and instead present itself to the host computer as a standard USB modem. There was an internal configuration file you could edit and then transfer back to the device to make it just charge, but most people didn’t bother. As a result, this behavior proved confusing and drew criticism.
On the Samsung SCH-LC11, the microUSB port serves only as a charging port. There’s no ability to use the device as a straight up datacard. I can understand why Samsung made that choice—it’s to avoid the same sort of confusing situation the MiFi was involved in—however, at least for me, this is a big downside.
If you’ve ever been to a conference in the last couple of years, chances are you’ve experienced the huge nightmare that 2.4GHz WiFi becomes in almost any talk, large venue, press conference, you name it. The MiFi has drawn its fair share of criticism for contributing to the 2.4GHz spectrum crowding problem, as it well should. At almost every conference, when that happens, there’s no option other than to completely stop using WiFi. However, when that happens, I generally plug the MiFi into my notebook and continue using it as a modem. In addition, there are other workplace-related restrictions that might occasionally disallow AP creation, where using the device as a USB modem might be incredibly useful.
With the Samsung as it is now, you can’t do that. We’re going to take a look at the MiFi 4510L shortly as well, but Novatel has made the same design choice and by default doesn’t allow the device to work as a USB modem.
The other part of the situation is WiFi connectivity itself. Again, 2.4GHz is incredibly crowded at conferences, leading to a complete standstill both on tradeshow floors and during every major event. Meanwhile, 5GHz ISM offers much more spectrum with many more nonoverlapping channels than the 2.4GHz ISM band, yet neither the Samsung SCH-LC11 hotspot nor the upcoming MiFi 4510L offer 5GHz wireless support. Again, this is just another case where (at least for my use profile) I’m still left looking for the ideal device.
The SCH-LC11 does have 802.11n support; however, it ships by default in 802.11b/g-only mode. I’d prefer to see “802.11g only” selected by default, if only because doing so helps mitigate the situation at conferences slightly. Moving away from DSSS in 802.11b and over to the OFDM schema in 802.11g/n does help the spectrum crowding problem slightly. Generally the situation there is further worsened by clients that immediately fall back to DSSS 802.11b rates (1, 2, 5.5, and 11Mbps) when SNR is bad, creating a feedback loop of problems. I could go on and on about this issue, but that’s really for another day. [Ed: In other words, tradeshows are a pain.]
Thankfully the SCH-LC11 does ship with channel selection set to auto, which likely does a quick poll of nearby APs and selects the least occupied channel. I did notice that the SCH-LC11 does periodically do channel reselection even when clients are connected, which leads to interrupted sessions and can be very frustrating. I ended up picking a channel manually and going with it. I also encountered better stability with things set to WPA2 only (AES).
I performed all of my battery life testing with 802.11b/g selected, and then did testing in 802.11b/g/n mode. In b/g/n mode, I saw clients associate at MCS index 7 at 73Mbps, showing that it does support the short guard interval and is using 20MHz channels. WiFi AP range on the SCH-LC11 seems inordinately large; I was able to stay connected to the AP everywhere in my house, which surprised me. I’d like to see an option going forward to decrease WiFi AP power significantly, both for battery savings and again to mitigate the insanity at conferences.
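That rate checks out against the 802.11n arithmetic: MCS 7 on one spatial stream in a 20MHz channel is nominally 72.2Mbps with the short guard interval versus 65Mbps with the long one, which is what the reported ~73Mbps association rate reflects. A quick sketch of the math in Python:

```python
# 802.11n, single stream, 20MHz channel, MCS 7 (64QAM, rate-5/6 coding)
data_subcarriers = 52
bits_per_subcarrier = 6 * 5 / 6  # 64QAM carries 6 bits, coded at rate 5/6

for name, symbol_time in (("short GI", 3.6e-6), ("long GI", 4.0e-6)):
    rate = data_subcarriers * bits_per_subcarrier / symbol_time
    print(f"{name}: {rate / 1e6:.1f}Mbps")

# short GI: 72.2Mbps  <- the rate the SCH-LC11's clients associate at
# long GI:  65.0Mbps
```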
There's a five user maximum even with the address pool set larger than 5.
Note up at the top the green circle shows the current number of associated clients.
The SCH-LC11 allows a grand total of five devices to connect. More on that in a moment. By default the SCH-LC11 sets up a 192.168.1.1/24 network and enforces five devices by only having a DHCP address pool between 192.168.1.2 and 192.168.1.7. The hotspot status and configuration page is at 192.168.1.1. The default configuration page password is also the AP password.
The configuration pages are actually nicely put together. Up top are signal strength indicators, and to the right is a battery visualization. I’d like it if these had a roll-over that would show battery percentage and RSSI, but you can check that in Configuration under Diagnostics. Everything under the Network and Security tabs is self-explanatory. There are options for port forwarding and MAC filtering under Security. Under Configuration you have the option to enable roaming and automatic modem connect, “Privacy Separator Enable” (which is just client isolation mode), and VPN passthrough. Strangely there’s no ability to manually set a preference for 3G EVDO or 4G LTE data; the SCH-LC11 will always prefer LTE over EVDO when possible.
Under Diagnostics is my favorite area. In here are the global traffic counters which are resettable and survive restarts. It’s amazingly easy to burn through data, as an aside. I burned through over 1GB in one day of testing.
The network page that lets you set the DHCP address range limits you to five devices. Up at the top you can see the current total number that are active. Interestingly enough, the address range you can submit is only enforced by some JavaScript. Disable JavaScript and the page won’t render. Use something like Tamper Data for Firefox, and you can set the address pool to any size. Unfortunately there’s still some enforcement being done elsewhere that continues to limit you to five devices.
If you click modem status, you actually get real, actionable information about battery and signal strength. That means signal strength for both EVDO and LTE (RSSI), which appears to be missing a negative sign but otherwise should be in units of dBm. Sadly there’s no SINR reported for LTE, which I’ve found again and again gives a vastly more realistic indicator of current connection quality. Down below it is battery status in actual percentage form as well. It’s a bit surprising to me that you have to dig this far to get battery status in numeric form, but here it is.
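If you end up scraping these pages for your own logging, correcting the apparent sign issue is trivial, assuming (as it appears) the page is simply reporting the magnitude. A one-liner sketch in Python:

```python
def normalize_rssi(reported):
    # Cellular RSSI in dBm is essentially always negative: around -65dBm
    # is a strong signal, -105dBm is nearly unusable. If the config page
    # drops the sign, restore it.
    return -abs(reported)

print(normalize_rssi(65))   # -65 (dBm)
print(normalize_rssi(-97))  # -97 (dBm, already sane)
```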
More SCH-LC11
So this brings me to my other chief complaint about the SCH-LC11. Even when connected to the beefiest of chargers, I found that the hotspot actually discharges faster than it can charge when it's connected to 4G LTE and actively serving clients with data. Remember that the USB charger that the hotspot ships with is 0.7A, 5V (3.5W). I tried the beefy 10W iPad charger, my similarly beefy 10W car charger, and a more generic 5W USB charger, and all failed to keep the SCH-LC11 from discharging while plugged in and transacting data. That’s positively mind-blowing to me.
For a while now, I’ve been secretly (well, not really secretly) running a fourth test on phones and wireless devices to measure WiFi hotspot battery life. Historically, WiFi tethering has been brutal on battery life, and I devised a test that I think is reasonably representative. For this test, I have two tabs of our standard page load test, and another two with flash, for a total of four tabs loading through a few dozen pages every 10 seconds. In addition, I have a 128kbps MP3 audio stream from smoothbeats.com playing in the background to keep data active constantly. There’s just one wireless client with an 802.11n WiFi card connected, though all this traffic reasonably approximates a few wireless clients all transacting data.
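The cadence of that load looks something like the sketch below; the URLs are placeholders, and the real test drives four browser tabs plus an audio stream in parallel rather than one serialized loop:

```python
import itertools
import time
import urllib.request

# Placeholder pages standing in for the real page-load suite
PAGES = ["http://example.com/page1", "http://example.com/page2"]
INTERVAL_S = 10  # each tab advances to the next page every 10 seconds

def fetch(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return len(resp.read())

# Loops until the hotspot's battery gives out and the fetches start failing
for url in itertools.cycle(PAGES):
    start = time.time()
    print(f"{url}: {fetch(url)} bytes")
    time.sleep(max(0, INTERVAL_S - (time.time() - start)))
```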
My initial battery life test results for the SCH-LC11 were actually thrown off because I expected the device to have fully charged while I was using it plugged in. After about a half hour of use, I was down to 82 percent in spite of being plugged in. I’ve seen the SCH-LC11 discharge on EVDO as well, though it doesn’t happen nearly as rapidly as it does on LTE. I didn’t do enough testing to find out whether the device will eventually settle at some equilibrium or discharge all the way to 0, which could make things frustrating if you intend on using the SCH-LC11 as your primary connection. Again, I’m just confused as to why the default charger is so small at 0.7A, and why the device doesn’t fully follow the USB charging spec and draw more current from my other 10W USB chargers.
Under power is the ability to set different auto power off timeouts, and a much appreciated "never" setting. I’ve put together a full gallery of the configuration pages for the SCH-LC11.
SCH-LC11 Disassembled
I did decide to open up the SCH-LC11 as well, to see what WiFi chipset and device architecture is behind the web interface. Construction of the SCH-LC11 lends itself to easy disassembly, with no void stickers atop screws. Just four screws and some easy plastic tabs, and the thing comes open.
There’s a bottom plastic ring that has an antenna flex cable for WiFi and two gold pads. That antenna runs some length down the side of the device.
There’s one monolithic PCB that spans the entire device, which immediately makes it pretty apparent why the Samsung hotspot is larger than the older MiFi, which packed the PCB alongside the battery. There’s a modular antenna assembly that snaps onto the PCB and two gold contacts. Again, two antennas are required for cellular right now thanks to the MIMO requirement in LTE. The EMI cans on the top and bottom of the board are super easy to remove and just snap on or off.
Inspecting the PCB we can see at the very center the Qualcomm MDM9600 which runs the show. Top left is a Qualcomm PM8028 PMIC which does power management and status LED control. Left of it is the hotspot’s internal configuration reset switch, and to the right is microUSB. To the right of the MDM9600 is a Qualcomm RTR8600 multi-band/mode RF transceiver for LTE bands, and down below it sits an Avago ACFM-7107 quadplexer.
Finally at the other side of the board is the SCH-LC11’s WiFi chipset, the Qualcomm WCN1312, which provides 802.11b/g/n support on the 2.4GHz band.
On the reverse side of the board is a 1Gb (128MB) NAND for the MDM9600. There are more shots of everything in the gallery.
The Samsung SCH-LC11 is almost entirely Qualcomm design wins. I’m a bit surprised that the MDM9600 can drive the entire device management config and network routing without the use of an external application processor, since I was under the impression that support for device manufacturers to use the onboard ARM core would only come in the next refresh of Qualcomm’s MDM lineup with the MDM9615 and MDM8615. There is, however, a difference between the markings on the MDM9600 inside the USB551L datacard and the SCH-LC11: the former is marked A2W126.0, the latter A2W206.0, which looks like some kind of hardware versioning. Perhaps support for using the ARM processor came later and Samsung leverages that. Either way, to me it’s impressive that NAT routing and firewall rules are being handled at speeds sometimes over 20Mbps with no dedicated application processor.
Network performance on the SCH-LC11 is on par with other LTE gear overall, though a bit slower as I’ll show in the next section. Where the device really shines however is stability. I’ve found that both of the USB modems intermittently disconnect and sometimes require an unplug, replug, and search for device to regain connectivity. The SCH-LC11 consistently clung to 4G LTE everywhere I tested while in the 4G market, and clients were essentially always able to transact data. Samsung has done a nice job abstracting the sometimes erratic modem connectivity that happens during handover away in the device.
4G LTE Performance Comprehensively Explored
So I’ve managed to do an absolute ton of drive testing with virtually every device. I’ve recorded 542 data points in total, and run countless more that I didn’t record just for subjective measure. My testing was done primarily in Phoenix, AZ over a number of weeks, though the dates are somewhat different for each device. In Phoenix, I drove a complete loop around the 202 while running continuous tests, and also tested downtown in central Phoenix, in Scottsdale, Mesa, Gilbert, Chandler, Tempe, and inside every major mall. I also tested the Pantech UML290 in Houston in the George Bush Intercontinental Airport, in Newark Liberty International Airport, and in Washington Dulles International Airport. Remember that even though in some markets Verizon has more 700MHz spectrum, right now Verizon’s LTE network only uses 10MHz channels, so performance really is comparable everywhere right now.
This first set of tests was taken using the Speedtest.net flash test to the Phoenix, AZ server. I tested across Windows 7 and XP on a Latitude XT, and later on a 2010 and 2011 MacBook Pro. I didn’t notice any throughput differences or other big differences between Windows and OS X beyond the software stability issues mentioned earlier. I’ve divided things among devices, though I haven’t collected as much data with the USB551L as I have with the Pantech UML290. That said, given the same chipset you’d expect similar performance, and I’ve tested both at the same locations and found them virtually identical.
First up is the all-important downstream measure. I’ve created histograms here, to show the complete performance distribution.
There’s been a lot of discussion centering around whether the speeds of these launch devices are representative of what loaded cells will look like. It’s really impossible for us to say for certain; however, network operators have been known in the past to shape traffic so as to not create unrealistic expectations when cells start being loaded. I think there’s likely some of that going on, in addition to some genuine slowdown. I started the UML290 testing immediately after launch (1/22/11), and the data for it represents effectively the earliest set of samples I have. Testing on the Thunderbolt and Samsung SCH-LC11 was done later (mid April) and shows a bit of a drop in throughput, with a nice Gaussian distribution starting to form around 10Mbps. With even more testing, I fully expect we’d see that shape even more clearly. It’s obvious to me at least that some loading has happened, but speeds are still impressive. Keep in mind that Verizon’s advertised down speeds for their 4G LTE network are between 5 and 12Mbps.
My highest measured downstream throughput from speedtest.net on 4G LTE was actually on the Thunderbolt, at 32.96Mbps. I did see a peak of over 39Mbps briefly at one point on the Pantech UML290, as shown below.
Next is upstream, of which again I’ve created histograms. Note that the Thunderbolt is a 2x1 device while the others are 2x2, which explains some of the upstream throughput distribution difference. Verizon advertises upstream speeds of between 2 and 5Mbps, which seems justified if a bit less conservative than their downstream throughput advertising.
Finally is latency, which speedtest.net doesn’t characterize perfectly, but it does provide representative measures.
Sub-100ms latency most of the time is impressive, and makes LTE more than suitable for online gaming. I’ve put together a short video showing that playing CS:S over LTE is more than bearable.
I have found that longer-than-expected latency is often due to the fact that the PDN Gateway I connected to is located in California. This is very similar to the Home Agent behavior in EVDO, where cellular data is anchored to the internet somewhere very different from your actual physical location.
On the Pantech UML290 I also collected RSSI and SINR alongside each test as it started. This was somewhat laborious, but I collected 218 tests alongside signal characteristics to illustrate how performance varies at cell center and cell edge. Note again that these numbers are from VZAccess Manager, and I believe that SINR is being reported incorrectly, as higher SINR should correlate with better throughput. What I’ve plotted is data straight from VZAccess Manager. If you’d like, I’ve zipped the data from the Pantech UML290 with RSSI, SINR, downstream, upstream, and latency in CSV format here for anyone wanting to play around with it.
Throughput
Latency
Speedtest.net is great, but I also ran a number of much longer tests just to prove performance isn’t limited to a rather small and quick speed test. I’m a fan of downloading huge things over both FTP and well-seeded legal torrents. I have a 100Mbps synchronous box seeding Ubuntu torrents; these are generally among the best-seeded torrents around, and the swarm easily saturates 300+Mbps connections. I’ve seen speeds between 10 and 28Mbps downloading Ubuntu 10.10 over that connection on 4G LTE.
I’ve put together a short video showing me downloading some Ubuntu torrents, running speedtests, and browsing online while connected using the Pantech UML290.
As I mentioned before, the other part of the picture is how connectivity and handovers work between EVDO and LTE. I’ve timed the Thunderbolt taking 3-5 minutes to hand up to LTE from EVDO when driving into the Phoenix market with no activity, which is a bit long. I’ve also seen very fast handovers happen in the rare circumstance that signal is lost.
Sometimes there’s a brief pause in connectivity, but all in all behavior is pretty good and will improve in time. The screenshot above illustrates how throughput sometimes pauses as the context is transferred over; in this case I was streaming the YouTube performance test video at maximum resolution. The data cards I’ve tested are sometimes a bit finicky about handing back up if you get stuck on EVDO; they really want to go into the EVDO dormant state first rather than handing up while a data transaction is ongoing. This is exactly how I’ve seen 1x to EVDO handovers happen in the past, and isn’t anything new.
Conclusion and Final Thoughts
We've been over a lot throughout the course of this review, but there's still so much more to be said about both LTE and the intricacies of Verizon's implementation. We're going to be doing a larger and more educational anthology piece later in the summer as well. Further, we'll no doubt give the same treatment to other carriers launching new LTE networks and faster HSPA+ ones.
A photo of a Verizon 4G LTE eNodeB in a field. Longer antennas on the tower are for 700MHz LTE.
The end result of all this testing over the course of literally months is that I'm insanely jealous of everyone in a market with 4G LTE already turned on. I've driven back and forth countless times, and each time I've grown a bit more spoiled with just how fast LTE is.
The performance picture is equally clear. Worst case speeds on LTE are about as slow as EVDO is fast, around 1.5-2Mbps. When you're in a good coverage area and have good SINR, the speeds are easily an order of magnitude faster, just as advertised. That's definitely a generational improvement, and things will only get faster once we get category 4 modems, and 4x4 MIMO with category 5. Again, current implementations are category 3 and 2x2. The next round of things to look for will be second generation LTE modems, which will bring both improved performance (MDM9625, category 4) and a reduced power profile (MDM9615).
The other part of the picture is how Verizon's LTE deployment will stack up against other carriers' planned deployments. Verizon is in an extremely competitive position thanks to its nationwide upper C block license that grants it 22MHz of 700MHz spectrum everywhere in the US. In a number of markets, it also has lower A and B block licenses, giving it even more spectrum to deploy more channels when capacity starts picking up. That means Verizon has up to 34 or even 46MHz of 700MHz spectrum to play around with. AT&T on the other hand has a sprinkling of lower block B and C licenses that are both 12MHz. AT&T also purchased Qualcomm's licenses to blocks D and E, which are both 6MHz unpaired, though it's not entirely clear how AT&T will integrate both blocks of unpaired spectrum. In total, that gives AT&T between 12 and 36MHz of 700MHz spectrum depending on market. That's really the moral of the story here—performance and coverage are going to vary greatly depending on what licenses each carrier has in your market.
The next part of the story are the launch devices themselves and pricing. 4G LTE data cards and hotspots on Verizon come with a data allowance of either 5 or 10GB, for $50 and $80 a month, respectively. Go over those data caps, and you'll be billed $10/GB. I found it shockingly easy to burn through massive amounts of data on every LTE card—I ate up 10GB testing the Pantech UML290 on a weekend, and then over 17GB later on. Don't even ask about the Samsung hotspot. I shudder when I think about the bills I would incur were these devices in my pocket all the time.
As of this writing, the HTC Thunderbolt gets you "unlimited" LTE data for $29.99/month, though a 5GB soft cap no doubt applies. It's also confusing to me that maximum allowed user caps are still applied to LTE WiFi hotspots, and even more puzzling is the fact that five is the number of users when the Thunderbolt allows eight, and the Droid Charge a rumored ten.
I guess that brings me to the WiFi hotspot situation. Until the hotspots have 5GHz ISM band support for WiFi, I cannot do without hotspot-as-modem functionality. The situation at conferences is extreme, and continuing to deploy devices that set up reasonably powerful 2.4GHz WiFi APs for a few nearby devices only exacerbates the problem. Deploy all of this into the 5GHz band with its greater number of nonoverlapping channels, and everyone will be able to breathe easier—for at least another couple of years until that too becomes saturated.
Both data cards function perfectly on Windows, but OS X stability needs major amounts of work. The irony of the situation is that the newer USB551L with OS X support marked on the box in my testing was considerably less stable than the older Pantech UML290. Firmware updates and an update coming to the VZAccess Manager software should fix these stability problems.
For that reason, probably the best way to get on LTE right now is either through the Thunderbolt or one of the LTE WiFi hotspots. The Samsung SCH-LC11 was consistently stable and clung to 4G LTE impressively well, where the data cards require an occasional disconnect and reconnect.
If you made it this far, I hope you gained something from this article. Looking at the evolution of the PC, high speed internet access went hand in hand with improvements in processing power. The smartphone industry has seen a tremendous amount of improvement in CPU and GPU power over the past three years and it looks like LTE will be sufficient to support that trend for the foreseeable future.