My friends often think I've lost my mind given how many times I run speedtests on handsets while reviewing them. I'm compulsive about testing cellular network throughput and conditions because they're massively important to the overall smartphone experience - remove the cellular connection, and you've got a very expensive personal media player at best. We've taken a good look at Verizon's 4G LTE a number of times, but we haven't yet given AT&T's LTE the same treatment, though that's coming soon.

We're wrapping up CES 2012 in Las Vegas, and the whole time I've been obsessively running throughput tests using Ookla's Speedtest.net application on an HTC Vivid and a CDMA/LTE Galaxy Nexus, two UE Category 3 devices on AT&T and Verizon Wireless LTE, respectively. I thought it worthwhile to share how AT&T LTE has stacked up against Verizon Wireless 4G LTE during CES. I've been running tests on both phones at the same time in the Las Vegas Convention Center and Venetian CES venues in between meetings, as well as up and down the Strip during the convention. This isn't meant to be super scientific, but it gives a decently fair gauge of how the two networks have performed thus far.

The results are pretty interesting, and show AT&T's LTE coming out faster than Verizon's 4G LTE on the downstream during the show, though upstream and latency end up being pretty close at the end of the day. I've run the results through the same statistical breakdown we do for smartphone reviews, and made some color-coded histograms as well. I ended up running just north of 200 tests total, so roughly 100 per device.
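
For the curious, the breakdown itself is nothing exotic: average, max, min, and sample standard deviation over each batch of speedtest runs, plus a binned histogram. Below is a minimal Python sketch of that kind of reduction; the sample values and the helper names are purely illustrative placeholders, not the actual CES data.

```python
import statistics

# Placeholder downstream results in Mbps - NOT the actual CES data,
# just illustrative values for the kind of reduction described above.
downstream_mbps = [12.4, 18.9, 25.1, 7.6, 31.2, 15.8, 9.3, 22.7]

def summarize(samples):
    """Return the same summary stats reported in the tables below."""
    return {
        "Avg": statistics.mean(samples),
        "Max": max(samples),
        "Min": min(samples),
        "StDev": statistics.stdev(samples),  # sample standard deviation
    }

def histogram(samples, bin_width=5.0):
    """Crude text histogram, binned by bin_width Mbps."""
    bins = {}
    for s in samples:
        b = int(s // bin_width)
        bins[b] = bins.get(b, 0) + 1
    for b in sorted(bins):
        lo, hi = b * bin_width, (b + 1) * bin_width
        print(f"{lo:5.1f}-{hi:5.1f} Mbps | {'#' * bins[b]}")

print(summarize(downstream_mbps))
histogram(downstream_mbps)
```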

 
AT&T LTE Results

Downstream Stats (Mbps)
Avg: 17.681; Max: 45.708; Min: 3.78; StDev: 9.000

Upstream Stats (Mbps)
Avg: 6.575; Max: 14.188; Min: 0.105; StDev: 3.234

Latency Stats (ms)
Avg: 107.549; Max: 134; Min: 65; StDev: 8.999

Verizon Wireless LTE Results

Downstream Stats (Mbps)
Avg: 11.185; Max: 33.571; Min: 0.184; StDev: 6.965

Upstream Stats (Mbps)
Avg: 7.133; Max: 16.44; Min: 0.135; StDev: 4.038

Latency Stats (ms)
Avg: 126.41; Max: 439; Min: 83; StDev: 51.554

The data ends up being pretty intriguing, and honestly neither network comes out dramatically ahead of the other. This isn't surprising considering both are running 10 MHz FDD-LTE on the 700 MHz band right now. AT&T can also deploy a 5 MHz FDD-LTE carrier on AWS in Las Vegas, though I'm not sure it is currently lit up, and I can't check whether it's in use on the HTC Vivid the way I can on the Samsung Skyrocket. AT&T LTE posted better downstream numbers than VZW LTE overall and came in with a higher maximum throughput of just over 45 Mbps down, but AT&T's network is admittedly nowhere near as loaded with devices as Verizon's year-old 4G LTE network.

Subjectively, Verizon's 4G LTE coverage is also better than AT&T's in Las Vegas. I couldn't log signal numbers on the handsets, but needless to say there were a number of areas where AT&T LTE didn't propagate where VZW LTE was perfectly fine. We got better 4G LTE signal in more casinos and hard-to-reach areas in the Venetian with Verizon than with AT&T, though admittedly Verizon has been deploying and improving its network for much longer than AT&T.

Comments

  • 3DoubleD - Friday, January 13, 2012 - link

    I love high bandwidth network access as much as the next guy, but I recognize where it's needed. For example, I use a gigabit switch and Cat 6 cable for all my internal home networking, but my router is still 10/100 and connected by a Cat 5e cable.

    Same goes for cell phones. Do you need 10+ Mbps to improve your phone experience? Generally I don't find a great difference between 3G and WiFi connected to a 26 Mbps cable modem. Whenever AnandTech writes about LTE I keep thinking it's like comparing 9-9-9-18 RAM with 8-8-8-18 RAM... the real-world benefit just isn't tangible. AnandTech usually does a great job in this area. If you believe that faster cell phone network connections really improve the cell phone experience, show us in real-world tests and not a synthetic benchmark like Speedtest.

    To me it seems LTE is largely marketing hype. Yes, it is good the technology is being implemented and it is largely a step forward. But I refuse to get excited about it until there are no drawbacks compared with 3G, since there certainly aren't many tangible benefits. When battery life is competitive and there is wider coverage, then LTE will be more attractive.
  • phantom505 - Friday, January 13, 2012 - link

    I agree with you entirely.

    I was really excited to see LTE take off, and I thought it would mean you could finally get high-speed internet out in the boondocks. Then I saw how they priced it and realized that rural America is still hosed.

    Not only are they going to take forever to put LTE out there, they want a king's ransom for it too.
  • Pozz - Friday, January 13, 2012 - link

    Well, the value of network speed greatly depends on usage; I suppose that if you tether a lot and/or use a lot of data-intensive applications you could find some use for LTE. Still, I agree that if you just browse Facebook, 3G is plenty fine.

    On a side note... any ETA for the Galaxy Nexus review? :D
  • phantom505 - Friday, January 13, 2012 - link

    With the per-GB fees, tethering is expensive as hell. Sure, you could perhaps play games over it; too bad it'll cost you a small fortune to do so.
  • joncat - Friday, January 13, 2012 - link

    I'd agree with you to an extent if all I did on my smartphone was check Facebook or lightly browse the web. But I want to be able to stream video, music, and remote desktop from anywhere I go, including in the car, on the train, etc. 3G just doesn't cut it for those use cases. AT&T's 3G almost does, but it's still not consistent enough to watch movies without occasional buffering. Verizon's and Sprint's 3G is simply too slow. (I've used both and currently have both an AT&T and a Sprint smartphone.)
  • collegeguypat - Friday, January 13, 2012 - link

    I've got the Thunderbolt and live in NYC. I can tell a huge real-world difference when out using my phone on 4G vs. 3G.

    Opening emails, having pictures load instantaneously vs. taking 3-4 seconds to pop up on 3G. Little stuff like that.

    I think it's more about what you use your phone for. Calls and texts won't see a difference, but web pages, downloading apps, email: it's all a lot more enjoyable with 10x the bandwidth to download the content. It's similar to always being on WiFi.

    The closest thing I can compare it to is going from a slow DSL to a cable modem. Things are just overall more seamless and you don't have to sit and wait for anything to load.

    Battery life is the biggest thing they need to improve upon IMHO.
  • name99 - Friday, January 13, 2012 - link

    The issue that MATTERS is not so much the raw speed a particular user sees as how efficiently the limited total bandwidth is shared between multiple users.

    The fact that I can get, say, 30 Mbps today on LTE is nice, but that's not going to last once LTE becomes standard. I see 30 Mbps when I'm the only person using the cell, but soon enough there will be 2, then 4, then 16 people sharing the cell with me.
    So the big picture is that efficiency matters because we all win when resources are used more sensibly. Averaged across a wide range of environments, 3G has a spectral efficiency of around 0.5 b/s/Hz; 3.5G (throw MIMO into the mix) pulls that up to about 1. LTE increases that to about 1.6 --- which is pretty much what Brian is seeing.

    The fundamental LTE technologies are
    - all-IP architecture (so lower latency, less obsolete overhead)
    - OFDM (better utilization of large available frequency bands, with the ability to easily notch out frequency bands that are still off limits or subject to DFS)
    - MACs (MU-MIMO and OFDMA) that are more efficient than the CDMA MAC of 3G
    - MIMO (for higher throughput in certain environments, or further range in other environments).
    These are ALL good things.

    Coming up, the next stage of LTE deployment will focus on reducing interference through tower antennas that can use beam steering, along with dynamic negotiations between towers over frequency and beam angle allocation --- also good things.

    I do wish that Brian's reporting would focus more on the underlying tech --- what's making this work, what have the carriers deployed in each location, what new chipsets are available at the carrier level, how many of the newest 3D directional antennas are being sold, etc. Likewise --- how common are LTE devices these days, and how much data is LTE data vs 3G data? How oversubscribed are cells in a large city currently? Do the carriers plan to switch soon from FDD to the more efficient (because you have better open-loop channel knowledge) TDD? Are they using the best available algorithms in running their new kit (e.g. waterfilling across all the OFDM channels) or are they using just the basic algorithms to get the thing going?
    etc etc
    Talking to the carriers as a journalist, they might tell him some of this stuff, and more of it is available in the trade press.

    The obsession with "this is how fast my phone goes" rather than "this is the total capacity available to most users in [random large city]" is somewhat puerile, but, to be fair, it is an easy measurement to make, and it does say SOMETHING interesting --- in this case it states that the info we've been given regarding the expected spectral efficiency of this wave of LTE is actually right on the money.
    (And depressingly low. 10 MHz with 64QAM, 2x2 MIMO, and 5/6 FEC would get you a nominal 10 * 6 * (5/6) * 2 = 100 Mbps throughput. In a clean WiFi environment you'd get close to that. [You'd see half that because of the WiFi MAC, but that's not relevant here, since the cell phone MAC is so different.]
    We actually see an average of about 15 Mbps. It's not that WiFi is so much more cleverly designed; it's that the cell phone job is just so much harder.
    You can't use a wimpy 5/6 FEC; you have to use half your bits for error correction. Although 64QAM is in the spec (because it's easy), you have to be pretty close to the tower to use it. Most of the time you're using 16QAM or even 4QAM.
    And your 2x2 antennas, much of the time, have to be used for diversity, not spatial multiplexing, to cope with fading.)
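
name99's nominal-throughput arithmetic above is easy to reproduce. Here is a minimal sketch, assuming the nominal rate is roughly bandwidth (MHz) times bits per symbol times code rate times spatial streams (about 1 Msym/s of useful symbols per MHz per stream); the parameter combinations are illustrative assumptions, not measured values.

```python
def nominal_rate_mbps(bandwidth_mhz, bits_per_symbol, code_rate, streams):
    """Rough peak rate, assuming ~1 Msym/s of useful symbols per MHz per stream."""
    return bandwidth_mhz * bits_per_symbol * code_rate * streams

# The clean-channel case from the comment: 10 MHz, 64QAM (6 bits/symbol),
# rate-5/6 FEC, 2x2 spatial multiplexing -> ~100 Mbps nominal.
print(nominal_rate_mbps(10, 6, 5 / 6, 2))   # 100.0

# A more cell-like case: 16QAM (4 bits/symbol), rate-1/2 FEC, and the two
# antennas used for diversity rather than spatial multiplexing (1 stream).
print(nominal_rate_mbps(10, 4, 1 / 2, 1))   # 20.0

# QPSK ("4QAM", 2 bits/symbol) at the cell edge, rate-1/2, diversity only.
print(nominal_rate_mbps(10, 2, 1 / 2, 1))   # 10.0
```

The last two cases bracket the roughly 15 Mbps real-world average the comment mentions.
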
  • WiWavelength - Friday, January 13, 2012 - link

    "Do the carriers plan to switch soon from FDD to the more efficient (because you have better open-loop channel knowledge) TDD[?]"

    No. Sprint/Clearwire is the only carrier in the US that currently has large scale plans for TD-LTE because it is the only carrier that has large swaths of unpaired spectrum (BRS/EBS 2500-2600 MHz, upwards of 100 MHz bandwidth per market).

    Both AT&T (just acquired from Qualcomm) and Dish Network also hold unpaired spectrum (Lower 700 MHz D/E block 6 MHz licenses). But their licenses are in much smaller blocks, hence less than ideal for TD-LTE. So, they plan to utilize their unpaired spectrum with their other paired spectrum for LTE-Advanced carrier aggregation (supplemental downlink).

    All other commonly held mobile spectrum (Lower/Upper 700 MHz, Cellular 850 MHz, SMR 800/900 MHz, PCS 1900 MHz, AWS 2100+1700 MHz) is FDD paired spectrum, for which FCC regulations generally prohibit TDD unpaired operation.

    AJ
  • name99 - Friday, January 13, 2012 - link

    Why does the FCC prevent TDD unpaired operation for these spectrum allocations? What possible point is there to this?
  • WiWavelength - Friday, January 13, 2012 - link

    The reason is simple: possibly disastrous adjacent channel interference between transmitters and receivers.

    To illustrate, see the AWS 2100+1700 MHz band plan:

    http://wireless.fcc.gov/services/aws/data/awsbandp...

    Imagine that, in a given market, the carriers assigned the AWS A 20 MHz and AWS C 10 MHz licenses, respectively, follow standard FDD operation. The uplink is in each 1700 MHz segment, while the downlink is in each 2100 MHz segment. But the carrier assigned the AWS B 20 MHz license decides to forgo standard FDD operation and, instead, to utilize TDD operation in both its 1700 MHz and 2100 MHz segments.

    To continue, mobile station X operating in FDD mode attempts to receive from its serving base station in the AWS A 2100 MHz downlink, and the received signal at the mobile is -100 dBm. Concurrently, mobile station Y operating in TDD mode attempts to transmit/receive to/from its serving base station in the adjacent AWS B 2100 MHz segment (which is now operated as both uplink and downlink). Also, mobile station Y transmits at 23 dBm and is only 10 ft away from mobile station X. Do you now see the potential for adjacent channel interference?

    Certainly, bandpass filters are supposed to keep adjacent channels from interfering with one another. However, when adjacent channels are at wildly different received power levels (upwards of 100 dB difference in my example above), even the best bandpass filters may not be able to attenuate adjacent channel emissions enough to prevent substantial interference. For this reason, traditional spectrum management policy almost always places uplink operations adjacent to other uplink operations of comparable power levels, and downlink operations adjacent to other downlink operations of comparable power levels, not to mention guard bands in between uplink and downlink operations.

    As an aside, the LightSquared-GPS interference issue is a similar matter of wildly different received power levels in adjacent spectrum.

    AJ
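
WiWavelength's example above can also be put into rough numbers. The sketch below uses free-space path loss at the stated 10 ft separation and 23 dBm transmit power; free space is an optimistic isolation assumption and the 2.1 GHz center frequency is an approximation, so the real gap between the wanted -100 dBm signal and the adjacent-channel interferer can be larger than what this computes.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
    wavelength = 3.0e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

freq_hz = 2.1e9            # AWS downlink band, roughly 2.1 GHz (assumed)
distance_m = 10 * 0.3048   # 10 ft between mobile X and mobile Y
tx_dbm = 23.0              # mobile Y transmit power from the comment
wanted_dbm = -100.0        # mobile X's serving signal from the comment

interferer_dbm = tx_dbm - fspl_db(distance_m, freq_hz)
print(f"Interferer at mobile X: {interferer_dbm:.1f} dBm")              # about -26 dBm
print(f"Adjacent-channel gap:   {interferer_dbm - wanted_dbm:.1f} dB")  # about 74 dB
```

Even under that optimistic assumption the interferer sits roughly 74 dB above the wanted signal, which is why bandpass filtering alone struggles and why uplink and downlink allocations are kept apart as described above.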
