Qualcomm's Modem Workshop - Lots of Comparisons
by Brian Klug on November 8, 2013 9:41 AM EST - Posted in
- Smartphones
- Qualcomm
- Mobile
- Tablets
- Gobi
Last year Qualcomm flew me out to its San Diego headquarters to talk about modems and transceivers, in what ended up being one of the company's biggest public disclosures on the modem side. Since then the multimode LTE space has warmed up considerably: Intel is shipping its first multimode LTE product (XMM7160) in retail devices, NVIDIA's Tegra 4i with the first fruits of its Icera acquisition is nearing launch, Broadcom has shown off its multimode LTE products, and a few others (HiSilicon and Samsung) are making noise with multimode LTE products of their own. The modem is now a somewhat less veiled part of the mobile story, and with the AP (CPU and GPU) side of the conversation largely fleshed out, there has been a surge in discussion about the modem as one of the major differentiators.
Qualcomm invited press and analysts out to a modem workshop this week, an attempt to create for modems an event roughly analogous to the blogger benchmarking events we've seen for APQ8064 and most recently MSM8974. Since benchmarking and testing a modem is considerably more daunting than benchmarking an AP, Qualcomm set up a number of demos comparing its parts against competitor products and also talked about its own capabilities. At the AP benchmarking events we usually get an opportunity to see final silicon before it arrives in a shipping device; unfortunately that wasn't the case here. We didn't catch any glimpses of the upcoming MDM9x25 Category 4 modem (although it's functionally the same as what's in MSM8974) or hear an announcement about its successor, nor were there any details about WTR1625L or its coprocessor, the upcoming successor to WTR1605L.
First, Qualcomm showed its modem throughput in FDD (Frequency Division Duplex) against one competitor device and in TDD (Time Division Duplex) against another. In the FDD scenario, two devices were connected to separate base station emulators (the same Anritsu MD8475A I had on loan for a while) running an LTE network on Band 3 with 10 MHz channels and AWGN (Additive White Gaussian Noise) set to two conditions, 27 dB and later a more challenging 14 dB. The devices under test were commercial devices with their names covered (although the industrial designs made it immediately obvious what they really were), tethered over USB to a client. I believe the larger tablet was a Galaxy Tab 3 (10.1) with Intel's XMM7160 LTE solution, and the smaller one a Note 8 LTE with Qualcomm's MDM9x15. In both conditions the Qualcomm-based solution sustained the throughput needed to stream a 4K video without stuttering, while the Tab 3 paused a few times to re-buffer. I didn't catch the initial throughput, but in the 14 dB scenario Qualcomm's solution sat at around 16 Mbps and the other solution at 12 Mbps.
Update: I asked Qualcomm for specifics about the FDD-LTE throughput testing since I didn't catch all the details exactly the first time around. Apparently AWGN wasn't used; instead there was EVA 70 Hz (mobility) fading.
Scenario 1
- 27 dB SNR
- 4K Video: Landscape Scenery
- Fading: EVA 70 Hz
Results:
- QCOM: 40 Mbps
- Competitor: 28 Mbps

Scenario 2
- 14 dB SNR
- 4K Video: Sony Bravia
- Fading: EVA 70 Hz
Results:
- QCOM: 20 Mbps
- Competitor: 15 Mbps
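For a rough sense of how much headroom those numbers leave, here's a minimal sketch (Python, my own back-of-the-envelope math rather than anything Qualcomm presented) comparing the reported throughputs against the Shannon bound for a 10 MHz channel at each SNR. Real LTE throughput sits well below that bound once control overhead, coding, and the EVA 70 Hz fading are accounted for, so the gap is expected.

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound for an AWGN channel, in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# 10 MHz FDD-LTE channel from the demo, at the two reported SNR points
for snr_db, qcom_mbps, competitor_mbps in [(27, 40, 28), (14, 20, 15)]:
    bound = shannon_capacity_mbps(10e6, snr_db)
    print(f"{snr_db} dB SNR: Shannon bound ~{bound:.0f} Mbps, "
          f"measured {qcom_mbps} vs {competitor_mbps} Mbps")
```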
Next was a comparison on a TDD-LTE network, running Band 38 with 20 MHz channels. A similar test was run, although this time the competitor solution was in a WiFi hotspot (I believe a Huawei with the Balong 710 inside) and the Qualcomm solution in what looked like a Galaxy S 4. The same video scenario was played, with the competitor device occasionally stalling to re-buffer while the Qualcomm device played without interruption. I noted average throughputs of 38 Mbps on Qualcomm's demo and 30 Mbps on Huawei's.
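Purely as illustrative arithmetic on the figures above (again my own, not part of the demo), the FDD and TDD results imply very different raw bits-per-hertz numbers. Keep in mind the TDD channel is time-shared between uplink and downlink, so the naive downlink figure understates what the radio actually achieves while it is receiving.

```python
# Naive spectral efficiency from the demo figures (illustrative only):
# TDD time-shares its 20 MHz between uplink and downlink, so its raw
# downlink bits/s/Hz figure reads low compared with FDD.
demos = [
    ("FDD, 10 MHz, 27 dB SNR", 40e6, 10e6),   # throughput (bps), bandwidth (Hz)
    ("TDD, 20 MHz, Band 38",   38e6, 20e6),
]
for name, throughput_bps, bandwidth_hz in demos:
    print(f"{name}: {throughput_bps / bandwidth_hz:.1f} bits/s/Hz")
```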
Up next was a demo of voice call setup time on traditional 3G WCDMA versus 4G LTE with CSFB (circuit-switched fallback) to WCDMA. The point was that call setup over LTE with CSFB takes longer, but not appreciably so. Here two HTC Ones were cabled up, one with an LTE connection on Band 4 falling back to WCDMA on Band 2, the other with a plain WCDMA connection on Band 2. The LTE call took 4.799 seconds to set up, while the WCDMA call took 3.989 seconds.
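Worked out from those timings (a trivial sketch, just restating the demo numbers), the CSFB penalty comes to well under a second:

```python
# Call setup times from the demo (seconds)
lte_csfb_s = 4.799      # LTE, falling back to WCDMA for the voice call
wcdma_native_s = 3.989  # plain WCDMA call

delta_s = lte_csfb_s - wcdma_native_s
print(f"CSFB penalty: {delta_s:.3f} s (~{delta_s / wcdma_native_s:.0%} longer setup)")
```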
A new feature Qualcomm showed off for the first time is a transmit antenna preprocessing function called TruSignal Uplink, which apparently will ship in a device shortly. This feature is designed to mitigate the kind of unintended attenuation we saw with the iPhone 4, specifically signal loss from physical impediments like a hand. Qualcomm is being tight-lipped about how the feature works, saying little more than that it's a kind of transmit diversity that doesn't require any network-side interaction. The demo showed a comparison between two test devices, both set up with 15 dB of loss, running a file upload. Although the demo was run on WCDMA, I'm told there's no reason this uplink processing can't also apply to LTE.
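To put that 15 dB of induced loss in perspective, the quick conversion below (my own illustration, not part of the demo) shows how much transmit power it actually swallows:

```python
def db_to_power_ratio(db):
    """Convert an attenuation in dB to a linear power ratio."""
    return 10 ** (db / 10)

loss_db = 15  # attenuation applied to both devices in the demo
ratio = db_to_power_ratio(loss_db)
print(f"{loss_db} dB of loss leaves roughly 1/{ratio:.0f} "
      f"of the original transmit power reaching the base station")
```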
Finally Qualcomm showed off power consumption comparisons between the same tablet from the FDD-LTE throughput comparison (likely a Tab 3 (10.1) LTE with XMM7160) and an LG G2 (which has an MSM8974). This is no doubt a response to claims made by Intel that its solution is lower power than current competitors, although I suspect those claims were made in the context of the then-current MDM9x15 rather than the brand-new MSM8974 (and the soon-to-come MDM9x25 it shares an IP block with).
Regardless, the comparison looked at power consumption at a system level, measured at the battery terminals, similar to how we do it. The devices ran off a 3.7 V power supply connected to a National Instruments data acquisition setup, and were shown first running an LTE data transfer and later a 3G CS voice call.
In the LTE data transfer scenario, the Qualcomm solution drew an average of 184 mA, while the competitor drew 234 mA. On the voice call, Qualcomm pulled 115 mA versus 137 mA for the competitor.
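Converting those current draws at the rig's 3.7 V supply into power (a quick sketch of my own; note these are whole-device numbers for two different products, so it's a rough comparison at best):

```python
supply_voltage_v = 3.7  # supply used in the demo rig

# (Qualcomm, competitor) average current draw in amps
scenarios = {
    "LTE data transfer": (0.184, 0.234),
    "3G CS voice call":  (0.115, 0.137),
}

for name, (qcom_a, comp_a) in scenarios.items():
    qcom_w, comp_w = qcom_a * supply_voltage_v, comp_a * supply_voltage_v
    saving = (comp_w - qcom_w) / comp_w
    print(f"{name}: {qcom_w:.2f} W vs {comp_w:.2f} W ({saving:.0%} lower)")
```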
It's interesting to see the modem discussion start to get serious now that more LTE competitors are entering a space once occupied essentially solely by Qualcomm. I'd love to run some of these comparisons (and more) myself, and there's a good chance we'll be able to do so in the near future.
18 Comments
FwFred - Friday, November 8, 2013 - link
This is a tough one as we saw only pre-selected scenarios. In OEM controlled setups, I find the results more credible if you guys get to run your own testing rather than using theirs. I'm sure it's not as easy as running already prepared CPU/GPU benchmarks in an OEM lab on short notice.

mike8675309 - Friday, November 8, 2013 - link
All we need now are some terms in conflict (BAUD vs baud vs bps) and a battle over file transfer protocols (xmodem vs zmodem vs ymodem) and it'll be just like my childhood. Interesting stuff none the less.
sheh - Friday, November 8, 2013 - link
Baud isn't the same as bps, even if in some setups they are equal. As for protocols, I think Kermit will be the one to prevail.
mike8675309 - Monday, November 11, 2013 - link
So one would think... but if you were into modems in the '80s, around the transition from 1200 bps to 2400 bps modems, lively debates would put a bump in FidoNet transmit times. Blame the modem manufacturers that went from 1200 to 2400 but left "baud" next to the number. It only got worse with the 9600 modems.

skiboysteve - Friday, November 8, 2013 - link
very cool. curious - what NI hardware were they using?

PC Perv - Friday, November 8, 2013 - link
"there’s a good chance we’ll be able to do so in the near future."You forgot to add "with Intel's help" at the end.
Can you deny? XD
hobagman - Friday, November 8, 2013 - link
Kudos to Qualcomm. Nice job guys. Nice to see such a no-frills, raw-numbers comparison, even if the devices were selectively picked beforehand. This is what all marketing should be ... are you listening, everyone else?

B3an - Friday, November 8, 2013 - link
But disappointing to see they're still using Win XP. There's no excuse for that.

stacey94 - Friday, November 8, 2013 - link
The second shot looks like Windows 7.

DanNeely - Friday, November 8, 2013 - link
They're showing shots with both. Most are W7; the last 4 are XP. Those 4 are all showing power meter levels; at a guess, the power meter they're using is an older model and lacks driver support for anything newer (a common curse of high-end scientific instruments and industrial equipment). When the cost of the device being controlled is much higher than the computer controlling it, keeping a relic PC around is much more cost effective than replacing the much more expensive, but not broken, hardware it controls.