NVIDIA Tegra 4 Architecture Deep Dive, Plus Tegra 4i, Icera i500 & Phoenix Hands On
by Anand Lal Shimpi & Brian Klug on February 24, 2013 3:00 PM EST

Round Two, Still Quad-Core
I have to give NVIDIA credit: back when it introduced Tegra 3, I assumed its 4+1 architecture was surely a gimmick that would be very short lived. I remember asking NVIDIA’s Phil Carmack point blank at MWC 2012 whether or not NVIDIA would standardize on four cores for future SoCs. While I expected a typical PR response, Phil surprised me with a resounding yes. NVIDIA was committed to quad-core designs going forward. I still didn’t believe it, but here we are in 2013 with NVIDIA’s high-end and mainstream roadmaps both exclusively featuring quad-core SoCs. NVIDIA remained true to its word, and the more I think about it, the more the approach makes sense.
In the PC industry we learned that there’s no real downside to quad-core as long as you can power gate individual cores and turbo up to higher frequencies when fewer than four cores are active; the only real tradeoff is cost. You get good multithreaded performance when you need it, and single threaded performance doesn’t suffer. Tegra 3 complicated things because it was on an older, more power hungry process when Qualcomm introduced its first Krait parts. Tegra 4 on the other hand comes to market on the absolute latest and greatest 28nm HPL process from TSMC. And like Tegra 3, each Cortex A15 core in Tegra 4 can be independently power gated.
Like most of the evolution in the mobile space, NVIDIA skipped the silly transitional period between dual- and many-core and just ended up exactly where it knows the story ends. Heavily threaded apps are still rare on mobile OSes, but with each core independently power gated the user shouldn’t pay a penalty for them being there, as long as NVIDIA and the device vendor don’t configure the DVFS tables improperly.
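To make that concrete, here’s a minimal sketch (nothing NVIDIA-specific, just the standard Linux cpufreq/hotplug sysfs files that Android-based devices expose) that reports which CPU cores are currently online and the frequency each is running at. Exact paths and hotplug behavior vary by kernel and device, and whether an offline core is truly power gated is SoC-specific, so treat this purely as an illustration:

```python
# Illustrative sketch: inspect per-core online state and current frequency via
# the standard Linux sysfs interfaces (CPU hotplug + cpufreq). Paths follow the
# mainline kernel layout; behavior varies by device and kernel, and this is not
# NVIDIA's power-management code.
import glob
import os

def core_status():
    status = {}
    for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        cpu = os.path.basename(cpu_dir)
        online_path = os.path.join(cpu_dir, "online")
        freq_path = os.path.join(cpu_dir, "cpufreq", "scaling_cur_freq")
        # cpu0 often has no "online" file because it cannot be hotplugged
        online = True
        if os.path.exists(online_path):
            with open(online_path) as f:
                online = f.read().strip() == "1"
        freq_khz = None
        if online and os.path.exists(freq_path):
            with open(freq_path) as f:
                freq_khz = int(f.read().strip())
        status[cpu] = (online, freq_khz)
    return status

if __name__ == "__main__":
    for cpu, (online, freq) in core_status().items():
        state = f"{freq / 1000:.0f} MHz" if online and freq else "offline (typically power gated)"
        print(f"{cpu}: {state}")
```

On a well-tuned quad-core device you’d expect the extra cores to drop offline under a single-threaded load while the remaining core climbs toward its maximum frequency; that policy is exactly what the DVFS tables encode.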
The downside is cost, not to the end user, but to NVIDIA. Economically, NVIDIA was able to make Tegra 3 work for itself with a die size somewhere around 80mm^2. The move to 28nm allowed NVIDIA to increase transistor count, without straying from that die size. Tegra 4 is a bit larger than Tegra 3, but it’s still somewhere in that 80mm^2 range.
Wafer costs for 28nm HPL are undoubtedly higher than 40nm LPG at TSMC, not to mention any differences in yield between T3 and T4, so without a doubt Tegra 4 will cost NVIDIA more than Tegra 3. All of that being said, however, NVIDIA still seems to take a conservative approach to die sizes in mobile, which gives it the flexibility to significantly undercut Qualcomm in costs to OEMs. I do believe this was a key part of NVIDIA’s success last year, with Tegra 3 ending up in both the Nexus 7 and Microsoft’s Surface RT. Long term, simply selling your SoCs for less than the competition isn’t a path to market dominance, but being able to do so helps buy NVIDIA time while it gathers the remaining missing pieces of the mobile platform (integrated baseband, RF front end, WiFi, etc...). Tegra 4 isn’t the sort of drive-the-industry-forward silicon we’re used to seeing from NVIDIA, but it’s sized appropriately given NVIDIA’s position in the market. From a business standpoint, NVIDIA is making the right decisions to ensure the Tegra business at least has a chance of succeeding.
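As a rough illustration of why staying near that ~80mm^2 die size matters, here’s a back-of-envelope sketch using the standard gross dies-per-wafer approximation. The numbers are purely illustrative: there’s no edge exclusion, scribe-line overhead, defect yield, or actual wafer pricing in here, none of which NVIDIA or TSMC disclose.

```python
# Back-of-envelope sketch: gross die candidates per 300 mm wafer for dies in
# the ~80 mm^2 range the article estimates for Tegra 3/4. Uses the common
# dies-per-wafer approximation and ignores edge exclusion, scribe lines, and
# yield, so the output is illustrative only.
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

if __name__ == "__main__":
    for area in (80.0, 100.0, 120.0):
        print(f"{area:5.0f} mm^2 -> ~{gross_dies_per_wafer(area)} gross dies per 300 mm wafer")
```

Whatever the actual 28nm HPL wafer price turns out to be, an ~80mm^2 die gives on the order of 800 gross candidates per 300mm wafer, and that count falls off quickly as the die grows; that’s the lever behind NVIDIA’s ability to undercut Qualcomm on cost.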
75 Comments
tipoo - Sunday, February 24, 2013 - link
Under 500 in SunSpider, about twice as fast as anything else ARM. But then again, it's a few months newer than that, and actually still not shipping. And as usual with Nvidia they're early to each party (first to dual core, first to quad core), but not always the best performing. We'll see if other Cortex A15 designs beat it. I'd love to see four of those cores paired with SGX's upcoming 600/Rogue series.
jeffkibuule - Sunday, February 24, 2013 - link
SunSpider is so software sensitive that a Tegra 3 @ 1.2GHz on Windows RT beats a Snapdragon S4 Pro @ 1.5GHz on the Nexus 4 using Chrome. It's a terrible benchmark because it's so dependent on underlying kernel optimizations in the Android phone market.

tipoo - Sunday, February 24, 2013 - link
True, other benchmarks are similarly impressive though.

karasaj - Sunday, February 24, 2013 - link
Psh it has nothing on my desktop! 125ms on SunSpider... Nvidia so behind. Anyways, still looks impressive. I really want to see some Krait 600/800 benchmarks.
tipoo - Sunday, February 24, 2013 - link
The fact that they're getting well below an order of magnitude slower than desktops is impressive in itself too. Even with iPad 2 level performance I still was reluctant to do most of my web browsing on a tablet for the performance. Maybe with Tegra 4 and beyond hardware speed that will change.

Mumrik - Sunday, February 24, 2013 - link
As someone with heavily tabbed browsing habits, I don't think I'll ever make that jump (and I own a tablet).

tipoo - Sunday, February 24, 2013 - link
Also true, that's my other thing. I like to open a bunch of background tabs and have them ready as I go through each one. Right now, tablets don't do background loading, as far as I know, and if they did they wouldn't be powerful enough to keep the main tab smooth while doing it.

Tarwin - Monday, February 25, 2013 - link
Tablets DO do background loading, as long as they're Android. The only performance issues I've seen are from lack of RAM on my phone and lack of bandwidth on the phone and tablet, but those things affect any computer as well. One observation to be made: they do load in the background, but things like audio and video playback will pause if you switch to another tab.

von Krupp - Monday, February 25, 2013 - link
Even Windows Phone 7.5 and 8 do background loading. I haven't used it, but I'd wager that RT does as well, if even the gimpy mobile OS can.

tuxRoller - Sunday, February 24, 2013 - link
As someone who had, until recently, over 40 tabs open in my Chrome browser (Nexus 4), the critical problem has been memory. With enough memory, and good enough task management, these problems tend to go away. Of course, maybe you are that 0.00001% who has hundreds or thousands of tabs open, in which case I pity any computer you are likely to own.