NVIDIA's GPU Technology Conference 2013 Keynote Live Blog
by Brian Klug, Anand Lal Shimpi & Ryan Smith on March 19, 2013 12:04 PM EST
02:33PM EDT - Volta, Kayla, and VCA
02:33PM EDT - And that's a wrap for NVIDIA's GTC 2013 keynote. We'll have more details on various announcements later today
02:32PM EDT - Wrapping things up. Products using GRID cards now going to market from Dell and other OEMs
02:31PM EDT - Requires software licenses: $2,400 a year for the half pack and $4,800 a year for the full pack
02:31PM EDT - Max concurrent users is equal to the number of GPUs
02:30PM EDT - $39,900 for a full pack. Half pack is 8 GPUs, full pack is 16 GPUs
02:30PM EDT - GRID VCA starts at $24,900 for a half base pack
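A quick back-of-envelope on what those packs work out to per concurrent user, using only the hardware prices, GPU counts, and license fees above and assuming costs split evenly across seats; a sketch, not official per-seat pricing:

    #include <stdio.h>

    int main(void) {
        /* Keynote figures: half pack = 8 GPUs, $24,900, $2,400/yr license;
           full pack = 16 GPUs, $39,900, $4,800/yr license. One GPU per user. */
        const double half_hw = 24900.0, half_lic = 2400.0; const int half_seats = 8;
        const double full_hw = 39900.0, full_lic = 4800.0; const int full_seats = 16;

        printf("Half pack: ~$%.0f hardware + $%.0f/yr license per seat\n",
               half_hw / half_seats, half_lic / half_seats);   /* ~$3,113 + $300/yr */
        printf("Full pack: ~$%.0f hardware + $%.0f/yr license per seat\n",
               full_hw / full_seats, full_lic / full_seats);   /* ~$2,494 + $300/yr */
        return 0;
    }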
02:29PM EDT - Total latency according to their metrics looks to be under 40ms
02:27PM EDT - Video stream fluctuating between 40Mbps and 120Mbps
02:26PM EDT - Demoing path tracing in Octane, reiterating that this is being done remotely. Hard to get an idea of just how good the image quality is from this vantage point
02:23PM EDT - Live demo. Server rack is in LA
02:22PM EDT - Product announcement: Octanerender, Cloud Edition
02:20PM EDT - Continual progression of GPU power, and now can access that rendering power remotely
02:17PM EDT - Discussing CGI pre-visualization process
02:16PM EDT - The crowd is noticeably silent when the Fantastic Four is mentioned
02:15PM EDT - Otoy CEO Jules Urbach and Fantastic Four director Josh Trank now on stage
02:14PM EDT - Discussing creating characters with CGI
02:14PM EDT - Discussing the movie Life of Pi, which was CGI-heavy due to the tiger
02:14PM EDT - Now on to movies
02:08PM EDT - Virtually configuring an Audi R8. Using a tablet as the interface, while the work occurs behind the scenes on a GRID server
02:06PM EDT - Jen-Hsun is a well-known car guy, so he's having a field day
02:06PM EDT - Now doing a live demo of RTT's point of sale configuration tech that was shown in the Audi video
02:01PM EDT - CEO of Realtime Technology AG
02:01PM EDT - Ludwig Fuchs now on stage
02:00PM EDT - VCA press release just hit the wire. Reiterates that this is an application-centric appliance rather than remoting whole desktops
01:58PM EDT - "That was not fake. It's all real"
01:57PM EDT - How Audi thinks they'll use something like GRID in car purchasing
01:56PM EDT - Audi concept video now rolling
01:55PM EDT - Discussing the benefits of having access to the performance of a high powered workstation on a laptop
01:53PM EDT - One of the very first GRID users
01:53PM EDT - James Fox, CEO Dawnrunner now on stage
01:53PM EDT - Gian Paolo Bassi and Jen-Hsun discussing the benefits of remote computing for them
01:52PM EDT - VGX 1 now GRID K1 (4xGK107). VGX 2 now GRID K2 (2xGK104)
01:50PM EDT - GRID VCA product page is up: http://www.nvidia.com/object/visual-computing-appliance.html
01:50PM EDT - NVIDIA VGX product page was changed at some point to be "GRID VGX", so indeed it looks like VGX has been renamed. We'll try to get confirmation later today
01:47PM EDT - Not clear at this time what the relationship is between the GRID cards and NVIDIA's VGX cards. May be a rename
01:46PM EDT - Solidworks' VP of R&D, Gian Paolo Bassi, now on stage
01:45PM EDT - Each workspace is running nice and smooth (having a dedicated GPU really helps here)
01:44PM EDT - This is being demoed as a LAN solution (as opposed to an over-the-internet solution like GeForce GRID)
01:43PM EDT - MacBook Pro running the client software to connect to a GRID VCA server. 3 workspaces. Each running a separate professional application
01:42PM EDT - Demo time
01:42PM EDT - GRID VCA will be used to drive remote workstations as part of NVIDIA's larger remote computing initiative
01:40PM EDT - Based on power requirements it's likely these are GK104, not GK110
01:40PM EDT - Client machines need specialized VCA software
01:39PM EDT - Hypervisor supports 16 virtual machines (1 GPU per)
01:39PM EDT - 2x 8 Core Xeon processors. 192GB of system memory. 8 GRID video cards ("each with 2 of our most advanced Kepler GPUs")
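Dividing those specs across the 16 virtual machines gives a rough per-seat picture, assuming an even split (our assumption; NVIDIA hasn't detailed how the hypervisor partitions CPU and memory):

    #include <stdio.h>

    int main(void) {
        /* Stated specs: 2x 8-core Xeons, 192GB RAM, 8 GRID cards x 2 Kepler GPUs,
           and a hypervisor running 16 VMs with one GPU each. Even split assumed. */
        const int cores = 2 * 8, ram_gb = 192, gpus = 8 * 2, vms = 16;

        printf("%d GPUs for %d VMs -> %d GPU per VM\n", gpus, vms, gpus / vms);
        printf("Per VM: %d CPU core(s), %d GB RAM\n", cores / vms, ram_gb / vms);
        return 0;
    }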
01:38PM EDT - 4U rack-mounted system
01:38PM EDT - GRID VCA
01:37PM EDT - Brand new product. It's an appliance, not a server
01:36PM EDT - "The world's first visual computing appliance"
01:33PM EDT - VGX can be configured with various modules. It can be a Quadro card or a GeForce card depending on user needs and what modules are licensed
01:32PM EDT - So now they're ready to discuss users and products featuring this hardware
01:32PM EDT - NVIDIA also announced their VGX line of cards at GTC 2012. Multiple GK107 GPUs on VGX1, with virtual addressing to allow many users to share one VGX card
01:31PM EDT - Starting off this discussion with GRID for professional uses as opposed to consumer
01:30PM EDT - We haven't seen much of it; GeForce GRID fizzled somewhat after NVIDIA's partner Gaikai was acquired
01:30PM EDT - Last year NVIDIA introduced their GRID technology
01:27PM EDT - Now discussing remote computing
01:27PM EDT - The new GPU's small size implies a 1 SMX Kepler part (GK107 has 2 SMXes)
01:26PM EDT - CUDA 5 capable. Implies a Kepler family GPU. No further details
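Once hardware like Kayla is in hand, the GPU's generation and SMX count can be read straight out of the standard CUDA runtime; a minimal device-query sketch (generic CUDA code, nothing Kayla-specific):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // Kepler parts report compute capability 3.x; GK107 has 2 SMXes,
            // so a smaller 1-SMX part would show multiProcessorCount == 1.
            printf("Device %d: %s, compute capability %d.%d, %d SMX(es)\n",
                   i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
        }
        return 0;
    }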
01:25PM EDT - Real time demo of Kayla running a raytracer
01:24PM EDT - mITX-like board with Tegra 3 and a new, small GPU
01:24PM EDT - New ARM based product: Kayla
01:23PM EDT - In reference to Tegra 2 versus Parker. Presumably peak performance. Otherwise SoCs are already power limited
01:22PM EDT - "In 5 years' time we're going to increase performance of Tegra by 100 times"
01:22PM EDT - Will be using FinFET (3D) transistors. Fab isn't mentioned, but we're assuming TSMC
01:21PM EDT - GPU will be Maxwell
01:21PM EDT - Parker will be the first SoC with a Denver ARM CPU
01:21PM EDT - After Logan, Parker in 2015
01:20PM EDT - Logan demos this year, production early next year
01:20PM EDT - Integrates a Kepler GPU, so compute capability 3.x
01:20PM EDT - Logan will be the first Tegra SoC with CUDA capabilities
01:19PM EDT - Logan is next. Wayne was Tegra 4
01:19PM EDT - What's next?
01:17PM EDT - Reiterating benefits of 4+1 core layout
01:17PM EDT - "First Tegra did not turn out that well"
01:16PM EDT - Now on to Tegra
01:16PM EDT - No date attached to Volta. Currently NVIDIA keeps parity with TSMC nodes, in which case it's not clear when the next high performance node after 20nm will become available
01:14PM EDT - Titan is just under 300GB/sec with GDDR5 on a 384bit bus
01:14PM EDT - Volta: 1TB/sec of bandwidth
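For context, peak memory bandwidth is just bus width times per-pin data rate; plugging in Titan's 384-bit bus and its published 6Gbps GDDR5 data rate (the data rate is from Titan's spec sheet, not the keynote) shows why Volta's 1TB/sec target is such a large jump:

    #include <stdio.h>

    int main(void) {
        /* GB/s = bus width in bits * per-pin data rate in Gb/s / 8.
           GTX Titan: 384-bit bus, 6Gbps GDDR5 (published spec). */
        const double titan_gbs = 384.0 * 6.0 / 8.0;   /* ~288 GB/s */
        const double volta_gbs = 1000.0;              /* 1 TB/s target */

        printf("Titan: %.0f GB/s, Volta target: %.0f GB/s (%.1fx)\n",
               titan_gbs, volta_gbs, volta_gbs / titan_gbs);   /* ~3.5x */
        return 0;
    }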
01:14PM EDT - Jen-Hsun now going into detail on what DRAM stacking is. Stacked DRAM will mean they can have at least some RAM very close to the GPU, instead of having to go through relatively slow external memory busses
01:12PM EDT - Presumably Maxwell and Volta will go hand-in-hand with future NVIDIA SoCs. Not just Tegra, but whatever Denver is paired with
01:12PM EDT - Volta is credited with the invention of the battery, BTW
01:11PM EDT - Volta will use stacked DRAM. This is the same route Intel is going with Haswell GT3e
01:11PM EDT - Slide guys are a bit ahead of where Jen-Hsun is. Reiterating the benefits of Kepler
01:11PM EDT - Maxwell was announced back in 2011 and is on track for 2014. Will be introducing unified virtual memory
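To give a flavor of what unified virtual memory means for developers - a single allocation addressable by both CPU and GPU, with no explicit copies - here's a minimal sketch using the managed-memory call that later shipped in the CUDA runtime. Purely illustrative; NVIDIA hasn't detailed Maxwell's actual programming interface yet.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= k;
    }

    int main() {
        const int n = 1 << 20;
        float *data = NULL;
        // One allocation visible to both host and device; the runtime migrates
        // pages as needed instead of requiring explicit cudaMemcpy calls.
        cudaMallocManaged((void **)&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;   // written by the CPU

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
        cudaDeviceSynchronize();                      // wait before the CPU reads

        printf("data[0] = %f\n", data[0]);            // 2.0
        cudaFree(data);
        return 0;
    }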
01:10PM EDT - New NVIDIA GPU roadmap. Volta comes after Maxwell
01:09PM EDT - Up next: the "next click" of NVIDIA's roadmap
01:08PM EDT - Now discussing image processing in general
01:03PM EDT - Running the Cortexica app
01:03PM EDT - Actual image processing is of course server-side. Tablet is just uploading the image
01:02PM EDT - Also returns similar looking clothing (geometric patterns, etc)
01:02PM EDT - Going to find clothing matching a photo of Kate Hudson
01:02PM EDT - Now showing a real-time demo running off of a tablet
01:01PM EDT - NVIDIA's Mike Houston now on stage with a copy of In Style magazine
12:59PM EDT - Cortexica is using a model of the human brain to try to do image search like a human
12:58PM EDT - How do humans recognize images as being alike? How can computers be made to do the same thing?
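The usual machine approach is to boil each image down to a feature vector and then hunt for nearest neighbors in that feature space - brute-force, highly parallel work that maps nicely onto a GPU. A generic sketch of the matching step (illustrative only, not Cortexica's actual pipeline; the descriptors here are random placeholders):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // One thread per database image: squared L2 distance between that image's
    // descriptor and the query's. The smallest distance is the best visual match.
    __global__ void l2_distances(const float *db, const float *query,
                                 float *dist, int n, int dim) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float d = 0.0f;
        for (int k = 0; k < dim; ++k) {
            float diff = db[i * dim + k] - query[k];
            d += diff * diff;
        }
        dist[i] = d;
    }

    int main() {
        const int n = 100000, dim = 128;   // 100k images, 128-D descriptors (made-up sizes)
        size_t db_bytes = (size_t)n * dim * sizeof(float);
        float *h_db = (float *)malloc(db_bytes);
        float *h_q = (float *)malloc(dim * sizeof(float));
        float *h_dist = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n * dim; ++i) h_db[i] = (float)rand() / RAND_MAX;  // placeholder data
        for (int k = 0; k < dim; ++k)     h_q[k]  = (float)rand() / RAND_MAX;

        float *d_db, *d_q, *d_dist;
        cudaMalloc((void **)&d_db, db_bytes);
        cudaMalloc((void **)&d_q, dim * sizeof(float));
        cudaMalloc((void **)&d_dist, n * sizeof(float));
        cudaMemcpy(d_db, h_db, db_bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_q, h_q, dim * sizeof(float), cudaMemcpyHostToDevice);

        l2_distances<<<(n + 255) / 256, 256>>>(d_db, d_q, d_dist, n, dim);
        cudaMemcpy(h_dist, d_dist, n * sizeof(float), cudaMemcpyDeviceToHost);

        int best = 0;
        for (int i = 1; i < n; ++i) if (h_dist[i] < h_dist[best]) best = i;
        printf("Closest match: image %d (squared distance %f)\n", best, h_dist[best]);

        cudaFree(d_db); cudaFree(d_q); cudaFree(d_dist);
        free(h_db); free(h_q); free(h_dist);
        return 0;
    }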
12:57PM EDT - Visual shopping, even
12:57PM EDT - Up next: virtual shopping
12:51PM EDT - Jason Titus, CTO of Shazam, taking the stage now to talk about how GPU computing helps do their work
12:51PM EDT - Shazam handles 10M queries per day, searching among 27M songs to try to identify the song you're listening to
12:51PM EDT - 10M queries per day
12:51PM EDT - 300M users of Shazam, adding 2M users per week
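A quick sense of scale from those numbers (averages only; peak load will be well above this):

    #include <stdio.h>

    int main(void) {
        /* 10M queries per day, each matched against a 27M-song catalog */
        const double queries_per_day = 10e6;
        printf("~%.0f queries per second on average\n", queries_per_day / 86400.0);  /* ~116 */
        return 0;
    }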
12:50PM EDT - Now talking about GPUs and audio search
12:49PM EDT - Salesforce.com saw a 35x speedup in moving their twitter-mining algorithms to GPUs
12:47PM EDT - Sounds like GPUs are being used to datamine tweets
12:46PM EDT - 500M tweets a day
12:46PM EDT - Talking about Twitter's GPU usage
12:45PM EDT - Talking about all of the different companies exhibiting here at GTC
12:40PM EDT - More details on Piz Daint will come at the end of the presentation when the PR announcement hits the wire
12:40PM EDT - Piz Daint supercomputer (tallest mountain in Switzerland), going to be used for weather prediction/simulation
12:39PM EDT - Swiss Supercomputer Center announced that they would also use NV in building Europe's Fastest GPU Supercomputer
12:39PM EDT - 40M CUDA processors came together to solve a singular problem in the Titan supercomputer
12:39PM EDT - Not only the highest theoretical perf supercomputer, it also recently ran the world's largest solid mechanics simulation - sustained 10 PFLOPS
12:38PM EDT - Kepler Top 500 computing performance already exceeds 2012 Fermi performance, and K20 has only been shipping for 4 months
12:38PM EDT - Talking about Oak Ridge Titan Supercomputer
12:37PM EDT - "We are close to the tipping point"
12:37PM EDT - Note that it's not clear whether NV means calendar year 2013 or fiscal year. The latter seems most likely
12:36PM EDT - This year, 430M CUDA capable GPUs, 1.6M CUDA downloads, 50 supercomputers, 640 university courses, 37K academic papers
12:36PM EDT - 2008 - 60 universities were teaching using CUDA
12:35PM EDT - In 2008, we had 100M GPUs that were CUDA compatible, 150K CUDA downloads, 1 supercomputer that was powered by Tesla
12:35PM EDT - "the GPU has a day job, it's called computer graphics"
12:33PM EDT - Moving on to GPU computing
12:32PM EDT - This looks really good
12:32PM EDT - Asking digital Ira questions and having him answer them
12:31PM EDT - This has to be somewhere around 100W just of face rendering
12:31PM EDT - 2 TFLOPS spent on rendering a face, that's just awesome
12:30PM EDT - Pores look realistic
12:30PM EDT - I have to admit, digital Ira's expressions are pretty convincing
12:30PM EDT - 2TFLOPS, half of the perf offered by Titan, to render digital Ira
12:30PM EDT - 8K instructions, 5 FLOPS per instruction, 40K OPS per pixel
12:29PM EDT - 8000 instruction long program to articulate the geometry and all of the pixel processing necessary for each pixel
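Those figures hang together: 8,000 instructions at roughly 5 FLOPs each gives the ~40K operations per pixel, and a 2 TFLOPS budget therefore covers on the order of 50M shaded pixels per second - a couple of million pixels of face per frame if we assume 30fps (the frame rate is our assumption, not a stated figure):

    #include <stdio.h>

    int main(void) {
        /* Keynote figures: ~8,000 instructions x ~5 FLOPs each per pixel,
           ~2 TFLOPS spent on the face. The 30fps frame rate is assumed. */
        const double ops_per_pixel = 8000.0 * 5.0;               /* 40,000 */
        const double pixels_per_s  = 2e12 / ops_per_pixel;       /* ~50M/s */

        printf("%.0f ops/pixel, ~%.0fM pixels/sec, ~%.1fM pixels/frame at 30fps\n",
               ops_per_pixel, pixels_per_s / 1e6, pixels_per_s / 30.0 / 1e6);
        return 0;
    }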
12:29PM EDT - Titan's Dawn
12:29PM EDT - Meet digital Ira
12:29PM EDT - Now showing Face Works
12:29PM EDT - We compress all of it into a new way of rendering facial expression
12:29PM EDT - 32GB is too much to work with in real time
12:28PM EDT - "3D meshes that we articulate using our GPU"
12:28PM EDT - Takes 32GB of expression info, compresses it even further into about 400MB
12:28PM EDT - NV created a tech called Face Works
12:28PM EDT - 32GB of expression data, allows you to programmatically display any human expression
12:28PM EDT - Light Stage of 156 cameras used to capture geometry as well as expressions
12:27PM EDT - Take video of 30 different human expressions, extract from it the smallest library of mosaics that represent how you move
12:25PM EDT - Dawn is flapping her wings, smiling, being a little awkward
12:24PM EDT - "if we could have Dawn do a performance please"
12:24PM EDT - Talking about sub-surface light scatter and how it impacts the realism of Dawn
12:23PM EDT - It took us nearly 20 years to create Kepler Dawn
12:23PM EDT - Kepler Dawn
12:23PM EDT - Showing Dawn
12:23PM EDT - "this is an endeavor worth while"
12:23PM EDT - We've been working on rendering faces for some time, ever since GeForce 256
12:21PM EDT - As the realism of robots improves and they become more human-like in the way they look and move, we become more familiar with them; then at some point, once it gets sufficiently real, it falls off a cliff and gets creepy
12:21PM EDT - Talking about the uncanny valley
12:21PM EDT - We see water simulation a lot both in graphics and compute. It's computationally intensive and maps well to GPUs
12:20PM EDT - "simulating a face is harder"
12:20PM EDT - "simulating the ocean is hard"
12:20PM EDT - Simulating close to full hurricane conditions in this ocean simulator running on GeForce GTX Titan
12:19PM EDT - wind impacts waves, smoke from the ship, spray around the ship
12:19PM EDT - 20 sensors around the hull of the ship, talking about the interaction of the ocean with the ship in the simulation
12:17PM EDT - Running on GeForce GTX Titan
12:17PM EDT - Wave Works
12:17PM EDT - Speed of the wind impacts simulation reaction of the ocean
12:17PM EDT - Showing real-time Beaufort-Scale ocean simulation
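FFT-based ocean rendering of this kind typically starts from a statistical wave spectrum whose shape is driven by wind speed, which is why turning up the Beaufort number changes the whole sea state. Below is a minimal sketch of the classic Phillips spectrum used in Tessendorf-style ocean simulation - the textbook technique, not necessarily WaveWorks' exact formulation:

    #include <math.h>
    #include <stdio.h>

    /* Classic Phillips spectrum: energy of the wave with wavevector (kx, ky)
       under wind of speed V (m/s) blowing along the unit vector (wx, wy).
       L = V^2/g is the largest wave the wind can sustain, so stronger wind
       pushes energy into bigger, longer waves. */
    static double phillips(double kx, double ky, double V, double wx, double wy, double A) {
        const double g = 9.81;
        double k = sqrt(kx * kx + ky * ky);
        if (k < 1e-6) return 0.0;
        double L = V * V / g;
        double align = (kx * wx + ky * wy) / k;      /* alignment with the wind */
        return A * exp(-1.0 / (k * L * k * L)) / (k * k * k * k) * align * align;
    }

    int main(void) {
        /* Same wave component, three wind speeds: energy climbs steeply with wind */
        for (double V = 5.0; V <= 25.0; V += 10.0)
            printf("V = %4.1f m/s -> P(k) = %g\n", V, phillips(0.1, 0.0, V, 1.0, 0.0, 1.0));
        return 0;
    }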
12:16PM EDT - GK110 in general seems to be supply constrained right now. NVIDIA has previously told us they're selling every Tesla K20 and Titan card they can make
12:14PM EDT - Our review here: http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled
12:14PM EDT - 2688 CUDA cores, 4.5 TFLOPS, 7.1B transistors, the largest and most complex semiconductor device ever made
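The 4.5 TFLOPS number falls straight out of core count times two FLOPs per clock (one fused multiply-add) times clock speed; the 837MHz base clock below is Titan's published spec rather than something quoted on stage:

    #include <stdio.h>

    int main(void) {
        /* Single-precision peak = cores * 2 FLOPs/clock (FMA) * clock speed.
           GTX Titan: 2,688 CUDA cores at an 837MHz base clock (published spec). */
        printf("Peak: %.1f TFLOPS\n", 2688.0 * 2.0 * 837e6 / 1e12);   /* ~4.5 */
        return 0;
    }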
12:14PM EDT - Talking about Titan
12:14PM EDT - Let's get started
12:14PM EDT - 5. A new product announcement (!!!)
12:13PM EDT - 4. An update on remote graphics
12:13PM EDT - "A glimpse into the next click of NVIDIA's technology roadmap"
12:13PM EDT - 3. Roadmap (ooh!)
12:13PM EDT - 2. Update on GPU Computing
12:13PM EDT - 1. Breakthroughs in computer graphics that we've made in the last year
12:12PM EDT - Talking about 5 things today
12:11PM EDT - "and the GPU is the engine of this medium"
12:11PM EDT - "The beauty and power of the interactivity of this medium, allows us to connect with ideas in a way that no other medium can"
12:11PM EDT - "Over the last 20 years this medium has transformed the PC from a computer for information and productivity, to one of creativity, expression and discovery"
12:11PM EDT - "Visual Computing is a Powerful and Unique Medium"
12:11PM EDT - Jen-Hsun Huang is taking the stage
12:10PM EDT - NVIDIA *loves* GPUs
12:10PM EDT - Watching a video illustrating all of the things GPUs enable
12:08PM EDT - Cue the hip music, we're a-go
12:07PM EDT - Anand, Ryan, and I are seated and waiting for the keynote to get underway. WiFi isn't quite holding up, but cellular is working fine at the moment.
22 Comments
PrayForDeath - Tuesday, March 19, 2013 - link
Had no idea this was a thing. What kind of news are we expecting?
mayankleoboy1 - Tuesday, March 19, 2013 - link
Hey guys, did you meet with the reporters from Tomshardware ?
RetroEvolute - Tuesday, March 19, 2013 - link
Just a thought, if there's some way that you could have a chat sidebar during these live blogs, that'd be awesome. I'd love to be able to see everyone else's reactions and discuss as it's all happening.
mayankleoboy1 - Tuesday, March 19, 2013 - link
Talking about Volta, when we know next to nothing about Maxwell is stupid. And we dont even know if Kepler will be followed by a Kepler refresh, or a series derived from GK110.
Its the usual vagueness and pomp that Nvidia is good at.
Kevin G - Tuesday, March 19, 2013 - link
Maxwell is Kepler with significant enhancements to how memory is addressed. Basically nVidia's version of AMD's HSA. I'd expect Maxwell to come in two variants: x86 focused and ARM focused for mobile.
Volta is creating buzz on the merit that stacked DRAM is going to be a massive jump in bandwidth and a likely a small reduction in latency.
Pretty much everything else about Maxwell and Volta will be the traditional increases in parallelism and clock speed new process nodes have permitted.
mayankleoboy1 - Tuesday, March 19, 2013 - link
Talking about T5 and T6, when we dont even know if T4 will be present in any Smartphone or not. Or when will it finally start appearing in actual devices.
ziedaniel1 - Tuesday, March 19, 2013 - link
What? Piz Daint isn't even close to the tallest mountain in Switzerland, in either elevation or prominence. Compare http://en.wikipedia.org/wiki/Piz_Daint and http://en.wikipedia.org/wiki/Monte_Rosa.
varad - Tuesday, March 19, 2013 - link
You might want to get rid of the . at the end of your 2nd link
Mez Toofan - Tuesday, March 19, 2013 - link
The bottom Line is the money coming into the company. Show me the money ! The NVDA stock is stuck in the Tegra 3 mode and cannot move up & it is holding the stock price at $12.50 for 2012-2013. Nvdia Stock has become like $0.05 to $.20 daily trading vehicle. All these technologies are great, BUT it is the CEO who needs to show the market the tractions of Tegra Parker, new businesses, B2B and passing Qualcom & Intel with the Speed of GPU on stroid.
tipoo - Tuesday, March 19, 2013 - link
All I could think about the whole time was that during the Volta slide it looked like he had a jetpack on. Therefore: http://techreport.com/r.x/2013_3_19_Nvidias_Volta_...