HBM2
Demand for high-bandwidth memory is set to explode in the coming quarters and years due to the broader adoption of artificial intelligence in general and generative AI in particular. SK Hynix will likely be the primary beneficiary of the HBM rally, as it leads shipments of this type of memory with a 50% share in 2022, according to TrendForce. Analysts from TrendForce believe that shipments of AI servers equipped with compute GPUs like NVIDIA's A100 or H100 will increase by roughly 9% year-over-year in 2022, though they do not specify whether they mean unit shipments or revenue. They now estimate that the rise of generative AI will catalyze demand for AI servers, and that this market will grow by 15.4% in 2023...
NVIDIA Announces PCIe A100 Accelerator: 250 Watt Ampere In A Standard Form Factor
With the launch of their Ampere architecture and new A100 accelerator barely a month behind them, NVIDIA this morning is announcing the PCIe version of their accelerator as part...
30 comments | by Ryan Smith on 6/22/2020

NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator
While NVIDIA’s usual presentation efforts for the year were dashed by the current coronavirus outbreak, the company’s march towards developing and releasing newer products has continued unabated. To that...
128 comments | by Ryan Smith on 5/14/2020

Micron to Launch HBM2 DRAM This Year: Finally
Bundled in their latest earnings call, Micron has revealed that later this year the company will finally introduce its first HBM DRAM for bandwidth-hungry applications. The move will enable...
14 comments | by Anton Shilov on 3/27/2020

Rambus Develops HBM2E Controller & PHY: 3.2 Gbps, 1024-Bit Bus
The latest enhancements to the HBM2 standard will clearly be appreciated by developers of memory bandwidth-hungry ASICs, however in order to add support of HBM2E to their designs, they...
42 comments | by Anton Shilov on 3/6/2020

SK Hynix Licenses DBI Ultra Interconnect for Next-Gen 3DS and HBM DRAM
SK Hynix has inked a new broad patent and technology licensing agreement with Xperi Corp. Among other things, the company licensed the DBI Ultra 2.5D/3D interconnect technology developed by...
9 comments | by Anton Shilov on 2/11/2020

JEDEC Updates HBM2 Memory Standard To 3.2 Gbps; Samsung's Flashbolt Memory Nears Production
After a series of piecemeal announcements from different hardware vendors over the past year, the future of High Bandwidth Memory 2 (HBM2) is finally coming into focus. Continuing the...
24 comments | by Ryan Smith on 2/3/2020

GlobalFoundries and SiFive to Design HBM2E Implementation on 12LP/12LP+
GlobalFoundries and SiFive announced on Tuesday that they will co-develop an implementation of HBM2E memory for GloFo's 12LP and 12LP+ FinFET process technologies. The IP package will enable...
13 comments | by Anton Shilov on 11/5/2019

Samsung Develops 12-Layer 3D TSV DRAM: Up to 24 GB HBM2
Samsung on Monday said that it had developed the industry’s first 12-layer 3D packaging for DRAM products. The technology uses through silicon vias (TSVs) to create high-capacity HBM memory...
11 comments | by Anton Shilov on 10/7/2019

SK Hynix Announces 3.6 Gbps HBM2E Memory For 2020: 1.8 TB/sec For Next-Gen Accelerators
SK Hynix this morning has thrown their hat into the ring as the second company to announce memory based on the HBM2E standard. While the company isn’t using any...
23 comments | by Ryan Smith on 8/12/2019

Samsung HBM2E ‘Flashbolt’ Memory for GPUs: 16 GB Per Stack, 3.2 Gbps
Samsung has introduced the industry’s first memory that corresponds to the HBM2E specification. The company’s new Flashbolt memory stacks increase performance by 33% and offer double per-die as well...
25 comments | by Anton Shilov on 3/20/2019

JEDEC Updates HBM Spec to Boost Capacity & Performance: 24 GB, 307 GB/s Per Stack
JEDEC this week published an updated version of its JESD235 specification, which describes HBM and HBM2 DRAM. The new version of the standard allows memory manufacturers to increase capacities...
15 comments | by Anton Shilov on 12/19/2018

16GB NVIDIA Tesla V100 Gets Reprieve; Remains in Production
Back in March at their annual GPU Technology Conference, NVIDIA announced the long-anticipated 32GB version of their flagship Tesla V100 accelerator. By using newer 8-Hi HBM2 memory stacks, NVIDIA...
21 comments | by Ryan Smith on 5/24/2018

NVIDIA’s DGX-2: Sixteen Tesla V100s, 30 TB of NVMe, only $400K
Ever wondered why the consumer GPU market is not getting much love from NVIDIA’s Volta architecture yet? This is a minefield of a question, nuanced by many different viewpoints...
28 comments | by Ian Cutress on 3/27/2018

NVIDIA Bumps All Tesla V100 Models to 32GB, Effective Immediately
Update 05/24: NVIDIA has since reached out to us, informing us that their previous statement about 32GB cards replacing 16GB cards was in error, and that the 16GB V100...
7 comments | by Ryan Smith on 3/27/2018

AMD to Ramp up GPU Production, But RAM a Limiting Factor
One of the more tricky issues revolving around the GPU shortages of the past several months has been the matter of how to address the problem on the GPU...
34 comments | by Ryan Smith on 1/31/2018

Samsung Starts Production of HBM2 “Aquabolt” Memory: 8 GB, 2.4 Gbps
Samsung this week announced that it had started mass production of its second-generation HBM2 memory code-named “Aquabolt”. The new memory devices have 8 GB capacity and operate at 2.4...
17 comments | by Anton Shilov on 1/11/2018

SK Hynix: Customers Willing to Pay 2.5 Times More for HBM2 Memory
SK Hynix was the first DRAM manufacturer to start producing HBM Gen 1 memory in high volume back in 2015. However, the company is somewhat behind its rival Samsung...
23 comments | by Anton Shilov on 8/4/2017

Samsung Increases Production Volumes of 8 GB HBM2 Chips Due to Growing Demand
Samsung on Tuesday announced that it is increasing production volumes of its 8 GB, 8-Hi HBM2 DRAM stacks due to growing demand. In the coming months the company’s 8...
34 comments | by Anton Shilov on 7/19/2017

SK Hynix Advances Graphics DRAM: GDDR6 Added to Catalogue, GDDR5 Gets Faster
SK Hynix has added GDDR6 memory chips to its product catalogue, revealing their general specifications and a launch timeframe sometime in Q4 2017. As expected, the new GDDR6 ICs will...
17 comments | by Anton Shilov on 5/20/2017