Cerebras
TSMC has been offering its System-on-Wafer integration technology, InFO-SoW, since 2020. For now, only Cerebras and Tesla have developed wafer-scale processor designs using it: while such processors offer fantastic performance and power efficiency, they are extremely complex to develop and produce. But TSMC believes not only that wafer-scale designs will ramp up in usage, but that megatrends like AI and HPC will call for even more complex solutions: vertically stacked system-on-wafer designs. Tesla Dojo's wafer-scale processors, the first solutions based on TSMC's InFO-SoW technology to reach mass production, have a number of benefits over typical system-in-packages (SiPs), including low-latency, high-bandwidth core-to-core communications, very high performance and bandwidth density, relatively low power delivery network impedance, high performance efficiency, and redundancy. But...
Cerebras to Enable 'Condor Galaxy' Network of AI Supercomputers: 36 ExaFLOPS for AI
Cerebras Systems and G42, a tech holding group, have unveiled their Condor Galaxy project, a network of nine interlinked supercomputers for AI model training with aggregated performance of 36...
by Anton Shilov on 7/21/2023
Cerebras Completes Series F Funding, Another $250M for $4B Valuation
Every once in a while, a startup comes along with something out of left field. In the AI hardware generation, Cerebras holds that title, with their Wafer Scale Engine...
by Dr. Ian Cutress on 11/10/2021
Cerebras In The Cloud: Get Your Wafer Scale in an Instance
To date, most of the new AI hardware entering the market has been a ‘purchase necessary’ involvement. For any business looking to go down the route of using specialized...
by Dr. Ian Cutress on 9/16/2021
Hot Chips 2021 Live Blog: Machine Learning (Graphcore, Cerebras, SambaNova, Anton)
Welcome to Hot Chips! This is the annual conference all about the latest, greatest, and upcoming big silicon that gets us all excited. Stay tuned during Monday and Tuesday...
by Dr. Ian Cutress on 8/24/2021
Hot Chips 33 (2021) Schedule Announced: Alder Lake, IBM Z, Sapphire Rapids, Ponte Vecchio
Once a year the promise of super hot potatoes graces the semiconductor world. Hot Chips in 2021 is set to be held virtually for the second successive year, and...
by Dr. Ian Cutress on 5/18/2021
Cerebras Unveils Wafer Scale Engine Two (WSE2): 2.6 Trillion Transistors, 100% Yield
The last few years have seen a glut of processors enter the market with the sole purpose of accelerating artificial intelligence and machine learning workloads. Due to the different...
by Dr. Ian Cutress on 4/20/2021
Cerebras Wafer Scale Engine News: DoE Supercomputer Gets 400,000 AI Cores
One of the more interesting AI silicon projects over the last couple of years has been the Cerebras Wafer Scale Engine, most notably for the fact that a single...
by Dr. Ian Cutress on 8/21/2020
Hot Chips 2020 Live Blog: Cerebras WSE Programming (3:00pm PT)
Hot Chips has gone virtual this year! Lots of talks on lots of products, including Tiger Lake, Xe, POWER10, Xbox Series X, TPUv3, and a special Raja Koduri Keynote...
by Dr. Ian Cutress on 8/18/2020
Hot Chips 32 (2020) Schedule Announced: Tiger Lake, Xe, POWER10, Xbox Series X, TPUv3, Raja Koduri Keynote
I’ve said it a million times and I’ll say it again – the best industry conference I go to every year is Hot Chips. The event has grown over...
by Dr. Ian Cutress on 7/8/2020
Cerebras’ Wafer Scale Engine Scores a Sale: $5m Buys Two for the Pittsburgh Supercomputing Center
One of the highlights of Hot Chips 2019 was the presentation of the Cerebras Wafer Scale Engine - an AI processor chip that was as big as a wafer...
by Dr. Ian Cutress on 6/9/2020
Hot Chips 31 Live Blogs: Cerebras' 1.2 Trillion Transistor Deep Learning Processor
Some of the big news of today is Cerebras announcing its wafer-scale 1.2 trillion transistor solution for deep learning. The talk today goes into detail about the technology.
by Dr. Ian Cutress on 8/19/2019