Nvidia is using deceptive practices and abusing its market power to quash the competition, according to Cerebras Systems CEO Andrew Feldman, after the firm unexpectedly announced its latest GPU product roadmap in October 2023.
Nvidia outlined new graphics cards set for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in such high demand, with organizations across the industry snapping them up for generative AI workloads.
But Feldman, speaking to HPCWire, labelled this news a “predatory pre-announcement”, highlighting that the firm has no obligation to follow through on releasing any of the components it’s teased. By doing this, he speculated, it has only confused the market, particularly in light of the fact Nvidia was, say, a year late with the H100 GPU. And he doubts Nvidia can follow through on this strategy, nor might it want to.
Nvidia is just ‘throwing dirt up in the air’
Nvidia teased annual leaps on a single architecture in its announcement, with the Hopper Next following the Hopper GPU in 2024, followed by the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, set for release in 2025.
“Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs do not change at a one-year pace,” Feldman countered to HPCWire.
“In many ways, it has been a terrible block of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said Anthropic was going to run on them. We announced a massive deal that would produce enough compute so it would be clear that you could build… large clusters with us.
“[Nvidia’s] response, not surprising to me, in the strategy realm, is not a better product. It’s… throw dirt up in the air and wave your hands a lot. And you know, Nvidia was a year late with the H100.”
Feldman’s firm has designed the largest AI chip in the world, the Cerebras Wafer-Scale Engine 2 – which measures 46,226 square millimeters and contains 2.6 trillion transistors across 850,000 cores.
He told the New Yorker that monolithic chips are better than smaller ones because cores communicate faster when they’re on the same chip rather than being scattered across a server room.
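To illustrate the rough scale of that gap, here is a minimal back-of-envelope sketch. The hop latencies and hop count are purely illustrative assumptions (nanosecond-scale on-die hops versus microsecond-scale cross-server hops), not Cerebras or Nvidia measurements.

```python
# Illustrative order-of-magnitude comparison of core-to-core communication cost
# when cores sit on the same die versus being spread across networked servers.
# All figures below are assumptions for illustration only.

ON_CHIP_HOP_NS = 5        # assumed: one on-die core-to-core hop, a few nanoseconds
NETWORK_HOP_NS = 2_000    # assumed: one cross-server hop over a fast fabric, ~2 microseconds


def total_latency_ns(hops: int, hop_ns: float) -> float:
    """Total communication latency for a chain of sequential hops."""
    return hops * hop_ns


if __name__ == "__main__":
    hops = 10  # arbitrary example: ten sequential core-to-core exchanges
    on_chip = total_latency_ns(hops, ON_CHIP_HOP_NS)
    networked = total_latency_ns(hops, NETWORK_HOP_NS)
    print(f"on-chip:   {on_chip:,.0f} ns")
    print(f"networked: {networked:,.0f} ns")
    print(f"ratio:     ~{networked / on_chip:.0f}x slower across the network")
```

Under these assumed figures, keeping the exchange on a single wafer-scale die is orders of magnitude faster than sending the same traffic between servers, which is the gist of Feldman’s argument for monolithic chips.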