
Can these AI chip startups succeed in challenging Nvidia?

2026-04-06 05:13:19 · #1

There's a line in the science fiction novel *Dune*: "He who controls the spice controls the universe." In today's tech world, the spice is the GPU: any company that wants to make a mark in AI needs to buy Nvidia GPUs.

Analysts now divide the AI world into two camps: the "GPU-rich" and the "GPU-poor," the former possessing vast stockpiles of GPUs and the latter relatively few. To demonstrate their capabilities, tech company executives boast about the sheer number of GPUs they have stockpiled.

Fueled by the AI wave, Nvidia's market capitalization has surged past $2 trillion. On May 22, Nvidia released its first-quarter financial report, showing revenue of $26.04 billion, up 262% year on year, and net profit of $14.88 billion, up 628% year on year.

So, is it possible to build AI-specific chips that are not GPUs at all? Of course: that is exactly what many companies, large and small, are attempting as they set out to challenge Nvidia.

Even the GPUs Nvidia dominates have flaws

Essentially, a CPU processes tasks one at a time, while a GPU has thousands of processing engines (cores) that handle thousands of simple tasks simultaneously. When a GPU runs a large AI model, it is effectively running thousands of copies of the same computation concurrently. Figuring out how to rewrite AI code so that it runs this way on a GPU has been the core problem, and it is precisely this research that produced the rapid advances in AI.
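To make the serial-versus-parallel distinction concrete, here is a minimal sketch. It does not run on a GPU; NumPy's vectorized operations merely stand in for the GPU model of applying one simple operation to every element "at once," versus a CPU-style loop stepping through elements one at a time.

```python
import numpy as np

def serial_scale(x):
    # CPU-style: one element at a time, like a single core stepping
    # through the data in order.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] * 2.0
    return out

def parallel_scale(x):
    # GPU-style (conceptually): the same simple task applied to every
    # element simultaneously, expressed as one vectorized operation.
    return x * 2.0

x = np.random.rand(100_000).astype(np.float32)
assert np.allclose(serial_scale(x), parallel_scale(x))
```

Both functions compute the same result; the difference is that the second form exposes the parallelism, which is exactly what rewriting AI code for GPUs amounts to.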

However, GPUs also have limitations: they are not fast enough when data has to move back and forth. Modern large-scale AI models often require many GPUs and memory chips, all interconnected, and the faster data moves between them, the better the performance. When researchers train large AI models, many GPU cores sit idle, spending almost half their time waiting for data.
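A back-of-envelope model shows why cores end up waiting. The numbers below are assumptions chosen for illustration, not any vendor's specifications: a chip with far more compute throughput than memory bandwidth can only use a small fraction of its cores on work that moves a lot of data per arithmetic operation.

```python
# Assumed, illustrative hardware figures (not real product specs).
PEAK_FLOPS = 100e12   # 100 TFLOP/s of compute
MEM_BW = 2e12         # 2 TB/s of memory bandwidth

def utilization(flops_needed, bytes_moved):
    """Fraction of peak compute achievable when the slower of
    compute time and data-movement time sets the pace."""
    t_compute = flops_needed / PEAK_FLOPS
    t_memory = bytes_moved / MEM_BW
    return t_compute / max(t_compute, t_memory)

# Matrix-vector multiply with fp32 weights: 2*n*n FLOPs,
# roughly 4*n*n bytes of weights read from memory.
n = 4096
print(f"utilization: {utilization(2 * n * n, 4 * n * n):.1%}")
```

Under these assumed numbers the cores are busy only 1% of the time; the other 99% is spent waiting for data, which is the bottleneck the startups below are attacking.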

Andrew Feldman, founder of the California-based startup Cerebras, explained that waiting for data is like shoppers lining up at a grocery store during a shopping festival. He said, "Everyone is in line, the parking lot is blocked, the aisles are blocked, the checkout counters are blocked, and GPUs are similar."

What did Cerebras do? It combined 900,000 cores and massive amounts of memory into a single, cohesive unit. This reduces the complexity of connecting multiple chips and accelerates data movement. With the cores interconnected on one piece of silicon, Cerebras says the system runs hundreds of times faster than a cluster of discrete GPUs and, thanks to the tighter connections, consumes half the power of comparable Nvidia products.
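The intuition behind the wafer-scale bet can be captured in a two-line model. The bandwidth figures below are assumptions for illustration only: on-chip wires are simply much wider and faster than chip-to-chip links, so keeping data on one die avoids the slow hop.

```python
# Assumed, illustrative bandwidths (not measured figures).
ON_CHIP_BW = 100e12   # bytes/s across an on-die fabric
OFF_CHIP_BW = 1e12    # bytes/s over a chip-to-chip link

def transfer_time(nbytes, bandwidth):
    return nbytes / bandwidth

activations = 1e9  # 1 GB of intermediate results to move
speedup = transfer_time(activations, OFF_CHIP_BW) / \
          transfer_time(activations, ON_CHIP_BW)
print(f"on-chip transfer is {speedup:.0f}x faster")
```

Under these assumptions the same gigabyte moves 100 times faster on-chip, which is the advantage a single-wafer design tries to capture at scale.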

Groq, based in Mountain View, California, has taken a slightly different path. Its AI chip, called a Language Processing Unit (LPU), is designed to train and run large language models (LLMs). The chip contains on-chip memory and also acts as a router, transferring data between interconnected LPUs. Combined with intelligent routing software, this minimizes latency and reduces the time spent waiting for data. Groq claims this design lets its LPUs run LLMs ten times faster than existing systems.
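One way to read "intelligent routing software" is deterministic, compile-time scheduling: if the software decides in advance exactly which cycle each piece of data arrives at each unit, no unit ever stalls on a dynamic queue. The sketch below is a toy model of that idea under assumed parameters; the names and the fixed hop latency are illustrative, not Groq's actual design.

```python
def compile_schedule(route, hop_latency=1):
    """Assign a fixed arrival cycle to each unit along a data route,
    so arrival times are known before execution begins (no runtime
    arbitration, hence no unpredictable waiting)."""
    return {unit: cycle * hop_latency for cycle, unit in enumerate(route)}

# A token's activations flow through four interconnected units in order.
route = ["lpu0", "lpu1", "lpu2", "lpu3"]
schedule = compile_schedule(route, hop_latency=2)
print(schedule)  # every unit knows exactly when its operand arrives
```

The contrast is with dynamically routed systems, where arrival times vary and units must buffer and wait; a static schedule trades flexibility for predictability.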

MatX, another California-based company, took the opposite approach. Its founders argue that GPUs contain circuitry that is useful for graphics processing but useless for large language models (LLMs). MatX stripped out these unnecessary components, and the resulting chip performs better on the tasks that remain.

Many other companies are also working quietly, such as Israel's Hailo, which raised $120 million in funding in April this year; Toronto's Taalas and the US's Tenstorrent, which develop AI chips on the open-source RISC-V architecture; and the UK's Graphcore, which entered the market early but faltered and is now considering selling itself to SoftBank.

Tech giants are also developing their own AI chips: Google, Amazon, Meta, and Microsoft have all built custom silicon for cloud-based AI. AMD and Intel, which compete with Nvidia directly, already sell their own GPU-based AI chips.

Challenging Nvidia's monopoly is not easy

Newcomers deserve credit for challenging the established leader, but some of their claims may be overreaching.

Stanford University computer scientist Christos Kozyrakis notes that designing a chip takes two to three years, a long time given how rapidly large AI models are evolving.

For startups to succeed, the best approach is to seize the opportunity to design chips for future models in advance, chips that outperform some of Nvidia's GPUs, thus finding a breakthrough. However, this path is also very risky; companies designing dedicated chips for future models may ultimately find they've made the wrong bet.

Reiner Pope, co-founder of MatX, believes they see the future: the latest state-space models are becoming increasingly popular, and MatX is perfectly suited to them. Andrew Feldman, founder of Cerebras, believes that modern AI is essentially still "sparse linear algebra," and their chip can adapt quickly.

Another obstacle to challenging Nvidia is the software layer. CUDA has effectively become an industry standard, and although it is cumbersome to use, its standard status is difficult to shake.

Christos Kozyrakis believes that software is king, and Nvidia has a clear advantage there, given its years of experience refining its software ecosystem. AI chip startups that want to succeed must convince programmers to optimize their programs for new chips. That means providing software tools and compatibility with mainstream machine learning frameworks. The challenge lies in optimizing that software to fit the new architecture, a task that is both difficult and complex.

The customer base for AI chips is fairly concentrated: one group consists of large-model developers such as OpenAI, Anthropic, and Mistral; the other consists of tech giants such as Amazon, Microsoft, Meta, and Google. These companies are interested in acquiring promising AI chip startups to obtain their technology and enhance their own competitiveness. A chip startup might therefore forgo competing with Nvidia and instead try to sell itself to one of these two kinds of buyers.

MatX has ambitious goals; it wants to sell its chips to companies like OpenAI and Google, and of course, selling itself is also an option. MatX stated, "We welcome various exit strategies, but we still believe that as an independent company, our business can continue." Cerebras, on the other hand, says it is preparing to go public.

Overall, although many startups have set out to challenge Nvidia, none has posed a serious threat so far; whether one ever will, only time will tell.

