From Nvidia to TSMC, Tech Giants are Crossing Boundaries

By: 金融界 | 2026/04/16 11:06

If you follow technology news, you have probably been struck by how much the semiconductor industry has changed over the past two years.

In the past, everyone played their own part—who designed chips, who manufactured them, who sold them, who bought them—each had its place: Nvidia made gaming graphics cards, Arm collected humble but steady royalties, and TSMC simply turned others' blueprints into wafers, never picking customers or asking what the chips were for.

However, the AI boom has upended this division of labor. Training a large model requires thousands of GPUs, and edge AI demands that phones and PCs run local models with billions of parameters. Computing power has become the scarcest and most expensive resource of the AI era; whoever controls it holds the pricing power.
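
To make the scale concrete, here is a rough back-of-envelope sketch (illustrative parameter counts and precisions chosen for the example, not figures from the article): weight-only memory for a local model at different numeric precisions, which is the arithmetic behind why billions of parameters strain a phone and why edge AI leans so heavily on quantization.

```python
# Rough, illustrative arithmetic (example parameter counts, not vendor specs):
# weight-only memory for a local model at different numeric precisions.

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (ignores activations and KV cache)."""
    return num_params * bytes_per_param / 1e9

for name, params in [("3B model", 3e9), ("7B model", 7e9), ("70B model", 70e9)]:
    fp16 = model_memory_gb(params, 2.0)   # 16-bit weights
    int4 = model_memory_gb(params, 0.5)   # 4-bit quantized weights
    print(f"{name}: ~{fp16:.1f} GB at FP16, ~{int4:.1f} GB at 4-bit")
```

A 7-billion-parameter model, for instance, works out to roughly 14 GB of weights at FP16—already more memory than most phones have—but only about 3.5 GB once quantized to 4 bits.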

Once-clear boundaries are blurring—GPU vendors have become "AI arms dealers," IP sellers are making their own chips, and chip giants are diving headfirst into data centers... The walls between upstream and downstream are being hammered down by AI, one blow at a time.

Let's look at how the most representative companies have changed, before and after the AI wave.

Nvidia:

From "Gaming GPU Giant"

to "Chief Architect of AI Infrastructure"

Nvidia is the earliest and biggest winner in the AI wave transformation.

In the past, Nvidia was synonymous with gaming graphics cards. Its GPUs dominated most of the PC gaming and graphics rendering market, and every PC enthusiast knew the catchphrase: “For GPUs, I only choose an N card.”

The seed of today's Nvidia was a neural network called AlexNet. In 2012, a research team at the University of Toronto trained it on two Nvidia GPUs, and the model won an image recognition competition by a landslide—its error rate was more than ten percentage points lower than the runner-up's.

The industry went wild. People realized that a GPU's thousands of cores running in parallel were a perfect match for the massive parallelism neural networks demand: a computation that took a month on CPUs could be finished in an afternoon on GPUs. AlexNet's pioneering use of multiple GPUs for training laid the foundation for today's Nvidia.
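
To give a concrete feel for that match, here is a minimal sketch (assuming PyTorch is installed; the GPU path additionally needs a CUDA device—this is an illustration, not code from the article): the same dense matrix multiplication, the core operation of neural networks, runs on CPU or GPU simply by changing the device.

```python
# Minimal sketch (assumes PyTorch; GPU path needs a CUDA device): the same dense
# matmul runs on CPU or GPU just by changing the device, which is why thousands
# of GPU cores map so naturally onto neural network math.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()      # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b                         # one large, highly parallel matmul
    if device == "cuda":
        torch.cuda.synchronize()      # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```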

But Nvidia's journey hasn't been smooth. When the DGX-1—billed as the world's first AI supercomputer—was released in 2016, the market response was icy. Jensen Huang recalled on "The Joe Rogan Experience" that no one wanted it at the time: "I received zero purchase orders. Not one. Except for Elon."

At that time, Elon Musk was running a non-profit AI organization that was desperate for such a computing platform. So Jensen Huang personally drove the DGX-1 to San Francisco and handed it to Musk. That AI organization was, in fact, OpenAI’s early team.

The rest is history—ChatGPT exploded globally, the era of large models arrived, and whether it’s OpenAI, Google, or Meta, they all train their models with Nvidia's GPUs.

Today, Nvidia is far more than a chip company; it has become the "arms dealer" of the AI world. Its product line has expanded from GPUs to CPUs (Grace), DPUs, and interconnects (BlueField, NVLink), building a full-stack AI computing solution. Its CUDA software ecosystem has become the "common language" of AI development—for anyone hoping to bypass it, the barrier is about as high as rebuilding Windows from scratch.
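
To illustrate what "speaking CUDA" looks like in practice, here is a minimal sketch (assuming Numba with CUDA support and an Nvidia GPU; this is illustrative code, not an excerpt from any Nvidia library) of a vector-add kernel written against the CUDA programming model. Code like this, multiplied across countless libraries and frameworks, is what makes the ecosystem so hard to leave.

```python
# Minimal sketch (assumes Numba with CUDA support and an Nvidia GPU): a vector-add
# kernel written against the CUDA programming model via Numba's @cuda.jit.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global index of this GPU thread
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a, d_b = cuda.to_device(a), cuda.to_device(b)   # copy inputs to GPU memory
d_out = cuda.device_array_like(a)                 # allocate output on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](d_a, d_b, d_out)

print(np.allclose(d_out.copy_to_host(), a + b))   # verify against the CPU result
```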

Nvidia's goal is clear: by 2027, cumulative orders for its AI chips should surpass one trillion dollars.

AMD:

The Comeback of the “Eternal Runner-Up”

In the GPU world, if Nvidia is first, AMD is forever the "runner-up" it gets measured against.

For thirty years, AMD has made Radeon graphics cards to compete fiercely with Nvidia, and Ryzen/EPYC processors to battle Intel. Among gamers, the lasting impression of AMD is "great value, but the drivers could be better." For a long stretch, AMD was in rough shape—around 2008 its stock fell below $2.

It wasn't until 2020 that AMD set its sights on the massive data center market, announcing the acquisition of Xilinx (completed in 2022) and bringing FPGAs and adaptive computing under its umbrella. At the time, AMD's market cap was just $90 billion (Nvidia's was $300 billion)—a high price to pay for Xilinx.

A quick tidbit: Xilinx is the world's largest FPGA vendor. In 2018, it acquired China's rising AI-chip star DeePhi Tech, nicknamed "China's Nvidia."

This was an ambitious merger—AMD combined its CPUs and GPUs with Xilinx’s FPGA, adaptive SoC, and AI engine technology to build a complete CPU+GPU+FPGA product matrix.

Subsequently, AMD launched the Instinct MI series of accelerator cards, which quietly appeared in supercomputers and AI training clusters. Industry insiders took note when the MI250X matched the A100 in some scientific workloads. The MI300 series went further, packaging CPU, GPU, and high-bandwidth memory together; the MI300X's HBM capacity hit 192GB, far beyond the 80GB of Nvidia's H100 at the time.

The perception that AMD hardware could also run large models spread quickly in 2024, and Microsoft, Meta, and Oracle began buying MI-series cards in batches to deploy in their AI cloud services.

Today, the data center business is AMD's brightest growth engine. Q4 2025 revenue reached $10.3 billion, with data center revenue accounting for over 52% of the total. Media commentary summed it up: 2025 was a key structural transformation for AMD, shifting it from a component maker to a full-stack data center and AI infrastructure architect.

CEO Lisa Su has set a target: by 2027, the Instinct series of AI accelerator cards should generate several billion dollars in annual revenue. To close the software gap, AMD acquired Nod.ai and Mipsology and brought in core members of Nvidia's CUDA team, strengthening its software ecosystem and building its own "hardware + software" full-stack capability.

Arm:

Breaking Its 35-Year Principle With Its Own Hands

In 1990, Arm was founded in a barn near Cambridge, with only 12 team members.

For a long time, the company licensed IP and compute subsystems (CSS) to partners. Over 35 years, cumulative shipments of Arm-based chips exceeded 350 billion—an average of 160 Arm chips per household worldwide. With its asset-light IP licensing model, gross margins ran as high as 97%.

The advent of AI forced Arm to reconsider its position. On September 14, 2023, SoftBank's Arm went public on Nasdaq at a $65.2 billion valuation, becoming that year’s largest IPO globally. (See English DoNews for a recap: “The World’s Largest IPO This Year Is Masayoshi Son’s Second Half.”) Arm needed to tell a new AI story on a bigger stage.

Finally, in 2026, riding the wave of agentic AI, Arm took a brand-new step beyond its traditional IP and compute subsystem business.

On March 24, Arm launched its first self-developed chip, the Arm AGI CPU—a data center processor designed specifically for agentic AI, with a TDP of 300 watts and built on TSMC's 3nm process—clearly aimed at x86 servers in the data center.

When media asked whether this would create competitive pressure, Arm CEO Rene Haas first emphasized that the market opportunity is huge—large enough to accommodate many players—and then made his core argument: demand is still far from being met.

Arm has already secured first-tier customers: Meta is a deeply involved co-development partner; OpenAI, Cerebras, Cloudflare, SK Telecom, etc., have also confirmed collaboration. Meta will deploy Arm AGI CPU together with its self-developed MTIA accelerators.

Going forward, Arm will run a parallel approach in the data center—"IP + compute subsystem + chip"—giving customers a wider range of products and solutions while strengthening its voice in the industry in the AI era.

Rene Haas gave a clear revenue target during a meeting: by 2030, Arm will have two major business units, with IP business annual revenue exceeding $10 billion, and chip business annual revenue reaching $15 billion. Overall company revenue will grow from about $5 billion to $25 billion.

Qualcomm:

From “King of Mobile” to “Data Center”

Over the past fifteen years, Qualcomm has been synonymous with the Android flagship—its Snapdragon processors power nearly every one. In fiscal 2024, revenue reached $38.96 billion, roughly two-thirds of it from mobile chips.

But the mobile market has long been saturated, with global shipments oscillating between 1.1 and 1.3 billion units over the past five years. The rapid advance of AI has forced Qualcomm to expand beyond phones.

Qualcomm is clear about its next focus—edge AI—running AI models on end-user devices rather than processing everything in the cloud.

In March 2021, Qualcomm paid $1.4 billion for chip design company Nuvia, founded by former Apple chip architects, closing the gap in its lineup of self-developed high-performance CPU cores. After the acquisition, CEO Cristiano Amon announced that Nuvia's CPU designs would be integrated into smartphones, PCs, digital cockpits, and ADAS systems.

In 2024, the Snapdragon X platform, built on Nuvia technology, was officially unveiled. Qualcomm is evolving from a mobile communications chip company into a full-scenario edge-AI architect covering PCs, automotive, XR, IoT, and more.

On October 27, 2025, Qualcomm launched two data center AI inference chips—AI200 (for commercial use in 2026) and AI250 (for commercial use in 2027)—and announced that the AI rack solution will be deployed in Saudi AI company HUMAIN’s data center from 2026. On the day of the announcement, Qualcomm’s stock surged 20%, adding nearly $28 billion in market cap in a single day.

Qualcomm is playing a differentiated game—steering clear of Nvidia's stronghold in training and focusing instead on AI inference, where energy efficiency and total cost of ownership matter most.

Qualcomm CEO Amon sees AI chips as the key extension of the company's diversification strategy. Besides inference chips, Qualcomm is developing data center CPUs and plans to launch a third AI chip in 2028. At the edge, it will keep advancing its AI PC and AI phone chips, forming end-to-end coverage from cloud to edge.

TSMC:

The “Foundation” of the AI Era

TSMC is a pure-play foundry—and by far the dominant one.

Before the global AI boom, TSMC was already the undisputed leader in semiconductor manufacturing, making the "heart" of nearly every flagship smartphone worldwide.

Previously, TSMC's largest source of revenue was smartphone-related business. From Apple's A series to Snapdragon, every leap in smartphone performance was backed by TSMC. The focus was clear—pushing process nodes from 7nm to 5nm, then conquering 3nm—all with one goal: make phones faster and more power-efficient.

However, the AI computing boom pushed TSMC to a new strategic height. AI chips—especially Nvidia’s GPUs—have the highest requirements for manufacturing and packaging, and TSMC is unrivaled on both fronts.

On one hand, advanced nodes (7nm and below) contributed 77% of TSMC's wafer revenue, with 3nm alone at 28% and 5nm at 35%. 2nm successfully entered mass production in Hsinchu and Kaohsiung in Q4 2025, with yields between 60% and 70%.

On the other hand, there's CoWoS advanced packaging—without it, high-end AI chips are just "piles of expensive wafers." More than half of CoWoS capacity is currently booked by Nvidia (TSMC CEO C. C. Wei revealed in 2025 that Nvidia is TSMC's largest customer), followed by Broadcom and AMD. CoWoS is the hot commodity of the AI era, directly determining how many chips the giants can ship.

From 2025 onwards, TSMC underwent a dramatic structural transformation in its business portfolio.

That year, revenue reached $122 billion, up 35.9% year-on-year, with a gross margin of 59.9% and a net margin of 50.8%. High-performance computing (HPC) became the largest revenue source at 58% of the total—surpassing smartphones for the first time.

In Q4 2025, HPC revenue made up 55%, up 48% year-on-year. Smartphones retreated to second place at 32%. TSMC CEO C. C. Wei said bluntly, “AI demand is stronger than we expected.”

TSMC has become the unavoidable path for turning AI chip designs into physical silicon—every major player's most advanced chips have to pass through its lines. It is no exaggeration to say TSMC is blazing a trail of its own, and the AI wave is widening and smoothing that path.

......

If we broaden our view, we’ll see similar “boundary-crossing” across the industry supply chain. Google develops its own TPU, Amazon launches Graviton processors, Meta develops MTIA accelerator chips, and even OpenAI is rumored to be working on its own chips… These internet/cloud service companies, which used to “obediently” purchase chips, are speeding up their move upstream.

The most profound change AI brings to semiconductors isn’t just multiplying a company’s market cap, but upending the familiar business landscape and reshuffling it entirely.

The old boundaries are being torn down, most likely to make way for a new set of rules. The truly pressing questions are: When will the new boundaries appear? Where will they be drawn? And who will get to set them? The answers will shape the next decade of the semiconductor industry.


