Company

TSMC

The single foundry every AI chip routes through: 3nm/N2/A14 logic plus CoWoS advanced packaging, the latter being the actual bottleneck of the entire AI buildout.

1. Core Product / Service

Taiwan Semiconductor Manufacturing Company is the world's largest pure-play foundry. For AI accelerators, two layers matter:

Logic process nodes:

  • N5 / N4 / N4P — workhorses for H100 (4N), MI300X, Trainium 2, TPU v5p
  • N3 / N3E — 25% of Q1 2026 revenue [5]; Apple A19, NVIDIA B200, Microsoft Maia 200, AMD MI355X
  • N2 — in production now; >20 customer tape-outs received, >70 in pipeline [1]; first NVIDIA Rubin generation expected here
  • N2P / N2U — derivatives, ramping 2027–2028
  • A16 / A14 — sub-2nm class. A16 originally targeted late 2026, now slipped to 2027 [1]. A14 / A13 announced at 2026 symposium for 2028+ [1][7]

Advanced packaging (the actual bottleneck):

  • CoWoS-S / CoWoS-L / CoWoS-R — 2.5D packaging for HBM-on-interposer-on-substrate; the only way to ship a B200, MI355X, TPU Trillium, Maia 200, etc.
  • SoIC — 3D die-on-die stacking (e.g., the chiplet stacks in AMD's MI300)
  • Roadmap: 5.5-reticle CoWoS shipping in 2026 with >98% yield; 14-reticle (10 dies + 20 HBM stacks) targeted 2028; 14+-reticle in 2029 [8]
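
For a rough sense of scale, the roadmap multiples above can be converted to interposer areas. The sketch below assumes the standard ~26 mm × 33 mm (~858 mm²) lithography reticle field, a figure not stated in this note:

```python
# Back-of-envelope interposer areas for the CoWoS roadmap multiples.
# Assumption: the standard reticle field of 26 mm x 33 mm (~858 mm^2).
RETICLE_MM2 = 26 * 33  # 858 mm^2

for label, multiple in [("5.5-reticle (2026)", 5.5), ("14-reticle (2028)", 14)]:
    area_mm2 = multiple * RETICLE_MM2
    print(f"{label}: ~{area_mm2:,.0f} mm^2 of interposer")
```

A 14-reticle package is roughly the area of a drink coaster, which is why yield and warpage, not lithography, dominate the engineering problem.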

2. Target Users & Pain Points

TSMC's customer list reads like a roster of every leading-edge chip designer on Earth. AI-relevant:

  • nvidia — by far the largest single AI-revenue customer; 4N for Hopper, 4NP/3NP for Blackwell, N3/N2 for Rubin
  • amd — N5/N4P/N3 for MI300/325/355
  • google-tpu (via Broadcom partnership) — N5/N3 for v5p/v6
  • aws-trainium / aws-inferentia — N5 → N3
  • microsoft-maia — N3 for Maia 200
  • cerebras — wafer-scale on N5/N3
  • Apple — A-series and M-series; non-AI but pays for the cutting-edge ramp
  • huawei-ascend — historically, before US controls cut the channel; ~2.9M dies were delivered before enforcement [SemiAnalysis]

Pain solved: nobody else has comparable advanced-node yield, capacity, or packaging. Pain not solved: CoWoS allocation — there is simply not enough advanced-packaging capacity to satisfy every AI-chip designer's 2026–2027 plan.

3. Competitive Landscape

| Foundry | Leading-edge node | AI-chip share | Notes |
|---|---|---|---|
| TSMC | N3 (mass), N2 (ramp), A16 (2027) | ~90%+ of leading-edge AI logic | Reference |
| Samsung Foundry | 3nm GAA | <5% AI; Tesla Dojo, some Google | Yield issues persist |
| intel Foundry | 18A (2025), 14A (2027) | 0% external AI accelerators as of 2026 | Government-supported strategic alternative; no hyperscaler win yet |
| SMIC | N+2/N+3 (~7nm-class) | China only (huawei-ascend) | Yield-poor, no EUV |

4. Unique Observations

  • CoWoS is the actual bottleneck of the AI buildout — not silicon. TSMC executives publicly said: "Our CoWoS capacity is very tight and remains sold out through 2025 and into 2026" [2]. Capacity ramp: ~35K wafers/month (late 2024) → ~130K wafers/month (late 2026), a near-quadrupling [3]. Industry-wide AI accelerator demand still exceeds supply by ~1.4–1.6× through 2026 [2]. Every "we can't get enough B200/MI355X" headline is really a CoWoS allocation story.
  • CoWoS ASP approaching 7nm wafer levels. Per recent industry reports, TSMC has raised CoWoS pricing such that a packaged CoWoS wafer ASP approaches 7nm logic wafer levels [4] — advanced packaging is now positioned as a structural profit driver, not just a yield-enabling service. This was not the historical economic profile of OSAT-class packaging.
  • N2 ramp is the key 2026 milestone. TSMC claims N2 ramps with better defect-density reduction than N3 did [1]. >20 tape-outs received and >70 in pipeline is unprecedented at this stage of a new node — meaning N2 will see immediate demand from every leading-edge customer. Apple typically anchors year 1; AMD/NVIDIA/hyperscaler captive silicon piles in year 2.
  • A16 slipped from 2026 to 2027 [1]; A14 (NanoFlex Pro) targeted for 2028. The cadence is roughly two years per node — meaning A14 in production won't carry AI products until ~2028–2029. Anything that needs to ship in 2026 ships on N3/N3P/N3E; anything that needs to ship in 2027 ships on N2.
  • Geopolitical exposure is the single biggest tail risk. ~90% of advanced-node AI logic flowing through Taiwan is the load-bearing assumption of the entire NVIDIA/AMD/hyperscaler-captive-silicon stack. TSMC Arizona (Fab 21), Japan (JASM), and Germany are buildouts in progress but each runs years behind Taiwan in node capability. A Taiwan disruption is the single non-priced risk in every AI-chip valuation.
  • The Huawei TSMC-violation episode. ~2.9M Ascend logic dies were delivered to Huawei via shell-company orders before US enforcement caught up; this kept huawei-ascend 910B/C ramps alive in 2024–2025 and triggered escalating extraterritorial US controls in 2025–2026.
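
The CoWoS capacity and demand figures cited above [2][3] imply a persistent allocation gap; a minimal arithmetic sketch:

```python
# Illustrative arithmetic on the cited CoWoS ramp and demand figures.
start_wpm = 35_000   # wafers/month, late 2024 [3]
end_wpm = 130_000    # wafers/month, late 2026 [3]
ramp = end_wpm / start_wpm
print(f"Capacity ramp: ~{ramp:.1f}x over ~2 years")  # the "near-quadrupling"

# Even after the ramp, cited demand runs 1.4-1.6x supply [2]:
for ratio in (1.4, 1.6):
    unfilled = 1 - 1 / ratio
    print(f"At {ratio}x demand/supply, ~{unfilled:.0%} of accelerator demand goes unfilled")
```

At the cited ratios, roughly 29–38% of accelerator demand goes unfilled, which is the arithmetic behind every allocation headline.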

5. Financials / Funding

  • Public: TWSE: 2330; NYSE ADR: TSM
  • Q1 2026 revenue: $35.71B, +35.1% YoY [5]
  • Q1 2026 gross margin: 66.2% (vs 62.3% prior) [6]; operating margin 58.1% [6]
  • HPC segment (which includes AI accelerators and server CPUs/GPUs): 61% of Q1 2026 revenue, up from 55% Q4 2025 and 51% a year earlier [5][6]
  • Advanced node mix: 5nm 36%, 3nm 25%, 7nm 13% — ~74% from 7nm and below [5]
  • Full-year 2026 guidance: revenue growth >30% YoY in USD terms; long-term AI-accelerator revenue CAGR raised from ~50% to 56–59% [6]
  • Capex 2026: ~$45–50B range
  • Market cap: ~$1T+ ADR-equivalent in 2026

6. People & Relationships

  • Founder: Morris Chang (chairman emeritus)
  • Chairman: Dr. C.C. Wei (also CEO post-2024 transition)
  • Senior VPs (advanced packaging / R&D): Y.J. Mii (R&D), Kevin Zhang (business development)
  • Customers (AI accelerator): nvidia (largest), amd, google-tpu / Broadcom, aws-trainium / aws-inferentia / Annapurna, microsoft-maia, cerebras
  • Customers (non-AI but anchor): Apple (largest single customer overall)
  • HBM partners (in CoWoS stack): SK hynix, Samsung, Micron
  • Equipment: ASML (EUV), Applied Materials, KLA, Lam Research
  • Foundry competitors: Samsung Foundry, intel Foundry, SMIC (China-only)
  • Geographic expansion: Arizona Fab 21 (US, 4nm/3nm/2nm phased), JASM (Japan, 12/16/22/28nm), Germany (Dresden), Taiwan F18/F20/F22 (advanced-node + CoWoS)
Last compiled: 2026-05-10