Company

Broadcom (AVGO)

Custom AI ASICs for hyperscalers + Tomahawk/Jericho switching silicon — the dual-engine "silent architect" of every non-NVIDIA AI cluster.

1. Core Product / Service

Broadcom (NASDAQ: AVGO) is a hybrid semiconductor + infrastructure-software giant assembled by Hock Tan through serial M&A (LSI, Avago, Symantec, CA, VMware). Two reporting segments:

  • Semiconductor Solutions — networking, server storage, broadband, wireless, industrial
  • Infrastructure Software — VMware (2023 acquisition), CA, Symantec enterprise security

The AI story sits inside Semiconductor Solutions, split into two AI businesses:

  1. Custom AI accelerators (XPUs / ASICs) — Broadcom designs application-specific accelerators for hyperscalers. Confirmed programs: Google TPU (all generations through v7+), Meta MTIA, OpenAI (10 GW co-development announced 2025, ramping 2H 2026 → full deployment 2029), Anthropic (1 GW initial → 3 GW by 2027), plus a fourth, smaller unnamed hyperscaler
  2. AI networking silicon — the Tomahawk family of low-latency top-of-rack / spine switch ASICs (Tomahawk 5 at 51.2 Tbps; Tomahawk 6 at 102.4 Tbps, the first product at that bandwidth, with no direct competitor); Jericho3-AI deep-buffer routing; SerDes at 200 Gbps/lane; 400G/lane optical DSP with a roadmap to 200T switches

Broadcom does not sell switches to end users (that's Arista and Cisco) and does not sell accelerators at retail (that's NVIDIA). It sells silicon plus design services to OEMs and hyperscalers. The strategic position is everything inside the box that isn't an NVIDIA GPU.

2. Target Users & Pain Points

  • Hyperscalers building custom silicon — Google, Meta, OpenAI, Anthropic, ByteDance — to escape NVIDIA's ~70% GPU gross margin and to optimize for specific workloads
  • Switch OEMs — Arista, Cisco, Juniper, Dell, white-box ODMs (Celestica, Quanta, Accton) — all building products around Broadcom Tomahawk/Jericho silicon
  • Optics vendors — Coherent, Lumentum, InnoLight — all consume Broadcom 400G/800G/1.6T DSP ASICs

Pain solved: two ways out of NVIDIA dependency. Either build a custom XPU that's purpose-built for your workload (escape ~$30K/H100 economics on inference) or build an Ethernet fabric on Tomahawk silicon that doesn't require Mellanox InfiniBand (escape NVIDIA's networking lock-in). Both routes flow into Broadcom's revenue line.

3. Competitive Landscape

| Company | Approach | Positioning vs Broadcom |
| --- | --- | --- |
| NVIDIA | Merchant GPU + InfiniBand bundled | Direct architectural competitor; Broadcom is the open-ecosystem alternative |
| Marvell | Custom ASIC design (Amazon Trainium/Inferentia, Google Axion CPU) | Smaller ASIC share; Broadcom holds ~70% of the custom AI accelerator market |
| Astera Labs / Credo | Connectivity ICs (retimers, AECs) | Adjacent; not directly competitive on switch ASICs |
| MediaTek / Alchip / GUC | ASIC design services in APAC | Smaller scale; some Chinese-customer overlap |
| Cisco (Silicon One) | Internal switch ASIC | Cisco's attempt to break Broadcom dependency; mixed traction |

Broadcom's edge: scale of design wins (Google + Meta + OpenAI + Anthropic in custom ASIC), process-node leadership through TSMC (2nm on next Meta MTIA), best-in-class SerDes (200G/lane in production), and >65% operating margin that lets it absorb design-service investments competitors can't match.

4. Unique Observations

  • 1 MW build-cost share: Broadcom isn't billed as a line item on a DC build sheet; its content is embedded inside switches (Arista, Cisco) and inside custom accelerators (Google TPU, Meta MTIA). For a Google TPU-based cluster, Broadcom can capture 30-50% of the silicon BOM through the TPU ASIC + Tomahawk switching + optical DSPs — far more than any other supplier. On a $20M/MW custom-silicon cluster that's $2-4M/MW of Broadcom content; on a generic NVIDIA H200 cluster, Broadcom's content is maybe $1M/MW (mostly the switch silicon embedded inside Arista/Cisco hardware).
  • Q1 FY2026 financials: AI revenue $8.4B (+106% YoY) [2][3]. AI semiconductors are now the company's dominant growth engine. Q2 FY2026 AI revenue guide: >$10B [2]. Management projects AI chip sales to surpass $100B in FY2027 [2].
  • AI backlog $73B as of end-2025, primarily covering deliveries over the next 18 months [2]. Within that, AI switch backlog alone is >$10B on Tomahawk 6 booked at record rates [3]. Total company backlog ~$110B per Q1 commentary.
  • FY2025 full-year: revenue $63.9B (+24%); AI revenue $20B (+65%); free cash flow $26.9B; adjusted EBITDA margin 68% [1]. That is a software-company margin on a combined semiconductor-and-software revenue base.
  • AI share of company revenue: FY2025 AI was ~31% of revenue ($20B / $63.9B). FY2026 projection puts AI at >50% as the $8.4B Q1 run-rate scales. Few companies of this size convert mix this quickly.
  • Customer concentration is the structural story: HSBC estimates Google TPU work at ~78% of Broadcom's ASIC revenue today [7]. Meta's MTIA ramp + OpenAI's 10 GW program will dilute Google's share over 2027-2029. The deals are exclusive, 7-year-plus contracts under which Broadcom is the sole design-service partner for that hyperscaler's silicon.
  • OpenAI 10 GW deal — most important new program: announced Oct 2025, full TSMC-fabbed XPU + system co-development; Broadcom leads roll-out starting H2 2026, full deployment by end of 2029 [4]. This is the single largest AI silicon contract ever signed publicly.
  • Bottleneck — TSMC + CoWoS + HBM allocation: like everyone else, Broadcom's ceiling is set by TSMC advanced-node wafer capacity and CoWoS packaging. ASIC ramps for Meta/OpenAI/Anthropic all compete with NVIDIA for the same fab + packaging slots. Demand is effectively unlimited; supply is rationed quarter by quarter.
  • AI Lab binding is now formal: Google (TPU since 2016), Meta (MTIA since 2024), OpenAI (10 GW formal contract 2025), Anthropic (1 GW initial 2026, 3 GW by 2027). All four frontier-model labs have a Broadcom dependency for their non-NVIDIA silicon path. This is a structural shift from the H100 era when NVIDIA was the only seat at the table.
  • Token cost chain — the "shovel seller": Broadcom is the deepest "below the GPU" supplier of any AI infrastructure stock. For every Google TPU-served token, Broadcom captures the ASIC margin. For every non-NVIDIA Ethernet fabric, Broadcom captures the switch-silicon margin. The structural answer to "who replaces NVIDIA?" runs through Broadcom whether the eventual winner is custom silicon or open Ethernet.
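The per-MW content arithmetic in the first bullet can be sketched as a quick calculation. The $20M/MW cluster cost and the 30-50% silicon-BOM capture are this note's own estimates; the ~40% silicon share of total build cost is an illustrative assumption chosen so the result lands in the stated $2-4M/MW range, not a disclosed figure:

```python
# Rough Broadcom dollar content per MW of AI data center, under the
# note's assumptions: cluster build cost ~$20M/MW, Broadcom capturing
# 30-50% of the silicon BOM (XPU ASIC + Tomahawk switch + optical DSP).
def broadcom_content_per_mw(cluster_cost_per_mw: float,
                            silicon_bom_share: float,
                            capture_low: float,
                            capture_high: float) -> tuple[float, float]:
    """Return (low, high) Broadcom content in $ per MW."""
    silicon_bom = cluster_cost_per_mw * silicon_bom_share
    return silicon_bom * capture_low, silicon_bom * capture_high

# Custom-silicon (TPU-style) cluster; silicon assumed ~40% of build cost.
low, high = broadcom_content_per_mw(20e6, 0.40, 0.30, 0.50)
print(f"custom-silicon cluster: ${low/1e6:.1f}M-${high/1e6:.1f}M per MW")
# ≈ $2.4M-$4.0M per MW, consistent with the $2-4M/MW estimate above
```

The same function with a much lower capture rate reproduces the ~$1M/MW figure for a merchant-GPU (H200-style) cluster, where Broadcom's content is mostly the switch silicon.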

5. Financials / Funding

  • Listed: NASDAQ: AVGO (legacy Avago); market cap ~$1T+ [3]
  • FY2025 revenue: $63.9B (+24% YoY) [1]
  • FY2025 AI revenue: $20B (+65% YoY) [1]
  • FY2025 adjusted EBITDA margin: 68% [1]
  • FY2025 free cash flow: $26.9B [1]
  • Q1 FY2026 AI revenue: $8.4B (+106% YoY) [2]
  • Q2 FY2026 AI revenue guide: >$10B [2]
  • AI backlog: $73B (end-2025) [2]
  • AI switch backlog (subset): >$10B (Tomahawk 6) [3]
  • Total backlog: ~$110B per Q1 commentary [calibration: L1a_equipment_suppliers.md]
  • FY2026 Q1 operating margin: ~65.5% [calibration: L1a_equipment_suppliers.md]
  • FY2027 chip-sales projection: surpass $100B [2]
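The mix-shift claim in Section 4 (AI at ~31% of FY2025 revenue, trending above 50%) follows directly from the figures above; a minimal cross-check using only the note's own numbers, with a naive 4x annualization of the Q1 run-rate as the one added assumption:

```python
# Cross-check the stated AI revenue mix from Sections 4-5.
fy2025_revenue = 63.9e9   # FY2025 total revenue [1]
fy2025_ai = 20e9          # FY2025 AI revenue [1]
q1_fy2026_ai = 8.4e9      # Q1 FY2026 AI revenue [2]

ai_share_fy2025 = fy2025_ai / fy2025_revenue
print(f"FY2025 AI share: {ai_share_fy2025:.0%}")            # 31%

# Naive 4x annualization of Q1 (an assumption, not company guidance;
# the actual quarters are guided higher, e.g. >$10B in Q2).
ai_run_rate = q1_fy2026_ai * 4
print(f"Q1 FY2026 AI run-rate: ${ai_run_rate/1e9:.1f}B/yr")  # $33.6B/yr
```

Even the flat run-rate exceeds FY2025's $20B AI total by ~68%, which is why the >50% AI-share projection only requires modest sequential growth against a slower-growing non-AI base.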

6. People & Relationships

  • President & CEO: Hock Tan (since 2006 Avago era)
  • CFO: Kirsten Spears
  • HQ: Palo Alto, California
  • Major acquisitions shaping the company: LSI (2014), Avago/Broadcom merger (2016), Brocade (2017), CA Technologies (2018), Symantec enterprise (2019), VMware (2023)
  • Confirmed custom-AI customers: Google (TPU), Meta (MTIA), OpenAI (10 GW XPU), Anthropic (1 GW → 3 GW), plus unnamed hyperscaler-4
  • Switch silicon customers: Arista, Cisco, Dell, HPE, Juniper, white-box ODMs (Celestica, Quanta, Accton)
  • Optics customers: Coherent, Lumentum, InnoLight (Broadcom 400G/800G DSPs)
  • Strategic supplier dependencies: TSMC (advanced-node wafers + CoWoS packaging) and HBM allocation
  • Strategic context: viewed by 2026 markets as the only AI semiconductor pure-play with NVIDIA-class margins that is not NVIDIA itself
Last compiled: 2026-05-11