Marvell executives discuss the evolving infrastructure requirements for AI and cloud computing at the hyperscale level. Will Chu, SVP and GM, Custom Cloud Solutions Business Unit; Radha Nagarajan, SVP and CTO, Optical and Cloud; Xi Wang, SVP and GM, Connectivity; Rishi Chugh, VP and GM, Data Center Switching; Matt Kim, AVP, Custom Cloud Solutions Business Unit; and Annie Liao, Product Management Director, examine how hyperscalers are moving beyond XPU customization to reimagine entire system trays, including memory, storage, security, and networking components.
Marvell Industry Analyst Day Showcase
Meanwhile, download the 2025 edition of our AI in Networking Report, covering the role of AI in network operations. We're also working on an update to our data center networking report, slated for release in January 2026 (stay tuned). And yes, we will cover Marvell in that report, along with other leading vendors in data center networking. Drop us a line at research@avidthink.com if you want to be included.
Highlights from Marvell Execs at Industry Analyst Day
Reader feedback on AI in Networking 2025 — Harnessing the AI Deluge: “well-written,” “tracks the cutting edge in AI,” “thoughtful.” Grab your copy and give it a read today!
Interviews with Marvell Executives at Industry Analyst Day
The following media comprises interviews and other content related to Marvell's Industry Analyst Day 2025. Views expressed are those of the presenting individuals and companies and do not necessarily represent the views of Converge! Network Digest or AvidThink. Marvell is a sponsor of NextGenInfra.io.
Optical Interconnects for Scale Up, Scale Out, Scale Across
Radha Nagarajan, SVP and CTO, Optical and Cloud at Marvell, outlines three networking architecture categories: scale up (under 30 meters) for memory disaggregation with customized formats, scale out (30-300 meters) using Ethernet-based PAM4 for interoperability, and scale across (2-2,000 kilometers) employing coherent interconnects with DWDM and 16-QAM modulation. Marvell addresses all three segments through its Celestial acquisition for scale up, internal development for scale out, and an established business in scale across, including 1.6 terabit per second single-wavelength links currently in development.
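To make the three reach categories concrete, here is a minimal illustrative sketch (not Marvell code) that maps a link distance onto the category and signaling approach Nagarajan describes; the thresholds simply echo the ranges quoted above.

```python
# Illustrative only: maps a link reach to the interconnect category
# described above. Thresholds mirror the ranges quoted in the talk;
# real deployments blur these boundaries.

def interconnect_category(distance_m: float) -> str:
    """Return the scale category for a given link distance in meters."""
    if distance_m < 30:
        return "scale up: customized formats, e.g. for memory disaggregation"
    if distance_m <= 300:
        return "scale out: Ethernet-based PAM4 for interoperability"
    if distance_m <= 2_000_000:  # up to 2,000 km
        return "scale across: coherent interconnects with DWDM and 16-QAM"
    return "beyond the ranges discussed"

for d in (10, 150, 80_000):
    print(f"{d} m -> {interconnect_category(d)}")
```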
Marvell's Reliant Software & Golden Cable
Xi Wang, SVP and GM for Connectivity at Marvell, introduces the company’s Reliant software suite and Golden Cable program, designed to remotely monitor and manage AI infrastructure connecting hundreds of thousands of GPUs and XPUs by collecting telemetry from the DSPs inside cables and optical modules. Marvell is offering the Golden Cable reference design—combining DSP hardware, cable/module design, and Reliant software—as a free industry standard so partners and customers can test complete solutions and accelerate deployment of next-generation technologies.
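As a purely hypothetical sketch of the monitoring idea described above, the snippet below flags degraded links from per-cable DSP telemetry; the data fields, thresholds, and function names are invented for illustration and do not reflect the actual Reliant software or Golden Cable data model.

```python
# Hypothetical sketch of fleet-wide link monitoring; field names and
# thresholds are invented for illustration and are not Reliant's API.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    link_id: str
    pre_fec_ber: float   # pre-FEC bit error rate reported by the cable/module DSP
    snr_db: float        # signal-to-noise ratio in dB

def flag_degraded(samples: list[LinkTelemetry],
                  ber_limit: float = 1e-5,
                  snr_floor_db: float = 15.0) -> list[str]:
    """Return IDs of links whose DSP telemetry suggests degradation."""
    return [s.link_id for s in samples
            if s.pre_fec_ber > ber_limit or s.snr_db < snr_floor_db]

fleet = [
    LinkTelemetry("rack12/port3", pre_fec_ber=2e-6, snr_db=19.4),
    LinkTelemetry("rack12/port7", pre_fec_ber=4e-5, snr_db=14.1),
]
print(flag_degraded(fleet))  # -> ['rack12/port7']
```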
Custom XPU Solutions: Memory, Security & Networking
Will Chu, SVP and GM, Custom Cloud Solutions Business Unit at Marvell, discusses the company’s expansion into XPU attach solutions, where Marvell customizes all components within the XPU tray beyond the XPU itself. These custom solutions include CXL-enabled memory for expansion and near-memory compute, security devices for AI infrastructure management, and high-performance NICs leveraging Marvell’s Certis and other IP technologies.
Solving Multi-Kilowatt AI Chip Power Delivery
Matt Kim, AVP, Custom Cloud Solutions Business Unit at Marvell, presents PIVR (Package Integrated Voltage Regulator) technology as a solution to power delivery challenges as AI and cloud workloads enter the multi-kilowatt chip era: moving voltage regulators directly into the XPU package increases current density by up to 2x and reduces power transmission losses by as much as 85%. Marvell is partnering with the broader ecosystem to prevalidate and bring PIVR solutions to market, enabling hyperscalers to build 4-kilowatt or greater compute platforms with optimal performance, power efficiency, and total cost of ownership for accelerated infrastructure.
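A back-of-the-envelope I²R calculation makes the loss figure intuitive: at multi-kilowatt power and sub-1 V core rails, thousands of amps flow through the delivery path, so shrinking that path's resistance by moving regulation into the package cuts losses roughly in proportion. The resistance and voltage numbers in the sketch below are assumptions chosen for illustration, not Marvell's PIVR data.

```python
# Back-of-the-envelope I^2*R illustration of why shortening the
# high-current delivery path cuts transmission losses. All numbers
# are assumptions for illustration, not Marvell's PIVR figures.

chip_power_w = 4000.0      # a 4 kW-class XPU, as discussed above
core_voltage_v = 0.8       # assumed core rail voltage

current_a = chip_power_w / core_voltage_v   # ~5,000 A at the core rail

# Assumed effective resistance of the high-current path from a
# board-level regulator to the die, versus a much shorter in-package path.
r_board_path_ohm = 20e-6     # 20 micro-ohms (assumed)
r_package_path_ohm = 3e-6    # 3 micro-ohms (assumed)

loss_board = current_a**2 * r_board_path_ohm
loss_package = current_a**2 * r_package_path_ohm

print(f"current: {current_a:.0f} A")
print(f"board-path loss:   {loss_board:.0f} W")
print(f"in-package loss:   {loss_package:.0f} W")
print(f"reduction: {100 * (1 - loss_package / loss_board):.0f}%")  # 85%
```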
Next: PCIe 7.0 & 8.0
Annie Liao, Product Management Director at Marvell, discusses the company’s PCIe retimer products for generations 5.0 and 6.0, and explains how PCIe 7.0’s 128 Gbps per lane data rate creates challenges for electrical signal conditioning that will drive adoption of optical solutions. She notes that PCIe 8.0 at 256 Gbps per lane will likely require even more optical technology, potentially using pluggable optical modules or co-packaged optics as AI applications continue to expand.
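To put those per-lane rates in context, here is a quick raw-bandwidth calculation for a x16 link (before encoding and protocol overhead); only the per-lane figures quoted above are taken from the talk.

```python
# Raw per-direction bandwidth of a x16 link at the per-lane rates
# quoted above, before encoding and protocol overhead.
lanes = 16

for gen, gbps_per_lane in (("PCIe 7.0", 128), ("PCIe 8.0", 256)):
    total_gbps = gbps_per_lane * lanes
    print(f"{gen}: {total_gbps} Gbps per direction "
          f"(~{total_gbps / 8:.0f} GB/s)")
# PCIe 7.0: 2048 Gbps (~256 GB/s); PCIe 8.0: 4096 Gbps (~512 GB/s)
```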