George Tchaparian, CEO of OCP, discusses the organization's remarkable growth and strategic evolution at OCP Global Summit 2025. With attendance surging from 7,000 last year to over 11,000 participants this year, the Summit reflects the strong worldwide interest in AI infrastructure. Tchaparian shares OCP's newest additions to its board of directors and advisory board, brought on to add new perspectives and expand outreach. Check out his highlights video, and give a listen to the other thought leaders in the showcase below.
OCP Global Summit Showcase
NextGenInfra was on site to capture highlights from speakers, vendors, and community leaders. We appreciate our multi-year partnership with OCP and thank them for letting us capture video. Check out the videos on hot topics in networking, optics, cooling, and data center technologies!
While you're here, feel free to download our latest report on AI in Networking (entering the agentic era). We're also updating our report on Data Center Networking for AI - scale up, scale out, scale across, scale anywhere - slated for a December launch. Contact us to inquire about sponsoring!
UNLOCKED REPORT — no personal info needed to download
Perspectives on data center innovations from industry thought leaders
 
Reader feedback on AI in Networking 2025 — Harnessing the AI Deluge: “well-written,” “tracks the cutting edge in AI,” “thoughtful.” Grab your copy today!
OCP Global Summit 2025 Highlights from Thought Leaders
The following media comprise interviews and other content related to the 2025 OCP Global Summit held in San Jose. Views expressed are those of the presenting individuals and companies and do not necessarily represent the views of Converge! Network Digest or AvidThink.
 
Scale Out, Scale Up
Alan Weckel, Founder and Technology Analyst at 650 Group, discusses the significant bandwidth increases observed at OCP 2025, where networks are carrying 10x to 100x more bandwidth than historically seen due to GPU and XPU connectivity requirements. He highlights new certifications, including 224G lanes equating to 1.6T for switches and transceivers, along with demonstrations of 448G technology for both scale-out and new scale-up applications that deliver 10x the bandwidth of scale-out domains.
 
Smarter Fabrics for AI
Sanjay Kumar, VP of Products and Marketing at Arrcus, presents the company’s ACE AI distributed networking fabric that provides congestion-free, lossless Ethernet connectivity for AI workloads spanning from data center training to edge inferencing. Kumar announces a collaboration with Quanta Cloud Technologies to enable their Tomahawk 5 switches for AI-ready rack solutions, creating turnkey offerings that reduce deployment friction for customers implementing AI at scale.
 
Broadcom's Rack-Scale Innovations for AI
Manish Mehta, VP Marketing, Optical Systems Division at Broadcom, discusses the explosive demand for AI innovation and the need for collaborative forums like OCP to unite community partners at the Open Compute Summit in San Jose. He highlights Broadcom’s contributions to next-generation AI networks through rack scale architectures with Celestica, ESON partnerships, and optical innovations that provide hyperscalers with confidence in innovation velocity and supply reliability.
 
Building Europe's Quantum Hub: CESQ's Vision for Hybrid Computing Future
Lesya Dymyd, Business Development Lead at CESQ, discusses the company’s work on creating a quantum hub in France’s Grand Est region with partners across France, Germany, and Switzerland to become a center of excellence for quantum and hybrid computing. She highlights CESQ’s focus on combining high-performance computing with quantum applications and announces their upcoming quantum week event in France, while expressing interest in collaborating with the Open Compute Project’s quantum and hybrid working group.
 
Connecting the AI Data Center Fabric
Helen Xenos, Senior Director, Portfolio Marketing at Ciena, presents the company’s comprehensive AI infrastructure solutions at OCP, including their industry-first single carrier 1.6 terabit per second coherent solution and ultra-low power metro DCI with 400 gig coherent pluggables. She also discusses Ciena’s recent acquisition of Nubis Communications, which enables linear redrivers for active copper cables and XT optical engines for co-packaged optics applications.
 
How to Maximize GPU ROI in AI Infrastructure Buildouts
Lisa Spelman, CEO of Cornelis Networks, discusses how companies struggle to extract enough value from their AI infrastructure investments in compute cycles, power efficiency, and data center space utilization to make their economic models viable. She explains that Cornelis Networks addresses these economic challenges by developing technology that increases GPU utilization and delivers enhanced performance, with their current 400 gig technology already providing Ultra Ethernet-compliant features like credit-based flow control and adaptive routing.
 
OCP-Compliant AI Inference at Rack Scale
Max Sbabo, Senior Staff Engineer at d-Matrix, presents the company’s rack scale AI inference accelerator at the OCP Summit, highlighting their collaboration with OCP through ODSA workgroups, Bunch of Wires standard implementation, and pioneering block floating-point numerics. He announces d-Matrix’s SquadRack product developed with Arista, Super Micro, and Broadcom following OCP’s Open Rack Specification V3 standards, along with support for the new EON (Ethernet scaleup network) on their roadmap.
 
Why High-Performance Networks Choose Ethernet Now
Dudy Cohen, VP, Product Marketing at DriveNets, discusses Ethernet’s growing dominance across all networking use cases at the OCP Global Summit, explaining how modern high-performance Ethernet solutions now surpass InfiniBand performance through technologies like scheduled fabric. He emphasizes that Ethernet is successfully pursuing leadership in the demanding scale-up domain with its improved low latency and predictability capabilities, making this an exciting time for AI infrastructure networking.
 
NeoCloud Networking, Simplified
Marc Austin, CEO of Hedgehog, discusses the emerging Neocloud working group at OCP Global Summit and explains how his company addresses critical infrastructure needs for this growing market that requires high-performance AI infrastructure at competitive prices. Austin highlights Hedgehog’s AI network software solution that enables Neoclouds like FarmGPU and enterprise customers like Zipline to operate high-performance AI networks with minimal operational expenses using cloud operations teams rather than specialized network engineers.
 
HPE Demos Ultra Ethernet Transport & RoCE v2
Mahesh Subramaniam, Sr. Director of Product Management at Hewlett Packard Enterprise, demonstrates the Ultra Ethernet Consortium specification implementation at the OCP Innovation Village, showcasing advanced networking technologies for AI data centers. The demonstration features Hewlett Packard Enterprise’s QFX5240 switches handling both Ultra Ethernet Transport and RoCE v2 traffic simultaneously, along with an advanced packet trimming feature for congestion management.
 
Transforming Data Centers for AI with HPE
Amit Sanyal, Senior Director, Data Center Product Marketing at HPE, presents the company’s AI-native networking solutions at OCP, which are designed from the ground up to solve operational challenges and optimize networking performance for AI applications. HPE showcases key innovations including liquid-cooled switches, Ultra Ethernet Consortium solutions with packet trimming capabilities, and support for OCP-specified ORv3 racks, reinforcing the company’s commitment to open networking solutions purpose-built for AI.
 
100% Heat Capture for High-Power AI Infrastructure
Neil Edmunds, VP of Product Management at Iceotope, presents advanced thermal management solutions at OCP 2025 that use dielectric fluids and cold plates to capture 100% of thermal load from high-power AI infrastructure, reducing cooling energy consumption. The Iceotope approach enables data centers to allocate more power to GPUs and compute resources while efficiently managing thermal challenges from 1 megawatt racks and 8 kW chips.
 
Lumentum Optics Powering AI Infrastructure
Michael DeMerchant, Sr. Director, Product Line Marketing at Lumentum, explains how the AI infrastructure buildout drives hyperscalers to focus on optical technologies, with Lumentum addressing network power consumption challenges through optical circuit switches, external laser sources for co-packaged optics, and 1.6T partially retimed optics modules. DeMerchant also highlights Lumentum’s participation in OCP OCS working group standardization efforts to bring industry vendors and hyperscalers together for common standards that accelerate technology adoption.
 
Marvell's CPO Switch Demo
Kishore Atreya, Sr. Director, Platform Product Management at Marvell, presents the company’s co-packaged optics reference platform featuring a liquid-cooled 1 OU system with sixteen 6.4T light engines that integrate silicon photonics technology for electrical-to-optical signal conversion. He addresses the significant deployment challenge of managing over 36,000 fibers when scaling to 32 units per rack, emphasizing the need for industry solutions in backplane design and fiber routing for successful rack-level implementation.
 
Unlocking Memory Bandwidth
Khurram Malik, Sr. Director, Product Marketing, CXL at Marvell, presents the company’s Structura CXL product portfolio designed to address growing memory capacity and compute power demands. The portfolio includes Structura A, with 16 Arm processor cores for deep learning and inference workloads, and Structura X, for memory expansion solutions that enable hyperscalers to combine DDR4 and DDR5 memory with compression algorithms.
 
GPU Interconnect Testing
Hani Daou, Business Development Manager at Multilane, discusses the critical correlation between interconnect testing and accelerated GPU revenue generation, explaining how current trends, including AECs, ACCs, and cable backplane cartridges, enable cost-effective, scalable copper-based GPU connections. He emphasizes that comprehensive testing throughout the interconnect lifecycle is essential for monetizing GPU systems, particularly as the industry transitions to co-packaged optics and near-package optics for hyperscaler applications.
 
AI Needs Fully Photonic Networks - No Electronic Switching
Joost Verberk, VP of Marketing and Business Development at Oriole Networks, explains how AI infrastructure requires a complete rethinking of network architecture with significantly more optical components than current implementations. He emphasizes that Oriole Networks believes AI-focused networks must be fully photonic, eliminating electronic packet switching entirely to meet the demanding performance requirements of modern AI workloads.
 
UALink for Scale-Up AI Interconnects
Kurtis Bowman, Chairman of the UALink Consortium, presents UALink as a scale-up fabric solution that offers technical advantages including lower power consumption, reduced latency, and efficient data movement, achieving nearly the full 200 Gbit/s of effective throughput on a 200 Gbit/s link. He reports that consortium members have IP available for designing switches and accelerators, with initial device availability expected in 2026 and widespread data center deployment anticipated in 2027.
 
VIAVI's 800G AI Network Testing
Kevin Chang, Engineering Director, Hardware Platforms at VIAVI, presents advanced Ethernet test appliances at the OCP Global Summit. The B3 and M1 appliances generate 800-gigabit traffic for efficient AI network testing, with a smaller footprint and lower power consumption than traditional GPU-based data center test methods. VIAVI collaborates with Juniper to demonstrate real-time network performance testing and showcases early Ultra Ethernet Consortium specifications, enabling piecewise testing of emerging network layers and 800-gigabit NICs as technologies move from specifications to hardware implementations.