Technology Exploration Forum Showcase
While you're here, feel free to check out our previous report on data center networking and infrastructure acceleration, covering AI fabrics, SmartNICs/IPUs/DPUs, CXL, and other connectivity technologies. The report is unlocked, with no personal info needed to download. We've started work on the 2024 edition; contact us to inquire about sponsorship.
2024 Technology Exploration Forum
“Whether or not GenAI lives up to its hype and becomes the operating system for all business and consumer software, it will likely be a primary consumer of data center resources over the next five to ten years.”
Ethernet Alliance's TEF 2024: Highlights from Thought Leaders
The following media comprises interviews and other content related to the Ethernet Alliance's 2024 Technology Exploration Forum in Santa Clara. Views expressed are those of the presenting individuals and companies and do not necessarily represent the views of Converge! Network Digest or AvidThink.
Ethernet Evolution: Powering High-Speed AI Connections
John D’Ambrosia, Chair of the IEEE P802.3dj Task Force and adviser to the Ethernet Alliance board of directors, discusses how Ethernet’s growth supports the AI ecosystem. He introduces the two-day Technology Exploration Forum covering issues impacting faster data rates, focusing on 200 Gbps signaling to enable higher-speed Ethernet connections up to 1.6 Tbps.
Ethernet's Future: Technical Collaboration for Scaling to the AI Challenge
Peter Jones, Chair of the Ethernet Alliance, discusses the Ethernet in the Age of AI Networking event, emphasizing industry collaboration to address the challenges of meeting AI networking requirements. Jones expresses confidence in Ethernet’s ability to overcome these obstacles and anticipates that the event will lead to greater industry alignment on addressing them.
AI Reshaping the Ethernet Landscape
Alan Weckel, Founder and Technology Analyst at 650 Group, examines AI’s influence on the Ethernet market, highlighting the emergence of three distinct networks running at faster speeds and a vendor race to deliver new products. He discusses the rapid growth in high-speed Ethernet port adoption and the expanding data center switching market, noting Nvidia’s entry into Ethernet semiconductors and systems as a key development.
Unlocking AI's Potential: Connectivity and Collaboration
Tony Chan Carusone, Chief Technology Officer of Alphawave Semi, emphasizes the importance of connectivity and industry collaboration in advancing AI technologies across multiple fronts. He highlights Alphawave Semi’s contributions in chiplets and high-speed SerDes technologies, including custom silicon solutions and IP offerings for die-to-die interfaces and transceivers for AI processor connectivity.
Optimizing Networking for AI Scale and Performance
Christopher Blackburn, System Architect & Director of Field Applications Engineering at Astera Labs, explores the challenges and future of AI networks, focusing on the diverse requirements of different network types and the need for optimization as networks expand. He discusses how Astera Labs’ product line, including PCIe and Ethernet retimers, offers intelligent connectivity solutions to address channel challenges in various AI architectures.
AI-Driven Networks for the Future
Jiri Chaloupka, Principal Engineer of Technical Marketing at Cisco Systems, explores the progression of internet technology towards AI-driven networks, emphasizing the need to build upon existing Ethernet standards while addressing new AI-specific challenges. He underscores the importance of industry collaboration and standardization to support the unique requirements of AI networks, including high bandwidth, low latency, and large-scale GPU interconnectivity.
Analyzing the Market Impact of AI Networks
Sameh Boujelbene, Vice President at Dell’Oro Group, examines the rapid evolution of AI workload networks, highlighting accelerated refresh cycles and increasing bandwidth demands. She discusses the competition between InfiniBand and Ethernet in the AI cluster market, noting Ethernet’s growing adoption due to technological advancements and hyperscaler preferences.
Ethernet's AI Era: Navigating 400G Per Lane Challenges
David Rodgers, Technical Business Development Manager at EXFO, explores the complexities and opportunities of Ethernet in the AI era, focusing on the challenges of achieving 400 Gbps per lane speeds. He discusses the potential need for specialized testing and emphasizes the importance of collaboration within the industry to overcome these hurdles.
Adapting Ethernet for AI, ML, and HPC Networking Challenges
Kent Lusted, Electrical Track Chair of the IEEE P802.3dj Task Force, explores the ongoing development of Ethernet to address market demands and customer applications in AI, ML, and HPC. He examines the challenges of balancing component requirements and adapting Ethernet capabilities for future needs, emphasizing the importance of addressing both scale-out and scale-up networks in AI.
Powering AI's Future with Advanced Ethernet
Mike Li, a Fellow at Intel, discusses high-speed networking advancements for AI applications, including 200 Gbps per lane Ethernet specifications and development of 800 Gbps/1.6 Tbps Ethernet. Li emphasizes Ethernet’s importance for GPU acceleration and clustering architectures, while also highlighting ongoing work towards 400 Gbps per lane speeds and future generations reaching up to 6.4 Tbps.
Ethernet Interoperability in the AI Age
Sam Johnson, Link Applications Engineering Manager at Intel, emphasizes the critical role of Ethernet interoperability in networking and AI applications. As chair of the Ethernet Alliance’s high-speed networking subcommittee, Johnson highlights the organization’s efforts to promote collaboration and testing among vendors to advance the Ethernet ecosystem.
Building Ultra Ethernet for AI Data Center Evolution
Uri Elzur, GPU Networks and System Architecture at Intel, outlines the Ultra Ethernet Consortium’s approach to addressing AI-related challenges in data center architecture and networking. He describes the consortium’s collaboration with the Ethernet Alliance to enhance Ethernet’s capabilities for AI applications through optional features like link-level retry and credit-based flow control.
Ethernet Insights from OCP Global Summit
Bijan Nowroozi, Chief Technology Officer of Open Compute Project (OCP), discusses the significance of Ethernet in AI’s future, emphasizing its technical and economic advantages. The recent OCP Global Summit featured numerous speakers highlighting Ethernet’s benefits, with Nowroozi noting that Ethernet is currently experiencing its moment in the industry.
Achieving 400G per Lane Ethernet: Industry Challenges in the AI Era
Nathan Tracy, President of OIF, outlines industry efforts to achieve 400 Gbps per lane Ethernet signaling for AI applications at the Ethernet Alliance’s Technology Exploration Forum. Tracy highlights OIF’s 448 Gbps project, which aims to address these challenges and unify the industry, while noting that companies like TE Connectivity are already investigating solutions even as work on 200 Gbps specifications continues.
Small Form Factors for High-Speed Ethernet
Thomas Palkert, System Architect at Samtec, represents the SNIA Small Form Factor (SFF) group, which defines connectors and transceivers for the optical and storage industries. Palkert emphasizes the challenges of density, power, and cost for Ethernet in AI applications, highlighting the need for faster speeds while achieving higher density, lower power consumption, and reduced costs.
Scaling Protocols and Connections for AI
Ashika Pandankeril Shaji, Staff System Architect at TE Connectivity, explores the growing need for high-bandwidth, low-latency products that support various protocols in AI applications. She emphasizes the importance of density and scalability in connectivity solutions, from backplanes to internal cabling, to meet the demands of emerging technologies like CXL.
Ethernet's Future in AI Networks
Moray McLaren, Principal Engineer at Google, explores Ethernet’s capabilities in addressing interconnect challenges for machine learning networks, highlighting Google’s use of a proprietary Ethernet-based network in its TPU systems. He suggests Ethernet’s potential dominance in scale-out and host networks for ML applications, while acknowledging the need for alternative solutions in more latency-sensitive architectures.