2023 AI and Cloud DC Networking Showcase
Download the report for free — courtesy of our sponsors! Join the enterprise IT, CSP, and DC operator leaders and other thought leaders who've already downloaded the report.
UNLOCKED REPORT — no personal info needed to download
Optics Leads the Way for Next Gen Data Centers
Whether or not GenAI lives up to its hype and becomes the operating system for all business and consumer software, it will likely be a primary consumer of data center resources over the next five to ten years.
Content from Sponsors and Thought Leaders
The following media comprises interviews and other content related to SmartNICs, DPUs, IPUs, AI fabrics, and data center networking. Views expressed are those of the presenting individuals and companies and do not necessarily represent the views of Converge! Network Digest or AvidThink.
FEATURED
Data Center Architecture Spanning Edge to Cloud with AI & 5G
Sanjay Kumar, VP of Products and Marketing at Arrcus, introduces solutions for enhancing AI and 5G capabilities, including Arrcus FlexMCN and SRv6 mobile user plane technology. He emphasizes ArcIQ’s value for network visibility and discusses Arrcus’s commitment to optimizing GPU utilization, reducing latency, and lowering total cost of ownership as 5G networks take on AI and ML workloads.
Data Centers Transformed by IPU Offload
Charlie Ashton, Senior Director at Napatech, advocates for IPUs in cloud and enterprise data centers to accelerate workloads and enhance security. Napatech’s IPU solutions, encompassing hardware and software, deliver high bandwidth and accelerated performance by offloading host servers and introducing workload separation.
Future of Data Centers: Chiplets, Bandwidth & AI-Intensive Workloads
Bapi Vinnakota, ODSA Project Lead at Open Compute Project, emphasizes the increasing demand for bandwidth and diverse compute in data centers. He advocates for chiplet-based systems, especially in AI-intensive workloads, and supports open interfaces for seamless integration across vendors.
Democratizing Networking with SONiC
Kamran Naqvi, Broadcom’s Principal Network Architect, aims to democratize networking by bringing hyperscaler features to all users through enhanced SONiC software. This enterprise-friendly solution, combined with Broadcom’s silicon telemetry, facilitates rapid identification of network or application performance issues.
INDUSTRY PERSPECTIVES
Unleashing CXL Tech's Potential in AI & Data Center Evolution
Alan Weckel, Founder and Technology Analyst at 650 Group, highlights CXL’s competition with NVLink for improving AI servers, noting the rising bandwidth demand and a spending shift from cloud to AI, resembling the architectural shift seen with cloud technology two decades ago.
Ampere's Vision for Power-Efficient, Large-Scale AI Computing
Sean Varley, VP of Product Marketing at Ampere, highlights the importance of power-efficient AI models and the need for large-scale computing. He unveils Ampere’s strategy with the launch of their 192-core AmpereOne CPUs, designed to deliver high computational power within a 200 to 400 watt power envelope.
CPO Switches are Here!
Broadcom’s Near Margalit, GM and VP of Optical Systems Division, announces success in resolving technical challenges for deploying CPO switches. The focus now is on demonstrating the reliability and cost-effectiveness of core silicon photonics technology, with efforts to minimize the time between CPO technology availability and market launch, addressing concerns about laser component reliability.
The Year AI Skyrockets: Overcoming Bandwidth Bottlenecks
Bill Brennan, CEO of Credo Semi, sees 2023 as a pivotal year for AI due to the interconnect bandwidth bottleneck, spurring demand for higher bandwidth. Credo Semi is addressing this challenge in the AI cluster backend network, leading a global shift to 100 Gig single lane connections and expediting next-gen technology development.
Unveiling the Future of AI: Large-Scale Applications & High-Speed Networks
Sameh Boujelbene, VP of Dell’Oro Group, notes the surge in large AI applications, creating a significant market opportunity for a new AI backend network. Boujelbene anticipates that by 2027, two-thirds of backend network ports will be 1.6T, reflecting the industry’s growing interest, with details in an upcoming AI Network report.
The Arms Race for AI Clusters
Brad Booth, an independent Ethernet and Optical Technology Advisor, discusses the rapid expansion of AI data centers, driven by increasing capabilities and adoption. He emphasizes the high demand for GPUs and optical modules, its impact on memory devices, and the ongoing innovation needed to meet growing bandwidth demands.
AI/ML Leads to Massive Traffic Across the Network
Mansour Karam, Juniper’s VP of Data Center Products, discusses the booming data center market, projected to reach $32 billion by 2026, emphasizing the crucial role of AI/ML workloads. He underscores the need for software that optimizes job completion time and tunes parameters, a solution Juniper is poised to provide with its expertise in AI and data feeds.
Scaling AI Clusters: Optical Connectivity's Key Role in Data Transport
Radha Nagarajan, SVP and CTO of Optical and Cloud at Marvell, underscores optical connectivity’s importance as AI clusters grow. Marvell introduces a 1.6Tbps DSP to meet the rising demand for optical solutions in AI-centric data centers.
GenAI will Supercharge Network Operators
Shawn Hakl, Microsoft’s VP of 5G Strategy, advocates integrating operations with software-based networking to enhance customer experiences and improve efficiency. Managing the network as an API, Hakl believes, can modernize and monetize it in innovative ways.
Faster Interconnects for AI Data Centers
Hani Daou, Business Development Manager at Multilane, emphasizes the need for data center upgrades to meet rising compute power demands. Multilane is strategically positioned with high-demand 800G systems and copper interconnect test solutions, supporting semiconductor vendors, cloud providers, and interconnect vendors in R&D for bandwidth scaling.
Harnessing CXL for AI/ML Interconnectivity
Siamak Tavallaei, CXL Advisor to the Board at Open Compute Project, highlights CXL’s crucial role in connecting large AI systems, stressing the need for a fundamental standard in interconnectivity. He asserts that CXL can facilitate the interconnection of 10,000 GPU units, petabytes of storage, and terabytes of memory, promoting efficient device-to-device and host-to-host communication.
Optics Leads the Way for Next Gen Data Centers
Nathan Tracy, OIF President and technologist at TE Connectivity, underscores the rising importance of machine learning and artificial intelligence in data centers, driving the need for greater bandwidth and bandwidth density, which in turn raises power consumption and thermal management challenges. To address these challenges, OIF is developing a 200 Gbps electrical interface, a 112 Gbps linear optics specification, and Energy-Efficient Interfaces (EEI), alongside work on CMIS, the common management interface specification, and the 400ZR, 800ZR, and 1600ZR specifications, all crucial for next-generation data centers.
Advancing the Open Chiplet Economy
Cliff Grossner, VP Market Intelligence and Innovation at Open Compute Project (OCP), discusses the advancements in the open chiplet economy vision, including standardizations that allow companies to build a chiplet and create an electronic data sheet for others to use in their design tools. Grossner also revealed plans to establish a marketplace for chiplets.
Integrating Chiplets into Complete Systems
David Ratchkov, founder of Thrace Systems and lead of the CDX work stream group under ODSA in the Open Compute Project, has announced the upcoming release of a detailed white paper on chiplet integration workflows. The paper will cover all aspects of chiplet integration and will be available on the Open Compute Project’s website once released.
Zero Gap AI - Vapor.io, SuperMicro, NVIDIA
Vapor IO’s CEO Cole Crawford and CMO Matt Trifiro have introduced a new service, Zero Gap AI, designed to simplify the deployment of wireless and AI stacks by offering an end-to-end solution in partnership with Super Micro, leveraging the Grace Hopper Superchip and the Nvidia MGX platform.