Artificial intelligence and machine learning have moved from the edge of innovation to the center of business strategy. For cloud and neocloud providers, this shift is reshaping how data centers are designed, built, and scaled. The workloads driving today’s breakthroughs, from generative AI to advanced inference engines, place unprecedented demands on physical infrastructure. The connectivity choices we make now will define how successfully data centers can support tomorrow’s AI-powered cloud.
Over the past three years, the pace of change has accelerated beyond anything we’ve seen before. New GPU architectures are being released every six to twelve months, fundamentally altering how cloud service providers approach infrastructure planning. Traditional refresh cycles of three to five years are no longer sustainable; for AI-native back-end networks, refreshes now happen every one to three years. These compressed cycles ripple through every layer of the data center, from compute density and power design to cooling strategies and network architecture.
One of the most pressing challenges is power. In traditional data centers, a rack typically drew 10 to 15 kW. But AI-scale deployments have changed the equation completely. Air-cooled GPU racks now require 30 to 40 kW, and direct-to-chip liquid-cooled racks can consume 160 kW or more. These demands aren’t theoretical: they are driving fundamental changes in layout and design. To manage thermal and power constraints, providers are spreading servers across more racks, shifting from top-of-rack switching models to architectures that depend on longer fiber runs, high-density pathways, and more complex patching strategies.
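To make the layout impact concrete, here is a minimal back-of-the-envelope sketch. The cluster size and per-server power draw are assumptions for illustration (a hypothetical 64-server, 512-GPU cluster at roughly 10 kW per server); the rack budgets are the figures cited above.

```python
# Back-of-the-envelope sketch: how rack power budgets drive rack counts.
# All inputs are illustrative assumptions, not vendor specifications.
import math

servers = 64        # hypothetical 512-GPU cluster, 8 GPUs per server
server_kw = 10      # assumed draw per GPU server, including fans and NICs

for label, rack_budget_kw in [("legacy (15 kW)", 15),
                              ("air-cooled GPU (40 kW)", 40),
                              ("liquid-cooled (160 kW)", 160)]:
    per_rack = rack_budget_kw // server_kw    # servers that fit the power budget
    racks = math.ceil(servers / per_rack)     # racks needed for the whole cluster
    print(f"{label}: {per_rack} servers per rack -> {racks} racks")
```

On these assumptions, the same cluster occupies anywhere from a handful of liquid-cooled racks to dozens of lower-density ones; the lower the rack budget, the more racks and the longer the fiber runs the deployment requires.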
At Siemon, we are working directly with leading cloud service providers as they navigate these challenges. Our position as an exclusive NVIDIA partner gives us early access to emerging technologies and training, enabling us to provide our customers with validated design strategies for AI-native workloads. Our team acts as trusted advisors, helping operators evaluate their options and architect infrastructure that balances performance today with flexibility for tomorrow.
Flexibility is critical because the technology landscape is shifting rapidly, and nowhere is this more visible than in the competition between InfiniBand and Ethernet. Today, InfiniBand accounts for roughly 80% of AI back-end networks, thanks to its established role in high-performance computing environments. But Ethernet is catching up quickly. The latest industry forecasts suggest Ethernet volumes will overtake InfiniBand by early 2026, driven by new architectures from the Ultra Ethernet Consortium (UEC) and the UALink initiative.
The reality is that both will coexist, and infrastructure must be designed accordingly. For scale-out switch-to-switch connectivity, Siemon recommends fiber-based architectures that remain protocol-agnostic, supporting both InfiniBand and Ethernet. For shorter switch-to-server links, we are seeing increased adoption of active copper solutions that can deliver higher speeds more efficiently while reducing total power consumption. This approach ensures operators are not locked into a single technology path and can adapt as standards evolve.
Energy efficiency is also moving to the forefront of the conversation. AI-scale data centers are inherently power-intensive, but strategic choices at the physical layer can make a measurable difference. Direct Attach Copper (DAC) and Active Electrical Cables (AEC) consume around 0.1 watts per connection, compared to 9 to 25 watts for fiber transceivers. Over hundreds or thousands of links, the savings are significant.
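To put those figures in perspective, here is a quick sketch of what that delta means across a hypothetical 1,024-link deployment (the link count is an assumption; the per-connection wattages are the ones cited above):

```python
# Per-connection power comparison using the figures quoted above.
# The link count is a hypothetical deployment size, purely for illustration.

links = 1024
copper_w = 0.1          # DAC/AEC, watts per connection (as cited)
optical_w = (9, 25)     # fiber transceiver range, watts per connection (as cited)

print(f"copper:  {links * copper_w / 1000:.1f} kW")
print(f"optical: {links * optical_w[0] / 1000:.1f} to {links * optical_w[1] / 1000:.1f} kW")
```

On these assumptions, that is roughly 0.1 kW for copper versus 9 to 26 kW for optics, before any cooling overhead is counted.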
Looking three to five years ahead, the trajectory is clear. Network speeds will climb from 800G to 1.6T and even 3.2T, driving unprecedented demands for fiber density, new classes of ultra-low-loss connectivity, and advanced connector designs. Standards bodies are shaping these architectures now, and Siemon is at the table. We’re proud to be the only structured cabling manufacturer with active participation in the InfiniBand Trade Association, giving us a direct line into the standards shaping high-performance computing and low-latency interconnects. Our technical collaboration with industry leaders like NVIDIA, Arista, and Dell means we’re not just reacting to infrastructure trends; we’re helping define them. These partnerships allow us to engineer connectivity solutions that align with next-generation hardware requirements, ensuring our customers are equipped to build AI-ready environments from the ground up.
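One way to see where the fiber-density pressure comes from: parallel-optic links use a fiber pair per lane, so whenever aggregate link speed grows faster than per-lane signaling rates, fiber counts per link climb, before even accounting for more links per rack. A minimal sketch, assuming DR-style parallel optics and illustrative lane rates (not standards text):

```python
# Illustrative fiber-count math for parallel (DR-style) optics.
# Lane rates and link mappings here are assumptions, not standards text.

def fibers_per_link(link_gbps, lane_gbps):
    lanes = link_gbps // lane_gbps
    return lanes * 2   # one transmit fiber plus one receive fiber per lane

for link_gbps, lane_gbps in [(800, 100), (1600, 200), (3200, 200)]:
    print(f"{link_gbps}G at {lane_gbps}G per lane: "
          f"{fibers_per_link(link_gbps, lane_gbps)} fibers per link")
```

Multiply those per-link counts by the thousands of links in a modern AI fabric and the implications for pathways, panels, and connectors become clear.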
AI is rewriting the rules of the data center, and the physical layer is no longer simply a means to connect systems — it’s becoming a primary driver of performance, scalability, and efficiency. The decisions made today, from cabling strategies to connector designs, will define tomorrow’s AI-powered infrastructure. At Siemon, we are committed to helping our partners make the right choices now to prepare for the future we all see coming.
To explore these decisions in greater depth, join us for our upcoming webinar, “Paving the Way for Efficient High-Density AI at 400G and 800G Speeds,” hosted by Siemon expert Ryan Harris. Drawing on field experience and performance expectations from large-scale AI deployments, Ryan will provide a practical, cabling-led view of how to deploy networks that meet today’s demands and are scalable for the future.
This is the must-see tech talk for anyone planning, designing, or deploying high-density AI data centers. Learn more and register.
Join Siemon and NVIDIA for an engaging webinar that unlocks the future of AI infrastructure through cutting-edge optical cabling strategies for 400G and 800G deployments. The session explores scalable fiber designs tailored to NVIDIA’s Quantum InfiniBand and Spectrum-X Ethernet platforms, with best practices, performance benchmarks, and real-world GenAI deployment examples to help optimize data centers for next-gen AI workloads.
Gary Bernstein
Sr. Director of Global Data Center Solutions, Siemon
Gary Bernstein is Sr. Director of Global Data Center Solutions at Siemon, with more than 25 years of industry experience and extensive knowledge in data center infrastructure, telecommunications, and copper and fiber structured cabling systems. He has held positions in engineering, sales, product management, marketing, and corporate management throughout his career. Gary has been a member of the TIA TR42.7 and TR42.11 Copper and Fiber Committees and various IEEE 802.3 task forces and study groups, including 40/100G “ba”, 200/400G “bs”, 400/800G “df”, and 800G/1.6T “dj”. He has spoken on data center cabling at industry events across North America, Europe, LATAM, and APAC, including 7x24, AFCOM, BICSI, Cisco Live, and Datacenter Dynamics, and has authored several articles in industry trade publications. Gary received a Bachelor of Science in Mechanical Engineering from Arizona State University, is an RCDD with BICSI, and is a Certified Data Center Designer (CDCD) with Datacenter Dynamics.