Exhibitor Press Releases

19 Feb 2026

Growing AI Infrastructure: Neoclouds, Power, and Fiber

AFL Stand: C110
Dr. Alan Keizer, Senior Technical Advisor, AFL
Growing AI Infrastructure
AI infrastructure is evolving across two parallel dimensions. New market participants, particularly Neocloud operators, greatly influence where and how capacity is deployed. At the same time, the physical technologies underpinning hyperscale environments continue to evolve at pace, advancing the limits of density, power, and optical connectivity. 

This third instalment in the series examines both dynamics: the emergence of specialist AI cloud operators and the infrastructure-level developments that will determine how far AI architecture can scale. 

Neoclouds: The Specialist Operators Building AI Factories 

A Neocloud is not a “smaller hyperscale provider”; it is a specialist operator built around a primary offering of accelerated compute capacity (for training and inference) sold as a service. Neoclouds function more like AI utilities than general-purpose cloud warehouses, with operations built around maximizing GPU utilization at scale. High utilization forms the foundation of Neocloud economic viability, and any decline in demand quickly amplifies cost pressures. 

Three Neocloud segments stand out, each with a distinct operational profile: 

Accelerator-First Clouds 
These operators resemble focused cloud platforms built around large GPU fleets and repeatable pod architectures. Scheduling, storage, and managed services target training and high-throughput inference workloads. Differentiation centers on operational maturity, stable network fabrics, fast turn-ups, and disciplined change control. 

These platforms primarily attract AI-native companies, model developers, and software-first organizations that prioritize rapid access to GPUs, managed AI services, and flexible scheduling. Customer demand in this segment is driven by time-to-train, elastic scaling, and developer-friendly provisioning. 

Power-Advantaged Operators 
This segment prioritizes access to large-scale electrical capacity (i.e., ‘going where megawatts are available’). Deployment locations frequently include behind-the-meter facilities or regions that fall outside traditional hyperscale siting patterns. Electrical availability acts as the primary gating factor for site selection, influencing where AI-focused campuses can realistically operate. Greater physical distance from network aggregation points increases dependence on long-haul fiber routes and coherent DCI as the primary means of external connectivity. As these constraints intensify, power availability increasingly determines overall site topology and the resulting network architecture. 

This segment typically attracts large AI model developers, sovereign AI initiatives, and enterprises seeking guaranteed capacity at scale. Customer demand is linked to long-term power availability, predictable cost structures, and the ability to secure multi-megawatt allocations. 

Converted-Infrastructure Entrants 
Technical capability varies widely across this segment. For example, some organizations start with land, permitting, substations, and cooling expertise before rapidly transitioning into AI operators. While a few will quickly mature their operational practices, others will encounter complexity beyond running high-power real estate, such as AI fabric design, deployment strategy, and lifecycle management. 

These operators often attract opportunistic tenants, regional enterprises, and overflow demand from hyperscale providers facing capacity constraints. Customer demand is influenced by availability and speed of deployment, with buyers accepting variability in operational maturity for faster access to compute resources. 

The aim of this segmentation is not labels but prediction: these three groups buy differently, build differently, and fail differently. 

Structural Differences Between Neoclouds and Hyperscalers 

Hyperscale cloud platforms can amortize engineering across many services, negotiate supply at enormous scale, and absorb some overbuild. Conversely, Neoclouds operate within narrower economic and operational margins. 

Resources and Engineering Depth 
Hyperscalers engineer redundancy at an organizational level, distributing expertise across large teams and mature processes. Early-stage Neoclouds often operate very differently, with operational knowledge concentrated in a small number of individuals (sometimes effectively one person, forming a single point of failure). Additional risk emerges when Neocloud operators depend heavily on a limited set of vendors or on a few specialists who ‘know how the pod really works.’ Many Neoclouds also assemble environments from reference architectures and partner integrations, where execution quality can vary. These factors combine to create operational vulnerability. 

Supply Chain Posture 
Hyperscale platforms influence component roadmaps and supply allocation. Neocloud operators frequently procure available hardware earlier in lifecycle stages and in smaller volumes. This procurement model can accelerate adoption while increasing variability in optics, breakout strategies, and qualification processes. 

Operating Economics 
High GPU utilization is a core economic requirement: where margins depend on it, downtime is not only an engineering problem but a pricing problem. 
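
A rough way to see why utilization dominates Neocloud economics is to model revenue against a cost base that accrues on every installed GPU-hour, sold or not. The sketch below is illustrative only; the hourly cost and sell-price figures are assumptions, not AFL or market data.

```python
# Illustrative sketch only: how utilization drives Neocloud margin.
# The cost and price figures are hypothetical assumptions, not market data.

HOURLY_COST_PER_GPU = 2.00   # assumed all-in cost (capex amortization + power + ops), $/installed GPU-hour
SELL_PRICE_PER_GPU = 2.60    # assumed price charged to customers, $/utilized GPU-hour

def margin_per_installed_gpu_hour(utilization: float) -> float:
    """Gross margin per installed GPU-hour at a given utilization (0.0-1.0)."""
    revenue = SELL_PRICE_PER_GPU * utilization  # only utilized hours earn revenue
    return revenue - HOURLY_COST_PER_GPU        # cost accrues whether or not the GPU is sold

for u in (0.95, 0.85, 0.75, 0.65):
    print(f"utilization {u:.0%}: margin ${margin_per_installed_gpu_hour(u):+.2f} per installed GPU-hour")
```

At these assumed figures the break-even point sits near 77% utilization, which is why idle capacity and downtime flow straight through to pricing and commercial risk. 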

Geography Defined by Infrastructure Archetypes 

Geographic considerations extend beyond regional market boundaries (e.g., USA vs EMEA) and instead reflect infrastructure archetypes: 

  • Remote power campuses, separated from network aggregation hubs, require resilient DCI, route diversity, and long-haul fiber connectivity 
  • Sovereignty-constrained regions force in-country capacity and more regional interconnect 
  • Metro inference zones prioritize latency and rapid deployment over large-scale density 

AI capacity distribution is expanding, while fiber infrastructure becomes increasingly central to architectural outcomes (errors at the physical layer propagate rapidly across large-scale AI deployments). 

Financial Risk and Engineering Implications 

Neocloud operators face elevated exposure to financing conditions, GPU pricing volatility, and utilization swings. Market slowdowns introduce risk of paused builds, renegotiation, or consolidation. 

Engineering strategy must reflect these risks by prioritizing modular architectures, standardized interfaces, and block-scalable designs that minimize exposure. In contrast, bespoke, tightly optimized layouts that assume uninterrupted capital availability introduce systemic fragility. Within this context, fiber infrastructure becomes part of the uptime surface area rather than simple “plumbing,” underscoring the need for designs that remain robust under operational and financial variability. 

Emerging Technologies: Density, Power Limits, and the Fiber Roadmap 
Multiple emerging technologies are converging to shape the next phase of AI infrastructure. Accelerators and switches typically garner intense interest from industry commentators. However, some of the most critical developments are taking place deeper within the overall fabric, particularly in interconnect density and power delivery efficiency. These advancements set the practical limits for scaling current architectures and reveal opportunities for more efficient, streamlined topologies. 

Smaller Interfaces and Higher-Density Connectivity 

Next-generation hyperscale data center fabrics (moving from 400G and 800G toward 1.6T) place increased emphasis on physical density as well as raw bandwidth. 

Continued reductions in pluggable optics size, along with more compact electrical and optical interfaces, enable higher port counts per switch face and greater termination density per rack unit. Connector ecosystems are evolving to support dense patching through high-fiber-count MPO interfaces and compact multi-fiber connectors like MMC, without expanding patch panel footprints. In practice, transitioning from MPO-based panels to MMC-class connectors can increase fiber termination density per rack unit by approximately 2–4x, depending on panel design and cable management constraints. 
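
The density gain is ultimately simple arithmetic: adapters per rack unit multiplied by fibers per connector. The sketch below uses assumed panel and connector counts purely to illustrate the scaling; actual capacities vary by product design and cable management.

```python
# Illustrative sketch only: per-rack-unit fiber termination density by connector type.
# Adapter and fiber counts are assumptions for the arithmetic, not product specifications.

def fibers_per_ru(adapters_per_ru: int, fibers_per_connector: int) -> int:
    """Terminated fibers in one rack unit of patch panel."""
    return adapters_per_ru * fibers_per_connector

mpo_panel = fibers_per_ru(adapters_per_ru=36, fibers_per_connector=12)  # assumed MPO-12 panel
mmc_panel = fibers_per_ru(adapters_per_ru=72, fibers_per_connector=16)  # assumed MMC-16 panel

print(f"MPO-12 panel: {mpo_panel} fibers per RU")
print(f"MMC-16 panel: {mmc_panel} fibers per RU")
print(f"Density gain: {mmc_panel / mpo_panel:.1f}x")
```

Under these assumptions the gain lands at roughly 2.7x, comfortably inside the 2–4x range noted above. 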

Operational constraints drive this shift, as rack-scale AI deployments operate within tightly defined mechanical envelopes influenced by liquid cooling hardware, power distribution systems, and service access requirements. These constraints compress the physical space available for connectivity, increasing the need for architectures that deliver higher bandwidth within reduced footprints. In this environment, structured cabling, pre-terminated assemblies, and disciplined port allocation and breakout strategies move from optional optimizations to core architectural requirements, ensuring that dense, high-performance racks remain serviceable, scalable, and reliable. 

A Broader AI and Optical Supplier Landscape 

NVIDIA continues to drive current AI platform architectures. However, the physical hyperscale ecosystem is expanding to include alternative accelerator types, vendors, and emerging optical component providers. New entrants advancing scale-out fabrics and interconnect technologies bring a combination of new opportunities and added infrastructure complexity. 

This increased competition drives innovation in optics, switch architectures, and power efficiency. At the same time, ecosystem expansion introduces greater variability in form factors, lane mappings, and interfaces. Connectivity teams address these challenges by abstracting inter-pod and DCI links where possible while optimizing last-mile equipment connections to adapt to evolving hardware profiles. Infrastructure is therefore segmented into fixed, vendor-agnostic trunks and flexible, generation-specific connections, allowing fiber plants, cable pathways, and terminations to support multiple generations of optical technology without repeated physical rework. 

Power and Cooling as the Primary Constraints 

Power delivery and thermal management efficiency have emerged as dominant constraints on hyperscale data center infrastructure growth. Incremental gains in link efficiency, port density, and thermal behavior carry disproportionate value when rack power levels approach 100 kW and pod-scale deployments extend into multi-megawatt ranges. Optical technologies that reduce electrical power consumption or shorten electrical reach directly influence achievable fiber density and long-term operational reliability. 
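
A back-of-the-envelope calculation illustrates how quickly a multi-megawatt budget is consumed at these rack power levels. The pod budget, rack load, and PUE values below are assumptions for illustration, not deployment guidance.

```python
# Illustrative sketch only: rough pod sizing from a facility power budget.
# The power budget, rack load, and PUE values are assumptions, not guidance.

POD_POWER_BUDGET_MW = 5.0   # assumed total facility power allocated to the pod
RACK_IT_POWER_KW = 100.0    # assumed IT load per AI rack (per the ~100 kW figure above)
PUE = 1.2                   # assumed power usage effectiveness (cooling and distribution overhead)

it_power_kw = POD_POWER_BUDGET_MW * 1000 / PUE  # facility power remaining for IT load
racks = int(it_power_kw // RACK_IT_POWER_KW)

print(f"IT power available: {it_power_kw:.0f} kW")
print(f"Racks supportable in a {POD_POWER_BUDGET_MW:.0f} MW pod: {racks}")
```

Under these assumptions a 5 MW pod supports only around 40 such racks, which is why every watt recovered from optics, cooling, or distribution converts directly into deployable compute. 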

These trends push optical interfaces closer to compute and accelerate adoption of Linear Pluggable Optics (LPO) and Co-Packaged Optics (CPO), delivering higher capacity without added space, power, or operational burden. 

Multi-Core Fiber as a Near-Term Disruptor 

Within the fiber technology landscape, Multi-Core Fiber (MCF) stands out as a disruptive option positioned for medium-term adoption. Unlike traditional single-core fiber, MCF integrates multiple independent cores within a single cladding, with each core carrying an independent optical signal (i.e., capacity scales upward without increasing overall cable diameter). 

For AI campuses and pod architectures with extreme fiber density demands, MCF offers clear benefits. Lower fiber counts between distribution points reduce cableway congestion in trays, conduits, and ducts, making higher-capacity cables more practical than adding new pathways. Compared to equivalent-capacity single-mode fiber bundles, MCF can deliver 3–10× higher fiber density per cable cross-section. 
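
The density argument reduces to cores per fiber multiplied by fibers per cable at a broadly unchanged cable diameter. The fiber and core counts below are assumptions chosen to illustrate the scaling; actual gains depend on core count and cable construction, which is where the 3–10× range comes from.

```python
# Illustrative sketch only: why multi-core fiber multiplies capacity per cable cross-section.
# Fiber counts and cores-per-fiber values are assumptions for illustration.

def cores_per_cable(fiber_count: int, cores_per_fiber: int) -> int:
    """Independent optical cores carried by one cable of a given physical size."""
    return fiber_count * cores_per_fiber

standard_smf = cores_per_cable(fiber_count=144, cores_per_fiber=1)  # assumed conventional 144-fiber cable
mcf_4core = cores_per_cable(fiber_count=144, cores_per_fiber=4)     # assumed 4-core MCF at similar diameter

print(f"Conventional 144-fiber cable: {standard_smf} cores")
print(f"144-fiber, 4-core MCF cable:  {mcf_4core} cores ({mcf_4core // standard_smf}x)")
```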

However, initial adoption may remain selective as connectorization methods, testing standards, and repair procedures continue to mature. Despite this, MCF enables AI campuses and purpose-built pods to extend architectures without increasing cable volume, and the rollout of pluggable optics and CPO with native Multi-Core support could significantly accelerate MCF penetration. 

Long-Term Potential of Hollow-Core Fiber 

Compared to conventional single-mode fiber, hollow-core fiber delivers notable performance advantages (e.g., reduced latency and lower nonlinear effects). However, ecosystem readiness, immature supply chains, and operational complexity mean widespread deployment remains a longer-term prospect for most hyperscale data centers supporting advanced AI and cloud workloads. Early use cases are most likely to emerge in specialized, short-reach applications where latency sensitivity outweighs deployment complexity, rather than as a general replacement for conventional single-mode fiber. 
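
The latency benefit follows directly from the group index of the medium: light travels close to vacuum speed in an air core, versus roughly two-thirds of that in silica. The sketch below quantifies the per-kilometer difference using approximate group index values.

```python
# Illustrative sketch only: per-kilometer latency of solid-core vs hollow-core fiber.
# Group index values are approximate; real fibers vary by design.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def latency_us_per_km(group_index: float) -> float:
    """One-way propagation latency per kilometer, in microseconds."""
    return group_index / C_KM_PER_S * 1e6

smf_latency = latency_us_per_km(1.468)     # approximate standard single-mode fiber
hollow_latency = latency_us_per_km(1.003)  # approximate hollow-core fiber (light mostly in air)

print(f"Standard SMF: {smf_latency:.2f} us/km")
print(f"Hollow-core : {hollow_latency:.2f} us/km")
print(f"Reduction   : {1 - hollow_latency / smf_latency:.0%}")
```

At these values the one-way saving is roughly 1.5 microseconds per kilometer, a reduction of about 30%, which matters most on latency-sensitive short-reach links rather than as a wholesale replacement. 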

Anchoring the Stack 

Across ongoing advances in accelerators, switches, and optics, a consistent architectural principle remains. Active technologies evolve rapidly, while the physical fiber layer must persist across multiple technology cycles. Infrastructure designs that prioritize scalable pathways, disciplined physical structure, and forward-compatible fiber selections deliver greater longevity than designs optimized exclusively for current-generation components. 

The emerging technology narrative centers on cumulative progress rather than singular breakthroughs. Compounding gains in density, efficiency, and physical simplicity will define future scalability, with backbone fiber infrastructure serving as a primary determinant of what can realistically be built in the coming years. 

Market Shifts Series Roundup 

Across the themes explored in this series, from shifting market structures to evolving architectures and the physical foundations beneath them, a clear pattern emerges. The pace of AI growth is now governed less by theoretical performance ceilings and more by the practical constraints of density, power availability, and fiber capacity. As operators diversify and infrastructure designs mature, the physical layer remains the enduring anchor of scalability. This cumulative progress across power, cooling, optics, and connectivity, rather than any single breakthrough, will determine the real limits of hyperscale expansion in the decade ahead. 
