In a bold move to take on Nvidia’s dominance in AI hardware, some of the world’s biggest tech players have unveiled a new open GPU interconnect standard that could reshape the future of artificial intelligence computing.
The Ultra Accelerator Link (UALink) Consortium, which was quietly formed last summer, has now made its first major play. Comprising industry heavyweights like AMD, Intel, and Microsoft, the group has officially ratified the UALink 200G 1.0 Specification — a major milestone in the race to create a more open and flexible future for AI hardware.
This open standard is designed to allow up to 1,024 AI accelerators to communicate seamlessly inside large computing clusters. And that’s not just a nice-to-have feature — it’s a direct challenge to Nvidia’s NVLink, the proprietary system that currently dominates the AI interconnect space.
“As the demand for AI compute grows, we are delighted to deliver an essential, open industry standard technology that enables next-generation AI/ML applications to the market,” said Kurtis Bowman, board chair of the UALink Consortium and director for architecture and strategy at AMD.
UALink Could Redefine How the Cloud Powers AI
The potential here isn’t just technical — it’s transformative.
With UALink 200G 1.0, data centers can now build multi-node systems with lower latency and far greater flexibility than before. The spec defines what is essentially a switch ecosystem for accelerators, making it easier for massive arrays of GPUs to work together within a single pod with minimal communication overhead.
Bowman says the tech is a game-changer for companies pushing the limits of AI:
“The groundbreaking performance made possible with the UALink 200G 1.0 Specification will revolutionise how Cloud Service Providers, System OEMs, and IP/Silicon Providers approach AI workloads.”
For an industry where speed and scalability are everything, that's a massive leap forward. The UALink protocol uses a memory-semantic load/store mechanism that pairs the raw bandwidth of Ethernet with latency comparable to PCIe switches: the best of both worlds for AI computing.
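To make the load/store idea concrete, here is a deliberately simplified sketch. UALink is a hardware interconnect protocol, not a software API, so the classes and method names below (`LoadStoreFabric`, `RemoteAccelerator`, `load`, `store`) are purely illustrative assumptions; the point is only to show what "memory-semantic" access means — a remote accelerator's memory is read or written directly, as if it were part of one shared address space, rather than through explicit message request/reply round trips.

```python
# Illustrative toy model only -- not the UALink API. All names here are
# hypothetical, invented to show the memory-semantic load/store idea.

class RemoteAccelerator:
    """Toy stand-in for one accelerator's local memory."""
    def __init__(self, size):
        self.mem = [0] * size

class LoadStoreFabric:
    """Memory-semantic access: a store or load addresses remote memory
    directly, the way a load/store interconnect exposes a pod of
    accelerators as one address space."""
    def __init__(self, accelerators):
        self.accelerators = accelerators

    def store(self, accel_id, addr, value):
        # One direct write into the target accelerator's memory.
        self.accelerators[accel_id].mem[addr] = value

    def load(self, accel_id, addr):
        # One direct read -- no software request/reply handshake.
        return self.accelerators[accel_id].mem[addr]

# A toy "pod" of 4 accelerators, each with 1,024 memory slots.
pod = LoadStoreFabric([RemoteAccelerator(1024) for _ in range(4)])
pod.store(accel_id=2, addr=7, value=42)
print(pod.load(accel_id=2, addr=7))  # 42
```

In a real fabric the latency win comes from doing this in hardware: the accelerator issues a load or store and the switch routes it, with no software messaging stack in the path.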