Cornelis Networks announced on Tuesday the launch of a new hardware and software package aimed at more efficiently connecting up to 500,000 artificial intelligence (AI) chips. The launch targets a fundamental problem in AI data centers: AI processors operate at extremely high speeds, but the network infrastructure connecting them struggles to keep up, limiting overall system performance.
Cornelis, which spun off from Intel in 2020 and is still backed by Intel's venture capital arm, aims to remove this data bottleneck. Its new CN5000 network chips are based on Omni-Path technology, designed specifically for large-scale, high-speed interconnects. The first customers, including the U.S. Department of Energy, will begin receiving the chips in the third quarter of this year.

The inefficiency of AI networking has drawn attention across the industry. Nvidia tackled the problem with the InfiniBand protocol it gained by acquiring networking company Mellanox for $6.9 billion in 2020, while other major players such as Broadcom and Cisco continue to develop solutions based on Ethernet, the open standard long used for internet infrastructure.

Despite its ties to Intel, Cornelis designed its technology to be vendor-agnostic: its systems are compatible with AI accelerators from Nvidia, AMD, and other manufacturers, and support open-source platforms. Cornelis CEO Lisa Spelman said the next generation of chips, due in 2026, will also support Ethernet networks, giving customers greater flexibility.
Spelman told Reuters on May 30:
“These problems are being tackled with 45-year-old and 25-year-old architectures. We want to pave a new way that offers both high-end chip performance and strong economic value.”
With the rapid growth of the artificial intelligence market, Cornelis’ new approach could make the company a key player in next-generation data center infrastructure.