Although they span the shortest physical distances of any interconnect class, die-to-die (D2D) interconnects and other so-called scale-inside technologies, which move data between compute and memory die inside XPUs and chip packages, have an outsized impact on performance, efficiency and total cost of ownership.
Meanwhile, the growing scale and complexity of AI workloads, along with the expected development of 3D and other multilayer chip designs, mean that more data will move across more, longer and faster scale-inside links than ever before. Advances in packaging, high-speed I/O interfaces, and other technologies will be needed to meet the demands of tomorrow’s algorithms.
At OFC, Marvell showcased a recent advancement in scale-inside: a 3nm 40 Gbps D2D interface for linking HBM and compute die within the same chip package. The increase in speed enables designers to dramatically improve performance and latency while driving down the power and silicon area needed for interfaces. The wide-open bathtub curve shown on the monitors in the video demonstrates exceptional signal integrity and reliability.
Next-gen die-to-die interconnects in action. The wide bathtub curve indicates exceptional signal integrity and reliability.
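For readers unfamiliar with the metric, a bathtub curve plots bit-error rate against the sampling position within one unit interval; the wider and flatter the bottom of the curve, the more timing margin the link has. The sketch below is a minimal, illustrative model only, assuming an NRZ lane with purely Gaussian random jitter and arbitrary example values for jitter and target error rate; it is not a model of the Marvell interface itself.

```python
import math

UI_PS = 25.0          # unit interval of a 40 Gbps NRZ lane, in picoseconds
RJ_SIGMA_PS = 0.9     # assumed RMS random jitter in ps (illustrative value only)
TARGET_BER = 1e-12    # example target bit-error rate

def bathtub_ber(t_ps: float) -> float:
    """BER at sampling position t_ps inside the unit interval, assuming
    purely Gaussian random jitter contributed by each eye edge."""
    left = 0.5 * math.erfc(t_ps / (math.sqrt(2) * RJ_SIGMA_PS))
    right = 0.5 * math.erfc((UI_PS - t_ps) / (math.sqrt(2) * RJ_SIGMA_PS))
    return left + right

# Sweep sampling positions and measure the eye opening at the target BER.
step = 0.01  # ps
open_positions = [i * step for i in range(int(UI_PS / step) + 1)
                  if bathtub_ber(i * step) < TARGET_BER]
if open_positions:
    opening = open_positions[-1] - open_positions[0]
    print(f"Eye opening at BER {TARGET_BER:g}: {opening:.1f} ps "
          f"({100 * opening / UI_PS:.0f}% of the unit interval)")
else:
    print("Eye is closed at the target BER")
```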
The next step: 64 Gbps D2D interfaces with bandwidth densities of over 30 Tbps per millimeter. An industry first, the bi-directional 64 Gbps D2D interface IP from Marvell delivers dramatic gains in bandwidth, bandwidth density, performance and efficiency in a smaller die area, along with unique reliability features such as redundant lanes and automatic lane repair to improve yield and reduce bit-error rates.1
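As a rough back-of-envelope illustration, the sketch below shows how per-lane speed and the number of lanes packed along a millimeter of die edge translate into bandwidth density. The lane count is a hypothetical value chosen only to land near the 30 Tbps/mm figure; the actual lane layout and bump pitch of the Marvell interface are not described here.

```python
# Back-of-envelope bandwidth-density estimate for a die-to-die interface.
# All inputs are illustrative assumptions, not Marvell specifications.

LANE_RATE_GBPS = 64     # per-lane signaling rate cited above
LANES_PER_MM = 480      # hypothetical lane count per mm of die edge

density_tbps_per_mm = LANE_RATE_GBPS * LANES_PER_MM / 1000
print(f"{density_tbps_per_mm:.1f} Tbps per mm of die edge")
# -> 30.7 Tbps/mm, in the range of the >30 Tbps/mm figure cited above
```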
The Big Picture
How big can the impact be from small connections like this? XPUs can account for over 40% of the energy consumed by AI servers,2 and anywhere from 50% to 91% of the energy consumed by XPUs can go to moving data between high-bandwidth memory, caches and computing cores.3 That means 20% or more of the electricity used by AI servers can be spent transferring data across the scale-inside connections operating in data centers.
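The arithmetic behind that estimate is simple to reproduce: multiply the two cited ranges together, as in the short sketch below. The inputs are the published estimates quoted above, so the result is only as precise as those figures.

```python
# Share of total AI-server energy spent moving data inside the XPU package,
# combining the two cited estimates (illustrative ranges, not measurements).

XPU_SHARE_OF_SERVER = 0.40                 # XPUs: over 40% of AI-server energy
DATA_MOVEMENT_SHARE_OF_XPU = (0.50, 0.91)  # 50%-91% of XPU energy is data movement

low = XPU_SHARE_OF_SERVER * DATA_MOVEMENT_SHARE_OF_XPU[0]
high = XPU_SHARE_OF_SERVER * DATA_MOVEMENT_SHARE_OF_XPU[1]
print(f"Scale-inside data movement: {low:.0%} to {high:.0%} of server energy")
# -> roughly 20% to 36% of total server energy
```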
Power consumption, of course, will vary by chip architecture, technologies and other factors, but the estimates give a sense of how minute changes at the chip and packaging level can have profound impacts that ripple across an entire infrastructure. Expect to see more milestones in scale-inside networking from Marvell and its partners.
# # #
This blog contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this blog. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the “Risk Factors” section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
Tags: AI infrastructure, Optical Interconnect, Optical DSPs, DSP, data center interconnect, AI
Copyright © 2026 Marvell, All rights reserved.