Lightmatter, a leader in photonics and AI infrastructure, is partnering with Cadence to accelerate the development of co-packaged optics (CPO) solutions for large-scale AI and HPC workloads.
On January 26, 2026, Lightmatter announced an agreement to integrate Cadence’s high-speed SerDes and UCIe™ IP with its Passage™ optical engine, targeting a roadmap compatible with advanced-node CMOS technology. The partnership is a significant step toward photonic interconnect solutions that can meet the bandwidth and energy demands of future AI systems.
“The next big leap in AI performance requires a fundamental change in how we move data,” said Ritesh Jain, SVP of Engineering & Operations at Lightmatter. “Cadence’s connectivity IP is an ideal complement to our Passage platform. Together, we are paving the way for CPO deployment by solving the most complex optics-electronics integration challenges, ensuring that the next generation of AI clusters can achieve the energy efficiency and bandwidth density required for the next wave of frontier models.”
What This Collaboration Means for AI and Computing
This partnership combines photonic interconnect innovation with leading electronic design automation (EDA) and interconnect IP, and it underscores how next-generation computing must move beyond silicon-only designs. Here’s why it matters:
1. Breaking the Bandwidth Bottleneck in AI Infrastructure
Modern AI workloads, from large language models to deep learning training, require massive internal data movement between accelerators, CPUs/GPUs/XPUs, and network fabrics. Traditional electrical interconnects become increasingly saturated and inefficient at scale. Co-packaged optics solutions such as Lightmatter’s Passage use light (photons) instead of electrons, enabling much higher bandwidth with lower power consumption and fewer thermal limits than copper-based approaches.
By integrating Cadence’s optimized SerDes and Universal Chiplet Interconnect Express (UCIe) technology, a rapidly emerging industry standard, Lightmatter aims to achieve silicon-proven, low-latency, high-speed optical interfaces ready for production-scale deployment.
This shift is essential as AI model sizes and cluster scales continue to grow exponentially; recent Lightmatter research has shown optical solutions can increase bandwidth density up to 8× for AI data centers compared to conventional fiber links.
2. Enabling Scalable, Manufacturing-Ready CPO Designs
While optical interconnect concepts have existed for years, the challenge has always been manufacturability at scale, especially integration with standard CMOS processes and packaging ecosystems. By incorporating Cadence’s mature SerDes and UCIe IP, Lightmatter’s Passage optical engine platform becomes roadmap-ready for volume production, bridging the gap between photonic prototypes and real-world data center deployment.
This collaboration positions Lightmatter to help hyperscalers and AI chip designers integrate optical interconnect technology directly into chip packages, reducing signal loss, improving power efficiency, and enabling rapid communication between compute units. Co-packaged optics represent one of the most promising avenues for escaping the “shoreline” I/O limits that restrict conventional chip designs.
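The “shoreline” limit above can be made concrete with a back-of-envelope calculation: a package’s aggregate off-chip bandwidth is roughly bounded by the usable die-edge length multiplied by the bandwidth density achievable per millimeter of that edge. The sketch below is illustrative only; the die dimensions, the electrical baseline density, and the 8× optical multiplier (borrowed from the Lightmatter research claim cited in this article) are assumed round numbers, not vendor specifications.

```python
# Back-of-envelope "shoreline" I/O estimate. All figures are
# hypothetical assumptions chosen for illustration.

def shoreline_bandwidth_tbps(edge_mm: float, density_gbps_per_mm: float) -> float:
    """Aggregate off-package bandwidth (Tb/s) for a given usable
    die-edge length (mm) and per-mm bandwidth density (Gb/s per mm)."""
    return edge_mm * density_gbps_per_mm / 1000.0

# Hypothetical reticle-sized die, ~26 mm x 33 mm, all four edges usable.
edge_mm = 2 * (26 + 33)  # 118 mm of shoreline

electrical = shoreline_bandwidth_tbps(edge_mm, density_gbps_per_mm=500)
optical = shoreline_bandwidth_tbps(edge_mm, density_gbps_per_mm=500 * 8)

print(f"electrical shoreline limit: {electrical:.0f} Tb/s")  # 59 Tb/s
print(f"optical at 8x density:      {optical:.0f} Tb/s")     # 472 Tb/s
```

The point of the arithmetic is that once the edge is fully consumed by electrical SerDes, the only way to grow off-package bandwidth is to raise the density per millimeter, which is exactly the lever optical I/O pulls.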
3. Supporting the Future of AI Performance Scaling
AI and HPC performance improvements historically relied on increases in transistor density, clock speeds, or parallel compute, approaches now limited by physical and thermal constraints. Optical interconnects help sidestep these constraints by enabling:
Higher bandwidth per link without excessive power draw.
Longer and faster connections between compute nodes (e.g., GPUs, XPUs, and switches).
Improved data center energy efficiency, crucial for sustainability goals.
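The power side of the list above comes down to a simple product: interconnect power equals aggregate bandwidth times energy per bit. The sketch below illustrates this with assumed figures; the 5 pJ/bit electrical and 1 pJ/bit optical values and the 100 Tb/s aggregate are hypothetical placeholders, not measurements from either company.

```python
# Illustrative interconnect power model: power = bandwidth x energy/bit.
# The pJ/bit and Tb/s numbers below are assumptions, not vendor data.

def interconnect_power_kw(bandwidth_tbps: float, pj_per_bit: float) -> float:
    """Power (kW) needed to sustain a given aggregate bandwidth (Tb/s)
    at a given link efficiency (pJ per bit)."""
    bits_per_s = bandwidth_tbps * 1e12
    watts = bits_per_s * pj_per_bit * 1e-12
    return watts / 1000.0

bw = 100.0  # hypothetical aggregate Tb/s across a rack of accelerators
copper_kw = interconnect_power_kw(bw, pj_per_bit=5.0)   # assumed electrical SerDes
optical_kw = interconnect_power_kw(bw, pj_per_bit=1.0)  # assumed co-packaged optics

print(f"copper:  {copper_kw:.2f} kW")
print(f"optical: {optical_kw:.2f} kW")
```

Because the relationship is linear, any reduction in energy per bit translates directly into rack-level power savings at a fixed bandwidth, which is why pJ/bit is the headline metric in most CPO discussions.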
Merging Lightmatter’s photonic interconnect technology with Cadence’s connectivity IP will improve data-movement efficiency, which can translate into faster training and inference for advanced AI models.
Impacts on the AI and Computing Industry
Accelerating AI Infrastructure Innovation
This partnership helps push the industry toward a new class of AI infrastructure where optical data movement is standard, not experimental. As AI workloads scale, traditional interconnects will increasingly struggle with bandwidth and energy limits, challenges that photonic approaches are uniquely positioned to address.
Hyperscale AI data center operators, including cloud providers and research organizations, stand to benefit from next-generation optical interconnects, which would let them build more efficient clusters capable of handling larger models.
Boosting Ecosystem Collaboration and Standards
Pairing UCIe, an open industry-standard die-to-die interface, with optical interconnects encourages an interoperable AI hardware ecosystem. It moves the industry a step closer to a standardized infrastructure stack in which CPUs, GPUs, XPUs, photonics, and packaging tools integrate seamlessly for chip and system architects.
Such collaboration also signals to investors and equipment manufacturers that photonic architectures are maturing into production-ready industry solutions.
Driving Commercial Adoption of Photonics
Optical interconnects have long been considered the future of high-performance computing, but their commercialization has lagged. By combining Lightmatter’s expertise in silicon photonics with Cadence’s IP and EDA tools, this partnership improves the viability of optical networking in mainstream AI infrastructure.
Early adopters of these technologies, particularly those operating at hyperscale, will be better positioned to handle large AI workloads, delivering faster services at a potentially lower total cost of ownership.
Conclusion
The Lightmatter-Cadence collaboration marks a major step forward for optical compute interconnects in AI infrastructure, signaling photonics’ transition from the research-and-development era to the deployment era. As AI models continue to scale and compute requirements rise, energy-efficient, high-bandwidth solutions such as co-packaged optics will play a pivotal role in building future compute infrastructure.