Optoelectronic computing for AI-scale workloads

Light-speed data movement for the next computing threshold.

Xizhi Technology develops hybrid photonic-electronic systems that combine optical throughput with programmable silicon, helping data centers move beyond conventional compute and interconnect limits.

Optical fabric compute + interconnect
250+ global employees
170+ research and engineering roles
428 patent assets
80% invention patent mix
Overview

A hybrid path for denser, cleaner compute.

The company focuses on integrated photonics, advanced packaging, and chip-level optical networking. Its product direction spans acceleration cards, evaluation boards, optical interconnect hardware, and rack-scale compute fabric.

New data processing

Photonic compute paths target low latency, high bandwidth, and lower power draw while preserving the control and programmability expected from electronic systems.

New data transmission

Optical links move data across chip, card, server, and rack distances with a design center built around AI infrastructure scale.

Sustainable infrastructure

The architecture is aimed at compute growth without a proportional rise in energy cost, making performance per watt a first-order product target.
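Performance per watt is simply delivered throughput divided by power draw. A minimal sketch of that metric, using entirely hypothetical numbers chosen for illustration (the source quotes no figures):

```python
# Hypothetical figures for illustration only -- not vendor data.
def perf_per_watt(tops, watts):
    """Throughput (in TOPS) delivered per watt of board power."""
    return tops / watts

# Assumed numbers: same throughput, lower power on the optical path.
electronic = perf_per_watt(tops=400, watts=700)  # hypothetical electronic card
photonic = perf_per_watt(tops=400, watts=300)    # hypothetical hybrid card

# Holding throughput fixed, the efficiency gain is just the power ratio.
print(round(photonic / electronic, 2))
```

The point of the framing: if optical paths cut power draw while holding throughput, performance per watt improves by the power ratio, which is why the metric is treated as a first-order target rather than a side effect.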

Systems

A product stack built around photons and silicon.

Each layer keeps the optical path close to the workload: compute, evaluation, interconnect, and composable data-center fabric.

02

Gazelle evaluation board

A developer-oriented photonic computing board for validation, benchmarking, and early integration work.

03

Risewave optical interconnect

Hardware for optical data movement across high-density compute systems where bandwidth and latency define cluster efficiency.

04

Photowave CXL interconnect

Optical interconnect hardware for memory-rich and composable architectures built on modern data-center protocols.

Platform

From single cards to optical supernodes.

Xizhi Technology frames its work as a compute infrastructure transition: optical links and photonic operation units reduce bottlenecks inside large-scale AI systems, while electronic control keeps the platform compatible with existing software flows.

Distance scale: chip to rack
Architecture: photonic-electronic hybrid
Deployment target: AI compute infrastructure
How it works

The architecture stays legible from physics to deployment.

Optical properties deliver speed and bandwidth; electronic systems provide control; packaging and networking turn the device into deployable infrastructure.

01

Photonic operation

Use light for high-throughput matrix and signal paths where parallelism matters.

02

Electronic control

Keep programmability and system integration anchored in established silicon flows.

03

Optical networking

Move data across boards, servers, and racks without forcing every path through copper.

04

Cluster integration

Package the stack into supernode-class systems for production AI infrastructure.
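The division of labor in steps 01 and 02 can be sketched in a toy model: the optical path performs an analog matrix-vector multiply (with a small noise term standing in for analog imperfections), while the electronic stage re-quantizes the result into a digital representation. This is a minimal illustration under assumed noise and precision figures, not a description of any actual Xizhi hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_mvm(weights, x, noise_std=0.01):
    """Toy model of an optical matrix-vector multiply: the analog
    path computes W @ x, with Gaussian noise standing in for
    analog imperfections (assumed magnitude, for illustration)."""
    ideal = weights @ x
    return ideal + rng.normal(0.0, noise_std, ideal.shape)

def electronic_readout(y, bits=8):
    """Electronic control stage: quantize the analog result back
    into a fixed-point digital value at an assumed bit width."""
    scale = (2 ** (bits - 1) - 1) / max(np.max(np.abs(y)), 1e-12)
    return np.round(y * scale) / scale

W = rng.normal(size=(4, 4))
x = rng.normal(size=4)

analog = photonic_mvm(W, x)      # step 01: photonic operation
digital = electronic_readout(analog)  # step 02: electronic control

# The digital result tracks the ideal product up to noise + quantization.
print(np.max(np.abs(digital - W @ x)))
```

The design choice the sketch mirrors: the analog optical path buys throughput on the matrix work, while the electronic stage keeps results in the fixed-point world that existing software flows expect.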

Updates

Signals from the field.

Recent milestones point to a company moving from photonic computing research into infrastructure-scale implementation.

Supernode technology white paper contribution

The team contributed to industry work on supernode architecture, highlighting the role of optical interconnects in next-generation compute systems.

Commercial optical-switching supernode deployment

The optical interconnect and switching system moved from proof of concept toward a commercial 128-card implementation.

Photonic compute for image recognition

Applied model work demonstrated how optical acceleration can support practical AI workloads beyond lab-scale benchmarks.