Optoelectronic computing for AI-scale workloads
Light-speed data movement for the next era of computing.
Xizhi Technology develops hybrid photonic-electronic systems that combine optical throughput with programmable silicon, helping data centers move beyond conventional compute and interconnect limits.
A hybrid path for denser, cleaner compute.
The company focuses on integrated photonics, advanced packaging, and chip-level optical networking. Its product direction spans acceleration cards, evaluation boards, optical interconnect hardware, and rack-scale compute fabric.
A new approach to data processing
Photonic compute paths target low latency, high bandwidth, and lower power draw while preserving the control and programmability expected from electronic systems.
A new approach to data transmission
Optical links move data across chip, card, server, and rack distances, designed around the scale of AI infrastructure.
Sustainable infrastructure
The architecture is aimed at compute growth without a proportional rise in energy cost, making performance per watt a first-order product target.
A product stack built around photons and silicon.
Each layer keeps the optical path close to the workload: compute, evaluation, interconnect, and composable data-center fabric.
PACE 2 accelerator
A hybrid optoelectronic compute card combining programmable control, photonic matrix operations, and advanced packaging for AI acceleration.
- 8-bit optical compute output precision
- 128 × 128 maximum matrix scale
- Designed for accelerator-class deployment
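To make the two headline specs concrete, the sketch below shows how a larger matrix-vector product could be tiled onto a 128 × 128 matrix unit whose outputs are limited to 8-bit precision. This is a minimal software emulation, not the PACE 2 programming model: the function `photonic_mvm_tile` is a hypothetical stand-in for one optical matrix operation, and the per-tile scaling scheme is an assumption for illustration.

```python
import numpy as np

TILE = 128  # maximum matrix scale stated for the card

def photonic_mvm_tile(w_tile, x_tile):
    """Hypothetical stand-in for one 128x128 optical matrix-vector op.

    The real device would do the multiply in the optical domain; here we
    model only its 8-bit output precision by quantizing the result with a
    simple per-tile scale (an assumed scheme, not the product's).
    """
    y = w_tile @ x_tile
    scale = np.max(np.abs(y)) or 1.0          # avoid divide-by-zero on all-zero tiles
    q = np.clip(np.round(y / scale * 127), -128, 127)  # 8-bit output codes
    return q * scale / 127                     # dequantize for accumulation

def tiled_mvm(w, x):
    """Split an arbitrary-size MVM into 128x128 tiles and accumulate."""
    rows, cols = w.shape
    y = np.zeros(rows)
    for i in range(0, rows, TILE):
        for j in range(0, cols, TILE):
            y[i:i + TILE] += photonic_mvm_tile(
                w[i:i + TILE, j:j + TILE], x[j:j + TILE]
            )
    return y

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 384))  # 2 x 3 grid of 128x128 tiles
x = rng.standard_normal(384)
y = tiled_mvm(w, x)
```

Because quantization happens per tile, the accumulated result tracks the exact product to within a small error that grows with the number of tiles summed along each row, which is one reason output precision is a first-order spec for accelerator-class deployment.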
Gazelle evaluation board
A developer-oriented photonic computing board for validation, benchmarking, and early integration work.
Risewave optical interconnect
Hardware for optical data movement across high-density compute systems where bandwidth and latency define cluster efficiency.
Photowave CXL interconnect
Optical interconnect hardware for memory-rich and composable architectures built on modern data-center protocols.
From single cards to optical supernodes.
Xizhi Technology frames its work as a compute infrastructure transition: optical links and photonic operation units reduce bottlenecks inside large-scale AI systems, while electronic control keeps the platform compatible with existing software flows.
The architecture stays legible from physics to deployment.
Optical properties deliver speed and bandwidth; electronic systems provide control; packaging and networking turn the device into deployable infrastructure.
Photonic operation
Use light for high-throughput matrix and signal paths where parallelism matters.
Electronic control
Keep programmability and system integration anchored in established silicon flows.
Optical networking
Move data across boards, servers, and racks without forcing every path through copper.
Cluster integration
Package the stack into supernode-class systems for production AI infrastructure.
Signals from the field.
Recent milestones point to a company moving from photonic computing research into infrastructure-scale implementation.
Supernode technology white paper contribution
The team contributed to industry work on supernode architecture, highlighting the role of optical interconnects in next-generation compute systems.
Commercial optical-switching supernode deployment
The optical interconnect and switching system moved from proof of concept toward a commercial 128-card implementation.
Photonic compute for image recognition
Applied model work demonstrated how optical acceleration can support practical AI workloads beyond lab-scale benchmarks.