Optical AI


GenXComm’s Photonic Accelerator Circuits (PACs) for Deep Neural Networks deliver up to a 1000x improvement in AI performance.

Photonic Accelerator Circuits (PACs) can achieve orders-of-magnitude performance improvements for AI computation, allowing AI processing to move from the cloud to your in-home wireless assistant or directly onto your mobile device.  This means faster, more secure AI processing that keeps your data on your device and eliminates the need to round-trip requests to remote datacenters.

Our Photonic Accelerator Circuits (PACs) use low-power laser sources and 3-D solid-state chip designs to overcome the limitations of conventional semiconductor-based CPUs.  Silicon photonics will allow chip and system performance to continue to scale post-Moore's law.  Signals travel through these chips at the speed of light, achieving enormous improvements in processing speed without the heat generation of today’s chipsets.  As today’s CPUs and GPUs reach their physical limits, PACs will become the foundation of next-generation AI-enabled devices.



Neural networks are employed in a wide range of applications, from self-driving cars to cancer detection to playing complex games. In many of these applications, Deep Neural Networks (DNNs) now exceed human accuracy: by applying statistical learning to large amounts of data, DNNs obtain an effective representation of the input space and can extract high-level features directly from raw sensory data. That accuracy, however, comes at the cost of high computational latency and power consumption, which prevents AI and machine learning (ML) from being integrated into devices deployed at the edge. Implementing a neural network fully on an optical processor enables light-speed processing of data with significantly less power and in a much smaller form factor than today's graphics processing unit (GPU)-based artificial neural networks (ANNs).
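The core operation an optical processor accelerates is the matrix-vector multiply at the heart of every DNN layer. As a minimal numerical sketch (not GenXComm's actual design): a lossless photonic mesh of interferometers can realize any unitary transform on optical amplitudes, so a general weight matrix is conventionally factored via its singular value decomposition into two unitary meshes plus a row of per-channel attenuators. The matrix `W` and vector `x` below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # hypothetical DNN layer weights
x = rng.standard_normal(4)        # input, encoded as optical amplitudes

# Factor W = U @ diag(s) @ Vh: each unitary (U, Vh) maps to an
# interferometer mesh, and the singular values s map to modulators.
U, s, Vh = np.linalg.svd(W)

# Light propagates through mesh Vh, then the modulators, then mesh U:
y_optical = U @ (s * (Vh @ x))

# The optical pipeline reproduces the electronic matrix multiply.
assert np.allclose(y_optical, W @ x)
```

The decomposition matters because unitary transforms conserve optical power, which is why the passive mesh stages generate essentially no heat; only the modulator row scales amplitudes.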

  • Greater densities, faster signal transmission, lower power, and reduced heat generation

  • 10x-100x improvement in general purpose inference

  • 1000x improvement in training

Use Cases

  • AI Accelerator [Need text]

  • Neuromorphic Computing [Need text]