Photonic computing for Machine Learning at scale
LightOn OPU: the Photonic AI Chip to unlock transformative AI
OPUs integrate tightly with CPUs and GPUs, boosting their respective performance.
OPUs can be seamlessly accessed through an Open Source Python API called LightOnML, available here: https://github.com/lightonai/lightonml
Benchmark code comparing the performance of CPUs and GPUs with our OPU is available on GitHub
The OPU is the first large-scale hybrid digital / analog computing platform for AI.
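A minimal usage sketch following the LightOnML documentation (running it requires access to OPU hardware, e.g. a LightOn Appliance, and API details may vary between library versions):

```python
import numpy as np
from lightonml import OPU

# The OPU expects binary (0/1) input and returns nonlinear random
# features, y ~ |Wx|^2, computed optically.
X = (np.random.rand(3000, 1000) > 0.5).astype(np.uint8)

opu = OPU(n_components=10_000)   # output dimension of the projection
opu.fit1d(X)                     # calibrate the device for this input shape
Y = opu.transform(X)             # (3000, 10000) array of random features
```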
On-premises
LightOn Appliance is our offer for on-premises OPU technology. The Appliance is the most advanced photonic AI/HPC co-processor on the market today reaching a maximum capacity of 2.2 PetaOPS at 30 W TDP.
Cloud Computing
Access requests for LightOn Cloud are no longer accepted.
Please refer to the LightOn Appliance page for more information about our OPU technology.
Photonic Quantum Computing
LightOn Qore: A novel Quantum Photonic Processor
LightOn Qore quantum photonic processors are versatile, powerful, and low-loss platforms designed for the rapidly growing field of NISQ (Noisy Intermediate-Scale Quantum) computing. Available: Q2 2022.
Use cases
Machine Learning Techniques
Why is the training of Neural Networks with an OPU important?
- Optical training enables the OPU to become a cornerstone of the training process itself (a sketch follows the Background list below).
- Great potential for faster training/larger models.
- Demonstrated on a wide range of tasks: optical training has been shown on fully connected networks, graph convolutional networks, and GPTs
Background:
- Background work: Direct Feedback Alignment https://arxiv.org/abs/1609.01596
- Library: no dedicated library yet
- Dataset: widely applicable (RecSys, graphs, NLP, etc.)
- Next-generation photonic core Lazuli 1.5 (Available on LightOn Cloud by Q3 2021)
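Below is a minimal NumPy sketch of Direct Feedback Alignment on a toy two-layer network, with made-up shapes and data. The fixed random feedback projection (B @ e) is the operation the OPU performs optically at much larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 784, 256, 10, 0.05

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # trained forward weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback matrix

x = rng.normal(size=n_in)                # single toy sample
y = np.eye(n_out)[3]                     # one-hot target

for step in range(100):
    # Forward pass
    a1 = W1 @ x
    h1 = np.maximum(a1, 0)                           # ReLU
    logits = W2 @ h1
    p = np.exp(logits - logits.max()); p /= p.sum()  # softmax
    e = p - y                                        # output error

    # DFA: the hidden layer's teaching signal is a fixed random
    # projection of the output error (B @ e), not backprop's W2.T @ e.
    d1 = (B @ e) * (a1 > 0)

    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(d1, x)
```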
Deeper Insight
NeurIPS
- Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
- Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment – “Beyond Backpropagation” Workshop oral
ArXiv
- Principled DFA training: arXiv:1906.04554
- MNIST + optical training: arXiv:2006.01475
Why is fast class-aware low-dimensional embedding of data important?
- Dimensionality reduction makes the whole processing pipeline faster (a sketch of the embedding step follows the Background list below).
- Using information from the labels produces more useful embeddings.
- It is a scalable method: 1.2x to 4x speedup at label dimensionalities from 100k to 700k.
Background:
- Library: LightOnML
- Dataset: dimensionality reduction benchmark datasets
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
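The core embedding step, simulated on CPU with the standard |Wx|² model of the OPU transform; how the labels enter the class-aware variant is not specified above, so this sketch shows only the basic projection, on placeholder data with dimensions kept modest for memory:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 1_000, 10_000, 256

X = rng.random((n, d_in)).astype(np.float32)

# Complex Gaussian random matrix; the OPU computes this projection
# (and the |.|^2 nonlinearity) optically for much larger dimensions.
W = (rng.normal(size=(d_in, d_out))
     + 1j * rng.normal(size=(d_in, d_out))) / np.sqrt(2 * d_in)
Z = np.abs(X @ W) ** 2           # (n, 256) low-dimensional embedding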
Computer Vision
Why is Fast training of image classification models important?
- The Data Science team spends less time training image classification models.
- Allows the use of lower precision arithmetics for training/inference to further reduce training/test time.
- Re-training SOTA models with little data is essential for businesses.
- For data scientists: More iterations are possible on higher-level tasks.
Up to 10x faster than backprop on GPU!
Background:
- Transfer Learning applied to 2D CNNs such as VGG, DenseNet, and ResNet for image classification
- Library: LightOnML
- Dataset: STL10, Skin Cancer, Flowers and other image datasets
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
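A sketch of the pipeline described above, with random placeholders standing in for features from a frozen, pretrained CNN: the features are projected through a (here CPU-simulated) OPU transform, and only a linear classifier is trained.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)

# Placeholder for penultimate-layer features of a pretrained 2D CNN
# (e.g. VGG or ResNet) and their class labels.
feats = rng.normal(size=(500, 4096)).astype(np.float32)
labels = rng.integers(0, 10, size=500)

# Simulated OPU transform |Wx|^2; on hardware the projection is optical.
W = (rng.normal(size=(4096, 2000))
     + 1j * rng.normal(size=(4096, 2000))) / np.sqrt(2 * 4096)
Z = np.abs(feats @ W) ** 2

# Only this linear classifier is trained; no backprop through the CNN.
clf = RidgeClassifier().fit(Z, labels)
```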
Why is Fast training of Video classification models important?
- Training 3D CNNs is extremely time and energy-consuming.
- Can get around the huge memory requirements of CNNs.
- For Data scientists: More iterations are possible on higher-level tasks.
- Much lower sensitivity to the choice of hyperparameters.
Training time: Up to 9x faster than backprop on GPU at the same level of accuracy!
Background:
- Transfer learning on 3D CNNs such as I3D for video action recognition
- Library: LightOnML
- Dataset: HMDB51, UCF101 and other action recognition video datasets
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
Why is Fast training of simple models important?
- State-of-the-art performance with theoretical guarantees in some tasks.
- For Data scientists: More iterations are possible on higher-level tasks.
1.3x to 23.6x faster when extrapolating the curves to 1,000,000
Background:
- Kernel ridge regression approximation for classification tasks.
- Library: LightOnML
- Dataset: qm7 (quantum chemistry), high energy physics data and others (image classification)
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
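A minimal sketch of the kernel ridge regression approximation on synthetic placeholder data: ridge regression fitted in a random-feature space approximates kernel ridge regression with the kernel induced by the feature map.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))                   # placeholder data
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)  # placeholder targets

# Simulated OPU transform, |Wx|^2 with complex Gaussian W; on hardware
# this projection is computed optically for much larger sizes.
W = (rng.normal(size=(100, 5000))
     + 1j * rng.normal(size=(100, 5000))) / np.sqrt(200)
Phi = np.abs(X @ W) ** 2

# Linear model in the random-feature space stands in for the kernel method.
model = Ridge(alpha=1.0).fit(Phi, y)
```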
Natural Language Processing
Why is Fast NLP important?
- The Data Science team spends less time building NLP models and gets results faster.
- Re-training SOTA models with little data is essential for businesses.
- For data scientists: more iterations are possible on higher-level tasks.
Background:
- NLP using Bag Of Random Embedding Projections (BOREP)
- Library: LightOnML
- Dataset: Grand Débat dataset
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
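A minimal BOREP sketch: each word embedding is passed through a fixed random projection and the sentence embedding is a pooling over words (max pooling here, one of the choices studied in the BOREP literature). The word vectors below are random placeholders for pretrained embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_sent = 300, 4096

# Fixed random projection; on an OPU this would be computed optically.
W = rng.uniform(-1 / np.sqrt(d_word), 1 / np.sqrt(d_word),
                (d_sent, d_word))

def borep(word_vectors):
    # (n_words, d_word) -> (d_sent,): project each word, then max-pool.
    return np.max(word_vectors @ W.T, axis=0)

sentence = rng.normal(size=(12, d_word))  # placeholder 12-word sentence
embedding = borep(sentence)               # input to a downstream classifier
```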
Ranking & Recommendations
Why is a fast baseline for recommender systems important?
- To offer a quick and easy baseline for large scale recommender systems.
- MovieLens 20M dataset: 27,000 movies × 138,000 users, with 0.5% non-zero entries
Background:
- Library: LightOnML
- Dataset: Movielens
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
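One plausible reading of such a baseline (an illustrative assumption, not necessarily LightOn's exact method): compress each user's sparse, item-dimensional profile with a fixed random projection, the operation an OPU performs optically, then recommend from similar users in the compressed space.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_users, n_items, k = 2_000, 27_000, 256

# Placeholder sparse ratings (~0.5% non-zero, as in MovieLens 20M).
R = sparse.random(n_users, n_items, density=0.005,
                  format="csr", random_state=0)

# Fixed random projection compresses user profiles to k dimensions.
W = rng.normal(size=(n_items, k)) / np.sqrt(k)
U = R @ W
U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-9

# Recommend for user 0 from the average ratings of the 10 nearest users.
sims = U @ U[0]
neighbors = np.argsort(-sims)[1:11]
scores = np.asarray(R[neighbors].mean(axis=0)).ravel()
top_items = np.argsort(-scores)[:20]
```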
Time Series
Why is Change detection in Molecular Dynamics (MD) simulation important?
MD simulations are used in drug design and new materials discovery
- Applying an intelligent layer on top of HPC simulation enables metadynamics
- 15x faster than FastFood on CPU at 50k atoms!
- The OPU enables analysis of samples containing a very large number of atoms by overcoming the memory bottleneck of traditional architectures.
- For 700k + atoms, NEWMA RP on OPU is expected to be 30x faster than NEWMA FF on CPU!
Background:
- Library: LightOnML
- Dataset: Molecular Dynamics simulations (HPC, Anton)
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
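A minimal sketch of the NEWMA change-detection scheme behind these numbers: two exponentially weighted moving averages of random features of the stream, with different forgetting factors, drift apart when the distribution changes. The random-feature map is what the OPU computes optically (simulated on CPU here); the burn-in and threshold below are ad hoc assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 50, 500                    # input dim, number of random features
W = rng.normal(size=(m, d)) / np.sqrt(d)

def psi(x):
    # Random-feature map of a sample; on an OPU, W @ x is optical.
    return np.cos(W @ x)

lam_fast, lam_slow = 0.05, 0.01   # two forgetting factors
z_fast = np.zeros(m)
z_slow = np.zeros(m)

# Synthetic stream whose distribution shifts at t = 500.
stream = np.vstack([rng.normal(0, 1, (500, d)),
                    rng.normal(2, 1, (500, d))])

for t, x in enumerate(stream):
    z_fast = (1 - lam_fast) * z_fast + lam_fast * psi(x)
    z_slow = (1 - lam_slow) * z_slow + lam_slow * psi(x)
    stat = np.linalg.norm(z_fast - z_slow)
    # In practice burn-in and threshold are calibrated, e.g. from
    # quantiles of the statistic on reference data.
    if t > 300 and stat > 4.0:
        print(f"change detected around t={t}")
        break
```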
Why is change detection in time-evolving graphs important?
- Detects changes in real time, without the need to store the whole history of the graph
- Applications in community detection, fraud detection, biology, and others
- Reduces memory requirements, facilitating the analysis of very large datasets
Background:
- Library: LightOnML
- Dataset: Facebook graph datasets, any time-evolving graph
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
Why is Reservoir Computing important?
- Only a linear readout layer needs to be trained, while matching the prediction capabilities of Recurrent Neural Networks on chaotic time series
- It is energy efficient and can be deployed on the edge
- Using the OPU allows for larger reservoir sizes at no cost in computation or memory
Prediction capabilities on chaotic time series on par with RNNs!
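A minimal echo-state-network sketch of reservoir computing on a toy signal: the large fixed random matrices are the part an OPU can implement optically, and only the linear readout is trained. Sizes and the sine-wave stand-in for a chaotic series are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 300, 2000
u = np.sin(np.linspace(0, 60, T + 1))    # toy input series

W_in = rng.normal(size=n_res)             # fixed random input weights
W_res = rng.normal(size=(n_res, n_res))   # fixed random reservoir weights
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

# Drive the reservoir and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_res @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout (ridge regression) to predict the next
# value of the series; the first 200 states are discarded as washout.
X, y = states[200:], u[201:T + 1]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```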
Machine Intelligence
Why is Reinforcement Learning with an OPU important?
- It can tackle high-dimensional control problems in robotics or trading with a more efficient “memory” for exploitation.
- Accelerate early stages of learning of an agent with imitation learning.
Background:
- Library: LightOnML
- Dataset: Atari Games
- Nitro photonic core, Aurora 1.5 (LightOn Cloud)
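A hypothetical sketch of the imitation-learning angle, illustrating the idea rather than LightOn's exact pipeline: high-dimensional observations are projected through a (here simulated) OPU transform, and a linear policy is trained to clone expert actions. All data below is a synthetic placeholder for (observation, expert action) pairs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Binarized 84x84 frames stand in for preprocessed Atari screens.
obs = rng.integers(0, 2, size=(1000, 84 * 84), dtype=np.uint8)
actions = rng.integers(0, 4, size=1000)   # placeholder expert actions

# Simulated OPU random features of each observation.
W = (rng.normal(size=(84 * 84, 512))
     + 1j * rng.normal(size=(84 * 84, 512))) / np.sqrt(2 * 84 * 84)
Z = np.abs(obs @ W) ** 2
Z = (Z - Z.mean()) / Z.std()              # normalize for the linear model

# Behavior cloning: a linear policy trained to imitate the expert.
policy = LogisticRegression(max_iter=200).fit(Z, actions)
```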