Access the LightOn OPU technology now through LightOn Cloud.
We are providing instances that comprise a LightOn Aurora 1.5 OPU with a Nitro photonic core, as well as an Nvidia® Tesla V100 32GB GPU and a 16-core Intel® Xeon® Gold CPU with 128GB of RAM.
Offloading memory-intensive dense computations onto the OPU frees up your CPU and GPU, and can significantly accelerate computing pipelines, sometimes by a factor of 10 or more.
Run your current Machine Learning models, and boost them using the OPU through a single line of code. The OPU library is compatible with a number of popular Machine Learning frameworks, such as PyTorch and scikit-learn. Leverage what the community has developed across a variety of use cases, and let the community know what’s on your GitHub!
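To illustrate the kind of drop-in integration described above, here is a minimal sketch of how an OPU-style random projection slots into a scikit-learn pipeline. The OPU optically computes random features of the form y = |Rx|², where R is a fixed random matrix; the transformer below (a hypothetical `SimulatedOPUMap`, not the real OPU library's API) simulates that operation on CPU, so the surrounding pipeline code stays identical whether the projection runs in silicon or in light.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

class SimulatedOPUMap(BaseEstimator, TransformerMixin):
    """CPU simulation of an OPU random projection, y = |R x|^2.

    Hypothetical class for illustration only; the actual OPU library
    exposes its own transformer backed by the photonic hardware.
    """

    def __init__(self, n_components=1000, seed=0):
        self.n_components = n_components
        self.seed = seed

    def fit(self, X, y=None):
        # Draw a fixed complex random matrix R, mimicking the optical medium.
        rng = np.random.default_rng(self.seed)
        d = X.shape[1]
        self.R_ = rng.standard_normal((d, self.n_components)) \
            + 1j * rng.standard_normal((d, self.n_components))
        return self

    def transform(self, X):
        # Intensity measurement: squared modulus of the random projection.
        return np.abs(X @ self.R_) ** 2

# The projection is just another pipeline step: swapping the simulated
# transformer for the hardware-backed one is the "single line" change.
rng = np.random.default_rng(42)
X = rng.random((200, 30))
y = (X.sum(axis=1) > 15).astype(int)

model = make_pipeline(SimulatedOPUMap(n_components=500), LogisticRegression(max_iter=1000))
model.fit(X, y)
features = model[:-1].transform(X)
```

Because the transformer follows the scikit-learn fit/transform convention, it composes with any downstream estimator, cross-validation, or grid search without further changes.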