Some Common Questions and Answers about LightOn's Technology

For FAQs about LightOn products, please visit our “Products” page.

What is the difference between LightOn and other AI chip companies?

  • LightOn is re-defining computing for some of today’s largest challenges in AI and HPC. We develop new photonic computing hardware that is orders of magnitude more efficient than silicon chips. We advocate for a hybrid approach, in which standard chips and LightOn’s photonic co-processors ideally complement each other, eventually getting the best of both worlds. On the algorithmic side, whether seamlessly integrated into existing computing pipelines or used with purpose-built optimized algorithms, LightOn’s technology makes it radically easier to process large-scale data. With massive models now more accessible, we provide unique tools to unlock the tremendous economic, societal, and scientific impact of Transformative AI.
  • LightOn is built around a community of AI / ML / HPC researchers and engineers in both academia and industry. We communicate with the scientific and tech community through papers, conference presentations, preprints, blog posts, workshops, and our API documentation pages. We also maintain our own GitHub account, where we open-source most of the LightOn AI Research (LAIR) algorithms, and we organize monthly meetups with world-class guest researchers; you are welcome to join us at the next one!

Do I need to be familiar with Physics / Photonics / Optics in order to use the OPUs?

Not at all! LightOn’s technology harnesses light-matter interaction to perform computations “at the speed of light”, but let us worry about that: importing our libraries takes a single line of Python code. We also have plenty of documentation, a user forum, and responsive support. Some users have told us that they were up and running in only half a day.
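As an illustration, here is roughly what a first Random Projection looks like in Python. The class and parameter names below (OPU, n_components, fit1d, transform) follow the lightonml library’s typical quick-start usage, but treat this as a sketch and check the current API documentation for your installed version.

  # Sketch only: names follow typical lightonml usage and may differ between
  # library versions; check the API documentation.
  import numpy as np
  from lightonml import OPU

  # OPUs expect binary inputs: here, 3000 random binary vectors of dimension 1000.
  X = np.random.randint(0, 2, size=(3000, 1000), dtype=np.uint8)

  opu = OPU(n_components=10_000)   # target output dimension
  opu.fit1d(X)                     # configure the device for these 1D inputs
  features = opu.transform(X)      # the Random Projection, computed optically

  print(features.shape)            # (3000, 10000)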

Do you have plans to support operations other than Random Projections on LightOn OPUs?

Of course, we are only getting started! You can be among the first to learn about what we are working on by subscribing to our monthly newsletter.

Can you help me understand if LightOn’s technology is a good fit for my use-cases?

We would be happy to speak with you about your needs! LightOn may be able to provide consulting services to perform this analysis with your specific use case.

How can I use an OPU for my research projects?

  • As a researcher, you are eligible for the LightOn Cloud Research program that offers 20 free hours of LightOn Cloud usage. You can apply here.
  • If you are interested in using a LightOn Appliance for your research (for instance an H2020/Horizon Europe project or a nationally-funded project), please contact us here to discuss possible arrangements.

What is the Random Projection performed by the current OPU family?

A Random Projection, performed by the current OPU family (Aurora), is a specific kind of matrix-vector multiplication: the multiplication of an input vector with a matrix of fixed random coefficients.

  • Random Projections have a long history in the analysis of large-scale data, since they achieve universal data compression. For example, you can use them to reduce the size of any type of data while retaining the important information. There are well-established mathematical guarantees on why this works, in particular thanks to the Johnson-Lindenstrauss lemma. Essentially, this lemma states that compression through Random Projections approximately preserves distances: if two items are similar in the original domain, their compressed versions will also be similar (the short sketch after this list makes this concrete). In a Neural Network framework, this operation is simply a fixed, fully-connected random layer, commonly used to reduce data dimension. In randomized Numerical Linear Algebra, Random Projections are one of the key tools for handling data that is too large for classical methods (for instance, in Randomized SVD).
  • Random Projections can also be used for data expansion, for instance when the classes are not easily separated in the original domain: expanding into a higher dimension makes linear separation more effective; asymptotically, this actually approximates a well-defined kernel.
  • Regardless of input/output dimensions, Random Projections also have some optimal data mixing properties, used for instance for data pre-conditioning.
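To make the compression and distance-preservation points above concrete, here is a minimal NumPy sketch of a Random Projection performed on a CPU; the dimensions are arbitrary, and the OPU performs the same kind of operation optically, at much larger scales.

  import numpy as np

  rng = np.random.default_rng(0)
  d, k = 10_000, 500                            # original and compressed dimensions (arbitrary)
  X = rng.standard_normal((100, d))             # 100 data points
  R = rng.standard_normal((d, k)) / np.sqrt(k)  # fixed random matrix

  Y = X @ R                                     # the Random Projection: a plain matrix product

  # Johnson-Lindenstrauss in action: pairwise distances are approximately preserved.
  i, j = 3, 42
  original  = np.linalg.norm(X[i] - X[j])
  projected = np.linalg.norm(Y[i] - Y[j])
  print(f"relative distortion: {abs(projected - original) / original:.3f}")  # typically a few percent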

Can Random Projections be applied to any data type?

Yes. Random Projections are universal operators that can therefore be applied to any type of data: images and videos, time-series, scientific experimental data, HPC simulation results, text and tokenized text, graphs, audio/speech signals, financial data, or more abstract features … anywhere one is “drowned in data but starved for wisdom”.

Can I perform a Random Projection on a CPU or a GPU?

Yes, at small scales you can perform the multiplication with a random matrix on a CPU or GPU (or an FPGA …). However, at larger scales an OPU allows you to perform such an operation much faster, with much lower power consumption, and without hitting memory limits. Depending on your hardware configuration, the crossover may appear at dimensions from a few thousand to a few tens of thousands, not even counting the one-off cost of generating the random matrix.

We even have a simulated OPU mode in our API, so that switching the same Random Projection between the OPU and a CPU/GPU takes only a single line of code.
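To see why memory becomes the bottleneck on conventional hardware, the short sketch below computes the footprint of explicitly storing a dense d × d random matrix in float32; the exact crossover point, of course, depends on your hardware.

  # Back-of-the-envelope memory cost of an explicit dense d x d random matrix
  # in float32 (4 bytes per coefficient).
  for d in (1_000, 10_000, 100_000, 1_000_000):
      print(f"d = {d:>9,}: {d * d * 4 / 2**30:>10,.1f} GiB")

  # d =     1,000:        0.0 GiB
  # d =    10,000:        0.4 GiB
  # d =   100,000:       37.3 GiB   (already beyond most single GPUs)
  # d = 1,000,000:    3,725.3 GiB   (an OPU never stores this matrix explicitly)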

Does the OPU work in the analog or in the digital domain?

Short Answer: The OPU works internally in the analog domain, but has digital inputs/outputs.

Long Answer: The computations inside the OPU are performed in an analog fashion, following a non-von Neumann computing paradigm. Such low-precision computing is now standard in Machine Learning, which is fundamentally about statistics on noisy data. In practice, we have repeatedly observed that the end result (e.g. in terms of classification rate) is not affected compared to the same computations performed digitally in full 32-bit precision.
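As a rough illustration of why limited precision is harmless for this kind of workload, the toy sketch below quantizes the output of a simulated Random Projection to 8 bits and checks how little a pairwise distance moves; it is only an analogy, not a characterization of the OPU’s actual analog noise.

  import numpy as np

  rng = np.random.default_rng(1)
  X = rng.standard_normal((200, 2_000))
  R = rng.standard_normal((2_000, 1_000)) / np.sqrt(1_000)
  Y = X @ R                                  # "exact" float64 projection

  # Crude 8-bit quantization of the projected features.
  scale = np.abs(Y).max() / 127
  Y_q = np.round(Y / scale) * scale

  exact     = np.linalg.norm(Y[0] - Y[1])
  quantized = np.linalg.norm(Y_q[0] - Y_q[1])
  print(f"relative change in distance: {abs(quantized - exact) / exact:.5f}")  # typically well below 1%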

Can the OPU produce random numbers fast?

OPUs use fixed random matrices of extremely large sizes, with more than 10¹² (one trillion) random parameters drawn from physics-guaranteed statistical distributions. OPUs use these matrices for super-fast matrix-vector multiplications without ever having to explicitly identify the random coefficients. Extracting these random numbers is possible, but it would be slow (not to mention the storage issues), so it is definitely not as efficient as a (True) Random Number Generator.
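For a sense of scale, here is the storage that writing out such a matrix explicitly would require (assuming float32 coefficients):

  n_coefficients = 10**12                  # order of magnitude quoted above
  terabytes = n_coefficients * 4 / 1e12    # 4 bytes per float32 coefficient
  print(f"~{terabytes:.0f} TB just to store the matrix")   # ~4 TB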

Can I mine cryptocurrencies (Bitcoin, etc.) with LightOn OPUs?

No.
