
Slash the training cost of your models with random learning signals and without feedback

At LightOn, we regularly organize meetups to gather ML researchers and engineers around topics of interest. Our meetups happen online in order to keep our attendees safe until further notice!

October 13, 2020

TL;DR

At our last Meetup, we welcomed Charlotte Frenkel, postdoctoral researcher at the Institute of Neuroinformatics, UZH and ETH Zürich, Switzerland, and Martin Lefebvre, teaching assistant and PhD student at the Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Belgium.

In their presentation Learning without feedback: Fixed random learning signals allow for feed-forward training of deep neural networks, they introduced the Direct Random Target Projection (DRTP) algorithm, a weight-transport-free, update-unlocked evolution of feedback alignment.

The weight transport problem refers to the biological implausibility of backpropagation's requirement that the backward pass use the exact transpose of the forward weights. Feedback alignment methods solve this issue by using a separate backward path with fixed random weights. Update unlocking means that the update of a layer can be computed as soon as that layer has executed, without waiting for the forward pass to finish or for a gradient signal to arrive from later layers. This property has obvious benefits for training time.
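To make the contrast concrete, here is a minimal NumPy sketch of a Direct Feedback Alignment (DFA) training step on a toy two-hidden-layer MLP. All dimensions, initializations, and variable names are illustrative assumptions, not the speakers' code. The point is that the backward path uses fixed random matrices instead of the transposed forward weights, which removes weight transport; but the error still has to travel back from the output, so the updates remain locked until the forward pass ends.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a toy 2-hidden-layer MLP (assumed, not from the talk).
n_in, n_h, n_out, lr = 784, 256, 10, 1e-2

W1 = rng.normal(0.0, 0.05, (n_h, n_in))
W2 = rng.normal(0.0, 0.05, (n_h, n_h))
W3 = rng.normal(0.0, 0.05, (n_out, n_h))

# Fixed random feedback matrices: the backward path never uses the
# transposed forward weights, sidestepping the weight transport problem.
B1 = rng.normal(0.0, 0.05, (n_h, n_out))
B2 = rng.normal(0.0, 0.05, (n_h, n_out))

relu = lambda x: np.maximum(x, 0.0)
relu_grad = lambda x: (x > 0).astype(x.dtype)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One DFA step on a single (input, one-hot label) pair.
x = rng.normal(size=n_in)
y_onehot = np.eye(n_out)[3]

a1 = W1 @ x;  h1 = relu(a1)
a2 = W2 @ h1; h2 = relu(a2)
y_hat = softmax(W3 @ h2)

e = y_hat - y_onehot              # global error: only known after the forward pass
d1 = (B1 @ e) * relu_grad(a1)     # error broadcast through fixed random weights
d2 = (B2 @ e) * relu_grad(a2)

W1 -= lr * np.outer(d1, x)
W2 -= lr * np.outer(d2, h1)
W3 -= lr * np.outer(e, h2)        # the output layer uses its true local error
```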

Figure 1: Schematics for the Backpropagation (BP), Feedback Alignment (FA), Direct Feedback Alignment (DFA) and Direct Random Target Projection (DRTP) algorithms.

The idea of DRTP (Figure 1) is that we can compute a synthetic gradient starting from a fixed random projection of the target vector. Wild, right? After showing with a few simple experiments that the sign of the error vector alone carries enough information for training, Charlotte and Martin observed that for classification the error sign is known in advance from the label, so we do not need a feedback path anymore!
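Here is the same toy setup rewritten as a hedged DRTP sketch; again, all names and hyperparameters are assumptions, and the sign convention follows common feedback-alignment practice rather than the paper's exact notation. The hidden layers learn from fixed random projections of the one-hot label, which is available before the forward pass even starts, so each layer can be updated as soon as its pre-activations are computed. That is the update unlocking in action.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_h, n_out, lr = 784, 256, 10, 1e-2
W1 = rng.normal(0.0, 0.05, (n_h, n_in))
W2 = rng.normal(0.0, 0.05, (n_h, n_h))
W3 = rng.normal(0.0, 0.05, (n_out, n_h))

# Fixed random matrices that project the *target*, not the error.
B1 = rng.normal(0.0, 0.05, (n_h, n_out))
B2 = rng.normal(0.0, 0.05, (n_h, n_out))

relu = lambda x: np.maximum(x, 0.0)
relu_grad = lambda x: (x > 0).astype(x.dtype)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=n_in)
y_onehot = np.eye(n_out)[3]

# The hidden-layer learning signals are random projections of the label,
# known before the forward pass starts: no feedback path is needed.
t1 = B1 @ y_onehot
t2 = B2 @ y_onehot

a1 = W1 @ x
W1 -= lr * np.outer(t1 * relu_grad(a1), x)    # update layer 1 immediately
h1 = relu(a1)

a2 = W2 @ h1
W2 -= lr * np.outer(t2 * relu_grad(a2), h1)   # update layer 2 immediately
h2 = relu(a2)

y_hat = softmax(W3 @ h2)
W3 -= lr * np.outer(y_hat - y_onehot, h2)     # output layer uses its local error
```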

Charlotte and Martin concluded by pointing to a neuroscience paper showing mechanisms similar to DRTP in the brain, and to Charlotte’s work on a circuit implementation of DRTP with record-low silicon area and energy overheads for on-chip learning at the edge!

Check out the slides for this talk! You can also watch the video of the presentation.

Subscribe to our Meetup to get notified of upcoming events; there is more cool stuff coming!

At LightOn we compute dense random projections literally at the speed of light. If you want to try out these algorithms, or a new idea you had, you can register for LightOn Cloud or apply to the LightOn Cloud for Research Program!

About Us

LightOn is a hardware company that develops new optical processors that considerably speed up Machine Learning computation. LightOn’s processors open new horizons in computing and engineering fields that are facing computational limits. Interested in speeding your computations up? Try out our solution on LightOn Cloud! 🌈

Follow us on Twitter at @LightOnIO, subscribe to our newsletter, and/or register to our workshop series. We live stream, so you can join from anywhere. 🌍

The author

Iacopo Poli, Lead Machine Learning Engineer at LightOn AI Research.

Acknowledgments

We would like to thank Victoire Louis for undertaking most of the effort in the organization of the meetup.
