
LightOn’s AI Research Workshop — FoRM #4: The Future of Random Matrices. Thursday, December 19th.

On Thursday, December 19th, we held our 4th research workshop on the Future of Random Matrices (FoRM #4).

December 17, 2019

TL;DR

On Thursday, December 19th, we held our 4th research workshop on the Future of Random Matrices (FoRM #4). We had an exciting and diverse line-up, with talks on applications of random projections in compressive learning, matrix factorization, and even particle physics, as well as on efficient machine learning through either neural network binarization or the replacement of convolutions with simpler operators.

Register on our Meetup group to stay up to date with our future workshops. FoRM #5 will be held on April 2nd, 2020.

Abstracts of the talks and slides can be found at the end of this post. A recording of the event is available below.

Summary

Compressive learning with random projections, Ata Kaban

Ensembles of random projections can deliver major accuracy improvements across a variety of datasets and settings. Credit: Ata Kaban.

Ata Kaban gave us an overview of compressive learning, with insights on precise bounds as well as theoretical guarantees. In the applications she described, high-dimensional data are first randomly projected into a lower-dimensional space before classic machine learning techniques (SVMs, NNs, etc.) are applied to untangle them. This approach not only makes high-dimensional data more manageable, but also acts as a strong regularizer.
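
For readers who want to try this, here is a minimal sketch of the project-then-learn pipeline using scikit-learn; the synthetic data, the target dimension, and the choice of a linear SVM are illustrative assumptions, not taken from Ata's experiments.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Toy high-dimensional data standing in for a real dataset.
X = rng.normal(size=(500, 2000))
y = (X[:, :10].sum(axis=1) > 0).astype(int)

# Randomly project to 64 dimensions, then train a classic learner
# on the compressed representation.
clf = make_pipeline(
    GaussianRandomProjection(n_components=64, random_state=0),
    LinearSVC(),
)
clf.fit(X, y)
```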

Ata also discussed learning on ensembles of random projections. In this case, different projections of the data are used to build an ensemble of classifiers. This technique can drastically increase performance, and since the members are independent, the computations can be run in parallel. Slides are available here.
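
Building on the sketch above, an ensemble version might look as follows; the number of members and the majority-vote combination are assumptions for illustration, not Ata's exact setup.

```python
from sklearn.ensemble import VotingClassifier

# One classifier per independent random projection (different
# random_state per member), combined by majority vote.
ensemble = VotingClassifier(estimators=[
    (f"rp{i}", make_pipeline(
        GaussianRandomProjection(n_components=64, random_state=i),
        LinearSVC(),
    ))
    for i in range(10)
])
ensemble.fit(X, y)
```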

Medical Applications of Low Precision Neuromorphic Systems, Bogdan Penkovsky

Binarized neural networks can even be applied to classic computer vision tasks, with minimal loss in accuracy. Credit: Bogdan Penkovsky.

Bogdan Penkovsky presented his work on neuromorphic hardware for medical systems, in which he leverages recent machine learning advances in resource-constrained settings. The type of hardware Bogdan described allows computations to be performed in-memory, but requires binarized neural networks for inference. Thanks to a number of tricks, binarization can come at minimal performance cost at inference time, and these techniques even work with classic convolutional architectures such as MobileNet. Slides are available here.
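
The core binarization trick is easy to sketch. Below is a minimal PyTorch version of the standard sign-plus-straight-through-estimator approach used to train BNNs; it illustrates the general technique rather than Bogdan's specific pipeline.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator:
    # forward quantizes weights to {-1, +1}; backward passes the
    # gradient through where |w| <= 1 and zeroes it elsewhere.
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(8, requires_grad=True)
loss = BinarizeSTE.apply(w).sum()
loss.backward()  # w.grad is 1 where |w| <= 1, 0 elsewhere
```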

Comparing Low Complexity Linear Transforms, Gavin Gray

Low-complexity linear transforms such as ACDC or ShuffleNet allow practitioners to build neural networks with fewer parameters at minimal performance cost. Credit: Gavin Gray.

Gavin Gray presented his recent PhD work remotely from Toronto. His focus was on replacing convolutions with leaner linear transforms. Convolutions are central to modern computer vision workflows, but they can be expensive to compute. Alternatives exist, such as ACDC; however, principled benchmarks are rare.

In his thesis, Gavin evaluated a number of alternative transforms and showed that, in most modern architectures, these transforms can be leveraged to reach regimes that rely on fewer parameters while maintaining accuracy similar to convolutions. A key finding of this work is that these regimes are not reachable with classic convolution operations. Slides are available here.
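
Gavin's benchmarks cover structured transforms such as ACDC; as a simpler stand-in, here is a hypothetical low-rank replacement for a 1x1 (pointwise) convolution in PyTorch. It conveys the parameter-count argument without reproducing his exact methods.

```python
import torch.nn as nn

def low_rank_pointwise(in_ch: int, out_ch: int, rank: int) -> nn.Module:
    # A rank-constrained stand-in for a 1x1 convolution: two cheaper
    # pointwise convs through a bottleneck, costing roughly
    # rank * (in_ch + out_ch) parameters instead of in_ch * out_ch.
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),
        nn.Conv2d(rank, out_ch, kernel_size=1),
    )

layer = low_rank_pointwise(256, 256, rank=32)  # ~17k vs ~66k parameters
```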

LightOn’s OPU+Particle Physics, David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas

Even with few training samples, LightOn’s OPU can help classify collision events at the LHC. Credit: Biswajit Biswas.

Particle physics produces a wealth of data. Future upgrades to the LHC are expected to deliver 4 times as many recorded collisions. Machine learning is potentially invaluable for making sense of this data and for powering new scientific discoveries. At LightOn, we have offered access to our OPUs to one of the teams working on LHC data at LAL/LRI-Orsay (David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas).

The team presented preliminary results in which our OPUs were used to help with the analysis of particle trajectories and calorimeter data, and with the classification of collision events. As the team explained, generic training processes can be greatly sped up using our technology, enabling faster iteration in algorithm development. Slides are in two parts, here and there.
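
To give a flavor of how an OPU fits into such a pipeline, here is a software simulation of its random feature map, |Wx|², followed by a linear classifier. The data and sizes below are toy stand-ins, not the team's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulated_opu(X, n_features=4096):
    # An OPU computes |Wx|^2 for a fixed complex random matrix W;
    # here we simulate it in software (sizes are illustrative).
    d = X.shape[1]
    W = rng.normal(size=(n_features, d)) + 1j * rng.normal(size=(n_features, d))
    return np.abs(X @ W.conj().T) ** 2

# Toy stand-in for flattened detector images and event labels.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
clf.fit(simulated_opu(X), y)
```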

Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections, Matthieu Puigt

Speeding up matrix factorization is key to widening its application in big data contexts, such as recommender systems. Credit: Matthieu Puigt.

Finally, Matthieu Puigt joined us remotely from Lille to present his work on randomized numerical linear algebra. Matthieu explained how weighted matrix factorization has applications in recommender systems, graph analysis, and sensor calibration, among others.

Random projections make these computations more tractable and applicable to higher-dimensional data. Slides are available here.
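
As a taste of how random projections accelerate factorization, here is a randomized SVD example with scikit-learn. Note that Matthieu's framework targets weighted (N)MF specifically, which this simple sketch does not handle.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
# Toy low-rank matrix standing in for, e.g., a ratings matrix.
A = rng.normal(size=(5000, 20)) @ rng.normal(size=(20, 400))

# Randomized SVD sketches A through random projections before
# factorizing, so the cost scales with the target rank rather
# than with the full matrix dimensions.
U, S, Vt = randomized_svd(A, n_components=20, random_state=0)
```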

Abstracts

Compressive Learning with Random Projections

Ata Kaban, University of Birmingham.
By direct analogy to compressive sensing, compressive learning was originally coined to mean learning efficiently from random projections of high dimensional massive data sets that have a sparse representation. In this talk we discuss compressive learning without the sparse representation requirement, where instead we exploit the natural structure of learning problems.

Medical Applications of Low Precision Neuromorphic Systems

Bogdan Penkovsky, Paris-Sud University.
The advent of deep learning has considerably accelerated machine learning development, but deployment at the edge is limited by high energy cost and memory requirements. With new memory technologies available, emerging Binarized Neural Networks (BNNs) promise to reduce the energy impact of the forthcoming machine learning hardware generation, enabling machine learning on edge devices and avoiding data transfer over the network. In this talk we will discuss strategies to apply BNNs to biomedical signals such as electrocardiography and electroencephalography, without sacrificing accuracy while improving energy use. The ultimate goal of this research is to enable smart autonomous healthcare devices.

Comparing Low Complexity Linear Transforms

Gavin Gray, Edinburgh University.

In response to the development of recent efficient dense layers, this talk discusses replacing linear components in pointwise convolutions with structured linear decompositions for substantial gains in the efficiency/accuracy tradeoff. Pointwise convolutions are fully connected layers and are thus prepared for replacement by structured transforms. Networks using such layers are able to learn the same tasks as those using standard convolutions, and provide Pareto-optimal benefits in efficiency/accuracy, both in terms of computation (mult-adds) and parameter count (and hence memory).

OPU+Particle Physics

David Rousseau, Aishik Ghosh, Laurent Basara, Biswajit Biswas. LAL Orsay, LRI Orsay, BITS University.
LightOn’s OPU is opening a new machine learning paradigm. Two use cases have been selected to investigate the potential of the OPU for particle physics:

  • End-to-End learning: high energy proton collisions at the Large Hadron Collider have been simulated, each collision being recorded as an image representing the energy flux in the detector. Two classes of events have been simulated: signal events, created by a hypothetical supersymmetric particle, and background events, created by known processes. The task is to train a classifier to separate the signal from the background. Several techniques using the OPU will be presented and compared with more classical particle physics approaches.
  • Tracking: high energy proton collisions at the LHC yield billions of records, each with typically 100,000 3D points corresponding to the trajectories of 10,000 particles. Various investigations of the potential of the OPU to digest this high dimensional data will be reported.

Accelerated Weighted (Nonnegative) Matrix Factorization with Random Projections

Matthieu Puigt, Université du Littoral Côte d’Opale.
Random projections belong to the major techniques used to process big data. They have been successfully applied to, e.g., (Nonnegative) Matrix Factorization ((N)MF). However, missing entries in the matrix to factorize (or, more generally, weights which model the confidence in the entries of the data matrix) prevent their use. In this talk, I will present the framework that we recently proposed to solve this issue, i.e., to apply random projections to weighted (N)MF. We experimentally show that the proposed framework significantly speeds up state-of-the-art weighted NMF methods under some mild conditions.

About the author

This workshop was in large part organized by Julien Launay, PhD student and ML R&D engineer at LightOn.
