
Publications By LightOn

Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference

Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli

Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Despite being the workhorse of numerous production pipelines, there have been limited Pareto improvements to BERT since its release. In this paper, we introduce ModernBERT, bringing modern model optimizations to encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native 8192 sequence length, ModernBERT models exhibit state-of-the-art results on a large pool of evaluations encompassing diverse classification tasks and both single and multi-vector retrieval on different domains (including code). In addition to strong downstream performance, ModernBERT is also the most speed and memory efficient encoder and is designed for inference on common GPUs.
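
For a feel of how such an encoder is typically used, here is a minimal masked-token-prediction sketch with the Hugging Face transformers API; the checkpoint name answerdotai/ModernBERT-base is our assumption, and the snippet is an illustrative sketch rather than the authors' reference code.

```python
# Minimal sketch: masked-token prediction with an encoder-only model via
# Hugging Face transformers. The checkpoint name below is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Paris is the [MASK] of France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Take the most likely token at the masked position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_pos].argmax(dim=-1).item()
print(tokenizer.decode([predicted_id]))  # e.g. "capital"
```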

MonoQwen-Vision, the first visual document reranker

Antoine Chaffin, Aurélien Lac

We introduce MonoQwen2-VL-v0.1, the first visual document reranker, designed to enhance the quality of retrieved visual documents and take visual retrieval pipelines to the next level. Reranking a small number of candidates with MonoQwen2-VL-v0.1 achieves top results on the ViDoRe leaderboard.
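
As a hedged sketch of how such a pointwise visual reranker plugs into a retrieval pipeline, the snippet below scores each candidate page image by comparing the logits of the "True" and "False" tokens, MonoT5-style. The base checkpoint, prompt wording, and loading details are assumptions for illustration (the released MonoQwen2-VL-v0.1 weights may, for instance, ship as an adapter on a Qwen2-VL base), not the model's documented interface.

```python
# Hedged sketch of MonoT5-style pointwise reranking with a vision-language
# model: each (query, page image) pair is scored via the "True"/"False" logits.
# Checkpoint name and prompt wording are illustrative assumptions only.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-2B-Instruct"  # stand-in base; not the MonoQwen checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto")

def relevance_score(query: str, image: Image.Image) -> float:
    prompt = (
        "Assert the relevance of the previous image document to the following "
        f"query, answer True or False. The query is: {query}"
    )
    messages = [{"role": "user",
                 "content": [{"type": "image"}, {"type": "text", "text": prompt}]}]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(text=[text], images=[image], return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[:, -1, :]
    true_id = processor.tokenizer.convert_tokens_to_ids("True")
    false_id = processor.tokenizer.convert_tokens_to_ids("False")
    # Probability mass on "True", renormalised over the two label tokens.
    return torch.softmax(next_token_logits[0, [true_id, false_id]], dim=-1)[0].item()

# Rerank a handful of first-stage candidates (e.g. pages from a visual retriever).
candidates = {"page_1.png": Image.open("page_1.png")}
ranked = sorted(candidates, key=lambda n: relevance_score("quarterly revenue", candidates[n]), reverse=True)
```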

DuckSearch: search through Hugging Face datasets

Author: Raphaël Sourty

DuckSearch is a lightweight Python library built on DuckDB, designed for efficient document search and filtering with Hugging Face datasets and standard documents.
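
As a hedged illustration of the underlying idea rather than the DuckSearch API itself, the sketch below loads a Hugging Face dataset into DuckDB and runs a BM25 full-text search with DuckDB's fts extension; the dataset name is only an example.

```python
# Sketch of BM25 search over a Hugging Face dataset using DuckDB's full-text
# search extension; this illustrates the idea behind DuckSearch, not its API.
import duckdb
from datasets import load_dataset

docs = load_dataset("ag_news", split="train[:1000]").to_pandas()  # example dataset
docs["id"] = docs.index

con = duckdb.connect()
con.execute("INSTALL fts; LOAD fts;")
con.register("docs_view", docs)
con.execute("CREATE TABLE docs AS SELECT id, text FROM docs_view")
con.execute("PRAGMA create_fts_index('docs', 'id', 'text')")

# BM25-ranked results for a keyword query.
results = con.execute("""
    SELECT id, text, fts_main_docs.match_bm25(id, 'stock market rally') AS score
    FROM docs
    WHERE score IS NOT NULL
    ORDER BY score DESC
    LIMIT 5
""").fetchall()
```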

Reducing the Footprint of Multi-Vector Retrieval with Minimal Performance Impact via Token Pooling

Authors: Benjamin Clavié, Antoine Chaffin, Griffin Adams

Over the last few years, multi-vector retrieval methods, spearheaded by ColBERT, have become an increasingly popular approach to Neural IR. By storing representations at the token level rather than at the document level, these methods have demonstrated very strong retrieval performance, especially in out-of-domain settings. However, the storage and memory requirements necessary to store the large number of associated vectors remain an important drawback, hindering practical adoption. In this paper, we introduce a simple clustering-based token pooling approach to aggressively reduce the number of vectors that need to be stored. This method can reduce the space and memory footprint of ColBERT indexes by 50% with virtually no retrieval performance degradation. It also allows for further reductions, cutting the vector count by 66% to 75%, with degradation remaining below 5% on the vast majority of datasets. Importantly, this approach requires no architectural change or query-time processing, and can be used as a simple drop-in during indexation with any ColBERT-like model.
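
As a hedged sketch of the idea (the clustering choice and pool factor below are illustrative, not the paper's exact recipe), the snippet pools a document's token vectors with hierarchical clustering and keeps one mean vector per cluster.

```python
# Hedged sketch of clustering-based token pooling: group a document's token
# vectors into k clusters and keep one mean vector per cluster. The pool
# factor and clustering settings are illustrative, not the paper's recipe.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def pool_token_vectors(token_vecs: np.ndarray, pool_factor: int = 2) -> np.ndarray:
    """Reduce an (n_tokens, dim) matrix to roughly n_tokens / pool_factor vectors."""
    n_clusters = max(1, token_vecs.shape[0] // pool_factor)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(token_vecs)
    pooled = np.stack([token_vecs[labels == c].mean(axis=0) for c in range(n_clusters)])
    # Re-normalise so pooled vectors stay unit-length for MaxSim scoring.
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

doc_vecs = np.random.randn(300, 128).astype(np.float32)  # e.g. one ColBERT document
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
print(pool_token_vectors(doc_vecs, pool_factor=2).shape)  # (150, 128)
```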

FC-AMF-OCR Dataset: LightOn releases a 9.3-million-image OCR dataset to improve real-world document parsing, 2024

Author: Said Taghadouini

With over 9.3 million annotated images, this dataset offers researchers and AI developers a valuable resource for building models adapted to real-world documents.
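
As a hedged sketch of how one might start exploring a dataset of this size with the datasets library (the dataset id lightonai/fc-amf-ocr is our assumption and the schema may differ), streaming avoids downloading all 9.3 million images up front.

```python
# Hedged sketch: stream the dataset instead of downloading it in full.
# The dataset id is an assumption; inspect the schema before relying on fields.
from datasets import load_dataset

ds = load_dataset("lightonai/fc-amf-ocr", split="train", streaming=True)
for example in ds.take(3):
    print(example.keys())  # e.g. image and annotation fields
```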

PyLate: Flexible Training and Retrieval for ColBERT Models, 2024

Authors: Antoine Chaffin, Raphaël Sourty

We release PyLate, a new user-friendly library for training and experimenting with ColBERT models, a family of models that exhibit strong retrieval capabilities on out-of-domain data.
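
For readers new to ColBERT-style models, below is a minimal numpy sketch of the MaxSim late-interaction scoring they rely on; it is illustrative only and is not PyLate code.

```python
# Minimal sketch of ColBERT-style MaxSim late interaction: each query token
# matches its most similar document token, and the per-token maxima are summed.
# Illustrative only; not the PyLate API.
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """query_vecs: (n_q, dim), doc_vecs: (n_d, dim), both L2-normalised."""
    sim = query_vecs @ doc_vecs.T          # (n_q, n_d) cosine similarities
    return float(sim.max(axis=1).sum())    # best match per query token, summed

q = np.random.randn(32, 128); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = np.random.randn(180, 128); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(maxsim_score(q, d))
```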

ArabicWeb24: Creating a high quality Arabic Web-only pre-training dataset, 2024

Authors: May Farhat (LightOn, INSAT), Said Taghadouini (LightOn), Oskar Hallström (LightOn), Sonja Hajri Gabouj (INSAT)

This blog post discusses the pre-processing recipe behind the ArabicWeb24 dataset and evaluates the process by training different ablation models. It also outlines the impact of the different filtering pipelines on model outputs and data quality.
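
As a hedged sketch of what a single document-level filtering step in such a pipeline can look like (thresholds and rules are illustrative, not the ArabicWeb24 recipe), the snippet below keeps pages that are long enough, predominantly Arabic, and not overly repetitive.

```python
# Hedged sketch of a document-level quality filter of the kind used in web
# pre-training pipelines; thresholds and rules are illustrative only and are
# not the ArabicWeb24 recipe.
import re

ARABIC_CHARS = re.compile(r"[\u0600-\u06FF]")

def keep_document(text: str, min_words: int = 50, min_arabic_ratio: float = 0.5) -> bool:
    words = text.split()
    if len(words) < min_words:
        return False                      # drop very short pages
    arabic_ratio = len(ARABIC_CHARS.findall(text)) / max(len(text), 1)
    if arabic_ratio < min_arabic_ratio:
        return False                      # drop pages that are mostly non-Arabic
    if len(set(words)) / len(words) < 0.3:
        return False                      # drop highly repetitive boilerplate
    return True
```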

Training Mamba Models on AMD MI250/MI250X GPUs with Custom Kernels, 2024

Authors: Austin Veselka, Said Taghadouini, Oskar Hallström

In this blog post, we show how to train a Mamba model interchangeably on NVIDIA and AMD GPUs, comparing training performance and convergence in both cases. This demonstrates that our training stack is becoming increasingly GPU-agnostic.
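
One reason the same training code can run on both vendors is that PyTorch's ROCm build exposes AMD GPUs through the familiar torch.cuda namespace; the hedged sketch below shows vendor-agnostic device selection (the custom Mamba kernels themselves still need a backend-specific build).

```python
# Hedged sketch of vendor-agnostic device selection in PyTorch: the ROCm build
# maps AMD GPUs onto the torch.cuda namespace, so the same code path covers
# NVIDIA and AMD. Custom Mamba kernels still need a backend-specific build.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    # On ROCm builds this reports the AMD GPU (e.g. an MI250X GCD).
    print(torch.cuda.get_device_name(0))

model = torch.nn.Linear(1024, 1024).to(device)   # stand-in for a Mamba block
x = torch.randn(8, 1024, device=device)
y = model(x)
```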

LightOn AI Meetup: Creating a Large Dataset for Pretraining LLMs

Author: Guilherme Penedo (Hugging Face)

Passing the Torch: Training a Mamba Model for Smooth Handover, 2024

Authors: Oskar Hallström, Said Taghadouini, Clément Thiriet, Antoine Chaffin

We present our explorations of training language models based on the new Mamba architecture, which deviates from the traditional Transformer architecture.

Summary of LightOn AI Meetup #14: WeightWatcher, a Diagnostic Tool for Deep Neural Networks

High Quality data need not apply: training LLMs with web data only

Authors: Julien Launay, Guilherme Penedo, Alessandro Cappelli, Baptiste Pannier, Ruxandra Cojocaru, Ebtesam Almazrouei

4th Workshop on Neural Scaling Laws: Towards Maximally Beneficial AGI, NeurIPS 2022 – Machine Learning/NLP – LLMs. Abstract not available.
