Research and Development

R&D
Overview

Advancing Generative AI through Innovation

The R&D team at LightOn plays a pivotal role in advancing generative AI through continuous innovation and development. Its expertise spans creating and fine-tuning the large language models (LLMs) that form the backbone of the Paradigm platform, a comprehensive AI solution designed for enterprise use. The platform simplifies the integration of generative AI into business workflows, offering both on-premise and cloud deployment to ensure flexibility and scalability for varied business needs.

Pioneering AI with Alfred-40B-0723

One of the key achievements of LightOn's R&D team is the development of Alfred-40B-0723, an open-source LLM based on Falcon-40B. This model is fine-tuned with reinforcement learning from human feedback (RLHF), enhancing its ability to perform complex tasks such as content summarization, query answering, and prompt engineering. The team's ongoing efforts keep Alfred at the cutting edge of AI technology, providing robust support for the Paradigm platform and enabling enterprises to deploy AI solutions that are secure, scalable, and tailored to their specific requirements.

Recent Posts

LightOn Pulse #2: Discover the New Paradigm Platform Updates

Version Glorious-Giraffe 20240828, released August 28, 2024

September 4, 2024
Blog

PyLate: Flexible Training and Retrieval for ColBERT Models

We release PyLate, a new user-friendly library for training and experimenting with ColBERT models, a family of models that exhibit strong retrieval capabilities on out-of-domain data.

August 29, 2024
Blog
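ColBERT models score a query against a document with late interaction: each query token embedding is matched against its most similar document token embedding, and these maxima are summed (MaxSim). A minimal NumPy sketch of that scoring rule is below; this is an illustration of the technique only, not PyLate's actual API.

```python
import numpy as np

def maxsim_score(q_emb: np.ndarray, d_emb: np.ndarray) -> float:
    """Late-interaction (MaxSim) relevance score.

    q_emb: (num_query_tokens, dim) query token embeddings
    d_emb: (num_doc_tokens, dim) document token embeddings
    """
    # Normalize token embeddings so dot products are cosine similarities.
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    sim = q @ d.T  # (num_query_tokens, num_doc_tokens) similarity matrix
    # For each query token, keep its best-matching document token, then sum.
    return float(sim.max(axis=1).sum())

# Toy example with random embeddings (4 query tokens, 10 doc tokens, dim 8).
rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))
doc = rng.normal(size=(10, 8))
score = maxsim_score(query, doc)
```

Because each per-token similarity is at most 1 after normalization, the score is bounded by the number of query tokens, and a document containing exact matches for every query token attains that bound.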

ArabicWeb24: Creating a high-quality Arabic web-only pre-training dataset

August 7, 2024
Blog

LightOn Pulse #1

Innovations and Advancements (2024.07)

July 21, 2024
Blog

Training Mamba Models on AMD MI250/MI250X GPUs with Custom Kernels

In this blog post, we show how to train a Mamba model interchangeably on NVIDIA and AMD GPUs, comparing training performance and convergence in both cases. This demonstrates that our training stack is becoming more GPU-agnostic.

July 19, 2024
Blog

Transforming LLMs into Agents for Enterprise Automation

Developing Agentic Capabilities for LLMs to automate business workflows and create smart assistants.

June 25, 2024
Blog

Passing the Torch: Training a Mamba Model for Smooth Handover

We present our explorations on training language models based on the new Mamba architecture, which deviates from the traditional Transformer architecture.

April 10, 2024
Blog

LightOn AI Meetup: Creating a Large Dataset for Pretraining LLMs

March 22, 2024
Blog

Partnership LightOn & Orange Business

Orange Business, LightOn, and HPE partner to offer trusted generative AI solutions

March 19, 2024
Blog
