Research And Development

R&D Overview

Advancing Generative AI through Innovation

The R&D team at LightOn plays a pivotal role in advancing generative AI through continuous innovation and development. Its expertise spans creating and fine-tuning the large language models (LLMs) that form the backbone of the Paradigm platform, a comprehensive AI solution designed for enterprise use. The platform simplifies the integration of generative AI into business workflows, offering both on-premise and cloud deployment options to give organizations the flexibility and scalability they need.

Pioneering AI with Alfred-40B-0723

One of the key achievements of LightOn's R&D team is the development of Alfred-40B-0723, an open-source LLM based on Falcon-40B. The model is fine-tuned with reinforcement learning from human feedback (RLHF), which strengthens its ability to perform complex tasks such as content summarization, query answering, and prompt engineering. The team's ongoing work keeps Alfred at the cutting edge of AI technology, providing robust support for the Paradigm platform and enabling enterprises to deploy AI solutions that are secure, scalable, and tailored to their specific requirements.
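
For teams that want to experiment with the open-source checkpoint directly, the sketch below shows one plausible way to load Alfred-40B-0723 with the Hugging Face Transformers library. The repository id, generation settings, and example prompt are illustrative assumptions rather than an official LightOn recipe, and a 40B-parameter model needs several high-memory GPUs or quantization to run.

```python
# Minimal, illustrative sketch (not an official LightOn example) of loading the
# open-source Alfred-40B-0723 checkpoint with Hugging Face Transformers.
# The repository id below is an assumption based on the public release name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightonai/alfred-40b-0723"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # shard weights across available GPUs (requires accelerate)
    torch_dtype="auto",      # keep the checkpoint's native precision
    trust_remote_code=True,  # Falcon-based checkpoints may ship custom modeling code
)

# Example task: summarization, one of the use cases mentioned above.
prompt = "Summarize the following meeting notes in three bullet points:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```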

Recent Posts

PyLate: Flexible Training and Retrieval for ColBERT Models

We release PyLate, a new user-friendly library for training and experimenting with ColBERT models, a family of models that exhibit strong retrieval capabilities on out-of-domain data.

August 29, 2024
R&D
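
To give a flavor of what the library enables, here is a small retrieval sketch in the spirit of PyLate's quick-start: encode a handful of documents with a ColBERT-style model, index them, and retrieve the best matches for a query. The model id, index settings, and exact call names are assumptions drawn from the library's documented usage and may differ across versions; the PyLate documentation is the authoritative reference.

```python
# Illustrative PyLate-style retrieval sketch; API details are assumed from the
# library's quick-start and may change between versions.
from pylate import indexes, models, retrieve

# Load a ColBERT-style multi-vector encoder (model id is an assumption).
model = models.ColBERT(model_name_or_path="lightonai/colbertv2.0")

# Create a small local index to hold per-token document embeddings.
index = indexes.Voyager(index_folder="pylate-index", index_name="demo", override=True)

documents_ids = ["doc-1", "doc-2"]
documents = [
    "PyLate is a library for training and serving ColBERT models.",
    "Late-interaction retrieval scores queries against per-token document embeddings.",
]

# Encode documents (is_query=False) and add them to the index.
documents_embeddings = model.encode(documents, is_query=False, show_progress_bar=False)
index.add_documents(documents_ids=documents_ids, documents_embeddings=documents_embeddings)

# Encode a query (is_query=True) and retrieve the top-k matching documents.
retriever = retrieve.ColBERT(index=index)
queries_embeddings = model.encode(["what is late interaction retrieval?"], is_query=True, show_progress_bar=False)
print(retriever.retrieve(queries_embeddings=queries_embeddings, k=2))
```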

ArabicWeb24: Creating a high quality Arabic Web-only pre-training dataset

August 7, 2024
R&D

Training Mamba Models on AMD MI250/MI250X GPUs with Custom Kernels

In this blog post we show how to train a Mamba model interchangeably on NVIDIA and AMD GPUs, comparing training performance and convergence in both cases. This shows that our training stack is becoming more GPU-agnostic.

July 19, 2024
R&D

Transforming LLMs into Agents for Enterprise Automation

Developing agentic capabilities for LLMs to automate business workflows and create smart assistants.

June 25, 2024
R&D

Passing the Torch: Training a Mamba Model for Smooth Handover

We present our explorations of training language models based on the new Mamba architecture, which departs from the traditional Transformer architecture.

April 10, 2024
R&D

LightOn AI Meetup: Creating a Large Dataset for Pretraining LLMs

March 22, 2024
R&D

Introducing Alfred-40B-1023:

Pioneering the Future of Open-Source Language Models from LightOn

November 17, 2023
R&D

Introducing Alfred-40B-0723

The Open-Source Generative AI Copilot by LightOn

July 31, 2023
R&D

LightOn's Large Language Model of 40 billion parameters: MINI

LightOn, a prominent provider of advanced AI solutions, has announced the launch of Mini, its latest large language model.

March 21, 2023
R&D
