v1 published on 22/03/2022
- The user should be aware that systems with generative language capabilities may produce offensive or misleading content, and may be misused. This has been reported in the literature for other language models and/or other languages (see for example The Radicalization Risks of GPT-3 and Neural Language Models, RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models, and Does GPT-2 know your number?).
- LightOn does not filter any input text (prompts) in real time and thus cannot rule out the generation of offensive or misleading results. LightOn does not endorse and is not responsible for any outputs produced by the model.
- LightOn reserves the right to restrict or refuse access to anyone found to be misusing the model, in particular in cases that cause (or are intended to cause) physical, emotional, or psychological harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam.
- We welcome any constructive feedback on these issues, which we take very seriously and which will help us improve.
Mitigation of harmful bias and other negative effects
- Mitigating negative effects such as harmful bias is an important industry-wide issue (see for example Language Models are Few-Shot Learners and The Trouble with Bias).
- Our model does exhibit biases that can surface depending on the context in which the system is used, and these biases may be reflected in the generated output text.
- Our goal is to continue developing our understanding of the potential harms in each context of use and continually improve to help minimize them.
- While we are conducting our own research into manifestations of harmful bias and related issues in the areas of Fairness and Representation and Misuse Potential, we welcome research suggestions in other areas of focus as well.
- If you are interested in working with us on these issues, feel free to contact us at [email protected] with your proposal, question, or comment.
Research collaboration
- Apart from the aforementioned areas of Fairness and Representation and the Misuse Potential of large-scale generative language models, we would be happy to receive research proposals that address different questions, e.g. model robustness, efficient model exploration, or interdisciplinary research at the intersection of AI with cognitive science, philosophy, etc.
- The output text of the VLM-4 models is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.