Project Details
Project Description
The rise of large language models (LLMs) has greatly improved the capabilities of natural language processing (NLP). However, because these models are trained on vast amounts of text data that may contain biased or harmful content, concerns have been raised about their alignment with human intention. As models become increasingly sophisticated, it is essential to ensure that they align with human goals, values, and intentions to avoid unintended consequences. This project aims to devise a method for aligning large language models with human intention so that their outputs are consistent with human values and ethics.
| Short title | Aligning Large Language Models with Human Intention |
| --- | --- |
| Acronym | Aligning Large Language Models with Human Intention |
| Status | Active |
| Effective start/end date | 4/07/23 → 21/12/26 |
Equipment
MASSIVE
Slava Kitaeff (Manager) & David Powell (Manager)
Office of the Vice-Provost (Research and Research Infrastructure)
Facility/equipment: Facility