Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability

Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, Jacob Andreas

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

2 Citations (Scopus)

Abstract

While language models (LMs) can sometimes generate factually correct text and estimate truth values of individual claims, these capabilities generally do not reflect a globally coherent, manipulable model of the world. As a consequence, current LMs also generate incorrect or nonsensical content, and are difficult to edit and bring up to date. We present a method called Deductive Closure Training (DCT) that uses LMs themselves to identify implications of (and contradictions within) the text that they generate, yielding an efficient self-supervised procedure for improving LM factuality. Given a collection of seed documents, DCT prompts LMs to generate additional text implied by these documents, reason globally about the correctness of this generated text, and finally fine-tune on text inferred to be correct. Given seed documents from a trusted source, DCT provides a tool for supervised model updating; if seed documents are sampled from the LM itself, DCT enables fully unsupervised fine-tuning for improved coherence and accuracy. Across the CREAK, MQUAKE, and “Reversal Curse” datasets, supervised DCT improves LM fact verification and text generation accuracy by 3-26%; on CREAK, fully unsupervised DCT improves verification accuracy by 12%. These results show that LMs' reasoning capabilities during inference can be leveraged during training to improve their reliability.
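The abstract describes a concrete training loop: generate text implied by seed documents, reason globally about which generated statements are jointly correct, then fine-tune on the accepted subset. The Python sketch below illustrates one plausible reading of that loop; every name in it (generate_implications, is_consistent, deductive_closure_training, and the lm, lm_logprob, and fine_tune callables) is a hypothetical placeholder rather than the paper's released implementation, and the yes/no consistency check and log-probability scoring are simple stand-ins for the paper's LM-based correctness reasoning.

# A minimal sketch (not the authors' code) of one DCT round, with all LM
# calls stubbed behind callables so the control flow is runnable as-is.

from itertools import combinations
from typing import Callable, List, Sequence

def generate_implications(lm: Callable[[str], str], doc: str, n: int = 3) -> List[str]:
    """Prompt the LM for statements implied by (or contradicting) `doc`."""
    prompt = f"State one fact implied by the following text:\n{doc}\nFact:"
    return [lm(prompt) for _ in range(n)]

def is_consistent(lm: Callable[[str], str], statements: Sequence[str]) -> bool:
    """Ask the LM whether a set of statements is mutually consistent."""
    joined = "\n".join(statements)
    reply = lm("Are these statements mutually consistent? Answer yes or no.\n"
               f"{joined}\nAnswer:")
    return reply.strip().lower().startswith("yes")

def deductive_closure_training(
    lm: Callable[[str], str],
    lm_logprob: Callable[[str], float],
    seed_docs: Sequence[str],
    fine_tune: Callable[[List[str]], None],
) -> List[str]:
    """One DCT round: expand seeds, keep the best consistent subset, fine-tune."""
    accepted: List[str] = []
    for doc in seed_docs:
        candidates = [doc] + generate_implications(lm, doc)
        # "Reason globally": among mutually consistent subsets of the
        # candidates, keep the one the LM scores as most probable overall.
        best, best_score = [], float("-inf")
        for r in range(1, len(candidates) + 1):
            for subset in combinations(candidates, r):
                if not is_consistent(lm, subset):
                    continue
                score = sum(lm_logprob(s) for s in subset)
                if score > best_score:
                    best, best_score = list(subset), score
        accepted.extend(best)
    fine_tune(accepted)  # standard supervised fine-tuning on the accepted text
    return accepted

As the abstract notes, passing seed documents from a trusted source gives the supervised model-updating variant of this loop, while sampling the seeds from the LM itself gives the fully unsupervised variant.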

Original language: English
Title of host publication: ACL 2024, 62nd Annual Meeting of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: ACL 2024
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Place of Publication: Kerrville TX USA
Publisher: Association for Computational Linguistics (ACL)
Pages: 9802-9818
Number of pages: 17
ISBN (Electronic): 9798891760998
Publication status: Published - 2024
Event: Annual Meeting of the Association for Computational Linguistics 2024 - Bangkok, Thailand
Duration: 11 Aug 2024 - 16 Aug 2024
Conference number: 62nd
https://aclanthology.org/2024.acl-long.0/ (Proceedings)
https://2024.aclweb.org/ (Website)
https://aclanthology.org/volumes/2024.findings-acl/ (Proceedings (Findings))
https://aclanthology.org/volumes/2024.acl-long/ (Proceedings)

Conference

Conference: Annual Meeting of the Association for Computational Linguistics 2024
Abbreviated title: ACL 2024
Country/Territory: Thailand
City: Bangkok
Period: 11/08/24 - 16/08/24
