Koala: An index for quantifying overlaps with pre-training corpora

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review


In recent years, more attention has been paid to probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs). Despite its importance, there is no public tool that supports such analysis of pre-training corpora at large scale. To help research in this space, we launch Koala, a searchable index over large pre-training corpora built on lossless compressed suffix arrays, which provide a highly efficient compression rate and fast search support. In its first release, we index the public proportion of the OPT 175B, GPT-3, GPT-Neo, LLaMA, BERT, ELECTRA, RoBERTa, and XLNet pre-training corpora. Koala provides a framework for forensic analysis of current and future benchmarks, as well as for assessing the degree of memorization in LLM outputs. Koala is available for public use at https://koala-index.erc.monash.edu/.
Original language: English
Title of host publication: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Editors: Yansong Feng, Els Lefever
Place of Publication: Stroudsburg PA USA
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 9
ISBN (Electronic): 9788891760677
Publication status: Published - 2023
Event: Empirical Methods in Natural Language Processing 2023 - Singapore
Duration: 6 Dec 2023 - 10 Dec 2023
https://aclanthology.org/volumes/2023.findings-emnlp/ (Proceedings)
https://aclanthology.org/volumes/2023.emnlp-demo/ (Proceedings)


Conference: Empirical Methods in Natural Language Processing 2023
Abbreviated title: EMNLP 2023
