Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains
Conference paper, 2024


Abstract

Pretrained Language Models (PLMs) are the de facto backbone of most state-of-the-art NLP systems. In this paper, we introduce a family of domain-specific PLMs for French, focusing on three important domains: transcribed speech, medicine, and law. Since these domains often involve processing long documents, we use a transformer architecture based on an efficient attention method (LinFormer) to maximise the models' utility. We evaluate and compare our models to state-of-the-art models on a diverse set of tasks and datasets, some of which are introduced in this paper, and we gather the datasets into a new French-language evaluation benchmark for these three domains. We also compare various training configurations: continued pretraining, pretraining from scratch, and single- versus multi-domain pretraining. Extensive domain-specific experiments show that competitive downstream performance can be attained even when pretraining with the approximate LinFormer attention mechanism. For full reproducibility, we release the models and pretraining data, as well as the contributed datasets.
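To illustrate the idea behind the LinFormer attention the abstract refers to (a rough sketch, not the authors' implementation): instead of the full n × n score matrix of standard attention, the length-n key and value sequences are linearly projected down to k rows, so the score matrix is only n × k. All shapes, names, and the random projection matrices below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: E and F compress the length-n key/value
    sequences to k rows, so scores are (n, k) instead of (n, n)."""
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d): keys compressed along the sequence axis
    V_proj = F @ V                        # (k, d): values compressed the same way
    scores = (Q @ K_proj.T) / np.sqrt(d)  # (n, k) score matrix, linear in n
    return softmax(scores, axis=-1) @ V_proj  # (n, d) output

# Toy shapes: sequence length n = 512 compressed to k = 64.
rng = np.random.default_rng(0)
n, k, d = 512, 64, 32
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (512, 32)
```

Because the (n, k) score matrix replaces the (n, n) one, memory and compute scale linearly with sequence length, which is what makes this family of methods attractive for the long documents found in speech transcripts, medical records, and legal texts.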
Main file
FB2_domaines_specialises_LREC_COLING24.pdf (156.2 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04535557, version 1 (06-04-2024)

Identifiers

  • HAL Id: hal-04535557, version 1

Cite

Vincent Segonne, Aidan Mannion, Laura Cristina Alonzo Canul, Alexandre Audibert, Xingyu Liu, et al. Jargon: A Suite of Language Models and Evaluation Tasks for French Specialized Domains. LREC-COLING 2024 - Joint International Conference on Computational Linguistics, Language Resources and Evaluation, May 2024, Turin, Italy. ⟨hal-04535557⟩