Conference Paper — 2024

Fine-Tuning vs. Prompting: Evaluating the Knowledge Graph Construction with LLMs

Abstract

This paper explores Text-to-Knowledge Graph (T2KG) construction, assessing Zero-Shot Prompting (ZSP), Few-Shot Prompting (FSP), and Fine-Tuning (FT) methods with Large Language Models (LLMs). Through comprehensive experimentation with Llama2, Mistral, and Starling, we highlight the strengths of FT, emphasize the role of dataset size, and introduce nuanced evaluation metrics. Promising perspectives include synonym-aware metric refinement and data augmentation with LLMs. The study contributes valuable insights to KG construction methodologies, setting the stage for further advancements.

Dates and versions

hal-04862235 , version 1 (08-01-2025)

Identifiers

  • HAL Id : hal-04862235 , version 1

Cite

Hussam Ghanem, Christophe Cruz. Fine-Tuning vs. Prompting: Evaluating the Knowledge Graph Construction with LLMs. 3rd International Workshop on Knowledge Graph Generation from Text (Text2KG) Co-located with the Extended Semantic Web Conference (ESWC 2024), May 2024, Hersonissos, Greece. pp.7. ⟨hal-04862235⟩