

Poster in Workshop: I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models

Analyzing the factual knowledge of parameter efficient instruction tuned mid-size Large Language Models

Anmol Nayak · Hari prasad Timmapathini


Abstract:

Large Language Models (LLM) have significantly improved Natural Language Processing (NLP) by enhancing the accuracy, efficiency, and versatility of various NLP applications, from text generation to language translation, due to their ability to capture and leverage vast amounts of linguistic and factual knowledge. While LLM have pushed the boundaries, they typically need to be further instruction tuned to achieve improved performance on niche applications. In this paper, we focus on analyzing the factual knowledge of LLM, keeping in mind the practical aspects of using LLM, by: 1) training only a small injection model (having ≈ 0.05% of the parameters of the base LLM) using the Low Rank Adaptation (LoRA) parameter efficient technique, and 2) restricting our study to Llama-2-13b-chat and StableBeluga-13B, two mid-size LLM with 13 billion parameters that are based on the Llama 2 architecture. The injection model is instruction tuned for Knowledge Base (KB) construction on the LM-KBC 2023 challenge dataset, which contains subject-relation-object triplets of Wikipedia entities across 21 different factual relations. Our empirical analysis shows that even after instruction tuning, the LLM are: 1) deficient in foundational knowledge of many must-know areas such as Geography, 2) unable to effectively use the context supplied in the prompt, and 3) fragile to subtle changes in the prompt at inference. The source code for our experiments can be found at: https://github.com/Ffc1234/NIPSICBINBsubmission
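For readers who want a concrete picture of the setup described in the abstract, the sketch below shows how a LoRA injection model could be attached to Llama-2-13b-chat with the Hugging Face PEFT library so that only a small fraction of parameters is trained. The rank, scaling factor, target modules, dropout, and the example prompt are illustrative assumptions, not values taken from the paper or the linked repository.

# Minimal sketch, assuming the Hugging Face transformers and peft libraries.
# Hyperparameters below are illustrative, not the authors' reported settings.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-13b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto")

# Low-rank adapters on the attention projections; the base weights stay frozen.
lora_config = LoraConfig(
    r=8,                                # assumed rank
    lora_alpha=16,                      # assumed scaling
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # trainable share is on the order of 0.05% of 13B

# A hypothetical instruction-style prompt for one LM-KBC-style triplet query:
prompt = (
    "Instruction: List the objects for the given subject and relation.\n"
    "Subject: France\nRelation: CountryBordersCountry\nObjects:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In a setup like this, instruction tuning would update only the adapter weights on the frozen base model, which is what keeps the trained "injection model" to roughly 0.05% of the base LLM's parameters.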
