IFEval-Extended: Enhancing Instruction-Following Evaluation in Large Language Models through Dynamic Prompt Generation


Bohdan Kovalevskyi

Abstract

This paper introduces IFEval-Extended, a benchmark for evaluating the instruction-following capabilities of Large Language Models (LLMs). Building on the foundational principles of the existing IFEval framework, IFEval-Extended addresses the limitations of predefined prompts by employing a dynamic, generative approach to instruction synthesis. This method allows thousands of unique, human-like instructions to be created from a single base template, mitigating the risk of overfitting and enhancing the diversity and robustness of the evaluation process. The benchmark extends the original set of instruction categories in IFEval, providing a more granular assessment of LLM performance across parameters such as language structure, keyword usage, and response formatting. The study evaluates state-of-the-art LLMs, including GPT-4o, Llama 3.1 (8B), and Llama 3 (70B), using strict and loose accuracy metrics. Results reveal that while the models handle simpler instructions well, they struggle with complex tasks requiring precise adherence to multiple constraints. These findings highlight the strengths and weaknesses of current LLMs, offering insights for model development and real-world applications. IFEval-Extended contributes to the ongoing development of more robust, scalable, and objective LLM evaluation methods, thereby advancing the field of Natural Language Processing.
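
The template-based instruction synthesis described in the abstract could look roughly like the following minimal Python sketch. The base template, the slot values, and the generate_instructions helper are illustrative assumptions for exposition only, not the paper's implementation.

import random

# Hypothetical base template with parameter slots; filling the slots with
# different values yields many distinct, human-like instructions.
BASE_TEMPLATE = "Write a {length}-sentence summary of {topic} that {constraint}."

SLOTS = {
    "length": ["two", "three", "five"],
    "topic": ["renewable energy", "the French Revolution", "graph databases"],
    "constraint": [
        "avoids the word 'important'",
        "ends with a question",
        "mentions the keyword 'resilience' at least twice",
    ],
}

def generate_instructions(template, slots, n, seed=0):
    """Sample up to n unique instructions by filling the template's slots at random."""
    rng = random.Random(seed)
    # Cap the target at the number of distinct slot combinations.
    max_unique = 1
    for values in slots.values():
        max_unique *= len(values)
    target = min(n, max_unique)
    seen = set()
    while len(seen) < target:
        filled = template.format(**{k: rng.choice(v) for k, v in slots.items()})
        seen.add(filled)
    return sorted(seen)

if __name__ == "__main__":
    for prompt in generate_instructions(BASE_TEMPLATE, SLOTS, n=5):
        print(prompt)

Each generated prompt pairs a surface instruction with machine-checkable constraints (keyword usage, formatting, length), which is what allows strict and loose accuracy to be scored automatically rather than by human judgment.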

Article Details

How to Cite
Kovalevskyi, B. (2024). IFEval-Extended: Enhancing Instruction-Following Evaluation in Large Language Models through Dynamic Prompt Generation. Journal of Artificial Intelligence General Science (JAIGS), ISSN 3006-4023, 5(1), 513–524. https://doi.org/10.60087/jaigs.v5i1.299