Evaluation of Orca 2 against other LLMs for Retrieval Augmented Generation

Bibliographic Details
Main Authors: HUANG, Donghao; WANG, Zhaoxia
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9052
https://ink.library.smu.edu.sg/context/sis_research/article/10055/viewcontent/RAFDA_2024_Empirical_Evaluation_of_Orca_2_Models_for_Retrieval_Augmented_Generation.pdf
Institution: Singapore Management University
Description
Summary: This study presents a comprehensive evaluation of Microsoft Research’s Orca 2, a small yet potent language model, in the context of Retrieval Augmented Generation (RAG). The research compared Orca 2 with other significant models such as Llama-2, GPT-3.5-Turbo, and GPT-4, focusing particularly on its application in RAG. Key metrics, including faithfulness, answer relevance, overall score, and inference speed, were assessed. Experiments conducted on high-specification PCs revealed Orca 2’s exceptional performance in generating high-quality responses and its efficiency on consumer-grade GPUs, underscoring its potential for scalable RAG applications. The study highlights the pivotal role of smaller, efficient models such as Orca 2 in the advancement of conversational AI and their implications for various IT infrastructures. The source code and datasets for this paper are available at https://github.com/inflaton/Evaluation-of-Orca-2-for-RAG.
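
The faithfulness and answer-relevance metrics named above are standard RAG evaluation measures that are typically scored by an LLM judge. The abstract does not state which tooling the authors used to compute them, so the snippet below is only a minimal sketch of such an evaluation using the open-source ragas library and a single made-up sample; the library choice, example data, and the default OpenAI judge model are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of a RAG evaluation in the spirit of the study,
# using the open-source "ragas" library. The sample data below is
# invented purely to show the expected input format.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# One hypothetical question/answer pair produced by a RAG pipeline;
# "contexts" holds the retrieved passages the answer was generated from.
samples = {
    "question": ["What is Orca 2?"],
    "answer": ["Orca 2 is a small language model released by Microsoft Research."],
    "contexts": [[
        "Orca 2 is a research language model from Microsoft Research, "
        "available in 7B and 13B parameter sizes."
    ]],
}

dataset = Dataset.from_dict(samples)

# ragas scores each sample with an LLM judge (by default an OpenAI model,
# so OPENAI_API_KEY must be set in the environment).
result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric scores, e.g. faithfulness and answer_relevancy
```

In this kind of setup, faithfulness checks whether the claims in the generated answer are supported by the retrieved contexts, while answer relevance measures how directly the answer addresses the question; an overall score can then be formed by aggregating the per-metric values.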