Benchmarking Low-Rank Adaptation for Enterprise LLM Fine-Tuning
Abstract
We evaluate low-rank adaptation (LoRA) against full fine-tuning on policy, customer support, and retrieval tasks. Across three internal corpora (28M tokens in total), LoRA scored within 1.5 BLEU and 2 ROUGE points of full fine-tuning while reducing GPU-hours by 73%.
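For readers unfamiliar with the technique, the sketch below shows a typical LoRA setup using the Hugging Face PEFT library. The base model, rank, and target modules here are illustrative assumptions for demonstration, not the configuration used in this study.

# Illustrative LoRA setup with Hugging Face PEFT.
# The model name and hyperparameters are assumptions, not the
# configuration reported in this paper.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base model for the sketch.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable

Because only the small adapter matrices receive gradients, optimizer state and activation memory shrink substantially, which is the mechanism behind the GPU-hour savings reported above.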
Cite this article
Scott, O., & Khan, A. (2025). Benchmarking Low-Rank Adaptation for Enterprise LLM Fine-Tuning. Research Explorations in Global Knowledge & Technology (REGKT), 3(4). Retrieved from https://regkt.com/article.php?id=111&slug=benchmarking-lora-for-enterprise-llm-fine-tuning