Fine-Tuning

Learn how LoRA and QLoRA enable efficient fine-tuning of large language models using low-rank updates and quantization.

Questions about Low-Rank Adaptation (LoRA) are becoming increasingly common. Interviewers love asking what LoRA is and how it differs from traditional fine-tuning of large language models because it probes your understanding of cutting-edge parameter-efficient fine-tuning (PEFT) techniques and your ability to contrast them with classical approaches. This is a popular question because LoRA is a recent innovation that significantly lowers the cost of fine-tuning huge models, and it reveals whether you stay up to date with modern ML practice. As large language models (LLMs) such as GPT, Claude, and Llama grow to billions and even trillions of parameters, companies want engineers who know how to adapt these models efficiently without resorting to brute-force retraining of all parameters.


The interviewer is assessing several things here. First, they want to see that you grasp what LoRA does under the hood—that you can describe its core idea (injecting low-rank matrices into a model’s layers instead of tuning every weight). They also expect you to understand why LoRA exists: namely, the practical pain points of traditional fine-tuning (huge GPU memory usage, slow training, storing a separate gigantic model per task) and how LoRA addresses them. A strong answer will also mention how LoRA compares to other PEFT methods (like adapters or prompt tuning) and even variants like QLoRA (quantized LoRA). By asking this, interviewers gauge your depth of knowledge: Do you know just the buzzword, or can you explain the math and trade-offs? Can you discuss when LoRA is advantageous and when it might not be the right choice? Ultimately, they’re checking whether you can communicate a complex concept in a structured way—a crucial skill when working on GenAI teams.

By the end of this lesson, you should have a well-rounded understanding of LoRA and be ready to ace that interview question!

What is LoRA?

LoRA (Low-Rank Adaptation) is a technique for fine-tuning large models by adding some small trainable components to the original model instead of modifying all the original model’s parameters. Think of a huge pretrained model as a complex machine with billions of knobs (parameters) set in just the right way to perform general language tasks. Now, if you want this machine to perform a new task (say, be good at legal text Q&A), traditional fine-tuning would try to adjust all those billions of knobs—a very costly and delicate process.

LoRA takes a clever shortcut: it leaves the original knobs frozen in place and attaches a few new tiny knobs (small matrices) that can be tuned to achieve the desired adjustment. An analogy: imagine you have a giant painting (the pretrained model) that is mostly perfect, but you want to change the style slightly. Instead of repainting the whole thing (updating every pixel), you lay a thin transparent overlay on it and paint only on that overlay to get the effect you want. The original painting stays intact, and your changes are confined to the overlay. In LoRA, that overlay is realized as a low-rank matrix added to the model’s weights, which is much cheaper to train.

From a technical perspective, LoRA injects trainable low-rank matrices into the model’s existing layers (often the weight matrices of the Transformer’s attention and feed-forward networks). Here’s the breakdown: Suppose the pretrained model has a weight matrix $W_0$ in some layer (for example, the matrix that projects the hidden state in a transformer). This matrix might be huge (dimensions like $d \times k$). LoRA proposes that when fine-tuning for a new task, the change to this weight—call it $\Delta W$—doesn’t need to be full-size; instead, it can be approximated by a low-rank decomposition. In math terms, LoRA assumes:

$$\Delta W = W_A W_B$$

Where:

  • $W_A \in \mathbb{R}^{d \times r}$

  • $W_B \in \mathbb{R}^{r \times k}$

This is for some small rank $r$. Because $r$ is chosen to be much smaller than $d$ or $k$, the product $W_A W_B$ introduces only $r(d + k)$ trainable parameters instead of the $d \times k$ values a full update would require.
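To make this concrete, here is a minimal, illustrative PyTorch sketch of the idea—a linear layer with a frozen pretrained weight $W_0$ plus trainable low-rank factors $W_A$ and $W_B$. This is not the reference implementation from the LoRA paper or any specific library (such as `peft`); the class name, rank, and scaling choices are assumptions for demonstration only.

```python
# Minimal sketch of a LoRA-augmented linear layer (illustrative, not an official implementation).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight W_0: frozen, never updated during fine-tuning.
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad = False

        # Low-rank factors: W_A (d_in x r) and W_B (r x d_out) are the only trainable
        # parameters. W_A starts small and random, W_B starts at zero so that
        # delta_W = W_A @ W_B is zero at step 0 and the model's initial output is unchanged.
        self.W_A = nn.Parameter(torch.randn(d_in, r) * 0.01)
        self.W_B = nn.Parameter(torch.zeros(r, d_out))
        self.scale = alpha / r  # common scaling factor applied to the LoRA update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output = x @ W_0^T + (x @ W_A) @ W_B * scale, i.e. (W_0 + delta_W) applied to x.
        return self.base(x) + (x @ self.W_A) @ self.W_B * self.scale

# Quick parameter-count comparison for a hypothetical 1024 x 1024 projection with r = 8.
layer = LoRALinear(1024, 1024, r=8)
full_params = 1024 * 1024  # parameters a full fine-tune would update in this matrix
lora_params = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"full fine-tune: {full_params:,} params, LoRA (r=8): {lora_params:,} params")
```

For this single 1024 × 1024 matrix, the sketch trains about 16 thousand parameters instead of roughly one million, which is exactly the $r(d + k)$ versus $d \times k$ savings described above.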