Fine-tuning vs. RAG: Tailoring Large Language Models with Different Strokes

One thought on “Fine-tuning vs. RAG: Tailoring Large Language Models with Different Strokes”

  1. IIRC, fine-tuning is usually better at adapting to the style and behavior of a domain than at learning new knowledge within it, especially if the fine-tuning dataset contains knowledge that contradicts the pre-training data. For domain adaptation, if you need a grounded understanding of new knowledge in the domain, fine-tuning alone may not be sufficient.
