Glossary — Agentic AI

What is Fine-Tuning?

Fine-tuning is the process of further training a pre-trained language model on a domain-specific dataset to improve its performance on particular tasks.

WHY IT MATTERS

Pre-trained LLMs are generalists. Fine-tuning makes them specialists. By training on curated examples — financial analysis formats, trading decision patterns — the model learns to perform domain tasks more reliably.

Approaches range from full fine-tuning of all model weights (expensive) to parameter-efficient methods like LoRA and QLoRA, which train only a small fraction of additional parameters while keeping the original weights frozen.
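The core idea behind LoRA can be sketched in a few lines: rather than updating a full weight matrix, you learn a low-rank correction. The dimensions and variable names below are illustrative, not from any particular library:

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating a full weight
# matrix W (d_out x d_in), train a low-rank update B @ A of rank r.
# Dimensions here are illustrative (e.g. a 768-wide transformer layer).
d_in, d_out, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, rank r
B = np.zeros((d_out, r))                    # trainable, zero-init so the update starts at 0

x = rng.standard_normal(d_in)

# Forward pass: base output plus the low-rank correction
y = W @ x + B @ (A @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")  # ~2% of the full matrix
```

Because only A and B receive gradients, the number of trainable parameters drops by roughly 50x in this example, which is what makes fine-tuning feasible on modest hardware.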

For agent development, fine-tuning can improve tool-use accuracy and reduce hallucination in domain-specific contexts. But it is not a substitute for runtime guardrails.

FREQUENTLY ASKED QUESTIONS

When should you fine-tune vs use prompt engineering?
Fine-tune when you have consistent, repeatable tasks with clear examples and prompting isn't achieving required quality. Start with prompting — it's cheaper and faster.
How much data do you need?
With LoRA, even a few hundred high-quality examples can meaningfully improve performance. Full fine-tuning benefits from thousands to millions of examples.
Does fine-tuning prevent hallucination?
It reduces hallucination in trained domains but doesn't eliminate it. The model can still hallucinate on edge cases not in the fine-tuning data.
