One of the most common missteps we see in AI projects: a team spends significant time and budget training a custom model when better prompting would have solved their problem. The reverse is also true — teams using generic models with generic prompts when a fine-tuned model would have dramatically improved results.

What is prompting?

Prompting is how you communicate with an existing LLM. A well-crafted prompt gives the model context, instructions, examples, and constraints — all in natural language. It's how you tell the model what you want it to do and how you want it to behave. Good prompting can take a generic model and make it surprisingly effective for specific tasks.

The advantage of prompting: it's fast, it's cheap, and it's iterative. You can test a new approach in minutes. OpenAI, Google, and Anthropic have all built models that respond extremely well to careful prompting. For most use cases, this is where you should start.
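The "context, instructions, examples, and constraints" recipe can be made concrete with a small sketch. Everything below — the task, the labels, the wording — is a hypothetical illustration, not a recommended template:

```python
# Assemble a prompt from the four ingredients: context, instructions,
# examples (few-shot), and constraints. All content here is hypothetical.

def build_prompt(context: str, instructions: str,
                 examples: list[tuple[str, str]],
                 constraints: list[str]) -> str:
    """Join the sections into one natural-language prompt."""
    parts = [f"Context:\n{context}", f"Instructions:\n{instructions}"]
    if examples:
        shots = "\n\n".join(f"Input: {inp}\nOutput: {out}"
                            for inp, out in examples)
        parts.append(f"Examples:\n{shots}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    return "\n\n".join(parts)

prompt = build_prompt(
    context="You are a support assistant for an invoicing product.",
    instructions="Classify each customer message as 'billing', 'bug', or 'other'.",
    examples=[("I was charged twice this month.", "billing")],
    constraints=["Answer with exactly one label.",
                 "Never invent a fourth category."],
)
print(prompt)
```

The point of structuring it this way is that each piece can be iterated on independently — swap in a new example or tighten a constraint, rerun, and measure, all in minutes.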

What is fine-tuning?

Fine-tuning is training an existing model on your specific data. You take a base model (like GPT-4o or Gemini Flash) and continue training it on examples that are specific to your task. The result is a model that "knows" your domain, your terminology, your tone, and your patterns — without needing extensive prompting every time.

Fine-tuning makes sense when: your domain has specific terminology that generic models struggle with, you need consistent output format across thousands of requests, you have large amounts of labelled training data already, or inference cost at scale makes prompting with long contexts prohibitively expensive.
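That "labelled training data" requirement has a concrete shape. As a sketch, here is what preparing examples looks like assuming OpenAI's chat-format fine-tuning data (one JSON object with a `messages` list per line of a JSONL file); the example rows themselves are hypothetical:

```python
import json

# Each training example is a short conversation ending with the
# assistant turn you want the model to learn. Content is hypothetical.
examples = [
    {"messages": [
        {"role": "system", "content": "You classify support tickets."},
        {"role": "user", "content": "I was charged twice this month."},
        {"role": "assistant", "content": "billing"},
    ]},
    {"messages": [
        {"role": "system", "content": "You classify support tickets."},
        {"role": "user", "content": "The export button crashes the app."},
        {"role": "assistant", "content": "bug"},
    ]},
]

# Serialize to JSONL: one compact JSON object per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Sanity check: every line parses and ends with an assistant turn.
rows = [json.loads(line) for line in jsonl.splitlines()]
assert all(r["messages"][-1]["role"] == "assistant" for r in rows)
```

Two rows fit in a sketch; a real fine-tuning run needs hundreds to thousands of them, which is why "you already have labelled data" is one of the deciding factors above.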

How we decide

We start every AI project with prompting. We prototype with carefully crafted prompts and measure the results. If the model consistently fails in ways that better prompting can't fix — it keeps misunderstanding domain-specific terminology, or the prompt has grown so long that inference is slow and expensive — then we consider fine-tuning. Fine-tuning is costly in time, data, and money. Only pursue it when the ROI is clear.
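The "slow and expensive" trade-off can be checked with back-of-the-envelope arithmetic: a long prompt is paid for on every request, while fine-tuning is a one-off cost that lets you ship a much shorter prompt. All prices, token counts, and volumes below are hypothetical placeholders, not real rates:

```python
# Break-even sketch: long prompt on every call vs. one-off fine-tuning
# spend plus a short prompt. Every number here is a made-up placeholder.

def monthly_cost(requests: int, prompt_tokens: int, price_per_1k: float) -> float:
    """Input-token spend per month for a given prompt length."""
    return requests * prompt_tokens / 1000 * price_per_1k

requests = 500_000                      # hypothetical monthly volume
long_prompt = monthly_cost(requests, prompt_tokens=3000, price_per_1k=0.005)
short_prompt = monthly_cost(requests, prompt_tokens=300, price_per_1k=0.005)
training_cost = 2000.0                  # hypothetical one-off fine-tuning spend

savings_per_month = long_prompt - short_prompt
months_to_break_even = training_cost / savings_per_month
print(f"${savings_per_month:,.0f} saved per month; "
      f"break even in {months_to_break_even:.1f} months")
```

At low volumes the same arithmetic flips: if the savings per month are small, the break-even horizon stretches out and prompting stays the cheaper option — which is exactly the ROI check above.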