Published on Convert on 24/04/2025.
LLMs pattern-match; they don’t “think.” Examples in your prompt focus the model and make output more precise and consistent.
The problem: ambiguity (e.g. does “Cardinals” mean the football team or the baseball team?) and messy, inconsistent output formatting.
The fix: Add examples.
- One-shot: One input–output pair (e.g. one baseball team + stadium) nudges the model toward the right kind of answer.
- Few-shot: Several examples fix both meaning and format (e.g. team, stadium, location every time); see the sketch after this list.
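To make the one-shot/few-shot distinction concrete, here is a minimal sketch. The prompt wording, the team examples, and the `complete()` placeholder are illustrative assumptions, not the article's code or any specific provider's API.

```python
# Minimal sketch (assumptions, not from the article): one-shot vs. few-shot
# prompt text, with a placeholder complete() standing in for your LLM call.

one_shot = """Give the home stadium for the team.

Team: Cubs
Stadium: Wrigley Field

Team: Cardinals
Stadium:"""

few_shot = """For each team, answer as: team, stadium, location.

Team: Cubs
Answer: Chicago Cubs, Wrigley Field, Chicago, IL

Team: Dodgers
Answer: Los Angeles Dodgers, Dodger Stadium, Los Angeles, CA

Team: Cardinals
Answer:"""

def complete(prompt: str) -> str:
    """Placeholder: swap in whatever LLM client call you actually use."""
    raise NotImplementedError

# print(complete(one_shot))  # the single baseball example nudges "Cardinals" toward baseball
# print(complete(few_shot))  # the repeated pattern fixes both the meaning and the output shape
```

The one-shot prompt resolves the ambiguity; the few-shot prompt also pins the output format, because every example repeats the same shape.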
Where examples help:
- Focus: For feedback labeling, an example like “I really like the smell” → “Smells nice” teaches the model to label what was liked. Good and bad examples sharpen this further.
- Structure: Whether you want sections, tables, or JSON, show the shape and the model follows it (a sketch follows this list).
- Tone: Share samples of the style you want. For sensitive content, a human draft plus AI line-edit often works better than full AI from scratch.
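A hedged sketch of how the focus and structure points combine: a few-shot prompt whose examples both label what the customer liked (the article's “Smells nice” case) and demonstrate a JSON output shape. The feedback lines, labels, and the `build_prompt()` helper are illustrative assumptions, not the article's code.

```python
import json

# Illustrative examples (assumed, not from the article): each one shows both
# the labeling focus and the exact JSON shape the model should return.
examples = [
    {"feedback": "I really like the smell", "label": "Smells nice"},
    {"feedback": "Took forever to arrive", "label": "Slow shipping"},
]

def build_prompt(new_feedback: str) -> str:
    """Build a few-shot prompt whose examples double as a format specification."""
    lines = ["Label each piece of customer feedback. Reply with JSON only."]
    for ex in examples:
        lines.append(f"Feedback: {ex['feedback']}")
        lines.append("Output: " + json.dumps({"label": ex["label"]}))
    lines.append(f"Feedback: {new_feedback}")
    lines.append("Output:")
    return "\n".join(lines)

print(build_prompt("The packaging was torn when it arrived"))
```

Because the examples carry the format, there is no need for a separate schema description; the model copies the shape it has just seen.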
The article ties this to the author’s tool Ressada and to building small libraries of examples for reuse.