Local LLMs Part 2: How I Defeated ChatGPT 5 with the Smallest Language Model I Could Find

Published on Convert on 24/09/2025.

In Part 1, the 0.6B model produced only two insights, while the 1.7B model came close to ChatGPT. This post gets the 0.6B model to “perfect” output by changing how we prompt.

  • What didn’t fix it: Tweaking temperature and adding more few-shot examples helped a bit but not enough.
  • What worked: Breakpoints. Like pausing in a debugger, we split the job so the model isn’t asked to do everything at once (see the sketch after this list):
    • Prompt 1: List negative insights only (and ignore things like “pigs might fly”).
    • Prompt 2: List positive insights.
    • Prompt 3: Turn the lists into structured output (e.g. JSON).
  • Result: The tiny model matched ChatGPT’s insight quality on this test, using three prompts instead of one, yet still ran very fast on a local machine.
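
To make the idea concrete, here is a minimal sketch of such a three-prompt chain in Python, assuming a local model served through Ollama and its Python client. The model tag, prompt wording, and feedback placeholder are illustrative assumptions, not the article’s actual prompts.

```python
# A three-step "breakpoints" chain: each prompt handles one narrow task
# instead of asking the small model to do everything at once.
import json

import ollama  # pip install ollama; assumes a local Ollama server is running

MODEL = "qwen3:0.6b"  # illustrative tag for a 0.6B model; use whatever you have pulled


def ask(prompt: str) -> str:
    """Send one prompt to the local model and return the text of its reply."""
    response = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]


feedback = "...your raw feedback text here..."  # placeholder input

# Breakpoint 1: negative insights only, ignoring speculative remarks.
negatives = ask(
    "List only the negative insights in the feedback below. "
    "Ignore hypothetical or speculative statements.\n\n" + feedback
)

# Breakpoint 2: positive insights only.
positives = ask(
    "List only the positive insights in the feedback below.\n\n" + feedback
)

# Breakpoint 3: merge both lists into structured JSON.
structured = ask(
    "Combine these lists into a JSON object with keys 'positive' and "
    "'negative', each an array of strings. Reply with JSON only.\n\n"
    f"Negative insights:\n{negatives}\n\nPositive insights:\n{positives}"
)

# Small models don't always emit clean JSON; validate and retry in real use.
print(json.loads(structured))
```

Because each call carries one narrow instruction, the 0.6B model never has to juggle filtering, polarity, and formatting in a single pass, which is the whole point of the breakpoint split.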

The article has full prompt examples and next steps (automation and training).

Read the full article on Convert →

Iqbal Ali

Fractional AI Advisor and Experimentation Lead. Offers training, development, workshops, and fractional team engagements.