Local LLMs Part 3: How to Set Up a Local LLM Server Using LM Studio and Ollama

Published on Convert on 13/10/2025.

Chat UIs are fine for trying out a model. For workflows, scripts, or tools, you need the model running as a server that other apps can call.

  • LM Studio: Turn on Developer Mode in the sidebar. The API URL shown at the top is the address other apps call. Test it with cURL: curl {url}/v1/models (see the first Python sketch after this list).
  • Ollama (recommended for a real server): Install it from ollama.com. In a terminal, run ollama pull <model>, then ollama serve. The server listens at http://127.0.0.1:11434 (see the second sketch after this list).
  • Wider access: To reach Ollama from other devices or from n8n running in Docker, find your machine's local IP, set export OLLAMA_HOST=0.0.0.0, run ollama serve again, and test with curl {your_ip}:11434 (see the third sketch after this list).
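
Below, a minimal Python sketch of that /v1/models check, assuming LM Studio's local server sits at its usual default of http://localhost:1234; use whichever URL Developer Mode actually shows.

    # Sketch: list the models LM Studio's OpenAI-compatible server exposes.
    # Assumes the default address http://localhost:1234 shown in Developer Mode;
    # swap in whatever URL your LM Studio instance reports.
    import json
    import urllib.request

    LM_STUDIO_URL = "http://localhost:1234"  # assumption: LM Studio's default port

    with urllib.request.urlopen(f"{LM_STUDIO_URL}/v1/models") as resp:
        models = json.load(resp)

    for entry in models.get("data", []):
        print(entry["id"])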
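
A similar sketch against Ollama's REST API, assuming a model named llama3.2 has already been pulled; substitute whichever name you passed to ollama pull.

    # Sketch: send one prompt to the local Ollama server and print the reply.
    # Assumes the default address http://127.0.0.1:11434 and a pulled model
    # named "llama3.2"; change both to match your setup.
    import json
    import urllib.request

    OLLAMA_URL = "http://127.0.0.1:11434"
    payload = {
        "model": "llama3.2",  # assumption: use the model you pulled
        "prompt": "Say hello in one short sentence.",
        "stream": False,      # ask for a single JSON object rather than a stream
    }

    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])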
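
Finally, a third sketch for the wider-access setup: once OLLAMA_HOST=0.0.0.0 is exported and the server restarted, Ollama's root endpoint should answer "Ollama is running" from any device on the network. The IP below is a placeholder, not a value from the article.

    # Sketch: confirm Ollama is reachable from another device on the LAN.
    # Replace LAN_IP with the local IP of the machine running ollama serve.
    import urllib.request

    LAN_IP = "192.168.1.50"  # placeholder: substitute your machine's local IP

    with urllib.request.urlopen(f"http://{LAN_IP}:11434") as resp:
        print(resp.read().decode())  # expect: "Ollama is running"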

The article also explains APIs and endpoints in plain terms and points to Part 4 for connecting a chat UI to this server.

Read the full article on Convert →

Iqbal Ali

Fractional AI Advisor and Experimentation Lead. Training, development, workshops, and fractional team member.