While there are more polished and feature-rich LLM clients available, NextChat (PWA) has been sufficient for my daily needs. One thing I’d love to do is integrate it with my local Ollama server. This short guide will walk you through the process.

Prerequisites

  • Ollama installed and running locally, with at least one model pulled (e.g. ollama pull llama3.1)
  • NextChat installed as a PWA (or otherwise reachable at a known web origin)

Steps

Note: the commands shown below are for macOS. For Linux, see Setting environment variables on Linux.

  1. Run this command in your terminal to allow additional web origins to access Ollama, then restart the Ollama app so the new variable is picked up (a quick verification check is sketched just after this list):
    launchctl setenv OLLAMA_ORIGINS "https://your-nextchat-pwa.com"
  2. Configure the Ollama API in NextChat (you can sanity-check the endpoint with the curl call shown after this list):
    • Leave OpenAI as the Model Provider
    • Update OpenAI Endpoint to your local Ollama endpoint (the default endpoint is http://localhost:11434/)
    • Leave OpenAI API Key empty
    • Set Custom Model to the model you want to use, e.g. gemma, mistral, or llama3.1
  3. You can now start using the local model. Don’t forget to select it (e.g. llama3.1) in the chat’s model settings.
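
If you want to confirm that the OLLAMA_ORIGINS change took effect, a simple check is to send a request that carries an Origin header and look for a matching Access-Control-Allow-Origin header in the response. This assumes the default Ollama port 11434; the origin value is whatever URL your NextChat PWA is served from:

    curl -s -i -H "Origin: https://your-nextchat-pwa.com" http://localhost:11434/api/tags | grep -i access-control-allow-origin

If nothing is printed, the origin is not being allowed yet; double-check the value and restart Ollama.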
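
Because NextChat talks to Ollama through its OpenAI-compatible API, it can also help to verify the endpoint and model name from the terminal before touching NextChat’s settings. A minimal sketch, assuming the default endpoint and that llama3.1 has already been pulled:

    curl -s http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'

A JSON response containing a choices array means the endpoint and model name are valid, and the same values should work in NextChat.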

Tips

Sources