Model Providers
Imposter is designed to be provider-agnostic. Whether you want the total privacy of local models or the raw power of cloud-scale LLMs, connecting them is a simple, one-time process.
Ollama (Local AI)
Running models locally is the ultimate way to use Imposter. Your data never leaves your machine, and responses are near-instant (depending on your hardware).
How to Set Up
1. Install Ollama from ollama.com.
2. Run `ollama pull llama3` in your terminal.
3. Imposter will auto-detect all installed models on startup.
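Under the hood, auto-detection amounts to querying Ollama's local REST API, which lists installed models at `GET /api/tags` on its default port. A minimal sketch (the endpoint and port are Ollama's defaults; the helper names are illustrative, not Imposter's actual code):

```python
import json
from urllib.request import urlopen

# Ollama serves a local REST API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434"

def parse_model_names(tags_json: str) -> list[str]:
    """Extract model names from a GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def fetch_installed_models() -> list[str]:
    """Live query -- requires the Ollama application to be running."""
    with urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return parse_model_names(resp.read().decode())

# Example /api/tags payload shape:
sample = '{"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}'
print(parse_model_names(sample))  # ['llama3:latest', 'mistral:latest']
```

If `fetch_installed_models()` raises a connection error, Ollama isn't running yet.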
Pro Tip: If you want to add a model manually (e.g., using a custom base URL), use the Model Management panel in Settings.
OpenRouter (Cloud Hub)
OpenRouter is the standard for accessing hundreds of cloud models through a single API key. It's perfect if you don't want to manage local hardware.
How to Set Up
1. Go to openrouter.ai and create an account.
2. Generate an API Key in your dashboard.
3. Search for a "Free" model and copy its Model ID.
4. Paste the ID and Key into Imposter — everything is saved locally.
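OpenRouter exposes an OpenAI-compatible chat endpoint, so the key and Model ID from the steps above map directly onto a standard request. A hedged sketch (the endpoint and `Authorization: Bearer` header follow OpenRouter's API; the helper function is illustrative):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model_id: str, prompt: str):
    """Assemble headers and a JSON body for an OpenRouter chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model_id,  # e.g. "anthropic/claude-3-haiku"
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("sk-or-...", "anthropic/claude-3-haiku", "Hello!")
print(json.loads(body)["model"])  # anthropic/claude-3-haiku
```

POSTing that body to `OPENROUTER_URL` with those headers is all the client side amounts to.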
Google Gemini (Direct)
Direct integration with Google Gemini via Google AI Studio. This provides the most stable and feature-rich experience for Gemini models without intermediaries.
Verification Flow
1. Get your API Key from Google AI Studio.
2. Paste the key and click "Verify Models".
3. Imposter will ping Google's API, verify your key status, and automatically list every model available for that key.
4. Select your preferred model (e.g., Gemini 2.5 Flash) and save.
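The verification step boils down to calling the Gemini API's model-listing endpoint with your key. A minimal sketch (the `v1beta/models` endpoint and `key` query parameter are part of the public Gemini API; the helpers here are illustrative):

```python
import json
from urllib.parse import urlencode

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def models_url(api_key: str) -> str:
    """URL that lists every model the given key can access."""
    return f"{API_BASE}/models?{urlencode({'key': api_key})}"

def model_names(list_json: str) -> list[str]:
    """Pull model names out of a ListModels response body."""
    return [m["name"] for m in json.loads(list_json).get("models", [])]

print(models_url("AIza-demo"))
# Example ListModels payload shape:
sample = '{"models": [{"name": "models/gemini-2.5-flash"}]}'
print(model_names(sample))  # ['models/gemini-2.5-flash']
```

A 400/403 on this URL means the key itself is the problem; an empty `models` list means the key is valid but has no models enabled.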
Common Issues
- Ollama models not appearing: Ensure the Ollama application is running in your system tray, then click "Refresh Models" in Imposter Settings.
- Gemini key rejected: Double-check your API key for extra spaces, and ensure your region is supported by Google AI Studio.
- OpenRouter model not found: Ensure the Model ID is exactly as shown on OpenRouter (e.g. `anthropic/claude-3-haiku`).
- Connection errors: Imposter handles CORS automatically via the Main process. Check your firewall settings if issues persist.
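Two of the checks above are mechanical enough to sketch in code: stripping stray whitespace from a pasted key, and sanity-checking that an OpenRouter Model ID has the `vendor/model` shape. Both helpers below are illustrative, not part of Imposter itself:

```python
import re

def clean_api_key(raw: str) -> str:
    """Strip the leading/trailing whitespace that pasted keys often pick up."""
    return raw.strip()

def looks_like_openrouter_id(model_id: str) -> bool:
    """OpenRouter IDs take the form 'vendor/model', e.g. 'anthropic/claude-3-haiku'."""
    return re.fullmatch(r"[\w.\-]+/[\w.\-:]+", model_id) is not None

print(clean_api_key("  sk-or-abc123 \n"))                    # sk-or-abc123
print(looks_like_openrouter_id("anthropic/claude-3-haiku"))  # True
print(looks_like_openrouter_id("claude-3-haiku"))            # False
```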