
LLM Configuration

Configure AI models for test generation

WellTested uses Large Language Models (LLMs) to generate test scenarios and code. You must configure an LLM provider before generating tests.

Quick Setup

On first login, you’ll see a prompt to configure LLM settings.

Visit System Settings and fill in the following fields:

  • LLM Provider: Select the API interface your model exposes (openai for any OpenAI-compatible endpoint, or claude for Anthropic)
  • API Key: Enter your API key
  • API Base URL: Enter your LLM’s API endpoint
  • Model Name: Enter the model name

Below are example LLM configurations:

Service            | LLM Provider | API Base URL                                      | Model Name (Recommended)
OpenAI             | openai       | https://api.openai.com/v1                         | gpt-4o-2024-08-06
Claude (Anthropic) | claude       | https://api.anthropic.com                         | claude-sonnet-4-5
Qwen               | openai       | https://dashscope.aliyuncs.com/compatible-mode/v1 | qwen3-max
Self-hosted vLLM   | openai       | http://vllm-server:8000/v1                        | deepseek-ai/DeepSeek-V3-0324
Self-hosted Ollama | openai       | http://10.0.1.100:11434/v1                        | deepseek-v3:671b
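As a sanity check, the three connection fields map directly onto an OpenAI-compatible chat-completions request. The sketch below is illustrative (the helper name and the placeholder key are not part of WellTested); it builds the request without sending it:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build (but do not send) an OpenAI-compatible /chat/completions request."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Example using the self-hosted vLLM row from the table above
# (self-hosted servers often accept any placeholder key):
req = build_chat_request("http://vllm-server:8000/v1", "EMPTY",
                         "deepseek-ai/DeepSeek-V3-0324", "ping")
```

If a request shaped like this succeeds against your endpoint with curl or Python, the same values should work in System Settings.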

Get API Keys

  • OpenAI: https://platform.openai.com/api-keys
  • Claude (Anthropic): https://console.anthropic.com

Langfuse Monitoring (Optional)

Track AI usage and costs by configuring Langfuse in .env:

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com

Sign up at https://cloud.langfuse.com to get credentials.
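Because Langfuse is optional, a common failure mode is setting only some of the three variables, which fails silently at trace time. A small all-or-nothing startup check (an illustrative sketch, not WellTested's actual validation):

```python
import os

LANGFUSE_VARS = ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST")

def langfuse_config(env=os.environ):
    """Return the Langfuse settings if fully configured, else None.

    Raises if the variables are only partially set.
    """
    values = {name: env.get(name) for name in LANGFUSE_VARS}
    present = [name for name, value in values.items() if value]
    if not present:
        return None  # monitoring disabled, which is fine
    if len(present) < len(LANGFUSE_VARS):
        missing = sorted(set(LANGFUSE_VARS) - set(present))
        raise RuntimeError(f"Incomplete Langfuse config, missing: {missing}")
    return values
```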

Troubleshooting

Connection Test Fails

Solutions:

  • Verify API key is correct
  • Check API Base URL format
  • Ensure model name is correct
  • Check internet connection
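Many connection-test failures trace back to a malformed API Base URL (missing scheme, no host, or a stray trailing slash). A quick format check you can adapt (a hypothetical helper, not part of WellTested):

```python
from urllib.parse import urlparse

def check_base_url(base_url: str) -> list[str]:
    """Return a list of likely problems with an API Base URL."""
    problems = []
    parsed = urlparse(base_url)
    if parsed.scheme not in ("http", "https"):
        problems.append("URL must start with http:// or https://")
    if not parsed.netloc:
        problems.append("URL has no host")
    if base_url.endswith("/"):
        problems.append("drop the trailing slash")
    return problems
```

An empty list means the URL is at least well-formed; it does not prove the endpoint is reachable.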

Rate Limit Exceeded

Solutions:

  • Wait and retry
  • Upgrade provider plan
  • Use different model
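The "wait and retry" advice is usually automated with exponential backoff. A minimal sketch (the exception type and delays are illustrative; real providers signal rate limits with an HTTP 429 or an SDK-specific error class):

```python
import time

def with_backoff(call, retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the provider's rate-limit error
            if attempt == retries - 1:
                raise  # out of retries; propagate the error
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps the helper testable; production code would also cap the maximum delay and add jitter.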

Next Steps

After configuring LLM:

  1. Create Project - Create your first project
  2. Upload API - Import OpenAPI document
  3. Generate Scenario - Create test scenarios
