# Deployment

Deploy Unified AI Router to various platforms.

## Render.com

### Dashboard Method

1. **Push to GitHub first:**

   ```bash
   git push origin main
   ```

2. **Create Web Service on Render:**

   - Go to dashboard.render.com
   - Click "New +" → "Web Service"
   - Connect your GitHub repository

3. **Configure Build Settings** (a quick local check of these commands is sketched after these steps):

   - Build Command: `npm install`
   - Start Command: `npm start`
   - Node Version: 24.x or higher

4. **Add Environment Variables:**

   ```bash
   OPENROUTER_API_KEY=your_key_here
   OPENAI_API_KEY=your_key_here
   PORT=3000
   ```

5. **Deploy and Test:**

   ```bash
   curl https://your-app.onrender.com/health
   curl https://your-app.onrender.com/models
   ```

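Before pointing Render at the repository, it can help to confirm the build and start commands locally. This is only a sketch: it assumes the project defines a `start` script in package.json, reads `PORT` from the environment, and serves the `/health` endpoint used below.

```bash
# Check that a start script exists (this is what Render's Start Command runs)
npm pkg get scripts.start

# Install dependencies and boot the server in the background
npm install
PORT=3000 npm start &
sleep 3

# The health endpoint should respond before you deploy
curl -f http://localhost:3000/health && echo "local check passed"

# Stop the background server
kill %1
```
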
### Verify Deployment

```bash
# Health check
curl https://your-app.onrender.com/health

# List available models
curl https://your-app.onrender.com/v1/models

# Test chat completions
curl -X POST https://your-app.onrender.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}],"model":"any"}'
```

## Railway

1. **Install Railway CLI:**

   ```bash
   npm install -g @railway/cli
   ```

2. **Login and Initialize:**

   ```bash
   railway login
   railway init
   ```

3. **Set Environment Variables:**

   ```bash
   railway variables set OPENROUTER_API_KEY=your_key
   railway variables set OPENAI_API_KEY=your_key
   ```

4. **Deploy:**

   ```bash
   railway up
   ```

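Once `railway up` finishes, verify the Railway deployment the same way as on Render. The hostname below is a placeholder; substitute the domain Railway assigns to your service.

```bash
# Replace with the domain Railway assigns to your service
export ROUTER_URL="https://your-app.up.railway.app"

curl "$ROUTER_URL/health"
curl "$ROUTER_URL/v1/models"
```
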
## Environment Configuration

### Production .env

```bash
# Required API Keys
OPENROUTER_API_KEY=sk-or-your-key
OPENAI_API_KEY=sk-your-openai-key
# Server Configuration
PORT=3000
NODE_ENV=production
# Optional: Circuit Breaker Settings
CIRCUIT_TIMEOUT=30000
CIRCUIT_ERROR_THRESHOLD=50
CIRCUIT_RESET_TIMEOUT=300000
```

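The circuit breaker variables are optional overrides; the names and values above come from the sample .env, and the comments below assume the conventional millisecond/percentage semantics for these knobs (check the project README for the exact behaviour). On a hosted deployment they would be set through the platform's environment-variable tooling rather than a committed file, for example with the Railway CLI shown earlier:

```bash
# Override circuit breaker settings on Railway (values are illustrative)
railway variables set CIRCUIT_TIMEOUT=30000          # per-request timeout, assumed to be in ms
railway variables set CIRCUIT_ERROR_THRESHOLD=50     # assumed failure percentage before the breaker opens
railway variables set CIRCUIT_RESET_TIMEOUT=300000   # assumed ms to wait before retrying a tripped provider
```
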
### Security Considerations

- Never commit .env files to git
- Use platform-specific environment variable management
- Rotate API keys regularly
- Monitor API usage and costs
- Implement rate limiting if needed

## Monitoring

### Health Checks

```bash
# Basic health check
curl http://localhost:3000/health
# Provider status
curl http://localhost:3000/providers/status
```

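For lightweight external monitoring, the health endpoint can simply be polled on a schedule. A minimal sketch; the URL, interval, and log file are placeholders to adapt:

```bash
#!/usr/bin/env bash
# Poll the router's health endpoint and record failures.
URL="https://your-app.onrender.com/health"   # replace with your deployment URL

while true; do
  if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
    echo "$(date -u +%FT%TZ) health check FAILED for $URL" >> health.log
  fi
  sleep 60
done
```
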
### Logging

Logs include:
- Provider fallback events
- Circuit breaker state changes
- API response times
- Error details
View logs:

```bash
# Render
tail -f /var/log/app.log
# Heroku
heroku logs --tail
# Docker
docker logs -f container-name
```

## Performance Tips

- Use multiple API keys for load balancing
- Configure appropriate timeouts based on provider speeds
- Tune circuit breaker settings so transient errors don't trip the breaker prematurely
- Use streaming for better user experience with long responses (see the example after this list)
- Cache responses where appropriate
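
Streaming is requested the same way as a normal chat completion, just with `stream` enabled. How (and whether) the router relays server-sent events depends on its implementation, so treat this as a sketch of the usual OpenAI-compatible pattern rather than a documented guarantee:

```bash
# Stream a chat completion as server-sent events (-N turns off curl's output buffering)
curl -N -X POST https://your-app.onrender.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"any","stream":true,"messages":[{"role":"user","content":"Write a long story."}]}'
```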