Introduction
This workflow sends each request to OpenAI, Anthropic, and Groq in parallel and records performance metrics automatically, making it ideal for comparing speed, cost, and quality across models.
How It Works
A webhook triggers parameter extraction and routing. Three AI agents, one per provider, run simultaneously, each with memory and output parsing. Their responses are merged and annotated with detailed metrics.
Workflow Template
Webhook → Extract Parameters → Router
  ├→ OpenAI Agent
  ├→ Anthropic Agent
  └→ Groq Agent
→ Merge → Metrics → Respond
Workflow Steps
- Webhook receives a POST with the prompt and settings (see the example request after this list).
- Parameters extracted and validated (a validation sketch also follows this list).
- Router directs requests by cost, latency, or request type.
- AI agents run in parallel, one per provider.
- Results merged with provider metadata.
- Metrics compute response time, cost, and quality.
- Response returns all outputs and a recommendation.
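
A test request can be sent with any HTTP client. The sketch below uses Node's built-in fetch; the webhook path, authentication header, and payload field names (prompt, maxTokens, temperature, routeBy) are assumptions and should be matched to your own Webhook and Extract Parameters configuration.

```typescript
// Sketch of a test call to the webhook (Node 18+, global fetch).
// The path, auth header, and field names below are assumptions --
// align them with your Webhook node and Extract Parameters settings.
const WEBHOOK_URL = "https://your-n8n-host/webhook/llm-compare"; // hypothetical path

async function testCompare(): Promise<void> {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <your-webhook-token>", // only if header auth is enabled
    },
    body: JSON.stringify({
      prompt: "Summarize the benefits of parallel LLM evaluation in two sentences.",
      maxTokens: 256,
      temperature: 0.2,
      routeBy: "latency", // assumed values: "cost" | "latency" | "type"
    }),
  });

  // Assumed response shape: one entry per provider plus a recommendation.
  const body = await res.json();
  console.log(JSON.stringify(body, null, 2));
}

testCompare().catch(console.error);
```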
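
The extraction and validation step could look roughly like the function below; the defaults, limits, and field names are illustrative assumptions rather than the template's exact values. In n8n this logic would typically sit in a Code node.

```typescript
// Sketch of the parameter extraction/validation logic (illustrative defaults).
interface CompareParams {
  prompt: string;
  maxTokens: number;
  temperature: number;
  routeBy: "cost" | "latency" | "type";
}

function extractParams(body: Record<string, unknown>): CompareParams {
  const prompt = typeof body.prompt === "string" ? body.prompt.trim() : "";
  if (!prompt) {
    throw new Error("Missing required field: prompt");
  }

  const maxTokens = Number(body.maxTokens ?? 512);     // assumed default
  const temperature = Number(body.temperature ?? 0.7); // assumed default
  const routeBy: CompareParams["routeBy"] =
    body.routeBy === "cost" ? "cost" : body.routeBy === "type" ? "type" : "latency";

  return {
    prompt,
    maxTokens: Math.min(Math.max(maxTokens, 1), 4096),  // clamp to an assumed range
    temperature: Math.min(Math.max(temperature, 0), 2), // typical 0..2 range
    routeBy,
  };
}
```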
Setup Instructions
- Activate Webhook with authentication.
- Add API keys for all providers.
- Define default models, max tokens, and temperature.
- Adjust the Router logic for provider selection (see the Router sketch after this list).
- Tune the Metrics scoring formulas (a scoring sketch follows the Router sketch).
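
One way the Router's provider selection could be expressed is sketched below; the per-provider cost and latency figures are placeholders rather than real pricing or benchmarks, and the "type" heuristic is a stand-in for whatever request categories you define.

```typescript
// Sketch of a possible Router decision (placeholder numbers only).
type Provider = "openai" | "anthropic" | "groq";

interface ProviderProfile {
  provider: Provider;
  costPer1kTokensUsd: number; // placeholder pricing, not real rates
  typicalLatencyMs: number;   // placeholder latency, not a benchmark
}

const PROFILES: ProviderProfile[] = [
  { provider: "openai", costPer1kTokensUsd: 0.005, typicalLatencyMs: 1200 },
  { provider: "anthropic", costPer1kTokensUsd: 0.008, typicalLatencyMs: 1500 },
  { provider: "groq", costPer1kTokensUsd: 0.0005, typicalLatencyMs: 400 },
];

function route(routeBy: "cost" | "latency" | "type", requestType?: string): Provider {
  if (routeBy === "cost") {
    // Cheapest provider wins.
    return PROFILES.reduce((a, b) => (a.costPer1kTokensUsd <= b.costPer1kTokensUsd ? a : b)).provider;
  }
  if (routeBy === "latency") {
    // Fastest provider wins.
    return PROFILES.reduce((a, b) => (a.typicalLatencyMs <= b.typicalLatencyMs ? a : b)).provider;
  }
  // "type": a stand-in heuristic; replace with your own request categories.
  return requestType === "long-form" ? "anthropic" : "openai";
}
```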
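
The Metrics step might compute per-provider timing, estimated cost, and a composite score along these lines; the quality proxy and the weights are assumptions to tune, not the template's built-in formulas.

```typescript
// Sketch of the Metrics scoring step (assumed inputs and weights).
interface ProviderResult {
  provider: string;
  output: string;
  elapsedMs: number;
  totalTokens: number;
  costPer1kTokensUsd: number;
}

interface ProviderMetrics {
  provider: string;
  elapsedMs: number;
  estimatedCostUsd: number;
  qualityScore: number;   // naive proxy; replace with your own evaluation
  compositeScore: number;
}

function scoreResults(results: ProviderResult[]): { metrics: ProviderMetrics[]; recommended: string } {
  if (results.length === 0) {
    throw new Error("No provider results to score");
  }

  const metrics = results.map((r) => {
    const estimatedCostUsd = (r.totalTokens / 1000) * r.costPer1kTokensUsd;
    // Naive quality proxy: longer non-empty answers score higher, capped at 1.
    const qualityScore = Math.min(r.output.trim().length / 1000, 1);
    // Weighted composite: favour quality, then speed, then cost (weights are assumptions).
    const compositeScore =
      0.5 * qualityScore +
      0.3 * (1 / (1 + r.elapsedMs / 1000)) +
      0.2 * (1 / (1 + estimatedCostUsd * 100));
    return { provider: r.provider, elapsedMs: r.elapsedMs, estimatedCostUsd, qualityScore, compositeScore };
  });

  const recommended = metrics.reduce((a, b) => (a.compositeScore >= b.compositeScore ? a : b)).provider;
  return { metrics, recommended };
}
```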
Prerequisites
- n8n v1.0+ instance
- API keys: OpenAI, Anthropic, Groq
- HTTP client for testing
Customization
- Add providers like Gemini or Azure OpenAI.
- Enable routing by cost or performance.
Benefits
Automatically select the most efficient provider and compare model performance in real time.