Bifrost
Bifrost is a high-performance AI gateway that unifies access to 15+ providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and more) through a single OpenAI-compatible API. Deploy in seconds with zero configuration and get automatic failover, load balancing, semantic caching, and enterprise-grade features.

Go from zero to production-ready AI gateway in under a minute.
Step 1: Start Bifrost Gateway
# Install and run locally
npx -y @maximhq/bifrost
# Or use Docker
docker run -p 8080:8080 maximhq/bifrost
Step 2: Configure via Web UI
# Open the built-in web interface
open http://localhost:8080
Step 3: Make your first API call
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello, Bifrost!"}]
}'
That's it! Your AI gateway is running with a web interface for visual configuration, real-time monitoring, and analytics.
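The first call above can also be made from Python's standard library alone. This is a minimal sketch assuming the gateway is running locally on port 8080 as in the curl example; `build_chat_request` is an illustrative helper, not part of Bifrost:

```python
# Sketch: the same first API call via Python's standard library.
# Assumes Bifrost is running locally on port 8080 (as in the curl example).
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("openai/gpt-4o-mini", "Hello, Bifrost!")
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) sends the request once the gateway is up.
```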
Complete Setup Guides:
Bifrost supports enterprise-grade, private deployments for teams running production AI systems at scale. In addition to private networking, custom security controls, and governance, enterprise deployments unlock advanced capabilities including adaptive load balancing, clustering, guardrails, an MCP gateway, and other features designed for scale and reliability.
Bifrost uses a modular architecture for maximum flexibility:
bifrost/
├── npx/              # NPX script for easy installation
├── core/             # Core functionality and shared components
│   ├── providers/    # Provider-specific implementations (OpenAI, Anthropic, etc.)
│   ├── schemas/      # Interfaces and structs used throughout Bifrost
│   └── bifrost.go    # Main Bifrost implementation
├── framework/        # Framework components for data persistence
│   ├── configstore/  # Configuration storages
│   ├── logstore/     # Request logging storages
│   └── vectorstore/  # Vector storages
├── transports/       # HTTP gateway and other interface layers
│   └── bifrost-http/ # HTTP transport implementation
├── ui/               # Web interface for HTTP gateway
├── plugins/          # Extensible plugin system
│   ├── governance/   # Budget management and access control
│   ├── jsonparser/   # JSON parsing and manipulation utilities
│   ├── logging/      # Request logging and analytics
│   ├── maxim/        # Maxim's observability integration
│   ├── mocker/       # Mock responses for testing and development
│   ├── semanticcache/ # Intelligent response caching
│   └── telemetry/    # Monitoring and observability
├── docs/             # Documentation and guides
└── tests/            # Comprehensive test suites
Choose the deployment method that fits your needs:
Best for: Language-agnostic integration, microservices, and production deployments
# NPX - Get started in 30 seconds
npx -y @maximhq/bifrost
# Docker - Production ready
docker run -p 8080:8080 -v $(pwd)/data:/app/data maximhq/bifrost
Features: Web UI, real-time monitoring, multi-provider management, zero-config startup
Learn More: Gateway Setup Guide
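For longer-lived gateway deployments, the same Docker settings can be expressed as a Compose file. This is a sketch that simply mirrors the image, port, and volume flags from the `docker run` command above:

```yaml
# docker-compose.yml sketch mirroring the docker run flags above
services:
  bifrost:
    image: maximhq/bifrost
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/data
```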
Best for: Direct Go integration with maximum performance and control
go get github.com/maximhq/bifrost/core
Features: Native Go APIs, embedded deployment, custom middleware integration
Learn More: Go SDK Guide
Best for: Migrating existing applications with zero code changes
# OpenAI SDK
- base_url = "https://api.openai.com"
+ base_url = "http://localhost:8080/openai"
# Anthropic SDK
- base_url = "https://api.anthropic.com"
+ base_url = "http://localhost:8080/anthropic"
# Google GenAI SDK
- api_endpoint = "https://generativelanguage.googleapis.com"
+ api_endpoint = "http://localhost:8080/genai"
Learn More: Integration Guides
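In Python, the base-URL swap shown in the diffs above is a single constructor argument. The helper below is illustrative (not a Bifrost API); it only maps a provider name to the passthrough routes listed above, assuming the gateway runs on localhost:8080:

```python
# Illustrative helper (not part of Bifrost): map a provider name to the
# gateway routes shown in the diffs above. Assumes Bifrost on localhost:8080.
def bifrost_base_url(provider: str, gateway: str = "http://localhost:8080") -> str:
    routes = {"openai": "/openai", "anthropic": "/anthropic", "genai": "/genai"}
    return gateway + routes[provider]

# With the official OpenAI Python SDK, the migration is then one line:
#   from openai import OpenAI
#   client = OpenAI(base_url=bifrost_base_url("openai"))
print(bifrost_base_url("anthropic"))  # → http://localhost:8080/anthropic
```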
Bifrost adds virtually zero overhead to your AI requests. In sustained 5,000 RPS benchmarks, the gateway added only 11 µs of overhead per request.
| Metric | t3.medium | t3.xlarge | Improvement |
|---|---|---|---|
| Added latency (Bifrost overhead) | 59 µs | 11 µs | -81% |
| Success rate @ 5k RPS | 100% | 100% | No failed requests |
| Avg. queue wait time | 47 µs | 1.67 µs | -96% |
| Avg. request latency (incl. provider) | 2.12 s | 1.61 s | -24% |
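A quick back-of-envelope check of the t3.xlarge numbers in the table shows why the gateway overhead is negligible relative to end-to-end latency:

```python
# Back-of-envelope check using the t3.xlarge column of the table above.
overhead_us = 11        # Bifrost-added latency per request, in microseconds
total_latency_s = 1.61  # average end-to-end latency including the provider

# Fraction of total request time spent inside the gateway
fraction = (overhead_us * 1e-6) / total_latency_s
print(f"{fraction:.6%}")  # → 0.000683%  (well under 0.001% of total latency)
```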
Key Performance Highlights:
Complete Benchmarks: Performance Analysis
Complete Documentation: https://docs.getbifrost.ai
Join our Discord for community support and discussions.
Get help with:
We welcome contributions of all kinds! See our Contributing Guide for:
For development requirements and build instructions, see our Development Setup Guide.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Built with ❤️ by Maxim