For organizations with strict data residency requirements, regulatory compliance needs, or air-gapped environments, Kestrel supports full self-hosted deployment in your own cloud infrastructure.

Deployment Models

Standard Deployment

In this model, your Kubernetes cluster has outbound internet access: container images are pulled directly from our registry, LLM services are reached via regional endpoints, and GitHub, Slack, and PagerDuty integrate over public APIs. This is the simplest deployment model for organizations without strict network-isolation requirements.
  • All data stays within your environment
  • Uses cloud-native LLM services (Amazon Bedrock, Vertex AI, Azure OpenAI, OCI Generative AI)
  • Container images pulled from Kestrel’s private registry
  • Integrates with your existing CI/CD via GitHub.com or GitLab.com
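A Standard install typically boils down to a small values file plus one Helm command. The sketch below is illustrative only: the registry host, pull-secret name, chart name, and values keys are assumptions, not documented chart schema.

```shell
# Sketch of a Standard deployment install (all names below are placeholders).
cat > kestrel-values.yaml <<'EOF'
# Images are pulled directly from Kestrel's private registry; the pull
# secret is assumed to be provisioned separately in the namespace.
image:
  registry: registry.kestrel.example.com   # placeholder registry host
  pullSecret: kestrel-registry-creds       # placeholder secret name
llm:
  provider: bedrock        # or vertexai / azure-openai / oci-genai
  region: us-east-1
integrations:
  github: { enabled: true }   # GitHub.com over public APIs
  slack: { enabled: true }
EOF

# Against a cluster with internet access, the install itself would be:
# helm install kestrel kestrel/kestrel -n kestrel --create-namespace -f kestrel-values.yaml
```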

Air-Gapped Deployment

Designed for environments with no internet access. All external services are reached through private VPC endpoints, so traffic never crosses the public internet. IaC workflows integrate with GitHub Enterprise Server. This model is used by customers in defense, financial services, and other highly regulated industries.
  • No internet access required
  • LLM services accessed via private endpoints (AWS VPC Endpoints for Bedrock, GCP Private Service Connect for Vertex AI, Azure Private Endpoints for Azure OpenAI, OCI Service Gateway for OCI Generative AI)
  • Container images pulled from your private registry (ECR, Artifact Registry, ACR, OCI Container Registry)
  • Secrets stored in AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management
  • Integrates with GitHub Enterprise Server for IaC workflows
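Because the cluster cannot reach public registries, air-gapped installs start by mirroring images into your private registry from a bastion host that has both internet and registry access. The image names, tag, and registry URL below are illustrative assumptions; the actual pull/tag/push commands are shown commented.

```shell
# Air-gapped sketch: build the list of images to mirror into a private
# registry (ECR shown as an example; all names are placeholders).
PRIVATE_REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
IMAGES="kestrel-server kestrel-frontend kestrel-router kestrel-agents"
VERSION="1.0.0"  # placeholder release tag

: > mirror-list.txt
for img in $IMAGES; do
  echo "${PRIVATE_REGISTRY}/${img}:${VERSION}" >> mirror-list.txt
  # On the bastion host, each image would be mirrored with:
  # docker pull registry.kestrel.example.com/${img}:${VERSION}
  # docker tag  registry.kestrel.example.com/${img}:${VERSION} ${PRIVATE_REGISTRY}/${img}:${VERSION}
  # docker push ${PRIVATE_REGISTRY}/${img}:${VERSION}
done
wc -l < mirror-list.txt   # prints 4: one line per mirrored image
```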

Supported Cloud Providers

Both Standard and Air-Gapped deployments are supported on all four providers, each with native service integrations:
  • AWS: EKS, ECR, Amazon Bedrock, Secrets Manager, RDS, ElastiCache
  • GCP: GKE, Artifact Registry, Vertex AI, Secret Manager, Cloud SQL, Memorystore
  • Azure: AKS, ACR, Azure OpenAI, Key Vault, Azure Database, Azure Cache
  • OCI: OKE, OCI Container Registry, OCI Generative AI, OCI Secret Management, OCI Managed Relational Databases, OCI Cache with Redis

Architecture

The on-premise deployment includes the following components:
  • Kestrel Server - Core API server and incident engine
  • Kestrel Frontend - Web dashboard
  • Kestrel Router - WebSocket and gRPC routing layer
  • Kestrel Agents - AI agent workers for incident analysis
  • PostgreSQL - Primary database (can use managed RDS/Cloud SQL/Azure Database/OCI PostgreSQL DB)
  • Redis - Caching and message queue (can use managed ElastiCache/Memorystore/Azure Cache/OCI Cache with Redis)
  • Elasticsearch/OpenSearch (Optional) - Enhanced search and log analysis
All components are deployed via a single unified Helm chart in your Kubernetes cluster.
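Since PostgreSQL and Redis can be swapped for managed services and Elasticsearch/OpenSearch is optional, the Helm values file is where those choices are made. The keys and endpoint hostnames below are assumptions about the chart's schema, sketched to show the shape of such a configuration.

```shell
# Sketch of a values file toggling the components listed above
# (all keys and hostnames are illustrative placeholders).
cat > components-values.yaml <<'EOF'
postgresql:
  enabled: false            # disable in-cluster DB; point at managed RDS instead
  externalHost: kestrel-db.example.us-east-1.rds.amazonaws.com
redis:
  enabled: false            # use managed ElastiCache instead of in-cluster Redis
  externalHost: kestrel-cache.example.use1.cache.amazonaws.com
elasticsearch:
  enabled: true             # optional: enhanced search and log analysis
EOF

# Applied with the single unified chart:
# helm upgrade --install kestrel kestrel/kestrel -n kestrel -f components-values.yaml
```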

Native Workload Identity

Kestrel integrates with each cloud provider’s native workload identity system for secure, keyless authentication:
  • AWS: IRSA (IAM Roles for Service Accounts), EKS Pod Identity, or Node IAM Role
  • GCP: Workload Identity Federation for GKE service accounts
  • Azure: Azure AD Workload Identity or Managed Identity for AKS pods
  • OCI: OKE Workload Identity or Instance Principal authentication
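On AWS, for example, keyless auth via IRSA means annotating the workload's Kubernetes service account with an IAM role. The service-account name, namespace, and role ARN below are placeholders; the `eks.amazonaws.com/role-arn` annotation itself is the standard IRSA mechanism.

```shell
# IRSA sketch: bind an IAM role to the service account the pods run as
# (names and ARN are placeholders, not values Kestrel documents).
cat > serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kestrel-server            # assumed service-account name
  namespace: kestrel
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/kestrel-bedrock-access
EOF

# kubectl apply -f serviceaccount.yaml
grep -c 'eks.amazonaws.com/role-arn' serviceaccount.yaml   # prints 1
```

Pods using this service account then receive short-lived AWS credentials automatically, with no static access keys stored in the cluster.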

Prerequisites

  • Kubernetes cluster (v1.24+) in your cloud environment
  • helm (v3.0+) installed
  • Container registry access (ECR, Artifact Registry, ACR, or OCI Container Registry)
  • LLM provider configured (Bedrock, Vertex AI, Azure OpenAI, OCI Generative AI, or OpenAI)
  • Secrets management service (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management)
  • IAM role, service account, managed identity, or workload identity with required permissions
  • On-Premise license enabled for your Kestrel organization
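The version prerequisites above can be checked with a small preflight script. This is a generic sketch, not a Kestrel-provided tool: in a real environment the "detected" versions would come from `kubectl version` and `helm version` rather than the hard-coded placeholders shown.

```shell
# Preflight sketch: compare detected versions against the minimums above.
version_ge() {  # usage: version_ge HAVE NEED -> exit 0 if HAVE >= NEED
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

K8S_MIN="1.24"; HELM_MIN="3.0"
K8S_HAVE="1.29"    # placeholder; in practice from: kubectl version -o json
HELM_HAVE="3.14"   # placeholder; in practice from: helm version --template '{{.Version}}'

version_ge "$K8S_HAVE" "$K8S_MIN"   && echo "kubernetes $K8S_HAVE OK (>= $K8S_MIN)"
version_ge "$HELM_HAVE" "$HELM_MIN" && echo "helm $HELM_HAVE OK (>= $HELM_MIN)"
```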

Next Steps