This guide walks you through the Kestrel on-premise deployment wizard, which generates a complete Helm values file for your environment.

Accessing the Setup Wizard

  1. Log in to your Kestrel dashboard at platform.usekestrel.ai
  2. Navigate to Integrations → On-Premise Deployment
  3. The guided setup wizard walks you through each configuration step
On-premise deployment requires a license. If you don’t see the On-Premise option, contact your Kestrel account manager.

Step 1: Choose Your Cloud Provider

Select your target cloud provider:
  • Amazon Web Services (AWS)
  • Google Cloud Platform (GCP)
  • Microsoft Azure
  • Oracle Cloud Infrastructure (OCI)
This selection determines the available LLM services, secrets providers, container registry, and managed database options.

Step 2: Select Deployment Type

Choose your deployment model:
  • Standard - Outbound internet access is available. Uses cloud LLM APIs directly.
  • Air-Gapped - No internet access. All services are accessed via VPC/private endpoints. Requires a private container registry.

Step 3: Configure LLM

Select and configure your LLM provider:

Amazon Bedrock (AWS)

  • Select a model (e.g., Claude Sonnet 4.5)
  • Set the Bedrock region
  • Optionally provide a custom endpoint URL (for VPC endpoints in air-gapped mode)

Vertex AI (GCP)

  • Configure Vertex AI region and model settings
  • For air-gapped mode, use Private Service Connect

Azure OpenAI

  • Configure Azure OpenAI resource and deployment settings
  • For air-gapped mode, use Azure Private Endpoints

OCI Generative AI

  • Configure OCI Generative AI service settings
  • For air-gapped mode, use OCI Service Gateway

OpenAI

  • Enter your OpenAI API key
  • Select the model (e.g., GPT-5.2)
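In the generated values file, these choices surface as an LLM configuration section. The fragment below is an illustrative sketch only (the key names are hypothetical; the wizard emits the authoritative structure), shown for a Bedrock setup:

```yaml
# Illustrative sketch only - key names are hypothetical; the setup wizard
# generates the authoritative structure for your environment.
llm:
  provider: bedrock
  model: claude-sonnet-4-5        # the model selected in the wizard
  region: us-east-1               # the Bedrock region
  endpointUrl: ""                 # set only for a VPC endpoint in air-gapped mode
```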

Step 4: Configure Infrastructure

Container Registry

Enter your private container registry URL where Kestrel images will be stored:
  • AWS: 123456789012.dkr.ecr.us-east-1.amazonaws.com
  • GCP: us-docker.pkg.dev/project-id/repo
  • Azure: kestrel.azurecr.io
  • OCI: <region>.ocir.io/<tenancy-namespace>/kestrel

IAM / Workload Identity

Provide the identity that Kestrel will use for cloud service access:
  • AWS: IAM Role ARN (IRSA, EKS Pod Identity, or Node IAM Role)
  • GCP: Service Account (Workload Identity Federation)
  • Azure: Managed Identity (Azure AD Workload Identity)
  • OCI: OKE Workload Identity or Instance Principal
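On Kubernetes, these identities are typically bound by annotating the workload's service account. The chart presumably applies the annotation matching the value you enter; for illustration, the standard provider annotations look like this (the service account name is hypothetical):

```yaml
# Standard workload-identity annotations. The service account name is
# illustrative; the chart binds the identity you supply in the wizard.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kestrel
  namespace: kestrel-ai
  annotations:
    # AWS (IRSA):
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/kestrel
    # GCP (Workload Identity Federation):
    # iam.gke.io/gcp-service-account: kestrel@project-id.iam.gserviceaccount.com
    # Azure (Workload Identity):
    # azure.workload.identity/client-id: <managed-identity-client-id>
```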

Database

Choose between:
  • Bundled PostgreSQL - Included in the Helm deployment (suitable for dev/test)
  • External PostgreSQL - Use a managed database:
    • AWS: Amazon RDS for PostgreSQL
    • GCP: Cloud SQL for PostgreSQL
    • Azure: Azure Database for PostgreSQL
    • OCI: OCI Database with PostgreSQL

Redis

Choose between bundled or external managed Redis:
  • AWS: ElastiCache
  • GCP: Memorystore
  • Azure: Azure Cache for Redis
  • OCI: OCI Cache with Redis
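When you select external services, the generated values disable the bundled charts and point at your managed endpoints. A sketch of what this typically looks like (key names and hostnames are illustrative, not the chart's contract):

```yaml
# Illustrative only - the wizard emits the authoritative keys for your install.
postgresql:
  bundled: false
  host: kestrel-db.example.internal    # e.g., an RDS or Cloud SQL endpoint
  port: 5432
  database: kestrel
redis:
  bundled: false
  host: kestrel-cache.example.internal
  port: 6379
```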

Elasticsearch/OpenSearch (Optional)

Enable for enhanced search and log analysis:
  • Endpoint URL
  • Username and password

Ingress

  • Ingress Class Name: Your ingress controller class (default: nginx)
  • Ingress Host: The domain name for your Kestrel installation
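These two fields map onto a standard Kubernetes Ingress for the dashboard. As an illustrative values fragment (the host is a placeholder; the wizard generates the authoritative keys):

```yaml
# Illustrative fragment only.
ingress:
  className: nginx
  host: kestrel.internal.company.com
```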

Secrets Provider

Select where credentials will be stored:
  • AWS Secrets Manager
  • GCP Secret Manager
  • Azure Key Vault
  • OCI Secret Management

Step 5: GitHub Integration (Air-Gapped Only)

For air-gapped deployments, configure GitHub Enterprise Server for IaC workflows:
  1. Enter your GitHub Enterprise Server URL (e.g., https://github.internal.company.com)
  2. Create a GitHub App on your GHE instance:
    • Go to Settings → Developer Settings → GitHub Apps → New GitHub App
    • Set the webhook URL to your Kestrel ingress host
    • Grant required permissions (repository contents: read/write, pull requests: read/write)
  3. Enter the GitHub App ID and App Slug
  4. Store credentials securely in AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management
Kestrel generates a TLS CA certificate for secure communication with GitHub Enterprise Server. Download it from the setup wizard and configure it in your GHE instance.
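Before configuring the certificate on GHE, you can sanity-check the file you downloaded. A small sketch; the file name kestrel-ca.pem is just an example, not a fixed name:

```shell
# Inspect the CA certificate downloaded from the setup wizard before
# uploading it to your GHE instance. The file name is illustrative.
CA_FILE="${CA_FILE:-kestrel-ca.pem}"
if [ -f "$CA_FILE" ]; then
  # Print subject, issuer, and expiry to confirm it is the expected certificate.
  openssl x509 -in "$CA_FILE" -noout -subject -issuer -enddate
else
  echo "certificate $CA_FILE not found"
fi
```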

Step 6: Generate Configuration

Click Generate Helm Values to produce a complete values.yaml file customized for your environment. The generated configuration includes:
  • All service configurations
  • Database connection strings
  • LLM provider settings
  • Registry and image references
  • Ingress configuration
  • Secrets provider settings
  • Workload identity bindings

Step 7: Download and Deploy

Pull Container Images

Click Get Registry Credentials to obtain credentials for pulling Kestrel images from the private registry.
# Log in to the Kestrel registry (pass the token on stdin rather than argv)
echo <token> | docker login ghcr.io -u <username> --password-stdin

# Pull and retag images for your private registry
docker pull ghcr.io/kestrelai/kestrel-server:latest
docker tag ghcr.io/kestrelai/kestrel-server:latest <your-registry>/kestrel-server:latest
docker push <your-registry>/kestrel-server:latest

# Repeat for all Kestrel images
For air-gapped environments, download both the Helm chart and values file, then install from local files.
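The pull/tag/push sequence above can be scripted for the full image set. A sketch: the image names below are illustrative (the wizard lists the actual images for your release), and with DRY_RUN=1 the script only prints the plan:

```shell
# Mirror Kestrel images from ghcr.io into your private registry.
# The image names are illustrative; use the list shown in the setup wizard.
REGISTRY="${REGISTRY:-registry.example.com/kestrel}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually pull, tag, and push

for name in kestrel-server kestrel-worker kestrel-ui; do
  src="ghcr.io/kestrelai/${name}:latest"
  dst="${REGISTRY}/${name}:latest"
  echo "mirror ${src} -> ${dst}"
  if [ "$DRY_RUN" = "0" ]; then
    docker pull "$src"
    docker tag "$src" "$dst"
    docker push "$dst"
  fi
done
```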

Create Secrets

Create the necessary Kubernetes secrets:
# Create the namespace
kubectl create namespace kestrel-ai

# Create registry credentials (if using private registry)
kubectl create secret docker-registry kestrel-registry \
  --namespace kestrel-ai \
  --docker-server=<your-registry> \
  --docker-username=<username> \
  --docker-password=<token>

Install with Helm

helm install kestrel \
  oci://ghcr.io/kestrelai/charts/kestrel \
  --version 1.0.0 \
  --namespace kestrel-ai \
  --create-namespace \
  -f kestrel-values.yaml

Verify the Deployment

# Check all pods are running
kubectl get pods -n kestrel-ai

# Check the ingress
kubectl get ingress -n kestrel-ai
Navigate to your configured domain to access the Kestrel dashboard.

Air-Gapped Private Endpoints

For air-gapped deployments, Kestrel guides you through creating the required private endpoints for your cloud provider:

LLM Service Access

AWS VPC Endpoints for Bedrock, GCP Private Service Connect for Vertex AI, Azure Private Endpoints for Azure OpenAI, or OCI Service Gateway for OCI Generative AI.

Container Registry

Private access to ECR (AWS), Artifact Registry (GCP), Azure Container Registry, or OCI Container Registry without traversing the public internet.

Secrets Management

Private endpoints for AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management to retrieve credentials at runtime.

The wizard provides cloud-specific, step-by-step instructions for creating each endpoint, including security group configuration and private DNS settings.

Updating

To update your on-premise deployment:
  1. Return to the setup wizard and click Generate Helm Values to get the latest configuration
  2. Pull the newest container images
  3. Run Helm upgrade:
helm upgrade kestrel \
  oci://ghcr.io/kestrelai/charts/kestrel \
  --namespace kestrel-ai \
  -f kestrel-values.yaml

Next Steps