Accessing the Setup Wizard
- Log in to your Kestrel dashboard at platform.usekestrel.ai
- Navigate to Integrations → On-Premise Deployment
- The guided setup wizard walks you through each configuration step
On-premise deployment requires a license. If you don’t see the On-Premise option, contact your Kestrel account manager.
Step 1: Choose Your Cloud Provider
Select your target cloud provider:
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Microsoft Azure
- Oracle Cloud Infrastructure (OCI)
Step 2: Select Deployment Type
Choose your deployment model:
| Type | Description |
|---|---|
| Standard | Outbound internet access available. Uses cloud LLM APIs directly. |
| Air-Gapped | No internet access. All services accessed via VPC/Private endpoints. Requires private container registry. |
Step 3: Configure LLM
Select and configure your LLM provider:
Amazon Bedrock (AWS)
- Select a model (e.g., Claude Sonnet 4.5)
- Set the Bedrock region
- Optionally provide a custom endpoint URL (for VPC endpoints in air-gapped mode)
Vertex AI (GCP)
- Configure Vertex AI region and model settings
- For air-gapped mode, use Private Service Connect
Azure OpenAI
- Configure Azure OpenAI resource and deployment settings
- For air-gapped mode, use Azure Private Endpoints
OCI Generative AI
- Configure OCI Generative AI service settings
- For air-gapped mode, use OCI Service Gateway
OpenAI
- Enter your OpenAI API key
- Select the model (e.g., GPT-5.2)
Step 4: Configure Infrastructure
Container Registry
Enter your private container registry URL where Kestrel images will be stored:
- AWS: 123456789012.dkr.ecr.us-east-1.amazonaws.com
- GCP: us-docker.pkg.dev/project-id/repo
- Azure: kestrel.azurecr.io
- OCI: <region>.ocir.io/<tenancy-namespace>/kestrel
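For air-gapped deployments, the Kestrel images must be mirrored into this private registry before installing. A minimal sketch for AWS ECR; the registry URL, image names, and tag below are illustrative placeholders, not actual Kestrel artifact names:

```shell
# Placeholder registry URL and version tag -- substitute the values
# shown in the setup wizard for your environment.
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
TAG=v1.0.0

# Authenticate Docker against the private ECR registry.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Re-tag each locally pulled image and push it into the private
# registry (image names here are illustrative).
for image in kestrel-api kestrel-worker; do
  docker tag "kestrel/$image:$TAG" "$REGISTRY/$image:$TAG"
  docker push "$REGISTRY/$image:$TAG"
done
```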
IAM / Workload Identity
Provide the identity that Kestrel will use for cloud service access:
- AWS: IAM Role ARN (IRSA, EKS Pod Identity, or Node IAM Role)
- GCP: Service Account (Workload Identity Federation)
- Azure: Managed Identity (Azure AD Workload Identity)
- OCI: OKE Workload Identity or Instance Principal
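As one example of how this identity is bound to the workload, with IRSA on AWS the Kubernetes service account is annotated with the role ARN. The namespace, service account name, and role ARN below are placeholders:

```shell
# Bind an IAM role to the Kestrel service account via IRSA.
# Namespace, service account, and role ARN are placeholders --
# use the values from your own environment.
kubectl annotate serviceaccount kestrel \
  --namespace kestrel \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/kestrel-role
```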
Database
Choose between:
- Bundled PostgreSQL - Included in the Helm deployment (suitable for dev/test)
- External PostgreSQL - Use a managed database:
- AWS: RDS
- GCP: Cloud SQL
- Azure: Azure Database
- OCI: OCI Managed Relational Databases
Redis
Choose between bundled or external managed Redis:
- AWS: ElastiCache
- GCP: Memorystore
- Azure: Azure Cache
- OCI: OCI Cache with Redis
Elasticsearch/OpenSearch (Optional)
Enable for enhanced search and log analysis:
- Endpoint URL
- Username and password
Ingress
- Ingress Class Name: Your ingress controller class (default: nginx)
- Ingress Host: The domain name for your Kestrel installation
Secrets Provider
Select where credentials will be stored:
- AWS Secrets Manager
- GCP Secret Manager
- Azure Key Vault
- OCI Secret Management
Step 5: GitHub Integration (Air-Gapped Only)
For air-gapped deployments, configure GitHub Enterprise Server for IaC workflows:
- Enter your GitHub Enterprise Server URL (e.g., https://github.internal.company.com)
- Create a GitHub App on your GHE instance:
- Go to Settings → Developer Settings → GitHub Apps → New GitHub App
- Set the webhook URL to your Kestrel ingress host
- Grant required permissions (repository contents: read/write, pull requests: read/write)
- Enter the GitHub App ID and App Slug
- Store credentials securely in AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management
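As an illustration, on AWS the GitHub App private key could be stored like this; the secret name and key file path are placeholders, not values Kestrel prescribes:

```shell
# Store the GitHub App private key in AWS Secrets Manager.
# Secret name and .pem file path are placeholders.
aws secretsmanager create-secret \
  --name kestrel/github-app-private-key \
  --secret-string file://github-app.private-key.pem
```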
Kestrel generates a TLS CA certificate for secure communication with GitHub Enterprise Server. Download it from the setup wizard and configure it in your GHE instance.
Step 6: Generate Configuration
Click Generate Helm Values to produce a complete values.yaml file customized for your environment.
The generated configuration includes:
- All service configurations
- Database connection strings
- LLM provider settings
- Registry and image references
- Ingress configuration
- Secrets provider settings
- Workload identity bindings
Step 7: Download and Deploy
Pull Container Images
Click Get Registry Credentials to obtain credentials for pulling Kestrel images from the private registry.
Create Secrets
Create the necessary Kubernetes secrets:
Install with Helm
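A sketch of these two steps, assuming a release and namespace both named kestrel; the registry server, credentials, and chart reference are placeholders for the values shown in the setup wizard:

```shell
# Create the namespace and an image-pull secret from the registry
# credentials obtained in the wizard (server, username, and password
# values are placeholders).
kubectl create namespace kestrel
kubectl create secret docker-registry kestrel-registry \
  --namespace kestrel \
  --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$REGISTRY_PASSWORD"

# Install the chart with the generated values file. The chart
# reference is a placeholder -- use the one from the setup wizard.
helm install kestrel oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/kestrel-chart \
  --namespace kestrel \
  --values values.yaml
```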
Verify the Deployment
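A basic sanity check, assuming a release and namespace both named kestrel (placeholders for your own values):

```shell
# All pods should reach the Running/Ready state, and the release
# status should report "deployed".
kubectl get pods --namespace kestrel
helm status kestrel --namespace kestrel
```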
Air-Gapped Private Endpoints
For air-gapped deployments, Kestrel guides you through creating the required private endpoints for your cloud provider:
LLM Service Access
AWS VPC Endpoints for Bedrock, GCP Private Service Connect for Vertex AI, Azure Private Endpoints for Azure OpenAI, or OCI Service Gateway for OCI Generative AI.
Container Registry
Private access to ECR (AWS), Artifact Registry (GCP), Azure Container Registry, or OCI Container Registry without traversing the public internet.
Secrets Management
Private endpoints for AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, or OCI Secret Management to retrieve credentials at runtime. The wizard provides cloud-specific, step-by-step instructions for creating each endpoint, including security group configuration and private DNS settings.
Updating
To update your on-premise deployment:
- Return to the setup wizard and click Generate Helm Values to get the latest configuration
- Pull the newest container images
- Run Helm upgrade:
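A sketch of the upgrade command, assuming a release and namespace both named kestrel and the regenerated values.yaml; the chart reference is a placeholder for the one shown in the setup wizard:

```shell
# Upgrade the existing release with the regenerated values file.
helm upgrade kestrel oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/kestrel-chart \
  --namespace kestrel \
  --values values.yaml
```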
Next Steps
- Connect your Kubernetes clusters to the on-premise Kestrel instance
- Set up cloud integrations for cloud resource monitoring