Deploy Itential on Kubernetes using these validated designs for specific providers and configurations.
Prerequisites
Ensure you have the following prerequisites.
Kubernetes cluster requirements
You need:
- Kubernetes version 1.31 or later
- A cluster on a validated provider that meets the sizing recommendations
- Administrative access to your cluster, or access to the namespace where Itential applications run
Required tools and dependencies
Install these tools before deploying Itential:
- Helm version 3.15.0 or later to use Itential Helm Charts
- ExternalDNS (optional but recommended) to facilitate DNS entry creation for exposed services. For more information, see the ExternalDNS project.
- cert-manager to manage TLS certificates. For more information, see the cert-manager project.
Size your nodes and distribute pods
Itential Platform requires significant resources to run production automations. Requirements vary based on automation rate, complexity, and the number of adapters.
Development and production environments often have similar resource needs because engineers experiment with automations in development. Avoid undersizing development environments.
Choose your specifications
- If you're starting with Itential, use minimum specifications and scale as needed
- If you're an established Itential user migrating to Kubernetes, match your existing specifications
Minimum specifications
Use these specifications to get started:
| Resource | Specification |
|---|---|
| CPU | 4 |
| RAM | 16 GB |
| AWS EC2 instance type | m5a.xlarge |
| Azure type | Standard_D4as_v5 |
Production specifications
Use these specifications for production workloads:
| Resource | Specification |
|---|---|
| CPU | 16 |
| RAM | 32 GB |
| AWS EC2 instance type | c6a.4xlarge |
| Azure type | Standard_F16as_v5 |
The AWS and Azure instance types are general guidelines. As production usage reveals your automation patterns, you might switch to a more suitable instance family; on AWS, for example, memory-optimized (r5a) or general-purpose (m5a) instances.
Distribute pods across nodes
Schedule StatefulSet containers on dedicated nodes within separate availability zones, with one IAP pod per node. This configuration provides continuity if an availability zone experiences an outage.
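A minimal sketch of how this can be expressed on the pod spec, assuming an illustrative app: itential-platform label rather than the labels the charts actually apply:

```yaml
# Illustrative pod spec fragment; the label values are assumptions,
# not the labels the Itential charts actually apply.
spec:
  topologySpreadConstraints:
    # Spread pods evenly across availability zones
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: itential-platform
  affinity:
    podAntiAffinity:
      # Allow at most one IAP pod per node
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: itential-platform
```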

Manage secrets
Manage secrets outside Kubernetes and inject them using your preferred secret management method. You need three types of secrets:
Image pull secrets
Create an image pull secret to authenticate with your container registry.
- If you use AWS, Itential images are hosted in Amazon Elastic Container Registry (ECR). Configure credentials for an ECR-enabled account.
- The secret must exist in the cluster before deploying Itential applications.
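A minimal sketch using kubectl and the aws CLI, with placeholder values for the secret name, namespace, account ID, and region:

```bash
# Create an image pull secret from a short-lived ECR authorization token.
# The secret name, namespace, account ID, and region are placeholders.
kubectl create secret docker-registry itential-registry \
  --namespace itential \
  --docker-server=<account-id>.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)"
```

ECR authorization tokens expire after 12 hours, so in practice you refresh this secret on a schedule or inject it with your secret management tooling.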
TLS secrets
If you enable TLS, create a secret containing the TLS Certificate Authority (CA) certificate:
- Name the secret itential-ca
- The CA generates all other TLS certificates used by Itential applications
- The secret must exist in the cluster before deploying with TLS enabled
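A minimal sketch, assuming the CA certificate and private key already exist as local files and using a placeholder namespace:

```bash
# Store an existing CA certificate and private key as the itential-ca secret.
# ca.crt and ca.key are files you provide; the namespace is a placeholder.
kubectl create secret tls itential-ca \
  --namespace itential \
  --cert=ca.crt \
  --key=ca.key
```

kubectl stores the files under the tls.crt and tls.key keys, which is the shape cert-manager expects for a CA secret.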
Application secrets
Each Itential application (Itential Platform, IAG4, IAG5) requires application-specific secrets including:
- Encryption keys
- Database passwords
- Other sensitive configuration values
These secrets must exist in the cluster before deploying applications. For the required structure of each application's secrets, see Itential Helm charts.
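Purely to illustrate the pattern (the real key names and structure come from each chart's documentation), an application secret might look like the following; every name and key below is a placeholder:

```yaml
# Illustrative only: the actual secret names and keys are defined by
# each Itential Helm chart, not by this example.
apiVersion: v1
kind: Secret
metadata:
  name: itential-platform-secrets   # hypothetical name
  namespace: itential               # placeholder
type: Opaque
stringData:
  encryptionKey: "<generated-encryption-key>"
  mongoPassword: "<database-password>"
```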
Understand Itential Helm charts
Itential uses Helm Charts to deploy on Kubernetes. Available charts install three principal Itential products:
- Itential Platform
- Itential Automation Gateway version 4 (IAG4)
- Itential Automation Gateway version 5 (IAG5)
These Helm charts install only Itential products, not dependencies. You must provide Redis and MongoDB separately. Itential applications require Redis and MongoDB specifically; you can't substitute alternatives such as Memcached or MySQL.
Kubernetes objects and architecture
Itential Helm charts create several Kubernetes objects and use dependent charts.
The following diagram shows the architecture of the Itential Kubernetes environment:

Ingress
The Ingress provides cluster access and supports a static hostname. It contains load balancer configuration and routing rules.
Itential uses the ExternalDNS project to facilitate DNS entry creation. ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers. While not required, ExternalDNS automates DNS management for exposed services.
For more information, see the ExternalDNS project.
Service
Two services exist:
- A service that routes traffic to all pods
- A headless service that routes traffic to individual pods
Some features require accessing individual pods directly.
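For context, the difference between the two is a single field: a headless service sets clusterIP: None, so DNS resolves to the individual pod IPs instead of one load-balanced address. An illustrative sketch, not the chart's actual object names:

```yaml
# Illustrative headless Service; the name, labels, and port are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: itential-platform-headless
spec:
  clusterIP: None            # headless: DNS returns individual pod IPs
  selector:
    app: itential-platform
  ports:
    - name: https
      port: 3443
```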
StatefulSet
Itential Platform is stateful. The StatefulSet maintains predictable pod names and mounts predictable volumes.
Certificates and issuer
When you enable TLS (useTLS: true in your Helm values), the chart requires a Certificate Authority (CA) in your cluster. The chart uses this CA to generate certificates through a Kubernetes Issuer, which stores them in a Secret that pods mount.
Creating and adding this CA is outside the scope of this documentation.
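For reference, a cert-manager Issuer backed by a CA secret has the following shape. The chart manages its own Issuer, so this sketch is only illustrative, and the Issuer name and namespace are placeholders:

```yaml
# A CA Issuer that signs certificates with the CA stored in itential-ca.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: itential-ca-issuer   # placeholder
  namespace: itential        # placeholder
spec:
  ca:
    secretName: itential-ca
```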
Manual certificate configuration
If you can't use cert-manager, manually add TLS certificates to your cluster in a secret named <Chart.name>-tls-secret with these keys:
| Key | Description |
|---|---|
| tls.crt | The TLS certificate that identifies this server |
| tls.key | The private key for this certificate |
| ca.crt | The CA used to generate these certificates and keys |
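A minimal sketch of creating this secret with kubectl, assuming the certificate, key, and CA files already exist locally and using placeholder names:

```bash
# Create the manual TLS secret with the three expected keys.
# Replace <chart-name>, the namespace, and the file names with your values.
kubectl create secret generic <chart-name>-tls-secret \
  --namespace itential \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```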
StorageClass
If you use persistent volumes for adapters (as described in Add adapters and customizations), you need a StorageClass to provision storage automatically.
The StorageClass:
- Defines the type of storage to create (such as AWS EBS or Azure Managed Disks)
- Creates disk volumes when pods request storage
- Works with StatefulSets to ensure each pod consistently mounts its dedicated volume across restarts
Don't use EFS/NFS volumes. These volume types cause unwanted application latency.
Start with 10 GB of disk space. Adjust based on the number of adapters you include.
Add adapters and customizations
Itential applications use adapters to connect with external systems. You need to include these adapters in your Kubernetes deployment.
You can include adapters using two methods:
- Persistent volumes: Store adapters on disk storage that's mounted into your containers
- Layered containers: Build adapters directly into your container images
Choose a method
Use persistent volumes if you:
- Update adapters frequently
- Want to update adapters without rebuilding containers
- Prefer to manage adapters separately from container deployments
Use layered containers if you:
- Need portable, self-contained deployments
- Update adapters infrequently
- Want adapter versions tied to container builds
Use persistent volumes
Persistent volumes are disk storage that exists independently of your containers. You store adapters on this disk, and containers read them at runtime.
To use persistent volumes for adapters and custom applications:
- Add a volume accessible at /opt/itential/platform/services/custom.
- Place your adapters in this location.
The application reads adapters from the mounted volume at runtime. When you change files on the volume, restart the affected adapter from the Itential UI or API.
Use dedicated volumes per container. Shared volumes reduce performance.
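On the storage side, a claim that such a volume could bind to might look like the following sketch; the claim name, namespace, and StorageClass name are assumptions, and the actual mount path is configured through the chart:

```yaml
# Illustrative claim; the name, namespace, size, and storageClassName
# are assumptions you would replace with your own values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: itential-platform-custom
  namespace: itential
spec:
  accessModes:
    - ReadWriteOnce                  # one dedicated volume per container
  storageClassName: itential-storage
  resources:
    requests:
      storage: 10Gi                  # the suggested starting point
```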
Use layered containers
To layer adapters into the container:
- Clone and compile each adapter:
```bash
cd <build-directory>
mkdir adapters

# Clone each adapter from Itential's open source repositories
git clone git@gitlab.com:itentialopensource/adapters/adapter-email.git adapters/adapter-email
git clone git@gitlab.com:itentialopensource/adapters/adapter-servicenow.git adapters/adapter-servicenow
git clone git@gitlab.com:itentialopensource/adapters/adapter-panorama.git adapters/adapter-panorama

# Compile each of the adapters
npm install --prefix adapters/adapter-email
npm install --prefix adapters/adapter-servicenow
npm install --prefix adapters/adapter-panorama
```
- Create a Dockerfile that uses the Itential image as a base:
```dockerfile
FROM <the-base-itential-image>

# Switch to root user
USER root

# Create custom adapters directory
RUN mkdir -p /opt/itential/platform/services/custom

# Copy each adapter into the container to the correct location
COPY adapters/adapter-email /opt/itential/platform/services/custom/adapter-email
COPY adapters/adapter-panorama /opt/itential/platform/services/custom/adapter-panorama
COPY adapters/adapter-servicenow /opt/itential/platform/services/custom/adapter-servicenow

# Set the appropriate ownership for the adapters
RUN chown -R itential:itential /opt/itential/platform/services/custom

# Set the appropriate permissions for the adapters
RUN chmod -R 775 /opt/itential/platform/services/custom

# Switch back to the itential user
USER itential
```
- Build the image with your adapters included.
This method increases container size as you add more adapters. Consider using a CI/CD pipeline (such as GitHub Actions, GitLab CI/CD, or Jenkins) to automate adapter retrieval, compilation, and container builds.
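As one possible sketch of such a pipeline, assuming GitLab CI/CD with a Docker-capable runner; the job names are hypothetical, only one adapter is shown, and the Dockerfile from the previous step is assumed to sit at the repository root:

```yaml
# Hypothetical .gitlab-ci.yml; the registry variables are GitLab's
# predefined CI variables, everything else is a placeholder.
stages:
  - compile
  - build

compile-adapters:
  stage: compile
  image: node:20
  script:
    # Fetch and compile the adapters (one shown for brevity)
    - mkdir adapters
    - git clone https://gitlab.com/itentialopensource/adapters/adapter-email.git adapters/adapter-email
    - npm install --prefix adapters/adapter-email
  artifacts:
    paths:
      - adapters/

build-image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    # Build the layered image and push it to the project registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```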
Validated providers
You can deploy Itential on the following Kubernetes providers:
- Amazon Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
Contact Itential for information about additional providers.
Amazon EKS
Amazon EKS manages the Kubernetes control plane. You maintain only the VMs needed to run pods.
Choose an ingress controller
You can use either of these ingress controllers with Amazon EKS:
- AWS Load Balancer controller (recommended): Integrates directly with AWS Application Load Balancers (ALB). Provides native AWS integration with features like AWS WAF, AWS Certificate Manager, and VPC networking.
- NGINX Ingress controller: Sits behind an AWS Network Load Balancer or Classic Load Balancer.
AWS Load Balancer Controller configuration
The following example shows common Ingress annotations for the AWS Load Balancer Controller. Adjust these values based on your requirements:
```yaml
annotations:
  alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
  alb.ingress.kubernetes.io/healthcheck-path: "/health/status?exclude-service=true"
  alb.ingress.kubernetes.io/healthcheck-port: "3443"
  alb.ingress.kubernetes.io/healthcheck-protocol: "HTTPS"
  alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
  alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
  alb.ingress.kubernetes.io/healthy-threshold-count: "2"
  # Include port 8080 if using Gateway Manager with IAG5, otherwise remove it
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443},{"HTTPS": 8080}]'
  alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60
  alb.ingress.kubernetes.io/load-balancer-name: "itential-iap-lb"
  alb.ingress.kubernetes.io/scheme: "internet-facing"
  alb.ingress.kubernetes.io/success-codes: "200"
  alb.ingress.kubernetes.io/target-type: "ip"
  alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
  alb.ingress.kubernetes.io/websocket-paths: "/ws"
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=3600
```
These annotations configure the AWS Application Load Balancer to:
- Use HTTPS for backend communication
- Check application health every 15 seconds
- Enable WebSocket support for real-time features (for IAG5)
- Maintain session stickiness for 1 hour
Modify timeouts, health check intervals, and other parameters to match your deployment requirements.
NGINX Ingress controller configuration
The following example shows common Ingress annotations for the NGINX Ingress controller. Adjust these values based on your requirements:
```yaml
ingress:
  annotations:
    # Backend protocol
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Health check configuration
    nginx.ingress.kubernetes.io/health-check-path: "/health/status?exclude-service=true"
    # WebSocket support
    nginx.ingress.kubernetes.io/websocket-services: "iap-service"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # SSL/TLS configuration
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Timeouts
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    # External DNS
    external-dns.alpha.kubernetes.io/hostname: itential.example.com
    external-dns.alpha.kubernetes.io/ttl: "300"
```
Storage configuration
If you use persistent volumes for adapters (as described in Add adapters and customizations), configure your StorageClass to use ebs.csi.aws.com as the provisioner.
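A minimal StorageClass sketch for EBS-backed volumes; the name and the gp3 volume type are assumptions you can adjust:

```yaml
# EBS-backed StorageClass; the name and volume type are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: itential-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer   # provision in the pod's zone
reclaimPolicy: Delete
```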
Azure AKS
Azure Kubernetes Service (AKS) manages the Kubernetes control plane. You maintain only the pods and Kubernetes components.
Ingress controller
The validated design for Azure AKS uses the NGINX Ingress controller, which sits behind an Azure Load Balancer and manages traffic routing within your cluster.
The following example shows common Ingress annotations for the NGINX Ingress controller on Azure. Adjust these values based on your requirements:
```yaml
ingress:
  annotations:
    # Backend protocol
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # Health check configuration
    nginx.ingress.kubernetes.io/health-check-path: "/health/status?exclude-service=true"
    # WebSocket support
    nginx.ingress.kubernetes.io/websocket-services: "iap-service"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # SSL/TLS configuration
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Timeouts
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    # External DNS
    external-dns.alpha.kubernetes.io/hostname: itential.example.com
    external-dns.alpha.kubernetes.io/ttl: "300"
    # Azure specific - use Azure Load Balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"
```
These annotations configure the NGINX ingress controller to:
- Use HTTPS for backend communication
- Check application health
- Enable WebSocket support with extended timeouts (1 hour)
- Force SSL redirects for security
- Use an internet-facing Azure Load Balancer
Modify timeouts, SSL settings, and load balancer type to match your deployment requirements.
Storage configuration
If you use persistent volumes for adapters (as described in Add adapters and customizations), configure your StorageClass to use disk.csi.azure.com as the provisioner.
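A minimal StorageClass sketch for Azure Disk-backed volumes; the name and SKU are assumptions you can adjust:

```yaml
# Azure Disk-backed StorageClass; the name and SKU are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: itential-storage
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
volumeBindingMode: WaitForFirstConsumer   # provision in the pod's zone
reclaimPolicy: Delete
```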