Deploy with Kubernetes

Itential provides Helm charts for deploying Platform on Kubernetes. This page covers prerequisites, node sizing, secrets management, Helm chart architecture, adapter delivery, and validated configurations for Amazon EKS and Azure AKS.

Itential Helm charts install Platform and IAG only; they do not install their dependencies. You must provide MongoDB and Redis separately. Neither Memcached nor MySQL can substitute for these dependencies.

Prerequisites

Cluster requirements

  • Kubernetes 1.31 or later
  • A cluster on a validated provider that meets the sizing recommendations
  • Administrative access to the cluster, or access to the namespace where Itential applications will run

Required tools

Tool           Version            Notes
Helm           3.15.0 or later    Required to use Itential Helm charts
cert-manager                      Manages TLS certificates; see the cert-manager project
ExternalDNS                       Optional but recommended; automates DNS entry creation for exposed services; see the ExternalDNS project
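If cert-manager is not yet present in the cluster, a typical installation uses Helm. The commands below are a sketch, not an Itential requirement; the `crds.enabled` option applies to recent cert-manager chart versions, so verify against the cert-manager documentation for the version you install.

```shell
# Install cert-manager into its own namespace, including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```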

Size your nodes and distribute pods

Platform requires significant resources to run production automations. Requirements vary based on automation rate, complexity, and the number of adapters.

Development environments often have similar resource needs to production because engineers experiment with real automations in development. Avoid undersizing development environments.

If you are starting fresh with Itential, begin with minimum specifications and scale as needed. If you are an existing Itential customer migrating to Kubernetes, match your current resource specifications.

Minimum specifications

Resource       Specification
CPU            4 cores
RAM            16 GB
AWS EC2 type   m5a.xlarge
Azure type     Standard_D4as_v5

Production specifications

Resource       Specification
CPU            16 cores
RAM            32 GB
AWS EC2 type   c6a.4xlarge
Azure type     Standard_F16as_v5

The instance types above are starting points. As production usage reveals your automation patterns, you may shift to memory-optimized (r5a) or general-purpose (m5a) instance types on AWS.

Pod distribution

Schedule StatefulSet containers on dedicated nodes in separate availability zones, with one Platform pod per node. This provides continuity if an availability zone experiences an outage.
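One way to express this placement uses standard Kubernetes scheduling fields. The sketch below assumes the chart exposes these fields in its values and that Platform pods carry an `app: itential-platform` label — both assumptions; use the labels and values keys your chart actually defines.

```yaml
# Sketch: at most one Platform pod per node, spread across availability zones
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: itential-platform        # hypothetical pod label
        topologyKey: kubernetes.io/hostname   # one pod per node
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone  # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: itential-platform
```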

Manage secrets

Manage secrets outside Kubernetes and inject them at deploy time using your preferred secrets management method. Three types of secrets are required before deploying.

Image pull secrets

Create an image pull secret to authenticate with your container registry. For AWS deployments, Platform images are hosted in Amazon ECR — configure credentials for an ECR-enabled account. This secret must exist in the cluster before deploying Itential applications.
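For ECR, the pull secret can be created with kubectl. The secret name, namespace, account ID, and region below are placeholders; note that ECR authorization tokens expire after 12 hours, so production clusters should refresh this secret automatically.

```shell
# Create a docker-registry secret using a freshly issued ECR token
kubectl create secret docker-registry itential-pull-secret \
  --namespace itential \
  --docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <region>)"
```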

TLS secrets

If TLS is enabled, create a secret named itential-ca containing the TLS Certificate Authority (CA) certificate. This CA is used to generate all other TLS certificates used by Itential applications. This secret must exist in the cluster before deploying with TLS enabled.
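Assuming you already have a CA certificate and key on disk, the secret might be created as a TLS-type secret, which stores the pair under the `tls.crt` and `tls.key` keys cert-manager's CA issuer expects. Confirm the exact layout required by the Itential Helm charts documentation.

```shell
# Create the CA secret from an existing certificate/key pair
kubectl create secret tls itential-ca \
  --namespace itential \
  --cert=ca.crt \
  --key=ca.key
```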

Application secrets

Each Itential application (Platform, IAG4, IAG5) requires application-specific secrets including encryption keys, database passwords, and other sensitive configuration values. These secrets must exist in the cluster before deploying. For the required structure of each application’s secrets object, refer to the Itential Helm charts documentation.
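As an illustration only, such a secret could be assembled from literals. Every key name below is hypothetical; the authoritative structure for each application's secrets object is defined in the Itential Helm charts documentation.

```shell
# Hypothetical key names -- replace with the structure the charts require
kubectl create secret generic itential-platform-secrets \
  --namespace itential \
  --from-literal=encryptionKey='<encryption-key>' \
  --from-literal=mongoPassword='<mongo-password>' \
  --from-literal=redisPassword='<redis-password>'
```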

Helm chart architecture

Itential provides Helm charts for three products:

  • Itential Platform
  • Itential Automation Gateway version 4 (IAG4)
  • Itential Automation Gateway version 5 (IAG5)
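Deployment then follows the usual Helm workflow. The repository URL, chart name, and release name below are placeholders; take the real values from the Itential Helm charts documentation.

```shell
# Add the Itential chart repository and deploy Platform with your values file
helm repo add itential <itential-chart-repository-url>
helm repo update
helm install itential-platform itential/<platform-chart-name> \
  --namespace itential --create-namespace \
  --values values.yaml
```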

The sections below describe the components of the Itential Kubernetes environment.

Ingress

The Ingress provides external cluster access and supports a static hostname. It contains load balancer configuration and routing rules. ExternalDNS synchronizes exposed Services and Ingresses with your DNS provider automatically. While not required, it removes the need to manage DNS records manually.

Services

Two services are created per application:

  • A standard service that routes traffic to all pods
  • A headless service that routes traffic to individual pods directly (required for certain features)

StatefulSet

Platform runs as a StatefulSet to maintain predictable pod names and persistent volume mounts across restarts.

Certificates and issuer

When TLS is enabled (useTLS: true in Helm values), the chart uses a CA stored in your cluster to generate certificates through a Kubernetes Issuer. The issued certificates are stored in a Secret that pods mount at runtime. Adding the CA to the cluster is outside the scope of this guide.
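With cert-manager installed, the pattern described above corresponds to a CA Issuer that references the cluster-stored CA. The sketch below assumes the issuer name and namespace; the referenced secret is the itential-ca secret described earlier.

```yaml
# cert-manager CA Issuer backed by the itential-ca secret
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: itential-ca-issuer    # hypothetical name
  namespace: itential
spec:
  ca:
    secretName: itential-ca
```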

If cert-manager is unavailable, add TLS certificates manually in a secret named <Chart.name>-tls-secret with these keys:

Key      Description
tls.crt  The TLS certificate for this server
tls.key  The private key for this certificate
ca.crt   The CA certificate used to generate the certificate and key
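Assuming the chart name is itential-platform and the certificate files exist locally under the names shown, the manual secret could be created like this:

```shell
# Build the <Chart.name>-tls-secret with the three expected keys
kubectl create secret generic itential-platform-tls-secret \
  --namespace itential \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```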

StorageClass

If you use persistent volumes for adapters, a StorageClass is required to provision storage automatically. It defines the storage type to create (such as AWS EBS or Azure Managed Disks) and creates disk volumes when pods request storage.

Do not use EFS or NFS volumes. These volume types introduce application latency.

Start with 10 GB of disk space and adjust based on the number of adapters deployed.

Add adapters and customizations

Adapters can be delivered to your Kubernetes deployment using two methods: persistent volumes or layered containers.

Choose a method

Use persistent volumes if you:

  • Update adapters frequently
  • Want to update adapters without rebuilding container images
  • Prefer to manage adapter lifecycle separately from container deployments

Use layered containers if you:

  • Need portable, self-contained deployments
  • Update adapters infrequently
  • Want adapter versions tied to specific container builds

Persistent volumes

Store adapters on a persistent disk volume that is mounted into the container at runtime.

  1. Add a persistent volume accessible at /opt/itential/platform/services/custom.
  2. Place your adapter directories in that location.

When you update files on the volume, restart the affected adapter from the Itential Platform UI or API — a full container restart is not required.

Use dedicated volumes per container. Shared volumes reduce performance.
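A claim for such a volume might look like the following sketch. The claim name and StorageClass name are assumptions, and the 10Gi request matches the starting size recommended earlier.

```yaml
# PersistentVolumeClaim for the adapter directory
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: itential-adapters
  namespace: itential
spec:
  accessModes:
    - ReadWriteOnce            # dedicated volume per container
  storageClassName: itential-adapters   # assumed StorageClass name
  resources:
    requests:
      storage: 10Gi
```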

Layered containers

Build adapters directly into the container image.

  1. Clone and compile each adapter

     $ mkdir -p <build-directory>/adapters
     $ cd <build-directory>/adapters

     $ git clone git@gitlab.com:itentialopensource/adapters/adapter-email.git
     $ git clone git@gitlab.com:itentialopensource/adapters/adapter-servicenow.git
     $ git clone git@gitlab.com:itentialopensource/adapters/adapter-panorama.git

     # Install each adapter's dependencies inside its cloned directory so
     # node_modules is included when the directory is copied into the image
     $ (cd adapter-email && npm install)
     $ (cd adapter-servicenow && npm install)
     $ (cd adapter-panorama && npm install)
  2. Create a Dockerfile using the Itential image as the base

     FROM <itential-base-image>

     USER root

     RUN mkdir -p /opt/itential/platform/services/custom

     COPY adapters/adapter-email /opt/itential/platform/services/custom/adapter-email
     COPY adapters/adapter-panorama /opt/itential/platform/services/custom/adapter-panorama
     COPY adapters/adapter-servicenow /opt/itential/platform/services/custom/adapter-servicenow

     RUN chown -R itential:itential /opt/itential/platform/services/custom
     RUN chmod -R 775 /opt/itential/platform/services/custom

     USER itential
  3. Build the image

Build the image using your standard Docker build process. Consider using a CI/CD pipeline (GitHub Actions, GitLab CI/CD, or Jenkins) to automate adapter retrieval, compilation, and container builds as your adapter set grows.
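A minimal build-and-push sketch, with the registry and tag as placeholders:

```shell
# Build from the directory containing the Dockerfile and adapters/, then push
docker build -t <registry>/itential-platform-custom:<tag> <build-directory>
docker push <registry>/itential-platform-custom:<tag>
```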

Validated providers

Itential has validated Kubernetes deployments on the following providers:

  • Amazon Elastic Kubernetes Service (EKS)
  • Azure Kubernetes Service (AKS)

Contact Itential for information about additional providers.

Amazon EKS

Amazon EKS manages the Kubernetes control plane. You maintain only the VMs needed to run pods.

Ingress controller

Two ingress controllers are supported on EKS:

  • AWS Load Balancer Controller (recommended): Integrates directly with AWS Application Load Balancers. Provides native AWS integration with features like AWS WAF, AWS Certificate Manager, and VPC networking.
  • NGINX Ingress Controller: Sits behind an AWS Network Load Balancer or Classic Load Balancer.

AWS Load Balancer Controller — example annotations

annotations:
  alb.ingress.kubernetes.io/backend-protocol: "HTTPS"
  alb.ingress.kubernetes.io/healthcheck-path: "/health/status?exclude-service=true"
  alb.ingress.kubernetes.io/healthcheck-port: "3443"
  alb.ingress.kubernetes.io/healthcheck-protocol: "HTTPS"
  alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
  alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
  alb.ingress.kubernetes.io/healthy-threshold-count: "2"
  # Include port 8080 if using Gateway Manager with IAG5, otherwise remove it
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443},{"HTTPS": 8080}]'
  alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60
  alb.ingress.kubernetes.io/load-balancer-name: "itential-iap-lb"
  alb.ingress.kubernetes.io/scheme: "internet-facing"
  alb.ingress.kubernetes.io/success-codes: "200"
  alb.ingress.kubernetes.io/target-type: "ip"
  alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
  alb.ingress.kubernetes.io/websocket-paths: "/ws"
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=3600

These annotations configure the ALB to use HTTPS for backend traffic, check application health every 15 seconds, enable WebSocket support, and maintain session stickiness for one hour. Adjust timeouts, health check intervals, and other values to match your deployment.

NGINX Ingress Controller — example annotations (EKS)

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/health-check-path: "/health/status?exclude-service=true"
    nginx.ingress.kubernetes.io/websocket-services: "iap-service"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    external-dns.alpha.kubernetes.io/hostname: itential.example.com
    external-dns.alpha.kubernetes.io/ttl: "300"

Storage configuration

If using persistent volumes for adapters, configure your StorageClass to use ebs.csi.aws.com as the provisioner.
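A StorageClass for EBS-backed volumes might look like the following sketch; the class name and gp3 volume type are assumptions.

```yaml
# StorageClass provisioning EBS volumes via the AWS EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: itential-adapters
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                               # assumed EBS volume type
volumeBindingMode: WaitForFirstConsumer   # bind in the pod's availability zone
```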

Azure AKS

Azure AKS manages the Kubernetes control plane. You maintain only the pods and Kubernetes components.

Ingress controller

The validated design for AKS uses the NGINX Ingress Controller behind an Azure Load Balancer.

NGINX Ingress Controller — example annotations (AKS)

ingress:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/health-check-path: "/health/status?exclude-service=true"
    nginx.ingress.kubernetes.io/websocket-services: "iap-service"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    external-dns.alpha.kubernetes.io/hostname: itential.example.com
    external-dns.alpha.kubernetes.io/ttl: "300"
    # Use an internet-facing Azure Load Balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "false"

These annotations configure NGINX to use HTTPS for backend traffic, enable WebSocket support with one-hour timeouts, force SSL redirects, and attach an internet-facing Azure Load Balancer. Adjust SSL settings, timeouts, and load balancer type to match your deployment.

Storage configuration

If using persistent volumes for adapters, configure your StorageClass to use disk.csi.azure.com as the provisioner.
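A StorageClass for Azure Managed Disks might look like the following sketch; the class name and Premium_LRS SKU are assumptions.

```yaml
# StorageClass provisioning managed disks via the Azure Disk CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: itential-adapters
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS                    # assumed managed disk SKU
volumeBindingMode: WaitForFirstConsumer   # bind in the pod's availability zone
```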