Execution engine
Gateway clusters are the execution engine of the Itential Platform — the layer where automation actually runs. When a workflow task requires code to execute, a gateway cluster handles it: building the environment, running the code, and returning the result to Platform. This model keeps execution close to the infrastructure it touches, under the governance of the teams that own it.
What is a gateway cluster?
A gateway cluster is one or more gateway servers and runner nodes that share the same automation resources and appear as a single, unified execution environment to Itential Platform. When Platform routes a task to a gateway cluster, it connects to the cluster as a whole — not to individual nodes. You can deploy a single cluster or multiple independent clusters to separate environments by geography, network segment, or organizational unit.
For more information on cluster architecture and deployment models, see Choose a deployment architecture.
What gateway clusters execute
Gateway clusters support multiple types of execution:
- Gateway services — packaged automations (Python scripts, Ansible playbooks, OpenTofu plans) sourced from Git and registered as named services. Invoked from a workflow using the runService task. For more information, see Add gateway services to workflows.
- Inline code — Python code written directly on the workflow canvas and executed on a configured gateway cluster without a Git repository or service registration. Invoked using the runCode task. For more information, see runCode task.
- Integration requests — outbound HTTP calls from integrations routed through a gateway runner node rather than directly from Platform. This enables network-proximate execution and explicit proxy support for environments where outbound traffic must pass through a corporate proxy.
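To make the inline-code type concrete: a runCode snippet is ordinary Python. The function below is a hypothetical example of the kind of logic a runCode task might execute on a runner node; the function name, input shape, and data are invented for illustration.

```python
# Hypothetical inline-code example; the function name, input shape,
# and device facts below are illustrative only.

def check_interfaces(device_facts):
    """Return the names of interfaces that are administratively down."""
    return [
        iface["name"]
        for iface in device_facts.get("interfaces", [])
        if iface.get("admin_state") == "down"
    ]

facts = {
    "interfaces": [
        {"name": "eth0", "admin_state": "up"},
        {"name": "eth1", "admin_state": "down"},
    ]
}
print(check_interfaces(facts))  # → ['eth1']
```

The same code could instead be committed to Git and registered as a gateway service; the difference is packaging and registration, not the execution infrastructure.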
All three execution types run through the same runner infrastructure, under the same audit logging and access controls. The type of work differs; the governance model does not.
Default cluster
When multiple gateway clusters are registered in Gateway Manager, Platform needs to know which cluster to route execution to. The default cluster is the gateway cluster designated in Admin Essentials as the fallback for all gateway-dependent features that don’t have their own cluster configured.
Individual features — such as integrations — can override the default and use a different cluster. The default applies wherever no feature-specific cluster is set.
The default cluster setting does not provide failover. If the designated cluster becomes unavailable, Platform returns an error rather than routing to a different cluster. Before switching the default designation from one cluster to another, confirm that both expose the same services and perform the same function.
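The routing rule described above amounts to a simple lookup: a feature-specific cluster wins if configured, otherwise the default applies, and an unavailable cluster is an error rather than a reason to fail over. The sketch below illustrates that logic; the function and cluster names are invented and this is not Platform's actual implementation.

```python
# Illustrative sketch of default-cluster resolution; not Platform's actual code.
# feature_overrides maps a feature name to its configured cluster, if any.

def resolve_cluster(feature, feature_overrides, default_cluster, available):
    cluster = feature_overrides.get(feature, default_cluster)
    if cluster not in available:
        # No failover: an unavailable cluster is an error, not a retry elsewhere.
        raise RuntimeError(f"gateway cluster '{cluster}' is unavailable")
    return cluster

overrides = {"integrations": "dmz-cluster"}
clusters = {"dmz-cluster", "core-cluster"}
print(resolve_cluster("integrations", overrides, "core-cluster", clusters))  # → dmz-cluster
print(resolve_cluster("runCode", overrides, "core-cluster", clusters))       # → core-cluster
```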
For more information on configuring the default cluster and feature-level cluster overrides, see Gateway configuration.
Governed execution
Every execution that runs through a gateway cluster is traceable. The cluster logs which node handled the request, which credentials were used, and what ran and when. This applies regardless of whether the work came from a gateway service, an inline runCode task, or an integration request.
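As an illustration of the traceability described above, an audit record for a single execution might carry fields like the following. The field names and values here are hypothetical, not the cluster's actual log schema.

```python
# Hypothetical shape of an execution audit record; all field names
# and values are illustrative, not the actual log schema.
audit_record = {
    "cluster": "us-east-cluster",
    "node": "runner-02",            # which node handled the request
    "credential": "svc-automation", # which credentials were used
    "execution_type": "runCode",    # gateway service, runCode, or integration
    "started_at": "2025-06-01T14:03:22Z",
    "finished_at": "2025-06-01T14:03:25Z",
    "status": "success",
}
print(audit_record["node"])  # → runner-02
```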
Network-proximate execution
Gateway clusters can be deployed close to the infrastructure they interact with — in a specific network segment, geographic region, or behind a corporate firewall. Execution happens where it needs to happen, under the access policies of the teams that own it.
For integration requests, this means outbound API calls originate from the gateway runner node rather than from Platform, so they pass through your existing network controls without requiring changes to firewall rules or routing.
If your network requires outbound traffic to pass through a proxy server, you can configure proxy settings per integration in Admin Essentials. For more information, see Gateway configuration.
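Conceptually, a proxied outbound call works like the standard proxied HTTP client below. This is only a sketch of the mechanism: the proxy address is a placeholder, and in Itential the proxy settings are configured per integration in Admin Essentials rather than written in code.

```python
import urllib.request

# Illustrative only: the proxy address is a placeholder. In Itential,
# proxy settings are configured per integration in Admin Essentials.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:3128",
    "https": "http://proxy.example.com:3128",
})
opener = urllib.request.build_opener(proxy)

# Every request made through this opener is sent via the configured proxy:
# opener.open("https://api.example.com/v1/status")
```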
Execution environments
When a gateway cluster runs a python-script service, an ansible-playbook service, or a runCode task, it builds a clean, isolated virtual environment on the runner containing the exact dependencies required for that execution. Environments are cached and reused when the same dependency set is requested again, and rebuilt only when requirements change.
For more information, see Manage virtual environments.
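The cache-and-reuse behavior can be pictured as keying environments by their dependency set, so that identical requirements map to the same environment regardless of declaration order. The sketch below illustrates the idea only; it is not the runner's actual implementation.

```python
import hashlib

# Illustrative sketch of dependency-keyed environment caching;
# not the actual runner implementation.
_env_cache = {}

def env_key(requirements):
    """A stable key for a dependency set: declaration order does not matter."""
    canonical = "\n".join(sorted(requirements))
    return hashlib.sha256(canonical.encode()).hexdigest()

def get_environment(requirements):
    """Reuse a cached environment for this exact dependency set, else 'build' one."""
    key = env_key(requirements)
    if key not in _env_cache:
        _env_cache[key] = f"venv-{key[:8]}"  # stand-in for building a real venv
    return _env_cache[key]

a = get_environment(["requests==2.32.0", "jinja2==3.1.4"])
b = get_environment(["jinja2==3.1.4", "requests==2.32.0"])  # same set, reused
print(a == b)  # → True
```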