runCode
The runCode task executes a Python script directly in a workflow. Use it to replace combinations of existing canvas tasks, such as chains of JSON Schema Transformations (JSTs), query tasks, and merge tasks, with a single, readable Python script.
The task uses Itential Automation Gateway (IAG) to create a virtual Python environment and execute the script.
Prerequisites
Before using the runCode task, ensure the following:
- Register an active IAG 5.4.0 or later cluster with Itential Platform using Gateway Manager. For information, see Create cluster in Gateway Manager.
- Ensure the Python version on the IAG 5 host is compatible with your scripts and packages. The task uses the default Python interpreter on the host, which is the version invoked by python or python3 on the iagctl command line. IAG does not enforce a minimum version. To check, run python --version on the IAG 5 host.
- Ensure the gateway:code role from Gateway Manager is assigned to your group. Contact your administrator to have this role assigned.
Potential use cases
Use this task when existing canvas tasks become unwieldy for the logic you need to express. Common scenarios include:
- Replacing a chain of JST tasks that extract, filter, and aggregate API response data
- Performing data aggregation, calculation, or text processing on task outputs
- Reshaping nested JSON structures for consumption by downstream tasks
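As an illustration of the first scenario, a script along the following lines could replace an extract/filter/aggregate chain of JST tasks. This is a sketch only; the response shape and field names (devices, os, status) are hypothetical, not part of any real API.

```python
import json
from collections import Counter

# Hypothetical API response, as an upstream query task might return it.
response = {
    "devices": [
        {"name": "edge-1", "os": "ios-xe", "status": "active"},
        {"name": "edge-2", "os": "ios-xe", "status": "down"},
        {"name": "core-1", "os": "nx-os", "status": "active"},
    ]
}

# Extract, filter, and aggregate in one pass instead of three canvas tasks.
active = [d for d in response["devices"] if d["status"] == "active"]
summary = {
    "active_count": len(active),
    "by_os": dict(Counter(d["os"] for d in active)),
}

# Print JSON to stdout so the parsed result is available downstream.
print(json.dumps(summary))
```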
Task properties
Incoming
Outgoing
The task returns all output fields as properties of the result outgoing variable.
Error
This field appears in the task response only when result.status is error.
Use runCode in a workflow
The following sections walk through configuring the task, mapping inputs from upstream tasks, and referencing outputs in downstream tasks.
Configure the task
The runCode task is configured across two surfaces: the task panel and the code editor. See Configure and manage tasks for the general task configuration pattern.
To test before saving, click Run Code. The result appears in the Outputs tab. To test with specific input values, add key:value pairs in the Inputs tab. These values are saved across sessions but do not affect workflow execution.
Configure data inputs
Configure the keys and their sources in the task’s Data tab.
For example, to pass the output of an upstream getDevices task into your script as devices, add a key named devices and set Previous Task to getDevices with the appropriate reference variable.
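A sketch of what the script side of that mapping might look like. How the platform delivers the mapped value to the script is not described here; this example simply assumes the devices key arrives as a Python list of dicts matching the upstream getDevices output, and both the variable name and the fields are hypothetical.

```python
import json

# Hypothetical: the `devices` key configured in the Data tab, populated
# from the upstream getDevices task output.
devices = [
    {"name": "edge-1", "status": "active"},
    {"name": "edge-2", "status": "down"},
]

# Work with the mapped input like any ordinary Python value.
names = [d["name"] for d in devices]
print(json.dumps({"names": names}))
```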
Reference runCode output in downstream tasks
To use the runCode output in a downstream task, configure that task’s input with:
- Previous Task: the runCode task
- Task Variable: result.stdout_json for the parsed JSON result, or any other result field
If you need a specific nested value from result.stdout_json, use a task query to extract it directly on the input field without adding a query task to the canvas. See Task query for details.
Errors and timeout
The task catches unhandled Python exceptions and captures the traceback in result.stderr. The task still completes and the workflow does not follow the error transition.
The Timeout (safety.timeout) stops execution if exceeded. When the timeout is exceeded, result.status becomes error and result.error contains the Platform-specific failure reason. The timeout applies to your code only, not to package installation.
If the task cannot run on the gateway (for example, if the gateway is not enabled or loses connectivity), the task itself fails and no result output is produced. The error is visible in the task’s Error tab in Operations Manager.
Size limit
Code is limited to 2 MB. You cannot save code that exceeds this limit. The limit applies to code text only. Installed packages do not count toward it.
Package caching
The first time you run a task with a new set of packages, the runner installs them into a virtual environment. Subsequent runs reuse the cached environment, so there’s no install overhead. Any change to the packages list triggers a fresh install.
Example: filter a device list by status
This example filters a list of devices to return only those with an active status. It replaces the equivalent JST filter operation with a single readable script.
Data inputs
Configure one key in the data object in the Inputs tab:
Sample input
Script
Output
Downstream tasks access the result as result.stdout_json.
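The sample input, script, and output for this example are not reproduced above. A minimal sketch of how such a script might look, assuming the devices input is a list of objects that each carry a status field (the device names and status values are illustrative):

```python
import json

# Hypothetical sample input mapped to the `devices` key in the Data tab.
devices = [
    {"name": "edge-1", "status": "active"},
    {"name": "edge-2", "status": "decommissioned"},
    {"name": "core-1", "status": "active"},
]

# Keep only active devices, replacing the equivalent JST filter operation.
active = [d for d in devices if d["status"] == "active"]

# Print JSON so downstream tasks can read it from result.stdout_json.
print(json.dumps({"devices": active}))
```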
Troubleshoot
Use the following to troubleshoot errors and issues.
Execution status vs. exit code
result.status and result.return_code communicate different types of failure:
- status: completed with return_code: 0 means the script ran and exited cleanly.
- status: completed with a non-zero return_code means the script ran but exited with a failure code. Your code executed; the script itself signaled failure.
- status: error indicates a Platform-level failure, such as a timeout. The script may not have run, or it stopped mid-execution. Check result.error for details.

Gateway connectivity failures produce a different outcome: the task itself errors and no result fields are populated, because a task failure is distinct from a task error. See Errors and timeout.
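To signal failure from the script itself (completed status, non-zero return_code), end the script with a non-zero exit code. A sketch, with a hypothetical main function and devices input; the validation logic is illustrative only:

```python
import json
import sys

def main(devices):
    """Return the exit code the task records as return_code."""
    if not devices:
        # Non-zero exit: status stays completed, return_code reports failure.
        print("no devices received", file=sys.stderr)
        return 1
    print(json.dumps({"count": len(devices)}))
    return 0

# In the actual script, the last line would be: sys.exit(main(devices))
```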
stdout_json absent after a successful run
If result.stdout_json does not appear after a successful execution (return_code: 0), the most common causes are:
- stdout contained non-JSON content, a mix of JSON and other output, or no output
- A type in the output object was not JSON-serializable, causing json.dumps() to raise a TypeError and write a traceback to stderr instead of JSON to stdout
Inspect result.stdout and result.stderr directly to determine which applies.
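The serialization failure mode can be reproduced in plain Python. In this sketch (the vlans field is a made-up example), a set triggers the TypeError that json.dumps raises for non-serializable types, and converting it to a list beforehand avoids the problem:

```python
import json

# A set is not JSON-serializable; in a runCode script this would put a
# traceback in stderr and leave stdout_json unpopulated.
output = {"vlans": {10, 20}}
try:
    print(json.dumps(output))
except TypeError:
    # Convert to a serializable type before dumping.
    output["vlans"] = sorted(output["vlans"])
    print(json.dumps(output))
```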