Python Node
Execute Python code for data processing or complex calculations in your workflows. Python nodes provide a powerful sandbox environment with comprehensive data manipulation capabilities using Python's built-in standard library.
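At its simplest, a Python node reads inputs from the args context and hands its output back by assigning to the result variable. A minimal sketch (the numbers variable is illustrative):
numbers = args.get('vars', {}).get('numbers', [])
# Keep only numeric entries, then double them; print() output aids debugging
doubled = [n * 2 for n in numbers if isinstance(n, (int, float))]
print(f"Doubled {len(doubled)} of {len(numbers)} values")
# Whatever 'result' holds becomes available to subsequent nodes
result = {'doubled': doubled, 'count': len(doubled)}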
Quick Navigation
- Workflow Context - Learn how to access node data, variables, and user information via args
- Built-in Libraries - Explore the Python standard library modules available in the sandbox
- Data Processing Examples - Real-world examples of data validation, calculations, and API processing
- Best Practices - Guidelines for efficient, reliable Python node development
Workflow Context Access
Python nodes have full access to the workflow execution context via the args variable, just like all other nodes. This provides complete access to node data, variables, constants, and system information.
For comprehensive context documentation, see the Workflow Context reference.
Context Structure
The Python sandbox receives the complete workflow context as the args parameter:
# args contains the full workflow context structure
print("Available context keys:", list(args.keys()))
# Complete context structure:
# {
#   'nodes': {
#     'start': {'inputs': {}, 'outputs': {}, 'metadata': {}},
#     'apiCall': {'inputs': {}, 'outputs': {}, 'metadata': {}, 'error': ''},
#     'dataProcessor': {'inputs': {}, 'outputs': {}, 'metadata': {}}
#   },
#   'vars': {...},                       # Workflow variables (mutable during execution)
#   'consts': {...},                     # Workflow constants (immutable values)
#   'workspace_id': 'uuid',              # Unique workspace identifier
#   'workspace_slug': 'workspace-name',  # Human-readable workspace name
#   'app_id': 'uuid',                    # Unique application identifier
#   'app_slug': 'app-name',              # Human-readable application name
#   'user': {
#     'id': 'uuid',                      # Unique user identifier
#     'login': 'username',               # Username or email
#     'roles': ['role1', 'role2'],       # Array of user roles
#     'attrs': {}                        # Additional user attributes
#   }
# }
Accessing Node Data
# Access inputs and outputs from any executed node
user_data = args.get('nodes', {}).get('start', {}).get('inputs', {}).get('userData', {})
api_response = args.get('nodes', {}).get('apiCall', {}).get('outputs', {}).get('data', {})
processed_data = args.get('nodes', {}).get('dataProcessor', {}).get('outputs', {}).get('result', [])
# Check for node execution errors
api_error = args.get('nodes', {}).get('apiCall', {}).get('error', '')
if api_error:
    result = {'success': False, 'error': f'API call failed: {api_error}'}
# Access detailed metadata (varies by node type)
http_status = args.get('nodes', {}).get('apiCall', {}).get('metadata', {}).get('statusCode', 0)
response_headers = args.get('nodes', {}).get('apiCall', {}).get('metadata', {}).get('response', {}).get('headers', {})
request_body = args.get('nodes', {}).get('apiCall', {}).get('metadata', {}).get('request', {}).get('body', {})
Accessing Variables and Constants
# Read workflow variables (mutable during execution)
user_settings = args.get('vars', {}).get('userSettings', {})
session_data = args.get('vars', {}).get('currentSession', {})
# Read workflow constants (immutable configuration)
api_endpoint = args.get('consts', {}).get('API_BASE_URL', '')
max_retries = args.get('consts', {}).get('MAX_RETRY_ATTEMPTS', 3)
# Python nodes are read-only for workflow context
# Results are automatically stored in workflow context for subsequent nodes
print(f"Processing session data for endpoint: {api_endpoint}")
System and User Information
# Access workspace and application context
workspace_id = args.get('workspace_id', '')
workspace_name = args.get('workspace_slug', '')
app_id = args.get('app_id', '')
app_name = args.get('app_slug', '')
# Access current user information
current_user = args.get('user', {}).get('login', '')
user_id = args.get('user', {}).get('id', '')
user_roles = args.get('user', {}).get('roles', [])
user_attributes = args.get('user', {}).get('attrs', {})
# Check user permissions
is_admin = 'admin' in user_roles
is_manager = 'manager' in user_roles
has_special_access = any(role in ['admin', 'manager', 'operator'] for role in user_roles)
print(f"User {current_user} in {workspace_name}/{app_name}")
print(f"Access levels: admin={is_admin}, manager={is_manager}")
Complete Workflow Example
import datetime
# Access data from previous nodes and workflow context
orders = args.get('vars', {}).get('orders', [])
user_roles = args.get('user', {}).get('roles') or ['user']  # guard against an empty roles list
user_role = user_roles[0]
currency_rate = args.get('consts', {}).get('USD_RATE', 1.0)
# Check whether the previous node had errors
data_source_error = args.get('nodes', {}).get('dataSource', {}).get('error', '')
if data_source_error:
    result = {
        'success': False,
        'error': f'Data source failed: {data_source_error}'
    }
else:
    # Process data with user-based filtering
    processed_orders = []
    total_value = 0
    user_can_view_all = user_role in ['admin', 'manager']
    for order in orders:
        # Non-privileged users only see their own orders
        if not user_can_view_all and order.get('userId') != args.get('user', {}).get('id'):
            continue
        # Process and convert currency
        processed_order = {
            'id': order.get('id'),
            'amount': order.get('amount', 0) * currency_rate,
            'status': order.get('status'),
            'timestamp': order.get('createdAt'),
            'processedBy': args.get('user', {}).get('login', 'system')
        }
        processed_orders.append(processed_order)
        total_value += processed_order['amount']
    # Assign the result (automatically stored in workflow context)
    result = {
        'processedOrders': processed_orders,
        'totalValue': total_value,
        'count': len(processed_orders),
        'processor': args.get('user', {}).get('login', 'system'),
        'workspace': args.get('workspace_slug', ''),
        'timestamp': datetime.datetime.now().isoformat()
    }
Built-in Libraries
Python nodes have access to Python's comprehensive standard library for powerful data processing:
- Data Processing - json, csv, collections, itertools for core data manipulation and processing
- Mathematical - math, statistics, operator for mathematical operations and statistical analysis
- Text Processing - re, urllib.parse for regular expressions and string/URL processing
- System & Time - datetime, functools for date/time handling and function utilities
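itertools and functools are the easiest of these to overlook; the sketch below shows one idiomatic use of each (the events and factors variables are illustrative):
import itertools
import functools
import operator
# itertools.groupby requires input sorted by the grouping key
events = sorted(args.get('vars', {}).get('events', []), key=lambda e: e.get('type', ''))
counts_by_type = {
    event_type: len(list(group))
    for event_type, group in itertools.groupby(events, key=lambda e: e.get('type', ''))
}
# functools.reduce with operator.mul folds a list of factors into their product
factors = args.get('vars', {}).get('factors', [1])
combined_factor = functools.reduce(operator.mul, factors, 1)
result = {'countsByType': counts_by_type, 'combinedFactor': combined_factor}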
Available Modules
# Core libraries available for data processing
import json # JSON processing
import csv # CSV file handling
import re # Regular expressions
import math # Mathematical functions
import statistics # Statistical functions
import datetime # Date and time handling
import collections # Specialized container types
import itertools # Iterator functions
import functools # Higher-order functions
import operator # Function operators
import urllib.parse # URL parsing utilities
# Example: Advanced data processing with built-in libraries
raw_data = args.get('vars', {}).get('csvData', [])
# Group data using collections
from collections import defaultdict
grouped_data = defaultdict(list)
for item in raw_data:
    category = item.get('category', 'unknown')
    grouped_data[category].append(item.get('value', 0))
# Calculate statistics for each group
summary = {}
for category, values in grouped_data.items():
    if values:
        summary[category] = {
            'count': len(values),
            'mean': statistics.mean(values),
            'median': statistics.median(values),
            'max': max(values),
            'min': min(values)
        }
result = {
    'summary_by_category': summary,
    'total_records': len(raw_data),
    'categories': list(grouped_data.keys())
}
Data Processing Examples
Error Handling
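A robust node wraps risky operations in try/except and returns structured error information instead of letting the node fail outright. A minimal sketch (the rawPayload variable is illustrative):
import json
import datetime
raw_payload = args.get('vars', {}).get('rawPayload', '')
try:
    parsed = json.loads(raw_payload)
    result = {
        'success': True,
        'data': parsed,
        'parsedAt': datetime.datetime.now().isoformat()
    }
except (json.JSONDecodeError, TypeError) as exc:
    # Return a structured error so downstream nodes can branch on 'success'
    result = {
        'success': False,
        'error': f'Failed to parse payload: {exc}',
        'data': None
    }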
Performance Best Practices
Memory Management
# For large datasets, process in chunks
def process_in_chunks(data, chunk_size=1000):
    """Process large datasets in chunks"""
    results = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # Process chunk
        processed_chunk = []
        for item in chunk:
            processed_chunk.append({
                'id': item.get('id'),
                'processed': True,
                'value': item.get('value', 0) * 2
            })
        results.extend(processed_chunk)
        # Optional: log progress
        if i % (chunk_size * 10) == 0:
            print(f"Processed {i} items...")
    return results
# Use chunked processing for large datasets
large_data = args.get('vars', {}).get('bigDataset', [])
if len(large_data) > 5000:
    processed_data = process_in_chunks(large_data)
else:
    processed_data = [{'id': item.get('id'), 'processed': True} for item in large_data]
result = {
    'processed_count': len(processed_data),
    'data': processed_data
}
Efficient Data Structures
# Use sets for membership testing
valid_ids = set(args.get('vars', {}).get('validIds', []))
items = args.get('vars', {}).get('items', [])
# Faster than checking lists
filtered_items = [item for item in items if item.get('id') in valid_ids]
# Use dictionaries for lookups
lookup_data = args.get('vars', {}).get('lookupData', [])
lookup_dict = {item['key']: item['value'] for item in lookup_data}
# Fast lookups
enriched_items = []
for item in items:
    enriched_item = item.copy()
    enriched_item['enriched_value'] = lookup_dict.get(item.get('key'), 'Unknown')
    enriched_items.append(enriched_item)
result = {
    'filtered_items': filtered_items,
    'enriched_items': enriched_items
}
Return Value Patterns
Simple Results
# Return simple count
count = len(args.get('vars', {}).get('items', []))
result = count
Structured Results
# Return structured data (recommended)
import datetime
result = {
    'status': 'success',
    'data': processed_data,
    'metadata': {
        'processed_at': datetime.datetime.now().isoformat(),
        'count': len(processed_data),
        'processor': args.get('user', {}).get('login', 'system')
    }
}
Multiple Outputs
# Return multiple named outputs
users = args.get('vars', {}).get('users', [])
# Separate different types of data
admins = [u for u in users if u.get('role') == 'admin']
regular_users = [u for u in users if u.get('role') == 'user']
result = {
    'admins': admins,
    'regularUsers': regular_users,
    'counts': {
        'total': len(users),
        'admins': len(admins),
        'users': len(regular_users)
    }
}
Common Patterns
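One pattern that recurs throughout this page is deep, defensive access into nested dictionaries. A small helper can replace the long chains of .get() calls; a hedged sketch (get_path is illustrative, not a built-in):
def get_path(data, path, default=None):
    """Walk a dot-separated path through nested dicts, returning default on any miss."""
    current = data
    for key in path.split('.'):
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current
# Equivalent to the chained .get() calls used elsewhere on this page
http_status = get_path(args, 'nodes.apiCall.metadata.statusCode', 0)
user_login = get_path(args, 'user.login', 'system')
result = {'status': http_status, 'user': user_login}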
Best Practices
- Always validate inputs - Check for required data and proper types
- Handle errors gracefully - Use try/except blocks and return structured error information
- Use built-in libraries efficiently - Leverage collections, statistics, itertools for data processing
- Return structured data - Use dictionaries for complex results
- Process large data in chunks - For memory efficiency with large datasets
- Log progress for long operations - Help with debugging and monitoring
- Use appropriate data structures - Sets for membership, dicts for lookups
- Include metadata in results - Processing time, counts, user context
The result of your Python script automatically becomes available in the workflow's variable context for use by subsequent nodes.
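For example, if this node's id were pythonProcessor (an assumed name for illustration), a subsequent node could read the stored result back out of the context; the exact key layout under outputs may vary by platform version:
# In a later node; 'pythonProcessor' is an assumed node id, not a fixed name
python_output = args.get('nodes', {}).get('pythonProcessor', {}).get('outputs', {})
processed = python_output.get('processedOrders', [])
print(f"Received {len(processed)} processed orders from the Python node")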