Execution Tools
Execute code and commands securely within isolated sandbox environments, with comprehensive package management and resource controls.
🔒 Secure Execution Environment
All execution tools run in isolated containers with resource limits, network controls, and file system sandboxing for maximum security.
Quick Navigation
Code Execution
Run Python and JavaScript code securely with built-in libraries and utilities
Command Execution
Execute shell commands and system operations within secure boundaries
Package Installer
Install and manage Python, Node.js, and system packages safely
Available Tools
| Tool | Code | Purpose | Supported Languages |
|---|---|---|---|
| Code Execution | codeExecution | Execute scripts and programs | Python, JavaScript |
| Command Execution | commandExecution | Run shell commands and system operations | Bash, Shell scripts |
| Package Installer | packageInstaller | Install dependencies and libraries | pip, npm, apt |
Security Architecture
Execution Isolation
Security Features
🛡️ Multi-Layer Security
- Container Isolation: Each execution runs in a separate container
- Resource Limits: CPU, memory, and execution time constraints
- Network Controls: Restricted outbound connectivity with policy enforcement
- File System Sandboxing: Isolated file operations with permission controls
- Output Sanitization: All outputs are validated and sanitized
- Process Monitoring: Real-time tracking of execution behavior
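The layers above can be approximated in miniature with stdlib tools. The sketch below applies per-process CPU and memory caps before running a snippet in a child process; the specific limits are illustrative, and a real sandbox adds containers, namespaces, and network policy on top of this.

```python
# Sketch: per-process resource limits around untrusted code (POSIX only).
# The 10 s / 512 MB limits are illustrative, not the service's actual policy.
import resource
import subprocess
import sys

def run_limited(code, timeout=10, memory_mb=512):
    """Run a Python snippet in a child process with CPU-time and memory caps."""
    def set_limits():
        # Applied in the child after fork, before exec
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))
        mem = memory_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=set_limits,
        capture_output=True,
        text=True,
        timeout=timeout + 1,  # wall-clock backstop on top of the CPU limit
    )

result = run_limited("print(2 + 2)")
print(result.stdout.strip())  # → 4
```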
Execution Capabilities
Code Execution Features
Command Execution Capabilities
⚠️ Command Execution Security
Command execution is heavily restricted and monitored. Only safe, pre-approved commands are allowed.
Supported Operations:
- File Operations - ls, cp, mv, rm, mkdir, chmod (restricted)
- Text Processing - grep, sed, awk, sort, uniq
- Archive Operations - tar, gzip, unzip (size limits apply)
- System Information - ps, df, free, uname (limited output)
- Network Tools - ping, curl, wget (restricted destinations)
Security Restrictions:
- Whitelist of allowed commands
- Argument validation and sanitization
- Output size limits
- Execution timeout controls
- Network destination restrictions
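The whitelist and sanitization steps above can be sketched as a single validation function. The allowed-command set and rejected characters here are illustrative, not the service's actual policy.

```python
# Sketch of whitelist-based command validation, as described above.
# ALLOWED_COMMANDS and FORBIDDEN_CHARS are illustrative examples.
import shlex

ALLOWED_COMMANDS = {"ls", "grep", "sed", "awk", "sort", "uniq", "ps", "df"}
FORBIDDEN_CHARS = set(";|&`$><")

def validate_command(command_line):
    """Return the parsed argv if the command passes the whitelist, else raise."""
    if FORBIDDEN_CHARS & set(command_line):
        raise ValueError("shell metacharacters are not allowed")
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command not in whitelist: {argv[0] if argv else ''}")
    return argv

print(validate_command("grep -i error app.log"))  # → ['grep', '-i', 'error', 'app.log']
```

A validated argv list can then be passed to `subprocess.run` without `shell=True`, so no shell ever interprets the input.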
Package Management
Performance & Limits
Resource Constraints
| Resource | Limit | Description |
|---|---|---|
| CPU Usage | 2 cores | Maximum CPU allocation per execution |
| Memory | 4GB | RAM limit for code execution |
| Execution Time | 10 minutes | Maximum runtime before timeout |
| Disk Space | 2GB | Storage limit for files and packages |
| Network Bandwidth | 100MB/min | Download/upload rate limiting |
| File Operations | 1000/min | Maximum file I/O operations |
Performance Optimization
💡 Performance Tips
- Batch Operations: Combine multiple operations to reduce overhead
- Memory Management: Clean up variables and close files explicitly
- Efficient Libraries: Use optimized libraries (numpy, pandas) for data processing
- Output Streaming: Process large datasets in chunks rather than loading entirely
- Resource Monitoring: Check resource usage within scripts to avoid limits
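For the last tip, a script can check its own consumption mid-run and back off before hitting the sandbox limits. This minimal stdlib sketch reads the process's own usage counters; it assumes a POSIX platform.

```python
# Sketch: self-monitoring CPU and memory use from inside a script (POSIX).
import resource

def usage_snapshot():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_seconds": ru.ru_utime + ru.ru_stime,
        # ru_maxrss is kilobytes on Linux, bytes on macOS
        "peak_rss_mb": ru.ru_maxrss / 1024,
    }

snap = usage_snapshot()
print(f"CPU used: {snap['cpu_seconds']:.2f}s, peak RSS: {snap['peak_rss_mb']:.1f} MB")
```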
Common Patterns
Data Processing Workflow
```python
# Efficient data processing pattern
import pandas as pd
import numpy as np

# Read data in chunks for large files
chunk_size = 10000
processed_data = []

for chunk in pd.read_csv('large_file.csv', chunksize=chunk_size):
    # Process each chunk
    chunk_processed = chunk.groupby('category').agg({
        'value': 'sum',
        'count': 'size'
    })
    processed_data.append(chunk_processed)

# Combine results
final_result = pd.concat(processed_data).groupby(level=0).sum()
final_result.to_csv('output.csv')
```
Package Installation Pattern
```python
# Install required packages at runtime
import subprocess
import sys

def install_package(package):
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', package])

# Install packages as needed
required_packages = ['scikit-learn', 'seaborn']
for package in required_packages:
    install_package(package)

# Now use the packages
import sklearn
import seaborn as sns
```
Error Handling Pattern
```python
import logging
import traceback

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

try:
    # Your code here
    result = process_data()
    logger.info(f"Processing completed successfully: {len(result)} records")
except MemoryError:
    logger.error("Memory limit exceeded. Try processing in smaller chunks.")
    raise
except TimeoutError:
    logger.error("Execution timeout. Optimize code or request more time.")
    raise
except Exception as e:
    logger.error(f"Unexpected error: {str(e)}")
    logger.error(traceback.format_exc())
    raise
```
Integration Patterns
With File System Tools
```python
# Workflow: Upload → Process → Download
import pandas as pd

# 1. File uploaded via file system tools
input_file = 'uploaded_data.csv'

# 2. Process with code execution
data = pd.read_csv(input_file)
processed = perform_analysis(data)
processed.to_csv('results.csv')

# 3. Download results via file system tools
output_file = 'results.csv'
```
With Web Tools
```python
# Workflow: Web Search → Process → Generate Report
import json

# 1. Data collected via web tools
search_results = 'search_data.json'

# 2. Process with code execution
with open(search_results, 'r') as f:
    data = json.load(f)

analysis = analyze_web_data(data)
generate_insights(analysis)

# 3. Results ready for document generation
```
Best Practices
📋 Execution Best Practices
Code Quality
- Error Handling - Always implement comprehensive error handling
- Resource Cleanup - Close files and clean up resources explicitly
- Progress Logging - Log progress for long-running operations
- Input Validation - Validate all inputs before processing
Security Guidelines
- Input Sanitization - Clean and validate all external inputs
- Output Verification - Verify outputs before returning to workflow
- Dependency Management - Only install necessary packages
- Resource Monitoring - Monitor and respect resource limits
Performance Optimization
- Efficient Algorithms - Choose optimal algorithms for data size
- Memory Management - Process large datasets in chunks
- Caching - Cache expensive computations when appropriate
- Profiling - Profile code to identify performance bottlenecks
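The caching tip above is often a one-line change in Python. This sketch memoizes a deliberately expensive recursive function with `functools.lru_cache`; `fib` stands in for any pure, repeatable computation.

```python
# Sketch: caching an expensive pure function with functools.lru_cache.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # instant with memoization; infeasible without
print(fib.cache_info().hits > 0)  # → True
```

For the profiling tip, running a script under `python -m cProfile -s cumtime script.py` reports where the time actually goes before you optimize.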
Troubleshooting
Common Issues
| Problem | Symptoms | Solution |
|---|---|---|
| Memory Limit Exceeded | MemoryError, process killed | Process data in smaller chunks, optimize memory usage |
| Execution Timeout | TimeoutError, operation cancelled | Optimize algorithm, reduce data size, or request more time |
| Package Installation Failed | ModuleNotFoundError, import errors | Check package name, verify network connectivity, use package installer |
| Permission Denied | PermissionError, access violations | Ensure file permissions, use correct file paths within sandbox |
| Network Restrictions | Connection errors, blocked requests | Verify URL whitelist, use HTTPS, check network policies |
Debugging Tips
🐛 Debugging Guidelines
- Use Logging: Add comprehensive logging to track execution flow
- Check Resource Usage: Monitor CPU and memory consumption
- Validate Inputs: Ensure input data format and content are correct
- Test Incrementally: Test with smaller datasets before full execution
- Review Error Messages: Read complete error traces for specific issues
Related Tools
File System Tools
Upload files for processing and download results from code execution
Data Analysis Tools
Advanced Excel processing and statistical analysis capabilities
Web Tools
Collect data from the web for processing with execution tools
Core Code Execution
Compare with core JavaScript and Python execution nodes
Next Steps: Start with Code Execution for script execution, or explore Package Installer for dependency management.