Code Execution
Execute Python and JavaScript code securely in isolated sandbox environments designed for running AI-generated and untrusted code. The tool supports modern language features, popular libraries, and comprehensive error handling.
🤖 AI Code Safety First
This tool is specifically designed for safely executing AI-generated code, LLM outputs, and untrusted scripts. The secure runtime environment provides essential protection when running code that hasn't been manually reviewed or verified.
Quick Navigation
AI Code Safety
Learn safe patterns for executing LLM-generated and untrusted code
Python Execution
Run Python scripts with pandas, numpy, requests, and standard library
JavaScript Execution
Execute Node.js code with ES6+ features and npm package support
Security Model
Understand isolation, validation, and safety measures for untrusted code
Methods
ExecuteCode
Run Python or JavaScript code in a secure sandbox environment.
Inputs
| Name | Code | Type | Required | Description |
|---|---|---|---|---|
| Language | language | Select | ✅ Yes | Programming language (python, javascript) |
| Code | code | Text | ✅ Yes | Source code to execute |
| Timeout | timeout | Number | No | Execution timeout in seconds (default: 300) |
| Install Packages | installPackages | Text | No | Space-separated list of packages to install before execution (e.g., pandas numpy matplotlib) |
Outputs
| Name | Type | Description |
|---|---|---|
| result | Object | Execution result with output and metadata |
| output | Text | Standard output from code execution |
| error | Text | Error messages if execution failed |
| executionTime | Number | Time taken for execution in seconds |
| resourceUsage | Object | CPU and memory usage statistics |
Configuration Example
```json
{
  "language": "python",
  "code": "# Example of AI-generated data analysis code\nimport pandas as pd\nimport numpy as np\n\n# AI-generated function for sales analysis\ndef analyze_sales_trends(filepath):\n    try:\n        data = pd.read_csv(filepath)\n\n        # Validate data structure (safety check)\n        if 'date' not in data.columns or 'sales' not in data.columns:\n            raise ValueError('Missing required columns')\n\n        # Perform trend analysis\n        data['date'] = pd.to_datetime(data['date'])\n        monthly_trends = data.groupby(data['date'].dt.to_period('M'))['sales'].agg(['sum', 'mean', 'count'])\n\n        print(f'Analyzed {len(data)} sales records')\n        print(f'Found trends across {len(monthly_trends)} months')\n\n        return monthly_trends\n    except Exception as e:\n        print(f'Analysis failed safely: {e}')\n        return None\n\n# Execute AI-generated analysis safely\nresult = analyze_sales_trends('/sandbox/data/sales.csv')\nif result is not None:\n    result.to_csv('/sandbox/output/ai_analysis.csv')",
  "timeout": 300,
  "installPackages": "pandas numpy matplotlib"
}
```
AI Code Execution Patterns
Safe AI Code Execution
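A minimal sketch of a defensive wrapper for running an LLM-generated snippet inside the sandbox; the function and variable names are illustrative and not part of the tool's API, and the sandbox itself still provides the real isolation:

```python
# Illustrative only: pre-check and safely execute an AI-generated snippet.
# The sandbox provides the real isolation; this adds validation and error capture.
import ast

def run_untrusted_snippet(source):
    """Compile-check an AI-generated snippet, run it, and capture the outcome."""
    try:
        ast.parse(source)  # reject syntactically invalid code before executing
    except SyntaxError as exc:
        return {"ok": False, "error": f"Syntax error: {exc}"}
    namespace = {}  # run in a fresh namespace instead of polluting globals
    try:
        exec(compile(source, "<ai_snippet>", "exec"), namespace)
        return {"ok": True, "result": namespace.get("result")}
    except Exception as exc:  # surface runtime failures without crashing the job
        return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}

ai_generated_code = "result = sum(range(10))"  # hypothetical LLM output
print(run_untrusted_snippet(ai_generated_code))
```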
Python Execution
Available Libraries
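The exact library set depends on the sandbox image and any installPackages value; a quick check for the libraries mentioned on this page (pandas, numpy, requests):

```python
# Check which commonly used libraries are importable in this sandbox.
import importlib.util

for name in ("pandas", "numpy", "requests"):
    available = importlib.util.find_spec(name) is not None
    print(f"{name}: {'available' if available else 'missing - add it via installPackages'}")
```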
Performance Optimization
💡 Python Performance Tips
- Use vectorized operations with numpy/pandas for large datasets
- Process data in chunks for memory efficiency
- Use list comprehensions instead of explicit loops where possible
- Close files explicitly or use context managers
- Monitor memory usage with profiling tools (psutil is shown under Performance Monitoring below)
```python
# Efficient data processing pattern
import pandas as pd
import numpy as np

# Read large files in chunks
chunk_size = 10000
processed_chunks = []

for chunk in pd.read_csv('large_file.csv', chunksize=chunk_size):
    # Vectorized operations are faster
    chunk['processed_value'] = chunk['value'] * np.sqrt(chunk['multiplier'])
    # Efficient filtering
    valid_chunk = chunk[chunk['processed_value'] > 0]
    processed_chunks.append(valid_chunk)

# Combine results efficiently
final_result = pd.concat(processed_chunks, ignore_index=True)
final_result.to_csv('processed_output.csv', index=False)
print(f"Processed {len(final_result)} records")
```
JavaScript Execution
Node.js Environment
Security Model
Isolation Features
Security Restrictions
⚠️ Security Limitations
Restricted Operations:
- System calls - Limited access to system-level operations
- Network access - Only HTTPS connections to approved domains
- File system - Access limited to sandbox directory
- Process execution - Cannot spawn additional processes
- Memory limits - Maximum memory allocation enforced
| Security Feature | Python | JavaScript | Description |
|---|---|---|---|
| Container Isolation | ✅ | ✅ | Complete process isolation |
| Network Filtering | ✅ | ✅ | HTTPS-only, approved domains |
| File System Sandbox | ✅ | ✅ | Access limited to sandbox directory |
| Resource Limits | ✅ | ✅ | CPU, memory, and time constraints |
| Import Restrictions | ✅ | ❌ | Some Python modules blocked |
| Output Sanitization | ✅ | ✅ | All outputs validated and cleaned |
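Inside the sandbox, the restrictions above typically surface as ordinary exceptions; the sketch below assumes blocked file access raises PermissionError (or FileNotFoundError), which may vary by runtime:

```python
# Assumption: blocked or out-of-sandbox file access raises PermissionError /
# FileNotFoundError; confirm the exact behaviour in your runtime.
def read_if_allowed(path):
    try:
        with open(path) as handle:
            return handle.read()
    except (PermissionError, FileNotFoundError) as exc:
        print(f"Cannot read {path}: {type(exc).__name__}")
        return None

read_if_allowed('/etc/shadow')              # expected to fail outside the sandbox scope
read_if_allowed('/sandbox/data/sales.csv')  # sandbox paths are the supported location
```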
Performance & Limits
Resource Constraints
| Resource | Python | JavaScript | Description |
|---|---|---|---|
| CPU | 2 cores | 2 cores | Maximum CPU allocation |
| Memory | 4GB | 4GB | RAM limit per execution |
| Execution Time | 10 minutes | 10 minutes | Maximum runtime |
| Disk Space | 2GB | 2GB | Temporary storage limit |
| Network I/O | 100MB/min | 100MB/min | Bandwidth limit |
| Output Size | 10MB | 10MB | Maximum output data |
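Because printed output is capped (10MB output size above), large results are better written to a sandbox file and truncated in stdout; an illustrative guard, with the limit value chosen arbitrarily for the example:

```python
# Illustrative guard: keep printed output small and write the full result to a file.
def print_capped(text, limit=4000):  # limit chosen arbitrarily for the example
    if len(text) > limit:
        print(text[:limit])
        print(f"... [{len(text) - limit} characters truncated; see full_report.txt]")
    else:
        print(text)

big_report = "row\n" * 100_000            # stand-in for a large result
with open('full_report.txt', 'w') as f:   # full data goes to the sandbox file system
    f.write(big_report)
print_capped(big_report)
```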
Performance Monitoring
```python
# Python resource monitoring
import psutil
import time

def monitor_execution():
    start_time = time.time()
    start_memory = psutil.virtual_memory().used

    # Your code execution here (perform_computation stands in for your workload)
    result = perform_computation()

    # Monitor resources
    end_time = time.time()
    end_memory = psutil.virtual_memory().used

    execution_stats = {
        'execution_time': end_time - start_time,
        'memory_used': end_memory - start_memory,
        'cpu_percent': psutil.cpu_percent(interval=1)
    }
    print(f"Execution stats: {execution_stats}")
    return result, execution_stats
```
```javascript
// JavaScript performance monitoring
const { performance } = require('perf_hooks');

function monitorExecution() {
  const startTime = performance.now();
  const startMemory = process.memoryUsage();

  // Your code execution here (performComputation stands in for your workload)
  const result = performComputation();

  // Monitor resources
  const endTime = performance.now();
  const endMemory = process.memoryUsage();

  const executionStats = {
    executionTime: (endTime - startTime) / 1000, // Convert to seconds
    memoryUsed: endMemory.heapUsed - startMemory.heapUsed,
    heapTotal: endMemory.heapTotal
  };
  console.log('Execution stats:', executionStats);
  return { result, executionStats };
}
```
Error Handling
Common Error Patterns
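A minimal sketch of handling the error types listed in the Troubleshooting table below; `load_and_process` and the sample path are hypothetical:

```python
# Illustrative handling of the failure modes listed in the Troubleshooting table.
import pandas as pd

def load_and_process(path):
    """Hypothetical helper: load a CSV and return its row count, failing safely."""
    try:
        frame = pd.read_csv(path)
        return len(frame)
    except FileNotFoundError:
        print(f"File not found: {path} - was it uploaded to the sandbox?")
    except MemoryError:
        print("Out of memory - process the file in chunks (pd.read_csv(..., chunksize=...))")
    except Exception as exc:
        print(f"Unexpected {type(exc).__name__}: {exc}")
    return None

print(load_and_process('/sandbox/data/sales.csv'))
```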
Best Practices
📋 Code Execution Best Practices
Performance Optimization
- Use appropriate data structures - Choose efficient algorithms and data types
- Monitor resource usage - Track CPU and memory consumption
- Process data in chunks - Handle large datasets incrementally
- Close resources explicitly - Clean up files, connections, and objects
- Use built-in functions - Leverage optimized library functions
Security Guidelines
- Validate inputs - Check all external data before processing (see the sketch after this list)
- Sanitize outputs - Clean data before returning results
- Limit network requests - Only use HTTPS with approved domains
- Handle secrets carefully - Never log or expose sensitive data
- Use safe libraries - Prefer well-maintained, secure packages
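A minimal sketch applying two of the guidelines above (input validation and careful secret handling); the `API_KEY` environment variable and helper names are hypothetical, not part of the platform:

```python
# Illustrative: validate external input and avoid exposing secrets in logs.
import os
import re

def validate_email(value):
    """Reject obviously malformed input before it reaches downstream processing."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
        raise ValueError(f"Invalid email address: {value!r}")
    return value

def redact(secret):
    """Never print a raw secret; show only enough to identify it."""
    return (secret[:4] + "...") if secret else "<empty>"

api_key = os.environ.get("API_KEY", "")  # hypothetical secret source
print(f"Using API key {redact(api_key)}")
print(validate_email("user@example.com"))
```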
Error Management
- Implement comprehensive error handling - Catch and handle all error types
- Log execution details - Record important execution information
- Provide meaningful error messages - Help with debugging and troubleshooting
- Test edge cases - Validate behavior with unusual inputs
- Plan for timeouts - Design code to handle execution time limits (see the sketch below)
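One way to plan for timeouts is to track a time budget inside the script and stop cleanly before the sandbox limit (300 seconds by default, per the inputs table); a rough sketch:

```python
# Rough sketch: stop work cleanly before hitting the execution timeout.
import time

TIME_BUDGET_SECONDS = 280  # stay under the 300-second default timeout
start = time.monotonic()
processed = 0

for batch in range(1000):  # stand-in for an incremental workload
    if time.monotonic() - start > TIME_BUDGET_SECONDS:
        print(f"Time budget reached after {processed} batches; saving partial results")
        break
    processed += 1  # ... one unit of real work would happen here ...

print(f"Finished {processed} batches within the time budget")
```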
Integration Patterns
With File System Tools
```python
# Complete workflow: Upload → Process → Download
import json

import pandas as pd

# 1. File uploaded via file system tools
input_file = 'uploaded_data.csv'

# 2. Process with code execution
try:
    # Load and validate data
    data = pd.read_csv(input_file)

    # Perform analysis
    summary = data.groupby('category').agg({
        'value': ['sum', 'mean', 'count'],
        'date': 'max'
    }).round(2)

    # Flatten the MultiIndex columns so the summary serializes to JSON cleanly
    summary.columns = ['_'.join(col) for col in summary.columns]

    # Generate insights
    insights = {
        'total_records': len(data),
        'categories': len(summary),
        'summary_stats': summary.to_dict()
    }

    # Save results for download
    summary.to_csv('analysis_summary.csv')
    with open('insights.json', 'w') as f:
        json.dump(insights, f, indent=2, default=str)  # default=str handles numpy/date values

    print(f"Analysis complete: {insights['total_records']} records processed")

except Exception as e:
    print(f"Error processing data: {str(e)}")
    # Create error report
    with open('error_report.txt', 'w') as f:
        f.write(f"Processing failed: {str(e)}")
```
With Web Tools
```javascript
// Workflow: Web Data → Process → Generate Report
const fs = require('fs').promises;

async function processWebData() {
  try {
    // 1. Data collected via web tools
    const webData = await fs.readFile('web_search_results.json', 'utf8');
    const searchResults = JSON.parse(webData);

    // 2. Process and analyze data
    const analysis = searchResults.map(result => ({
      url: result.url,
      title: result.title,
      wordCount: result.content.split(' ').length,
      sentiment: analyzeSentiment(result.content),
      keywords: extractKeywords(result.content),
      processedAt: new Date().toISOString()
    }));

    // 3. Generate summary statistics
    const summary = {
      totalResults: analysis.length,
      averageWordCount: analysis.reduce((sum, item) => sum + item.wordCount, 0) / analysis.length,
      sentimentDistribution: calculateSentimentDistribution(analysis),
      topKeywords: getTopKeywords(analysis)
    };

    // 4. Save results for document generation
    await fs.writeFile('processed_analysis.json', JSON.stringify(analysis, null, 2));
    await fs.writeFile('summary_report.json', JSON.stringify(summary, null, 2));

    console.log(`Processed ${analysis.length} web search results`);
    return summary;
  } catch (error) {
    console.error('Web data processing failed:', error.message);
    throw error;
  }
}

function analyzeSentiment(text) {
  // Simple sentiment analysis
  const positiveWords = ['good', 'great', 'excellent', 'amazing', 'wonderful'];
  const negativeWords = ['bad', 'terrible', 'awful', 'horrible', 'disappointing'];

  const words = text.toLowerCase().split(' ');
  const positiveCount = words.filter(word => positiveWords.includes(word)).length;
  const negativeCount = words.filter(word => negativeWords.includes(word)).length;

  if (positiveCount > negativeCount) return 'positive';
  if (negativeCount > positiveCount) return 'negative';
  return 'neutral';
}

// Minimal placeholder helpers so the example runs end to end
function extractKeywords(text) {
  // Naive keyword extraction: the five most frequent words longer than four characters
  const counts = {};
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (word.length > 4) counts[word] = (counts[word] || 0) + 1;
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1]).slice(0, 5).map(([word]) => word);
}

function calculateSentimentDistribution(analysis) {
  // Count how many results fall into each sentiment bucket
  return analysis.reduce((dist, item) => {
    dist[item.sentiment] = (dist[item.sentiment] || 0) + 1;
    return dist;
  }, {});
}

function getTopKeywords(analysis) {
  // Aggregate per-result keywords and return the ten most common overall
  const counts = {};
  for (const item of analysis) {
    for (const keyword of item.keywords) counts[keyword] = (counts[keyword] || 0) + 1;
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1]).slice(0, 10).map(([keyword]) => keyword);
}

// Execute processing
processWebData();
```
Troubleshooting
Common Issues
| Issue | Python | JavaScript | Solution |
|---|---|---|---|
| Import/Require Errors | ModuleNotFoundError | Cannot find module | Use package installer or check spelling |
| Memory Limit | MemoryError | JavaScript heap out of memory | Process data in chunks, optimize memory usage |
| Timeout | TimeoutError | Execution timeout | Optimize algorithm, reduce data size |
| File Not Found | FileNotFoundError | ENOENT | Check file path, ensure file was uploaded |
| Permission Error | PermissionError | EACCES | Use correct file paths within sandbox |
| Syntax Error | SyntaxError | SyntaxError | Check code syntax, validate indentation |
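For the Import/Require row above, a dependency check can fail gracefully instead of aborting the whole run; a small illustrative pattern:

```python
# Handle a missing dependency gracefully instead of aborting the whole run.
try:
    import matplotlib.pyplot as plt
except ModuleNotFoundError:
    plt = None
    print("matplotlib not installed - add it via installPackages or the Package Installer")

if plt is not None:
    print("matplotlib is available")
```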
Debug Information
```python
# Python debugging information
import json
import os
import sys

def get_debug_info():
    debug_info = {
        'python_version': sys.version,
        'working_directory': os.getcwd(),
        'environment_variables': dict(os.environ),  # caution: may contain secrets
        'available_modules': [module for module in sys.modules.keys()],
        'file_list': os.listdir('.') if os.path.exists('.') else []
    }
    return debug_info

# Print debug info
print(json.dumps(get_debug_info(), indent=2, default=str))
```
```javascript
// JavaScript debugging information
const fs = require('fs');
const os = require('os');

function getDebugInfo() {
  const debugInfo = {
    nodeVersion: process.version,
    platform: process.platform,
    workingDirectory: process.cwd(),
    environmentVariables: process.env, // caution: may contain secrets
    memoryUsage: process.memoryUsage(),
    fileList: fs.existsSync('.') ? fs.readdirSync('.') : []
  };
  return debugInfo;
}

// Print debug info
console.log(JSON.stringify(getDebugInfo(), null, 2));
```
Related Tools
Package Installer
Install Python and Node.js packages for code execution
File System Tools
Upload code files and download execution results
Command Execution
Run shell commands and system operations
Core Code Execution
Compare with core JavaScript and Python execution nodes
Next Steps: Explore Package Installer for dependency management or Command Execution for shell operations.