Security Hazards of eval
In a production deployment of LangGraph CodeAct, executing the generated code with Python's built-in eval is the most straightforward approach, but it poses serious security risks (a minimal illustration follows this list):
- Code injection: maliciously constructed code can be executed directly
- System privilege issues: the code gains access to system resources and sensitive information
- Stability effects: runaway code may crash the program or exhaust its resources
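As a minimal illustration (my own, not from the article): any string handed to eval runs with the full privileges of the host process, so a single crafted expression can read secrets or invoke the shell.

# Illustrative only; "AWS_SECRET_ACCESS_KEY" is a hypothetical secret name.
untrusted = "__import__('os').environ.get('AWS_SECRET_ACCESS_KEY')"
leaked = eval(untrusted)  # eval executes the expression with full host access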
Secure Implementation Approach
It is recommended to use a specialized code sandbox for safe execution, built on the following measures (a sketch combining them follows this list):
- Process isolation: run the code in a separate process
- Resource constraints: limit CPU, memory, and other resource usage
- Privilege control: drop execution privileges and restrict file access
- Timeout handling: set an upper limit on execution time
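A standard-library sketch (my own, assuming a POSIX system) showing how the isolation, resource-constraint, and timeout measures can be combined:

import resource
import subprocess

def _apply_limits():
    # Runs in the child process just before exec: cap CPU time and memory
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))  # 5 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB

def run_restricted(path: str) -> str:
    # A separate process provides isolation; timeout= bounds wall-clock time
    return subprocess.check_output(
        ["python", path], text=True, timeout=10, preexec_fn=_apply_limits
    )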
Custom Sandbox Implementation
The article provides a basic example:
import subprocess

def custom_sandbox(code: str, _locals: dict) -> tuple[str, dict]:
    try:
        # Write the generated code to a temporary file...
        with open("temp.py", "w") as f:
            f.write(code)
        # ...and execute it in a separate Python process
        result = subprocess.check_output(["python", "temp.py"], text=True)
        return result, {}
    except Exception as e:
        return f"Error: {e}", {}
This implementation provides basic isolation by writing the code to a temporary file and executing it in a child process, although it does not yet enforce the resource limits, privilege restrictions, or timeouts listed above.
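For example, a hypothetical call (my own, not from the article):

output, new_vars = custom_sandbox("print(21 * 2)", {})
print(output)  # -> "42"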
Recommended Professional Solutions
For enterprise-level applications, it is recommended to consider a proven sandboxing solution such as the following (a sketch of the Docker option appears after this list):
- Docker containers
- A dedicated code sandbox service
- A cloud function execution environment
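A minimal sketch of the Docker option (my own; it assumes the docker CLI and the python:3.11-slim image are available locally):

import subprocess
import tempfile

def docker_sandbox(code: str, timeout: int = 30) -> str:
    # Write the code to a temp file and mount it read-only into a
    # disposable container with no network access and capped resources.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.check_output(
        [
            "docker", "run", "--rm",
            "--network=none",  # no network access
            "--memory=256m",   # memory cap
            "--cpus=1",        # CPU cap
            "-v", f"{path}:/sandbox/main.py:ro",
            "python:3.11-slim", "python", "/sandbox/main.py",
        ],
        text=True,
        timeout=timeout,
    )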
This answer comes from the article "LangGraph CodeAct: Generating Code to Help Agents Solve Complex Tasks".