# Deserialization of Untrusted Data (CWE-502)

The product deserializes untrusted data without sufficiently verifying that the resulting data will be valid.

- **Prevalence:** Medium (3 languages covered)
- **Impact:** Critical (3 critical-severity rules)
- **Prevention:** Documented (7 fix examples)

**OWASP:** A08:2021 Software and Data Integrity Failures (#8)

## Description

Many programming languages allow objects to be serialized for storage or transmission. When untrusted data is deserialized, it can lead to code execution, denial of service, or other unintended consequences.

## Prevention

Prevention strategies for Deserialization of Untrusted Data, based on 7 Shoulder detection rules.

### Go

- Use strictly typed structs instead of `interface{}`, and avoid gob with untrusted data.
- Validate all training data against strict schemas and apply content moderation before ingestion.

### Node.js

- Validate training data against schemas and use content moderation before fine-tuning.
- Use `JSON.parse()` instead of node-serialize, and `yaml.SAFE_SCHEMA` for YAML parsing.

### Python

- Validate training data with Pydantic schemas and apply content moderation before ingestion.
- Replace pickle/marshal with JSON or other safe serialization formats.
- Use `yaml.safe_load()` instead of `yaml.load()` to prevent code execution.

## Warning Signs

- [HIGH] Untrusted data is deserialized without validation
- [HIGH] Genuinely dangerous deserialization patterns in Go (e.g., `gob.Decoder`)
- [HIGH] Untrusted data flows to ... without validation
- [HIGH] Untrusted data flowing into AI/LLM fine-tuning or training processes without validation
- [HIGH] Untrusted or unvalidated data flowing into AI/LLM fine-tuning or training processes
- [CRITICAL] User input flowing to unsafe deserialization functions such as node-serialize or `yaml.load()`
- [CRITICAL] Untrusted user input being deserialized with unsafe methods such as `pickle.loads()`
- [CRITICAL] Unsafe YAML deserialization using `yaml.load()` without a safe loader

## Consequences

- Execute Unauthorized Code
- DoS: Crash/Exit/Restart
- Modify Application Data

## Mitigations

- Avoid deserializing untrusted data if possible.
- If deserialization is necessary, use a safer format such as JSON.
- Implement integrity checks such as digital signatures (a minimal HMAC sketch closes this page).
- Isolate deserialization in low-privilege environments.

## Detection

- Total rules: 7
- Critical: 3
- Languages: go, javascript, typescript, python

## Rules by Language

### Python (3 rules)

- **LLM Training Data Poisoning** [HIGH]: Detects untrusted or unvalidated data flowing into AI/LLM fine-tuning or training processes (OWASP LLM03: Training Data Poisoning). Training data poisoning can:
  - Introduce backdoors into model behavior
  - Bias model outputs maliciously
  - Embed harmful content that appears in responses
  - Compromise model accuracy and reliability
  - Create security vulnerabilities in model behavior

  Remediation: Validate training data with Pydantic and use content moderation.

  ```python
  from pydantic import BaseModel, validator

  class TrainingData(BaseModel):
      examples: list

      @validator("examples", each_item=True)
      def validate_example(cls, v):
          # Reject oversized examples before they reach the training set
          if len(v.get("content", "")) > 4000:
              raise ValueError("Content too long")
          return v

  # Validate the payload first, then run it through content moderation
  data = TrainingData(**request.json)
  moderation = await openai.moderations.create(input=data.json())
  ```

  Learn more: https://shoulder.dev/learn/python/cwe-502/llm-training-data-poisoning
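  The snippet above uses Pydantic v1's `@validator` with `each_item=True`; Pydantic v2 removed that flag in favor of `field_validator`. A minimal equivalent sketch for v2, mirroring the model and limit from the example above (not part of the Shoulder rule itself):

  ```python
  from pydantic import BaseModel, field_validator

  class TrainingData(BaseModel):
      examples: list[dict]

      @field_validator("examples")
      @classmethod
      def validate_examples(cls, examples: list[dict]) -> list[dict]:
          # v2 has no each_item flag, so iterate the list explicitly
          for example in examples:
              if len(example.get("content", "")) > 4000:
                  raise ValueError("Content too long")
          return examples
  ```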
- **Unsafe Deserialization** [CRITICAL]: Detects untrusted user input being deserialized using unsafe methods like `pickle.loads()` or `yaml.load()` (a short demonstration of the risk follows the Python rules).

  Remediation: Use `json.loads()` or `yaml.safe_load()` instead of pickle.

  ```python
  import json

  # json.loads parses pure data; it cannot instantiate arbitrary objects
  obj = json.loads(user_data)
  ```

  Learn more: https://shoulder.dev/learn/python/cwe-502/unsafe-deserialization

- **Unsafe YAML Deserialization** [CRITICAL]: Detects unsafe YAML deserialization using `yaml.load()` without `SafeLoader`.

  Remediation: Use `yaml.safe_load()` instead of `yaml.load()`.

  ```python
  import yaml

  # safe_load restricts parsing to plain data types, never Python objects
  config = yaml.safe_load(yaml_string)
  ```

  Learn more: https://shoulder.dev/learn/python/cwe-502/yaml-deserialization
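Both CRITICAL rules above exist because pickle and unrestricted `yaml.load()` can run attacker-chosen code during parsing itself. A minimal, self-contained demonstration of the pickle case (run it only in a throwaway sandbox; the harmless `id` command stands in for an attacker's payload):

```python
import os
import pickle

class Malicious:
    # pickle records __reduce__'s return value as "call this function
    # with these arguments" and replays it on load.
    def __reduce__(self):
        return (os.system, ("id",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # runs `id` -- merely loading the bytes executes code
```

No method on the object is ever invoked by the victim; deserialization alone triggers execution, which is why no amount of post-load validation can make `pickle.loads()` safe on untrusted input.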
### Go (2 rules)

- **Insecure Deserialization** [HIGH]: Detects genuinely dangerous deserialization patterns in Go. Unlike Java or Python, Go's encoding/json is safe (data-only parsing, no code execution). This rule focuses on:
  - `gob.Decoder`: can instantiate arbitrary types, potential RCE (CRITICAL)
  - json/yaml/xml into `interface{}`: type-confusion risk when combined with untrusted input (MEDIUM)

  Note: `json.Unmarshal` into typed structs is NOT flagged, as it cannot execute code.

  Remediation: Use strict struct types instead of `interface{}` and validate after unmarshaling.

  ```go
  type User struct {
      Name  string `json:"name"`
      Email string `json:"email"`
  }

  // Unmarshal into a typed struct: no code runs, and unexpected
  // shapes surface as errors instead of silent type confusion
  var user User
  if err := json.Unmarshal(input, &user); err != nil {
      return err
  }
  ```

  Learn more: https://shoulder.dev/learn/go/cwe-502/unsafe-deserialization

- **LLM Training Data Poisoning** [HIGH]: Detects untrusted data flowing into AI/LLM fine-tuning or training processes without validation.

  Remediation: Validate all training data against strict schemas before ingestion.

  ```go
  // Reject any document that fails schema validation before it
  // reaches the training pipeline
  if err := validate.Struct(doc); err != nil {
      return errors.New("validation failed")
  }
  ```

  Learn more: https://shoulder.dev/learn/go/cwe-502/llm-training-data-poisoning

### JavaScript (2 rules)

- **LLM Training Data Poisoning** [HIGH]: Detects untrusted or unvalidated data flowing into AI/LLM fine-tuning or training processes (OWASP LLM03: Training Data Poisoning). Training data poisoning can:
  - Introduce backdoors into model behavior
  - Bias model outputs maliciously
  - Embed harmful content that appears in responses
  - Compromise model accuracy and reliability
  - Create security vulnerabilities in model behavior

  This rule detects:
  - User-provided data used directly in fine-tuning
  - External data sources used without validation
  - Training data loaded from untrusted URLs
  - Missing data validation before training

  Remediation: Validate training data against schemas and use content moderation before fine-tuning.

  ```javascript
  // Validate the payload against a schema before handing it to fine-tuning
  if (!validate(trainingData)) {
    return res.status(400).json({ error: 'Invalid format' });
  }
  await openai.files.create({ file: trainingData, purpose: 'fine-tune' });
  ```

  Learn more: https://shoulder.dev/learn/javascript/cwe-502/llm-training-data-poisoning

- **Unsafe Deserialization** [CRITICAL]: Detects user input flowing to unsafe deserialization functions like node-serialize or `yaml.load()`.

  Remediation: Use `JSON.parse()` instead of node-serialize, or use `yaml.SAFE_SCHEMA` for YAML parsing.

  ```javascript
  // JSON.parse builds plain data only; node-serialize can revive functions
  const data = JSON.parse(userInput);
  // Or for YAML:
  const config = yaml.load(input, { schema: yaml.SAFE_SCHEMA });
  ```

  Learn more: https://shoulder.dev/learn/javascript/cwe-502/unsafe-deserialization

### TypeScript (2 rules)

- **LLM Training Data Poisoning** [HIGH]: Detects untrusted or unvalidated data flowing into AI/LLM fine-tuning or training processes (OWASP LLM03: Training Data Poisoning). Training data poisoning can:
  - Introduce backdoors into model behavior
  - Bias model outputs maliciously
  - Embed harmful content that appears in responses
  - Compromise model accuracy and reliability
  - Create security vulnerabilities in model behavior

  This rule detects:
  - User-provided data used directly in fine-tuning
  - External data sources used without validation
  - Training data loaded from untrusted URLs
  - Missing data validation before training

  Remediation: Validate training data against schemas and use content moderation before fine-tuning.

  ```javascript
  // Validate the payload against a schema before handing it to fine-tuning
  if (!validate(trainingData)) {
    return res.status(400).json({ error: 'Invalid format' });
  }
  await openai.files.create({ file: trainingData, purpose: 'fine-tune' });
  ```

  Learn more: https://shoulder.dev/learn/javascript/cwe-502/llm-training-data-poisoning

- **Unsafe Deserialization** [CRITICAL]: Detects user input flowing to unsafe deserialization functions like node-serialize or `yaml.load()`.

  Remediation: Use `JSON.parse()` instead of node-serialize, or use `yaml.SAFE_SCHEMA` for YAML parsing.

  ```javascript
  // JSON.parse builds plain data only; node-serialize can revive functions
  const data = JSON.parse(userInput);
  // Or for YAML:
  const config = yaml.load(input, { schema: yaml.SAFE_SCHEMA });
  ```

  Learn more: https://shoulder.dev/learn/javascript/cwe-502/unsafe-deserialization
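To make the "integrity checks such as digital signatures" mitigation listed earlier concrete, here is a minimal sketch of an HMAC-signed JSON envelope. The helper names and key handling are illustrative assumptions, not part of any Shoulder rule; a real deployment would load the key from a secrets manager.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: sourced from a secrets manager

def _sign(body: bytes) -> str:
    return hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()

def serialize(obj) -> bytes:
    # Wrap the JSON body in an envelope carrying its HMAC signature
    body = json.dumps(obj).encode()
    return json.dumps({"sig": _sign(body), "body": body.decode()}).encode()

def deserialize(raw: bytes):
    envelope = json.loads(raw)
    body = envelope["body"].encode()
    # Reject tampered payloads before parsing the body at all;
    # compare_digest avoids timing side channels
    if not hmac.compare_digest(envelope["sig"], _sign(body)):
        raise ValueError("integrity check failed")
    return json.loads(body)
```

The design point is ordering: the payload is authenticated first and only then parsed, so even a data-only format like JSON never processes bytes that were not produced by a key holder.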