NattyTech AI Security & Privacy Practices
Last Updated: April 2026
🔒
Inference Safety
- Shell output suppression across all personas
- Strict persona boundary enforcement to prevent context bleed
- Fallback tagging for traceable logic and auditability
- Explainability protocols for enriched outputs
- Adversarial input detection and mitigation
🧼
Log Management
- Ephemeral session logs with optional analyst export
-
Automatic deletion after
24 hours (configurable) - Redaction of sensitive input/output using semantic filters
- No logging of PII unless explicitly enabled
- Audit trail for enrichment, fallback and persona switches
🧪
Sandbox Architecture
- Dockerized persona containers with isolated memory
- No persistent storage unless explicitly enabled
- Network isolation for high-risk personas
- Runtime validation of persona boundaries
- External API calls gated by allowlist and rate limits
📜
Responsible Use
-
Terms aligned with defensive, synthetic generation, and simulation
boundaries - Simulation scope defined per deployment
- No real-time threat response unless certified
- “Simulation only” tags for analyst-facing personas
- User consent required for data retention/export
🧑⚖️
Regulatory Compliance
- GDPR & CCPA alignment: opt-in, deletion, minimization
- AI Act (EU) readiness: transparency, oversight, risk tiers
- SOC 2 & ISO 27001 compatibility for enterprise use
- Model retraining protocols for data deletion requests
- Privacy impact assessments for new personas
🧠
Governance & Ethics
- Cross-functional AI governance team
- Bias detection and suppression across all active personas
- Transparent decision-making logs
- Incident response plan for misuse or hallucinations
- Public changelog for suppression and fallback updates

