Risk Management Process Overview
ZenZen follows a structured risk management process based on internationally recognized standards, including ISO 14971 for medical device risk management and IEC 62304 for medical device software lifecycle processes. We systematically identify, assess, and control potential risks across our product, infrastructure, and operations. Each identified risk receives a documented evaluation, mitigation measures, and ongoing monitoring.

Our team performs continuous reviews and updates to ensure that new features, data flows, and clinical insights are assessed before release. Protective measures, safeguards, and monitoring processes are built directly into the design of the ZenZen system to maintain a high level of safety for all users.

We protect your conversations with the AI assistant through multiple layers of security:

AI Safety - Every message is checked

Before your message reaches the AI assistant, we automatically check it for problematic content like violence, hate speech, or self-harm. The assistant's responses are also checked before being delivered to you.

If something violates our guidelines, the message is blocked and you receive a clear notification instead. This way, there are no gaps in your conversation history and you always know what happened.
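The two-way check described above can be sketched as a simple gate around the assistant. Everything below is illustrative: the keyword "classifier", category names, and block notices are placeholders, not ZenZen's actual moderation model.

```python
# Hypothetical two-way moderation gate: the inbound message and the
# assistant's reply are both checked before anything is delivered.

BLOCKED_CATEGORIES = {"violence", "hate_speech", "self_harm"}

def classify(text: str) -> set:
    # Placeholder classifier: a real system calls a trained moderation model.
    keywords = {"attack": "violence", "hate": "hate_speech"}
    return {cat for word, cat in keywords.items() if word in text.lower()}

def moderated_exchange(user_message: str, generate) -> str:
    # Check the user's message before it reaches the AI assistant.
    if classify(user_message) & BLOCKED_CATEGORIES:
        return "[Message blocked: violates safety guidelines]"
    reply = generate(user_message)
    # Check the assistant's reply before delivering it to the user.
    if classify(reply) & BLOCKED_CATEGORIES:
        return "[Reply blocked: violates safety guidelines]"
    return reply
```

Note that a blocked message is replaced by a notice rather than silently dropped, which is what keeps the conversation history gap-free.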

Only you can access your chats

You must be signed in to use the AI assistant; anonymous access is not allowed. Certain features are further restricted to authorized administrators.
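In backend terms, this is a two-level access check. The sketch below uses invented session fields and role names; ZenZen's actual authentication stack is not described here.

```python
# Illustrative access gating: signed-in users only, with an extra
# admin-only check for restricted features.

class AuthError(Exception):
    """Raised when a request fails an access check."""

def require_user(session: dict) -> None:
    # Anonymous requests are rejected outright.
    if not session.get("user_id"):
        raise AuthError("sign-in required")

def require_admin(session: dict) -> None:
    # Admin-only features require a signed-in user with the admin role.
    require_user(session)
    if session.get("role") != "admin":
        raise AuthError("administrator access required")
```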

Protection against abuse

To prevent misuse, we limit the number of requests per time period. This protects the system from overload and keeps costs under control.
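A per-user request limit like this is commonly implemented as a fixed-window counter. The sketch below is a minimal version; the limit and window size are example values, not ZenZen's actual configuration.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter: at most `limit` requests per window."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        # user_id -> [request_count, window_start_time]
        self.counts = defaultdict(lambda: [0, 0.0])

    def allow(self, user_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        entry = self.counts[user_id]
        if now - entry[1] >= self.window:
            # The window has elapsed; start a fresh one.
            entry[0], entry[1] = 0, now
        if entry[0] >= self.limit:
            return False  # over the limit: reject the request
        entry[0] += 1
        return True
```

Rejected requests fail fast instead of reaching the AI provider, which is what keeps both load and cost bounded.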

The assistant knows its limits

Our AI assistant is specifically designed for topics related to pregnancy and nutrition. It's configured to:

  • Not provide medical diagnoses or emergency instructions.
  • Not create inappropriate content.
  • Follow our safety guidelines.
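Constraints like the ones listed above are typically enforced through a system prompt that is attached server-side on every request. The wording and field names below are illustrative, not ZenZen's actual prompt.

```python
# Example system-prompt configuration enforcing the assistant's scope.
SYSTEM_PROMPT = (
    "You are an assistant for pregnancy and nutrition topics only. "
    "Do not provide medical diagnoses or emergency instructions; "
    "refer users to a doctor or emergency services instead. "
    "Do not create inappropriate content. Follow the safety guidelines."
)

def build_request(user_message: str) -> list:
    # The system prompt is prepended on the server for every request,
    # so users cannot remove or override it from their device.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```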

Your data stays protected

The AI is accessed only through our servers, never directly from your device, so API keys and other technical details are never exposed to you. We send the AI only the minimum necessary information; sensitive data such as internal IDs stays on our servers.
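This is the gateway pattern: the server builds the outbound request and forwards only an allowlisted subset of fields. The field names below are made up for illustration.

```python
# Sketch of server-side data minimization before calling the AI provider.
# Only fields the model actually needs are forwarded; internal identifiers
# never leave the server.

ALLOWED_FIELDS = {"message", "week_of_pregnancy", "dietary_preferences"}

def to_provider_payload(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "internal_user_id": "u-839214",   # stays on our servers
    "message": "Is salmon safe to eat?",
    "week_of_pregnancy": 22,
}
payload = to_provider_payload(record)
```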

Transparency and accountability

We log all interactions to detect issues and continuously improve security. For admin actions like mass notifications, we also record who sent what and when.
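An audit record for admin actions needs to capture exactly the three things named above: who, what, and when. The schema below is an illustrative example, not ZenZen's actual log format.

```python
import json
import datetime

def audit_entry(actor: str, action: str, detail: str) -> str:
    """Serialize one audit-log record as a JSON line."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),  # when
        "actor": actor,    # who performed the action
        "action": action,  # what kind of action it was
        "detail": detail,  # what was sent
    })

# e.g. recording a mass notification sent by an administrator
entry = audit_entry("admin@example.com", "mass_notification", "weekly tips sent")
```

Appending one JSON object per line keeps the log easy to search when investigating an issue.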

Processing in your region

Where possible, we process your data in your region (EU or US) to meet data protection requirements.
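Region-aware processing usually comes down to routing each request to infrastructure in the user's region, with a defined fallback. The endpoints and default below are placeholders, not ZenZen's real infrastructure.

```python
# Illustrative region-based routing table.
REGION_ENDPOINTS = {
    "EU": "https://eu.api.example.com",
    "US": "https://us.api.example.com",
}
DEFAULT_REGION = "EU"

def endpoint_for(user_region: str) -> str:
    # Fall back to the default region only when no regional
    # endpoint is available for the user's region.
    return REGION_ENDPOINTS.get(user_region, REGION_ENDPOINTS[DEFAULT_REGION])
```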

Multiple layers of protection

Technology alone isn't enough—that's why we combine automated checks with organizational measures. Changes to the AI's behavior go through internal review processes. This ensures that even if one security layer misses something, other safeguards step in.