AI Ethics & Governance Policy
Version 1.0 — Last updated: December 8, 2025
1. Purpose & Scope
This policy establishes Mediar AI's commitment to responsible AI development and deployment. It applies to all AI systems developed, deployed, or used by Mediar AI, including:
- Workflow automation agents
- LLM-based assistants and orchestrators
- UI automation systems
- Any AI/ML models integrated into our products
2. Leadership Commitment
Mediar AI's leadership commits to:
- Allocating resources for AI safety and compliance
- Regularly reviewing AI system impacts and risks
- Ensuring alignment between AI development and business objectives
- Promoting continuous improvement in AI governance
- Responding promptly to identified AI-related risks or incidents
Signed
Matthew Diakonov, CEO
December 8, 2025
3. Core AI Principles
3.1 Transparency
- Users are informed when interacting with AI systems
- AI decision-making processes are documented and explainable where feasible
- Limitations of AI systems are clearly communicated
3.2 Human Oversight
- Critical decisions maintain human-in-the-loop controls
- Users can override or stop AI actions at any time
- Automated workflows include cancellation mechanisms (see the sketch below)
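As an illustration only (not Mediar AI's actual implementation), the following sketch shows one way a workflow executor can honor a user's stop request between steps; the `CancelToken` and `WorkflowStep` names are hypothetical.

```typescript
// Hypothetical sketch of a cancellation mechanism for an automated workflow.
// CancelToken and WorkflowStep are illustrative names, not Mediar AI's API.

interface WorkflowStep {
  description: string;
  run: () => Promise<void>;
}

class CancelToken {
  private cancelled = false;
  cancel(): void {
    this.cancelled = true;
  }
  get isCancelled(): boolean {
    return this.cancelled;
  }
}

async function executeWorkflow(steps: WorkflowStep[], token: CancelToken): Promise<void> {
  for (const step of steps) {
    // Check for a user-issued stop request before every step, so the agent
    // never continues past an explicit override.
    if (token.isCancelled) {
      console.log(`Workflow stopped by user before: ${step.description}`);
      return;
    }
    await step.run();
  }
}
```

In a setup like this, a UI "Stop" control would simply call `token.cancel()`, and the loop exits before the next step runs.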
3.3 Privacy & Data Protection
- AI systems process only necessary data (data minimization; see the sketch after this list)
- Personal data is handled per GDPR requirements
- User data is not used for model training without consent
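As a minimal sketch under assumed field names (not the actual data model), data minimization can be enforced by forwarding only an explicit projection of a user record to the model:

```typescript
// Hypothetical sketch of data minimization: the model only receives an
// explicit projection of the user record, never the full object.
// All field names here are illustrative assumptions.

interface UserRecord {
  id: string;
  email: string;
  fullName: string;
  paymentCardNumber: string; // never needed by the AI system
  locale: string;
}

// The type makes it impossible to pass fields outside the projection.
type ModelVisibleUser = Pick<UserRecord, "id" | "locale">;

function minimizeForModel(user: UserRecord): ModelVisibleUser {
  // Forward only what the workflow actually requires.
  return { id: user.id, locale: user.locale };
}
```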
3.4 Safety & Security
- AI systems undergo security review before deployment
- Prompt injection and adversarial attacks are mitigated
- Credentials and sensitive data are never logged by AI systems (see the redaction sketch below)
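To make the logging commitment concrete, here is a minimal, hypothetical sketch of redacting credential-like fields before anything reaches a log sink; the key list and `redact` helper are assumptions for illustration, not the production code.

```typescript
// Hypothetical sketch: strip credential-like fields before logging AI activity.
// The key list and redact() helper are illustrative assumptions.

const SENSITIVE_KEYS = ["password", "apikey", "token", "secret", "authorization"];

function redact(record: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    const lowered = key.toLowerCase();
    // Replace any value whose key looks credential-like with a placeholder.
    safe[key] = SENSITIVE_KEYS.some((s) => lowered.includes(s)) ? "[REDACTED]" : value;
  }
  return safe;
}

// Usage: log only the redacted view of an AI action's context.
console.log(JSON.stringify(redact({ action: "fill_form", apiKey: "sk-123", field: "email" })));
```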
3.5 Fairness & Non-discrimination
- AI systems are tested for bias before deployment
- Discriminatory outputs are prohibited
- Feedback mechanisms allow users to report issues
4. AI Risk Assessment
| Risk Category | Example | Mitigation |
|---|---|---|
| Misuse | Automation of harmful tasks | Content filtering; Terms of Service |
| Data Leakage | AI exposing sensitive data | Row-level security (RLS); Credential isolation |
| Hallucination | Incorrect AI outputs | Human review; Confidence thresholds (illustrated below) |
| Availability | AI system downtime | Auto-scaling; Redundancy |
| Adversarial | Prompt injection attacks | Input sanitization; Output filtering |
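For illustration only, the sketch below shows how the confidence-threshold mitigation from the table might route low-confidence outputs to human review instead of auto-execution; the threshold value and type names are assumptions, not policy requirements.

```typescript
// Hypothetical sketch of "Human review; Confidence thresholds": outputs below a
// confidence threshold are routed to a person instead of being executed.

interface AgentDecision {
  action: string;
  confidence: number; // 0.0 to 1.0, as reported by the model or a verifier
}

const AUTO_EXECUTE_THRESHOLD = 0.9; // illustrative value, not a policy requirement

function route(decision: AgentDecision): "auto_execute" | "human_review" {
  return decision.confidence >= AUTO_EXECUTE_THRESHOLD ? "auto_execute" : "human_review";
}

// Example: a borderline decision is sent to a reviewer rather than executed.
console.log(route({ action: "submit_invoice", confidence: 0.72 })); // "human_review"
```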
5. Governance Structure
As an early-stage startup, Mediar AI maintains founder-led governance with clear accountability:
| Role | Responsibility | Person |
|---|---|---|
| AI Policy Owner | Overall AI governance and compliance | Matthew Diakonov (CEO) |
| Technical Lead | AI system implementation and safety | Matthew Diakonov (CEO) |
| Compliance | Regulatory alignment | Matthew Diakonov (interim) |
As the company grows, an independent advisor with AI expertise will be appointed to provide external oversight.
6. AI System Inventory
| System | Purpose | Risk Level |
|---|---|---|
| Workflow Executor | UI automation via LLM orchestration | Medium |
| MCP Agent | Desktop automation and control | Medium |
| Vertex AI Integration | LLM inference for decision making | Low |
7. Incident Response
AI-related incidents are handled through a structured process:
- Detection — Monitoring alerts or user reports
- Assessment — Evaluate impact and severity
- Containment — Disable affected AI features if needed
- Resolution — Fix root cause
- Review — Document lessons learned
Report AI incidents to: matt@mediar.ai
8. Review & Updates
This policy is reviewed:
- Annually (minimum)
- After significant AI system changes
- Following AI-related incidents
- When regulations change (e.g., EU AI Act)
Document Control
Version: 1.0
Last Updated: December 8, 2025
Author: Matthew Diakonov
Next Review: December 2026