AI Ethics & Governance Policy

Version 1.0 — Last updated: April 18, 2026

1. Purpose & Scope

This policy establishes Mediar AI's commitment to responsible AI development and deployment. It applies to all AI systems developed, deployed, or used by Mediar AI, including:

  • Workflow automation agents
  • LLM-based assistants and orchestrators
  • UI automation systems
  • Any AI/ML models integrated into our products

2. Leadership Commitment

Mediar AI's leadership commits to:

  • Allocating resources for AI safety and compliance
  • Regularly reviewing AI system impacts and risks
  • Ensuring alignment between AI development and business objectives
  • Promoting continuous improvement in AI governance
  • Responding promptly to identified AI-related risks or incidents

Signed

Matthew Diakonov, CEO

December 8, 2025

3. Core AI Principles

3.1 Transparency

  • Users are informed when interacting with AI systems
  • AI decision-making processes are documented and explainable where feasible
  • Limitations of AI systems are clearly communicated

3.2 Human Oversight

  • Critical decisions maintain human-in-the-loop controls
  • Users can override or stop AI actions at any time
  • Automated workflows include cancellation mechanisms
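As an illustration of the cancellation mechanisms described above, a workflow runner can check a shared stop flag at every step boundary. This is a minimal sketch; the names `WorkflowRunner` and `cancel` are hypothetical and not taken from Mediar AI's actual systems:

```python
import threading

class WorkflowRunner:
    """Runs a sequence of automated steps that a human can stop at any time."""

    def __init__(self, steps):
        self.steps = steps
        self._cancelled = threading.Event()

    def cancel(self):
        """Called from the UI or a supervisor thread when the user hits 'Stop'."""
        self._cancelled.set()

    def run(self):
        completed = []
        for step in self.steps:
            # Check the flag before every step so a cancel takes effect
            # at the next step boundary rather than mid-action.
            if self._cancelled.is_set():
                break
            completed.append(step())
        return completed
```

Checking the flag between steps, rather than only at the start, is what makes the override meaningful for long-running automations.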

3.3 Privacy & Data Protection

  • AI systems process only necessary data
  • Personal data is handled per GDPR requirements
  • User data is not used for model training without consent

3.4 Safety & Security

  • AI systems undergo security review before deployment
  • Prompt injection and adversarial attacks are mitigated
  • Credentials and sensitive data are never logged by AI systems
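One common way to enforce a no-credentials-in-logs rule is a redaction filter attached to the logging pipeline. The sketch below uses Python's standard `logging.Filter`; the regex list is illustrative only and is not Mediar AI's actual filter:

```python
import logging
import re

# Patterns for common secret formats; illustrative only.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # e.g. vendor-style API keys
]

class RedactionFilter(logging.Filter):
    """Replaces credential-like substrings before a record is emitted."""

    def filter(self, record):
        msg = record.getMessage()
        for pattern in SECRET_PATTERNS:
            msg = pattern.sub("[REDACTED]", msg)
        # Overwrite the message so no handler ever sees the raw secret.
        record.msg, record.args = msg, ()
        return True
```

A filter like this belongs as close to the log call site as possible, since anything upstream of it (crash dumps, third-party handlers) would still see the raw value.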

3.5 Fairness & Non-discrimination

  • AI systems are tested for bias before deployment
  • Discriminatory outputs are prohibited
  • Feedback mechanisms allow users to report issues

4. AI Risk Assessment

  Risk Category | Example                     | Mitigation
  Misuse        | Automation of harmful tasks | Content filtering; Terms of Service
  Data Leakage  | AI exposing sensitive data  | Row-level security (RLS); Credential isolation
  Hallucination | Incorrect AI outputs        | Human review; Confidence thresholds
  Availability  | AI system downtime          | Auto-scaling; Redundancy
  Adversarial   | Prompt injection attacks    | Input sanitization; Output filtering
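The input-sanitization mitigation for adversarial prompts can be sketched as a pre-filter that flags instruction-like patterns before user text reaches the model. The pattern list below is a hypothetical illustration; real defenses layer several techniques rather than relying on pattern matching alone:

```python
import re

# Phrases commonly seen in prompt-injection attempts; illustrative only.
INJECTION_PATTERNS = [
    re.compile(r"(?i)ignore (all )?(previous|prior) instructions"),
    re.compile(r"(?i)you are now (in )?developer mode"),
    re.compile(r"(?i)reveal (your )?(system prompt|hidden instructions)"),
]

def screen_input(user_text: str) -> tuple[bool, str]:
    """Return (allowed, reason). Flagged inputs should be routed to human
    review rather than silently dropped, per the human-oversight principle."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            return False, f"matched pattern: {pattern.pattern}"
    return True, "ok"
```

Pairing a filter like this with output-side checks covers both directions of the adversarial row in the table above.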

5. Governance Structure

As an early-stage startup, Mediar AI maintains founder-led governance with clear accountability:

  Role            | Responsibility                        | Person
  AI Policy Owner | Overall AI governance and compliance  | Matthew Diakonov (CEO)
  Technical Lead  | AI system implementation and safety   | Matthew Diakonov (CEO)
  Compliance      | Regulatory alignment                  | Matthew Diakonov (interim)

As the company grows, an independent advisor with AI expertise will be appointed to provide external oversight.

6. AI System Inventory

  System                | Purpose                             | Risk Level
  Workflow Executor     | UI automation via LLM orchestration | Medium
  MCP Agent             | Desktop automation and control      | Medium
  Vertex AI Integration | LLM inference for decision making   | Low

7. Incident Response

AI-related incidents are handled through a structured process:

  1. Detection — Monitoring alerts or user reports
  2. Assessment — Evaluate impact and severity
  3. Containment — Disable affected AI features if needed
  4. Resolution — Fix root cause
  5. Review — Document lessons learned

Report AI incidents to: matt@mediar.ai

8. Review & Updates

This policy is reviewed:

  • Annually (minimum)
  • After significant AI system changes
  • Following AI-related incidents
  • When regulations change (e.g., EU AI Act)

Document Control

Version: 1.0

Last Updated: December 8, 2025

Author: Matthew Diakonov

Next Review: December 2026