AI Governance

 

Subject

This AI Governance Statement describes the principles, commitments, and governance mechanisms implemented by MR SURICATE governing the use of artificial intelligence (AI) technologies within its SaaS platform. 

This document is for informational purposes only and does not constitute a standalone contractual commitment. The applicable commitments are set forth in the Agreement, the DPA, and the SLA. 

 

Scope of AI Features

The AI technologies integrated into the MR SURICATE platform may be used for: 

  • Assistance with creating test scenarios 
  • Suggestions for optimizing scripts 
  • Detection of technical anomalies 
  • Automated analysis of results 
  • Technical documentation support 
  • Agent orchestration 

AI features are designed as assistive tools. They do not replace human intervention or user validation. 

 

Guiding Principles

MR SURICATE strives to design AI systems that are robust and resilient in order to minimize operational errors and unexpected behavior. To that end, MR SURICATE applies the following principles:

Human supervision

Operational decisions remain under the control of users. 

AI-generated suggestions require human validation.
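The human-validation principle above can be illustrated with a minimal sketch. This is purely hypothetical code, not part of the MR SURICATE platform: the `Suggestion` type and `apply_suggestion` function are invented names used only to show an approval gate in which nothing is applied without explicit human sign-off.

```python
from dataclasses import dataclass

# Illustrative sketch only: Suggestion and apply_suggestion are
# hypothetical names, not part of the MR SURICATE platform API.

@dataclass
class Suggestion:
    """An AI-generated proposal awaiting human review."""
    description: str
    approved: bool = False

def apply_suggestion(suggestion: Suggestion) -> str:
    # Operational decisions stay with the user: nothing is applied
    # until a human has explicitly approved the suggestion.
    if not suggestion.approved:
        return "pending human validation"
    return f"applied: {suggestion.description}"

s = Suggestion("add an assertion on the login step")
print(apply_suggestion(s))   # still pending at this point
s.approved = True            # a human reviewer validates the suggestion
print(apply_suggestion(s))
```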

Proportionality

The use of AI mechanisms is proportionate to the technical purpose of the service.

Transparency

MR SURICATE is committed to documenting: 

  • Use cases for AI features 
  • Their known limitations 
  • Best practices for use

Security

The AI components comply with the security standards applicable to the platform: 

  • Hosting in a secure environment 
  • Strict access controls 
  • Logging of administrative actions
  • Logical segmentation of environments 

Data Protection

The use of AI mechanisms complies with the requirements of the GDPR. 

MR SURICATE strives to limit the exposure of personal data whenever possible, in particular by prioritizing: 

  • Technical specifications 
  • Pseudonymized data 
  • Appropriate testing environments
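As a purely illustrative sketch of the pseudonymization approach mentioned above, the snippet below replaces personal fields with salted hashes before any AI processing. The field names, the `pseudonymize` helper, and the salt are hypothetical examples, not the platform's actual mechanism.

```python
import hashlib

# Illustrative sketch only: PERSONAL_FIELDS and pseudonymize() are
# hypothetical, not MR SURICATE's actual data-protection implementation.

PERSONAL_FIELDS = {"email", "full_name"}

def pseudonymize(record: dict, salt: str = "per-tenant-salt") -> dict:
    """Replace personal fields with salted hashes before AI processing."""
    out = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
            out[key] = f"pseudo_{digest}"
        else:
            out[key] = value  # technical data passes through unchanged
    return out

record = {"email": "jane@example.com", "scenario": "checkout flow"}
print(pseudonymize(record))
```

Salting the hash keeps the pseudonym stable within a tenant while preventing trivial reversal by dictionary lookup.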

 

Risk Management

Risks associated with AI technologies are incorporated into the ISMS risk analysis. These risks may include: 

  • Reliability of suggestions 
  • Misinterpretations 
  • Potential biases 
  • Dependencies on third-party suppliers 

The identified risks are monitored as part of our continuous improvement process. 

Where appropriate, MR SURICATE analyzes the results of AI systems to identify any technical biases or unexpected behavior.

 

AI Risk Classification (AI Act Alignment)

MR SURICATE assesses its AI use cases in light of the European Artificial Intelligence Act (AI Act). 

Type of systems

The AI features are designed to provide technical support in the field of automated testing. 
They do not make legal or automated decisions that directly affect individuals.

Risk category

Given their current purpose, the AI systems deployed by MR SURICATE are classified as limited-risk AI systems, or as technical assistance tools that do not constitute high-risk systems within the meaning of the AI Act.

No high-risk use cases

AI features are not used for: 

  • Assessment of individuals 
  • Behavioral scoring 
  • Automated decision-making with legal effect 
  • Biometrics 
  • Large-scale surveillance 

Consequently, they do not fall into the categories of high-risk AI.

Regulatory Monitoring

MR SURICATE continuously monitors regulatory developments in order to adapt its governance mechanisms in response to operational or regulatory changes.

 

Model and Data Lifecycle Governance

MR SURICATE applies governance principles that cover the entire lifecycle of AI components and associated data. 

Model selection and integration

AI models can be: 

  • Developed in-house 
  • Provided by third-party technology partners 

Prior to onboarding, an assessment is conducted covering: 

  • Security guarantees 
  • Contractual obligations 
  • Data protection mechanisms 
  • Applicable regulatory compliance 

Runtime environments

AI components are executed in secure environments that comply with: 

  • The platform's security standards 
  • Strict access controls 
  • Logging mechanisms 
  • Logical isolation requirements

Data Governance

MR SURICATE ensures that: 

  • The use of personal data is limited 
  • Technical or anonymized data is prioritized 
  • The same security rules apply as for the rest of the system 
  • Data is not used for external training without an appropriate legal basis 

Customer data used within the platform is not used to train public or third-party artificial intelligence models without explicit contractual consent. 

Logging and traceability

Administrative actions related to AI features are logged in accordance with the platform's logging policy. 

Traceability aims to: 

  • Identify access points 
  • Identify changes 
  • Enable analysis in the event of an incident
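The traceability goals above can be sketched as a structured audit log. This is an illustrative example only: the logger name, field names, and `log_admin_action` helper are hypothetical, shown merely to convey what logging administrative actions on AI features can look like.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: the "ai.audit" logger and the record fields
# are hypothetical, not the platform's actual logging policy.

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai.audit")

def log_admin_action(actor: str, action: str, target: str) -> dict:
    """Emit a structured, timestamped audit record for an admin action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # who performed the action
        "action": action,  # what was changed
        "target": target,  # which AI component was affected
    }
    logger.info(json.dumps(entry))
    return entry

log_admin_action("admin@example.com", "update-model-version", "suggestion-engine")
```

Structured, timestamped records of this kind are what make after-the-fact incident analysis possible.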

Version and Change Management

Developments in AI mechanisms are: 

  • Tested before deployment 
  • Documented 
  • Integrated into the change management process 

These updates are part of our ongoing improvement efforts.

Performance Monitoring

MR SURICATE monitors: 

  • The relevance of the suggestions 
  • Technical issues 
  • User feedback 

This monitoring is intended to ensure continuous improvement and does not guarantee absolute accuracy.

Third-Party Dependency Management

When third-party services are used: 

  • Suppliers are evaluated 
  • Contractual obligations are formalized 
  • Data transfers comply with regulatory requirements 

 

Regulatory Compliance

MR SURICATE monitors changes in applicable regulatory frameworks, including: 

  • Regulation (EU) 2016/679 (GDPR) 
  • European Regulation on Artificial Intelligence (AI Act) 
  • Best practices for responsible AI 

Current AI systems are designed as technical assistance tools and do not constitute high-risk AI systems. 

 

Responsibility

AI features are provided on a best-efforts basis. 
MR SURICATE does not guarantee either the absolute accuracy or the absence of errors in the generated suggestions. The user remains responsible for validating the actions taken.

 

Continuous Improvement

The governance of AI mechanisms is monitored as part of the MR SURICATE ISMS and may be subject to periodic security and compliance reviews. 

The measures described may change in the context of: 

  • Continuous improvement 
  • Technological advancements 
  • Regulatory adaptation 

provided that an equivalent level of security and compliance is maintained.

 

Contact

For any questions regarding AI governance: suricate