AI Transparency

Responsible AI at RALIA

How we design, operate, and govern AI systems to help you achieve compliance with confidence

Version 2.0January 20265 min read

This document explains how RisqBase d.o.o. designs, operates, and governs AI systems within RALIA. We are committed to responsible AI development that aligns with our values and applicable regulations, including the EU AI Act.

1. Our AI Principles

Our approach to AI is guided by five core principles:

Transparency

We are open about how our AI works, its capabilities, and limitations. Users can always distinguish AI-generated content from human input.

Human Oversight

AI provides assistance, but humans remain in control of all meaningful decisions. Our systems augment, not replace, human judgement.

Privacy by Design

We minimise data collection, do not use customer data to train models, and implement robust security measures.

Accuracy & Reliability

We continuously improve AI accuracy and clearly communicate when outputs should be verified by professionals.

Fairness

We design systems to treat all users fairly and work to identify and mitigate potential biases in AI outputs.

2. How We Use AI

RALIA incorporates AI technology across four key areas:

LIA Assistant

Our AI-powered Legal Intelligence Assistant helps you navigate compliance requirements:

  • Answering questions about regulatory frameworks
  • Providing guidance on AI Act compliance
  • Explaining technical concepts in plain language
  • Suggesting relevant documentation and resources

Assessments

AI helps users complete compliance assessments by:

  • Providing context-aware question explanations
  • Suggesting appropriate responses based on input
  • Generating preliminary risk classifications
  • Identifying potential compliance gaps

Documents

AI assists in creating compliance documentation:

  • Fundamental Rights Impact Assessments (FRIAs)
  • Data Protection Impact Assessments (DPIAs)
  • Transparency documentation
  • Risk assessment reports

Horizon Intelligence

AI enhances our regulatory monitoring service:

  • Summarising regulatory updates and guidance
  • Identifying relevant content based on your preferences
  • Analysing trends in regulatory developments
  • Generating digest emails and alerts

3. AI Governance

3.1 Risk Classification

Limited Risk AI System

Under the EU AI Act, RALIA is classified as a limited risk AI system. Our primary obligation is transparency about AI use, fulfilled through clear labelling and this documentation.

RALIA is not used for and explicitly prohibits use cases that would constitute high-risk or prohibited AI applications under the EU AI Act.

3.2 Oversight & Accountability

  • Regular review of AI outputs for quality and accuracy
  • Monitoring for potential bias or harmful outputs
  • Clear escalation procedures for AI-related issues
  • Designated personnel responsible for AI governance

3.3 Model Selection & Providers

We carefully select and evaluate the AI models that power RALIA:

ProviderRoleData Protection
Anthropic (Claude)PrimaryEU Data Processing Agreement
OpenAIBackupEU Data Processing Agreement

All providers are contractually bound to data protection requirements with regular evaluation of performance and safety.

4. Data & Privacy

Your Data Is Not Used for Training

We do NOT use customer data to train our AI models. Your compliance information, assessment responses, and documents are processed to provide results, but are never used to train or improve the underlying AI systems.

Data Responsibility

RALIA

  • Encrypts all data in transit (TLS 1.3)
  • Providers do not retain data beyond session
  • Maintains audit logs for security
  • Minimises data sent to AI providers

You

  • Choose what information to share
  • Review AI outputs before use
  • Avoid inputting sensitive PII
  • Report any data concerns

5. Transparency & Labelling

5.1 AI Content Identification

All AI-generated content in RALIA is clearly identified:

LIA Responses

Labelled as AI-generated

Documents

Include AI disclaimers

Recommendations

Indicate AI involvement

Summaries

Note their AI origin

5.2 Limitations Disclosure

Important Limitations

  • • AI outputs may contain errors or inaccuracies
  • • AI guidance does not constitute legal advice
  • • Complex situations require professional human review
  • • AI knowledge has temporal limitations (training cutoff dates)

6. Human Oversight

User Control

  • AI features can be used optionally
  • Edit, override, or reject AI suggestions
  • Final decisions always rest with you
  • Request human support at any time

Review Processes

  • Risk classifications are marked as preliminary
  • Generated documents are presented as drafts
  • Recommendations include professional verification caveats

Escalation

Escalate concerns about AI outputs through our support channels. All escalations are reviewed by qualified human staff.

7. Safety & Security

7.1 Content Filtering

Our AI systems include safeguards against:

Harmful or offensive content
Malicious misuse attempts
Prompt injection attacks
Misleading legal information

7.2 Security Measures

TLS 1.3

Secure API communications

Rate Limiting

Abuse prevention

Access Controls

Authentication required

Monitoring

Pattern detection

7.3 Incident Response

  • Rapid assessment of potential harms
  • Temporary suspension of features if necessary
  • User notification where appropriate
  • Root cause analysis and remediation

8. Continuous Improvement

8.1 Monitoring & Updates

We continuously monitor and improve AI performance:

User feedback collection
Quality assurance reviews
Accuracy tracking
Bias assessments

8.2 Regulatory Compliance

We monitor evolving AI regulations and maintain compliance with:

  • EU AI Act requirements
  • GDPR automated decision-making obligations
  • Industry standards and best practices
  • Regulatory authority guidance

9. Your Responsibilities

As a user of RALIA's AI features, you are responsible for:

  • 1
    Using AI features for their intended purpose (compliance assistance)
  • 2
    Reviewing AI outputs before making decisions
  • 3
    Seeking professional advice for complex matters
  • 4
    Reporting concerning, inaccurate, or harmful outputs
  • 5
    Complying with our Acceptable Use Policy
  • 6
    Not attempting to manipulate or abuse AI systems

10. Contact & Feedback

We welcome feedback about our AI systems and this documentation.

Use the feedback buttons within the platform or contact us directly to report issues.

Document History

VersionDateSummary
2.0January 2026Enhanced interactive design, improved navigation
1.0January 2026Initial publication