Usage Guidelines

Responsible Use Guidance

How to use RALIA effectively and responsibly for AI compliance

Version 2.0 · January 2026 · 8 min read

This document provides guidance on using RALIA responsibly, including its AI-powered features. By using RALIA, you agree to follow these guidelines and our Terms of Service.

1. Understanding RALIA

What RALIA Is

  • AI-powered compliance management platform
  • Tool for navigating AI Act requirements
  • Draft documentation generator
  • Regulatory monitoring service
  • Workflow management system

What RALIA Is Not

  • Not a legal advice provider
  • Not a replacement for professionals
  • Not a compliance guarantee
  • Not an official certification
  • Cannot make binding legal decisions

Your Role as Deployer

Under the EU AI Act, organisations using RALIA are considered "deployers" of an AI system. You have responsibilities for how AI outputs are used within your organisation. RALIA supports you, but ultimate decisions and accountability remain with your organisation.

2. Intended Use

2.1 Approved Use Cases

Compliance Self-Assessment

Internal AI system assessments against regulatory frameworks

Documentation Drafting

Generate draft compliance documents for review

Regulatory Research

Understand requirements and their application

Compliance Monitoring

Track regulatory developments

Team Collaboration

Coordinate compliance activities

2.2 Prohibited Uses

Providing Legal Advice

Using outputs as a substitute for legal counsel

Misrepresenting Compliance

Claiming certification based on RALIA

Automated Decisions

Acting on AI outputs without human review

Malicious Use

Manipulating or exploiting AI systems

Fraudulent Activity

Creating false documentation

3. Working with AI Outputs

3.1 Review Before Use

All AI-generated content should be reviewed before use. Use this checklist:

Review Checklist

  • Check accuracy against official sources
  • Verify applicability to your situation
  • Ensure content is current, not based on outdated guidance
  • Adapt generic guidance to your context
  • Have qualified personnel review critical content

3.2 Understanding Limitations

AI Output Limitations

  • Accuracy: AI may produce errors or hallucinations
  • Completeness: Outputs may not cover all relevant factors
  • Currency: AI training has a knowledge cutoff date
  • Context: AI may not understand your specific circumstances
  • Jurisdiction: Guidance may not account for local variations

3.3 Citation and Attribution

Do:

  • Review AI outputs before presenting them
  • Verify citations to source documents
  • Check original sources for critical information
  • Note when content is AI-assisted

Don't:

  • Present AI outputs as original analysis without review
  • Assume AI summaries capture all details
  • Rely solely on AI citations without verification
  • Skip professional review for important decisions

4. Data Input Guidelines

Appropriate Data

The following data is appropriate to input:

  • Descriptions of your AI systems and use cases
  • Compliance-related questions and scenarios
  • Organisational policies and procedures for review
  • Technical documentation about AI implementations
  • Questions about regulatory requirements

Avoid Inputting

Avoid inputting the following sensitive data (an illustrative redaction sketch follows this list):

  • Personal data of individuals (names, contact details)
  • Special category data (health, biometric, political)
  • Trade secrets or highly confidential information
  • Security credentials, passwords, or API keys
  • Information subject to legal privilege
  • Data you lack authority to process
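
Where teams paste free text into RALIA, a simple pre-submission filter can catch the most obvious personal data before it leaves your environment. The sketch below is illustrative only: the patterns, placeholder labels, and sample text are assumptions, and regex masking does not replace a data-protection review or dedicated DLP tooling (it leaves names, addresses, and IDs untouched, for example).

```python
import re

# Illustrative patterns for masking obvious personal data before text is
# submitted to an AI tool. Deliberately simple and NOT exhaustive: names,
# addresses, and identifiers are not caught, so this does not replace a
# proper data-protection review or dedicated DLP tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Ana Horvat at ana.horvat@example.com or +385 91 234 5678."
    print(redact(sample))
    # Contact Ana Horvat at [REDACTED EMAIL] or [REDACTED PHONE].
```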

Data Quality Matters

The quality of AI outputs depends on the quality of inputs. Provide accurate, complete information; clear, specific questions; relevant context; and corrections when AI makes incorrect assumptions.

5. Security Responsibilities

5.1 Account Security

Security Checklist

  • Use strong, unique passwords
  • Enable two-factor authentication
  • Never share login credentials
  • Report suspected unauthorised access
  • Log out on shared devices

5.2 Data Handling

Do:

  • Store exported documents securely (an encryption sketch follows these lists)
  • Share only with authorised individuals
  • Apply appropriate access controls
  • Delete data when no longer needed

Don't:

  • Store sensitive exports unencrypted
  • Share access broadly without need
  • Leave exports accessible to unauthorised users
  • Retain data beyond retention periods
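
As a concrete illustration of keeping exports encrypted at rest, the sketch below wraps an exported file with symmetric encryption. It assumes the third-party cryptography package and a hypothetical file name; how keys are generated, stored, and rotated is governed by your own security policy, not by this snippet.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_export(path: Path, key: bytes) -> Path:
    """Write an encrypted copy of `path` alongside it and return the new path."""
    token = Fernet(key).encrypt(path.read_bytes())
    out = path.with_name(path.name + ".enc")
    out.write_bytes(token)
    return out

if __name__ == "__main__":
    # Keep the key in a proper secrets manager, never next to the
    # ciphertext; the filename below is a hypothetical example.
    key = Fernet.generate_key()
    encrypted = encrypt_export(Path("assessment_export.pdf"), key)
    print(f"Encrypted copy written to {encrypted}")
```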

5.3 Incident Reporting

Report any security concerns or suspicious activity immediately:

security@risqbase.com

6. Organisational Use

6.1 Team Responsibilities

  1. Ensure team members understand these guidelines
  2. Assign appropriate roles and permissions
  3. Establish internal review processes for AI outputs
  4. Maintain audit trails of compliance decisions (see the record sketch after this list)
  5. Designate personnel responsible for AI governance
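
One lightweight way to keep the audit trail mentioned above is to log every human review of an AI-generated output as a structured record. This is a minimal sketch: the field names and JSON-lines format are assumptions rather than a RALIA schema, so adapt them to what your compliance programme already records.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """Assumed fields for one human review of an AI-generated draft."""
    document_id: str        # internal reference for the AI-generated draft
    reviewer: str           # person accountable for the review
    decision: str           # e.g. "approved", "approved_with_edits", "rejected"
    modifications: str      # summary of changes made to the draft
    sources_verified: bool  # citations checked against official sources?
    reviewed_at: str = ""

    def __post_init__(self) -> None:
        if not self.reviewed_at:
            self.reviewed_at = datetime.now(timezone.utc).isoformat()

def append_record(record: ReviewRecord, log_path: str = "ai_review_log.jsonl") -> None:
    """Append the record as one JSON line; use a more tamper-evident store if you have one."""
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    append_record(ReviewRecord(
        document_id="DPIA-draft-042",
        reviewer="compliance.lead@example.org",
        decision="approved_with_edits",
        modifications="Revised risk classification rationale; added local guidance.",
        sources_verified=True,
    ))
```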

6.2 Integration with Compliance Programmes

RALIA should complement, not replace, your compliance programme. Integrate outputs into existing workflows, document how AI was used, ensure authorised personnel give final approvals, and keep records of modifications to AI-generated content.

6.3 Regulatory Interactions

Do:

  • Be transparent about AI tool usage
  • Explain your human oversight processes
  • Maintain records of human review
  • Document your AI governance approach

Don't:

  • Claim RALIA outputs as certifications
  • Hide AI involvement from regulators
  • Suggest automated compliance without oversight
  • Present AI drafts as final submissions

7. Feedback & Reporting

7.1 What to Report

  • Inaccurate outputs
  • Harmful content
  • Technical errors
  • Security concerns
  • Suspected misuse

7.2 How to Report

Report security concerns directly to security@risqbase.com (see Section 5.3). For all other issues, contact us using the details in Section 10.

8. Compliance with Guidelines

Enforcement

Violation of these guidelines may result in:

  • Warning and requirement to cease prohibited behaviour
  • Temporary suspension of account access
  • Permanent termination of service
  • Reporting to relevant authorities where required by law

Updates to Guidelines

We may update these guidelines to reflect changes in best practices, regulations, or platform capabilities. Material changes will be communicated to users.

9. Additional Resources

10. Contact

Questions about these guidelines or responsible use of RALIA? Contact us:

Postal address:

RisqBase d.o.o.
Zagreb, Croatia

Document History

Version   Date           Summary
2.0       January 2026   Enhanced design with checklists and improved navigation
1.0       January 2026   Initial publication