Frameworks Module

Compliance framework implementations for EU AI Act, SOC2, HIPAA, GDPR, NIST AI RMF, ISO 42001, and MAS FEAT.


Base Types

ComplianceRule

ComplianceRule dataclass

Definition of a single compliance rule within a framework.

Rules represent specific regulatory requirements that AI systems must satisfy. Each rule has an associated check function that evaluates audit entries for compliance.

Attributes:

Name Type Description
rule_id str Unique identifier for this rule within the framework
name str Human-readable name of the rule
description str Detailed description of the requirement
severity RiskLevel Default severity level for violations of this rule
category str Category grouping for the rule
check_fn Optional[Callable[[AuditEntry], bool]] Optional custom check function for specialized validation
remediation str Default remediation guidance for violations
references List[str] External references (regulation sections, standards, etc.)

Source code in src/rotalabs_comply/frameworks/base.py
@dataclass
class ComplianceRule:
    """
    Definition of a single compliance rule within a framework.

    Rules represent specific regulatory requirements that AI systems
    must satisfy. Each rule has an associated check function that
    evaluates audit entries for compliance.

    Attributes:
        rule_id: Unique identifier for this rule within the framework
        name: Human-readable name of the rule
        description: Detailed description of the requirement
        severity: Default severity level for violations of this rule
        category: Category grouping for the rule
        check_fn: Optional custom check function for specialized validation
        remediation: Default remediation guidance for violations
        references: External references (regulation sections, standards, etc.)
    """
    rule_id: str
    name: str
    description: str
    severity: RiskLevel
    category: str
    check_fn: Optional[Callable[[AuditEntry], bool]] = None
    remediation: str = ""
    references: List[str] = field(default_factory=list)

Definition of a single compliance rule within a framework.

Attributes:

Attribute Type Description
rule_id str Unique identifier within framework
name str Human-readable name
description str Detailed requirement description
severity RiskLevel Default severity for violations
category str Category grouping
check_fn Optional[Callable] Custom check function
remediation str Default remediation guidance
references List[str] External references

Example:

from rotalabs_comply.frameworks.base import ComplianceRule, RiskLevel

rule = ComplianceRule(
    rule_id="CUSTOM-001",
    name="Custom Requirement",
    description="Description of what's required",
    severity=RiskLevel.MEDIUM,
    category="custom",
    remediation="How to fix violations",
    references=["Internal Policy 1.2.3"],
)

ComplianceFramework Protocol

ComplianceFramework

Protocol defining the interface for compliance frameworks.

All compliance frameworks must implement this protocol to ensure consistent behavior across different regulatory standards.

Frameworks evaluate audit entries against their rules and produce compliance check results with any violations found.

Source code in src/rotalabs_comply/frameworks/base.py
@runtime_checkable
class ComplianceFramework(Protocol):
    """
    Protocol defining the interface for compliance frameworks.

    All compliance frameworks must implement this protocol to ensure
    consistent behavior across different regulatory standards.

    Frameworks evaluate audit entries against their rules and produce
    compliance check results with any violations found.
    """

    @property
    def name(self) -> str:
        """
        Get the name of this compliance framework.

        Returns:
            Human-readable name (e.g., "EU AI Act", "SOC2 Type II")
        """
        ...

    @property
    def version(self) -> str:
        """
        Get the version of the framework being implemented.

        Returns:
            Version string (e.g., "2024", "2017")
        """
        ...

    @property
    def rules(self) -> List[ComplianceRule]:
        """
        Get all rules defined in this framework.

        Returns:
            List of all compliance rules
        """
        ...

    async def check(
        self, entry: AuditEntry, profile: ComplianceProfile
    ) -> ComplianceCheckResult:
        """
        Check an audit entry for compliance violations.

        Evaluates the entry against all applicable rules based on
        the provided compliance profile.

        Args:
            entry: The audit entry to evaluate
            profile: Configuration profile controlling evaluation

        Returns:
            ComplianceCheckResult containing any violations found
        """
        ...

    def get_rule(self, rule_id: str) -> Optional[ComplianceRule]:
        """
        Get a specific rule by its ID.

        Args:
            rule_id: The unique identifier of the rule

        Returns:
            The ComplianceRule if found, None otherwise
        """
        ...

    def list_categories(self) -> List[str]:
        """
        List all rule categories in this framework.

        Returns:
            List of unique category names
        """
        ...

name property

name: str

Get the name of this compliance framework.

Returns:

Type Description
str Human-readable name (e.g., "EU AI Act", "SOC2 Type II")

version property

version: str

Get the version of the framework being implemented.

Returns:

Type Description
str Version string (e.g., "2024", "2017")

rules property

rules: List[ComplianceRule]

Get all rules defined in this framework.

Returns:

Type Description
List[ComplianceRule] List of all compliance rules

check async

check(
    entry: AuditEntry, profile: ComplianceProfile
) -> ComplianceCheckResult

Check an audit entry for compliance violations.

Evaluates the entry against all applicable rules based on the provided compliance profile.

Parameters:

Name Type Description Default
entry AuditEntry The audit entry to evaluate required
profile ComplianceProfile Configuration profile controlling evaluation required

Returns:

Type Description
ComplianceCheckResult ComplianceCheckResult containing any violations found

Source code in src/rotalabs_comply/frameworks/base.py
async def check(
    self, entry: AuditEntry, profile: ComplianceProfile
) -> ComplianceCheckResult:
    """
    Check an audit entry for compliance violations.

    Evaluates the entry against all applicable rules based on
    the provided compliance profile.

    Args:
        entry: The audit entry to evaluate
        profile: Configuration profile controlling evaluation

    Returns:
        ComplianceCheckResult containing any violations found
    """
    ...

get_rule

get_rule(rule_id: str) -> Optional[ComplianceRule]

Get a specific rule by its ID.

Parameters:

Name Type Description Default
rule_id str The unique identifier of the rule required

Returns:

Type Description
Optional[ComplianceRule] The ComplianceRule if found, None otherwise

Source code in src/rotalabs_comply/frameworks/base.py
def get_rule(self, rule_id: str) -> Optional[ComplianceRule]:
    """
    Get a specific rule by its ID.

    Args:
        rule_id: The unique identifier of the rule

    Returns:
        The ComplianceRule if found, None otherwise
    """
    ...

list_categories

list_categories() -> List[str]

List all rule categories in this framework.

Returns:

Type Description
List[str] List of unique category names

Source code in src/rotalabs_comply/frameworks/base.py
def list_categories(self) -> List[str]:
    """
    List all rule categories in this framework.

    Returns:
        List of unique category names
    """
    ...

Protocol defining the interface for compliance frameworks.

Properties:

Property Type Description
name str Framework name
version str Framework version
rules List[ComplianceRule] All rules

Methods:

Method Signature Description
check async (entry, profile) -> ComplianceCheckResult Check entry
get_rule (rule_id: str) -> Optional[ComplianceRule] Get rule by ID
list_categories () -> List[str] List categories
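
Because ComplianceFramework is declared with @runtime_checkable, conformance can be verified with isinstance at runtime (structural presence of the members only, not their signatures). A minimal sketch, assuming the EUAIActFramework documented later on this page:

from rotalabs_comply.frameworks.base import ComplianceFramework
from rotalabs_comply.frameworks.eu_ai_act import EUAIActFramework

framework = EUAIActFramework()

# runtime_checkable protocols only verify that the attributes exist
assert isinstance(framework, ComplianceFramework)

print(framework.name)                # "EU AI Act"
print(framework.version)             # "2024"
print(framework.list_categories())   # sorted category names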

BaseFramework

BaseFramework

Abstract base class for compliance frameworks.

Provides common functionality for all framework implementations including rule management, category listing, and the main check loop. Subclasses must implement the _check_rule method to define framework-specific validation logic.

Attributes:

Name Type Description
_name str Framework name
_version str Framework version
_rules List[ComplianceRule] List of rules in this framework
_rules_by_id Dict[str, ComplianceRule] Dictionary mapping rule IDs to rules for fast lookup

Source code in src/rotalabs_comply/frameworks/base.py
class BaseFramework(ABC):
    """
    Abstract base class for compliance frameworks.

    Provides common functionality for all framework implementations
    including rule management, category listing, and the main check
    loop. Subclasses must implement the _check_rule method to define
    framework-specific validation logic.

    Attributes:
        _name: Framework name
        _version: Framework version
        _rules: List of rules in this framework
        _rules_by_id: Dictionary mapping rule IDs to rules for fast lookup
    """

    def __init__(self, name: str, version: str, rules: List[ComplianceRule]):
        """
        Initialize the base framework.

        Args:
            name: Human-readable framework name
            version: Framework version string
            rules: List of compliance rules
        """
        self._name = name
        self._version = version
        self._rules = rules
        self._rules_by_id: Dict[str, ComplianceRule] = {
            rule.rule_id: rule for rule in rules
        }

    @property
    def name(self) -> str:
        """Get the framework name."""
        return self._name

    @property
    def version(self) -> str:
        """Get the framework version."""
        return self._version

    @property
    def rules(self) -> List[ComplianceRule]:
        """Get all rules in this framework."""
        return self._rules

    def get_rule(self, rule_id: str) -> Optional[ComplianceRule]:
        """
        Get a specific rule by its ID.

        Args:
            rule_id: The unique identifier of the rule

        Returns:
            The ComplianceRule if found, None otherwise
        """
        return self._rules_by_id.get(rule_id)

    def list_categories(self) -> List[str]:
        """
        List all unique rule categories in this framework.

        Returns:
            Sorted list of unique category names
        """
        categories = set(rule.category for rule in self._rules)
        return sorted(categories)

    async def check(
        self, entry: AuditEntry, profile: ComplianceProfile
    ) -> ComplianceCheckResult:
        """
        Check an audit entry for compliance violations.

        Evaluates the entry against all applicable rules based on
        the provided compliance profile, respecting category filters
        and excluded rules.

        Args:
            entry: The audit entry to evaluate
            profile: Configuration profile controlling evaluation

        Returns:
            ComplianceCheckResult containing any violations found
        """
        violations: List[ComplianceViolation] = []
        rules_checked = 0

        for rule in self._rules:
            # Skip excluded rules
            if rule.rule_id in profile.excluded_rules:
                continue

            # Filter by category if specified
            if profile.enabled_categories and rule.category not in profile.enabled_categories:
                continue

            # Filter by minimum severity
            severity_order = [
                RiskLevel.INFO,
                RiskLevel.LOW,
                RiskLevel.MEDIUM,
                RiskLevel.HIGH,
                RiskLevel.CRITICAL,
            ]
            if severity_order.index(rule.severity) < severity_order.index(profile.min_severity):
                continue

            rules_checked += 1

            # Check the rule
            violation = self._check_rule(entry, rule)
            if violation is not None:
                violations.append(violation)

        return ComplianceCheckResult(
            entry_id=entry.entry_id,
            framework=self._name,
            framework_version=self._version,
            timestamp=datetime.utcnow(),
            violations=violations,
            rules_checked=rules_checked,
            rules_passed=rules_checked - len(violations),
            is_compliant=len(violations) == 0,
        )

    @abstractmethod
    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single rule against an audit entry.

        This method must be implemented by subclasses to define
        framework-specific validation logic.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        ...

name property

name: str

Get the framework name.

version property

version: str

Get the framework version.

rules property

rules: List[ComplianceRule]

Get all rules in this framework.

__init__

__init__(
    name: str, version: str, rules: List[ComplianceRule]
)

Initialize the base framework.

Parameters:

Name Type Description Default
name str Human-readable framework name required
version str Framework version string required
rules List[ComplianceRule] List of compliance rules required
Source code in src/rotalabs_comply/frameworks/base.py
def __init__(self, name: str, version: str, rules: List[ComplianceRule]):
    """
    Initialize the base framework.

    Args:
        name: Human-readable framework name
        version: Framework version string
        rules: List of compliance rules
    """
    self._name = name
    self._version = version
    self._rules = rules
    self._rules_by_id: Dict[str, ComplianceRule] = {
        rule.rule_id: rule for rule in rules
    }

get_rule

get_rule(rule_id: str) -> Optional[ComplianceRule]

Get a specific rule by its ID.

Parameters:

Name Type Description Default
rule_id str The unique identifier of the rule required

Returns:

Type Description
Optional[ComplianceRule] The ComplianceRule if found, None otherwise

Source code in src/rotalabs_comply/frameworks/base.py
def get_rule(self, rule_id: str) -> Optional[ComplianceRule]:
    """
    Get a specific rule by its ID.

    Args:
        rule_id: The unique identifier of the rule

    Returns:
        The ComplianceRule if found, None otherwise
    """
    return self._rules_by_id.get(rule_id)

list_categories

list_categories() -> List[str]

List all unique rule categories in this framework.

Returns:

Type Description
List[str] Sorted list of unique category names

Source code in src/rotalabs_comply/frameworks/base.py
def list_categories(self) -> List[str]:
    """
    List all unique rule categories in this framework.

    Returns:
        Sorted list of unique category names
    """
    categories = set(rule.category for rule in self._rules)
    return sorted(categories)

check async

check(
    entry: AuditEntry, profile: ComplianceProfile
) -> ComplianceCheckResult

Check an audit entry for compliance violations.

Evaluates the entry against all applicable rules based on the provided compliance profile, respecting category filters and excluded rules.

Parameters:

Name Type Description Default
entry AuditEntry The audit entry to evaluate required
profile ComplianceProfile Configuration profile controlling evaluation required

Returns:

Type Description
ComplianceCheckResult ComplianceCheckResult containing any violations found

Source code in src/rotalabs_comply/frameworks/base.py
async def check(
    self, entry: AuditEntry, profile: ComplianceProfile
) -> ComplianceCheckResult:
    """
    Check an audit entry for compliance violations.

    Evaluates the entry against all applicable rules based on
    the provided compliance profile, respecting category filters
    and excluded rules.

    Args:
        entry: The audit entry to evaluate
        profile: Configuration profile controlling evaluation

    Returns:
        ComplianceCheckResult containing any violations found
    """
    violations: List[ComplianceViolation] = []
    rules_checked = 0

    for rule in self._rules:
        # Skip excluded rules
        if rule.rule_id in profile.excluded_rules:
            continue

        # Filter by category if specified
        if profile.enabled_categories and rule.category not in profile.enabled_categories:
            continue

        # Filter by minimum severity
        severity_order = [
            RiskLevel.INFO,
            RiskLevel.LOW,
            RiskLevel.MEDIUM,
            RiskLevel.HIGH,
            RiskLevel.CRITICAL,
        ]
        if severity_order.index(rule.severity) < severity_order.index(profile.min_severity):
            continue

        rules_checked += 1

        # Check the rule
        violation = self._check_rule(entry, rule)
        if violation is not None:
            violations.append(violation)

    return ComplianceCheckResult(
        entry_id=entry.entry_id,
        framework=self._name,
        framework_version=self._version,
        timestamp=datetime.utcnow(),
        violations=violations,
        rules_checked=rules_checked,
        rules_passed=rules_checked - len(violations),
        is_compliant=len(violations) == 0,
    )

Abstract base class for compliance frameworks.

Constructor

BaseFramework(name: str, version: str, rules: List[ComplianceRule])

Abstract Method

Subclasses must implement:

def _check_rule(
    self, entry: AuditEntry, rule: ComplianceRule
) -> Optional[ComplianceViolation]

Example Custom Framework:

from rotalabs_comply.frameworks.base import BaseFramework, ComplianceRule, ComplianceViolation, RiskLevel

class MyFramework(BaseFramework):
    def __init__(self):
        rules = [
            ComplianceRule(
                rule_id="MY-001",
                name="My Rule",
                description="Description",
                severity=RiskLevel.MEDIUM,
                category="custom",
            ),
        ]
        super().__init__("My Framework", "1.0", rules)

    def _check_rule(self, entry, rule):
        if rule.rule_id == "MY-001":
            if not entry.metadata.get("my_field"):
                # BaseFramework does not define a violation helper, so build the
                # ComplianceViolation directly from the rule and entry
                return ComplianceViolation(
                    rule_id=rule.rule_id,
                    rule_name=rule.name,
                    severity=rule.severity,
                    description=rule.description,
                    evidence="my_field missing",
                    remediation=rule.remediation,
                    entry_id=entry.entry_id,
                    category=rule.category,
                    framework=self.name,
                )
        return None
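
To exercise the custom framework end to end, await its inherited check method with an AuditEntry and a ComplianceProfile (both documented below). A minimal sketch with illustrative field values:

import asyncio
from datetime import datetime

from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile

async def main():
    framework = MyFramework()
    entry = AuditEntry(
        entry_id="demo-001",
        timestamp=datetime.utcnow(),
        event_type="inference",
        actor="svc-account",
        action="generate summary",
        # metadata has no "my_field", so MY-001 should be reported
    )
    profile = ComplianceProfile(profile_id="demo", name="Demo Profile")
    result = await framework.check(entry, profile)
    print(result.is_compliant)                     # False
    print([v.rule_id for v in result.violations])  # ["MY-001"]

asyncio.run(main())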

AuditEntry (Frameworks)

AuditEntry dataclass

Represents a single audit log entry for an AI system interaction.

Audit entries capture the essential metadata about AI system operations that compliance frameworks need to evaluate against regulatory requirements.

Attributes:

Name Type Description
entry_id str Unique identifier for this audit entry
timestamp datetime When the event occurred
event_type str Type of event (e.g., "inference", "training", "data_access")
actor str Identifier for the user, system, or agent that triggered the event
action str Description of the action taken
resource str The resource being accessed or modified
metadata Dict[str, Any] Additional context-specific information about the event
risk_level RiskLevel Assessed risk level of this operation
system_id str Identifier for the AI system involved
data_classification str Classification of data involved (e.g., "PII", "PHI", "public")
user_notified bool Whether the user was notified about AI involvement
human_oversight bool Whether human oversight was present
error_handled bool Whether errors were handled gracefully
documentation_ref Optional[str] Reference to related technical documentation

Source code in src/rotalabs_comply/frameworks/base.py
@dataclass
class AuditEntry:
    """
    Represents a single audit log entry for an AI system interaction.

    Audit entries capture the essential metadata about AI system operations
    that compliance frameworks need to evaluate against regulatory requirements.

    Attributes:
        entry_id: Unique identifier for this audit entry
        timestamp: When the event occurred
        event_type: Type of event (e.g., "inference", "training", "data_access")
        actor: Identifier for the user, system, or agent that triggered the event
        action: Description of the action taken
        resource: The resource being accessed or modified
        metadata: Additional context-specific information about the event
        risk_level: Assessed risk level of this operation
        system_id: Identifier for the AI system involved
        data_classification: Classification of data involved (e.g., "PII", "PHI", "public")
        user_notified: Whether the user was notified about AI involvement
        human_oversight: Whether human oversight was present
        error_handled: Whether errors were handled gracefully
        documentation_ref: Reference to related technical documentation
    """
    entry_id: str
    timestamp: datetime
    event_type: str
    actor: str
    action: str
    resource: str = ""
    metadata: Dict[str, Any] = field(default_factory=dict)
    risk_level: RiskLevel = RiskLevel.LOW
    system_id: str = ""
    data_classification: str = "unclassified"
    user_notified: bool = False
    human_oversight: bool = False
    error_handled: bool = True
    documentation_ref: Optional[str] = None

Audit entry structure used by frameworks for compliance checking.

Attributes:

Attribute Type Default Description
entry_id str Required Unique identifier
timestamp datetime Required Event time
event_type str Required Type of event
actor str Required Who triggered event
action str Required Action description
resource str "" Resource accessed
metadata Dict[str, Any] {} Additional context
risk_level RiskLevel LOW Risk classification
system_id str "" AI system identifier
data_classification str "unclassified" Data sensitivity
user_notified bool False User knows about AI
human_oversight bool False Human oversight present
error_handled bool True Errors handled gracefully
documentation_ref Optional[str] None Documentation reference
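
Only the first five fields are required; the rest default as shown above. A minimal, illustrative construction:

from datetime import datetime
from rotalabs_comply.frameworks.base import AuditEntry, RiskLevel

entry = AuditEntry(
    entry_id="evt-0001",
    timestamp=datetime.utcnow(),
    event_type="inference",
    actor="chatbot-service",
    action="Generated customer response",
    risk_level=RiskLevel.HIGH,
    data_classification="PII",
    user_notified=True,
    human_oversight=True,
    metadata={"risk_assessment_documented": True, "accuracy_monitored": True},
)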

ComplianceProfile (Frameworks)

ComplianceProfile dataclass

Configuration profile for compliance evaluation.

Profiles define which rules to apply, severity thresholds, and system-specific compliance requirements.

Attributes:

Name Type Description
profile_id str Unique identifier for this profile
name str Human-readable profile name
description str Detailed description of the profile's purpose
enabled_frameworks List[str] List of framework names to evaluate against
enabled_categories List[str] Categories of rules to check (empty = all)
min_severity RiskLevel Minimum severity level to report
system_classification str Classification of the AI system being evaluated
custom_rules List[str] Additional custom rule IDs to include
excluded_rules List[str] Rule IDs to exclude from evaluation
metadata Dict[str, Any] Additional profile configuration

Source code in src/rotalabs_comply/frameworks/base.py
@dataclass
class ComplianceProfile:
    """
    Configuration profile for compliance evaluation.

    Profiles define which rules to apply, severity thresholds, and
    system-specific compliance requirements.

    Attributes:
        profile_id: Unique identifier for this profile
        name: Human-readable profile name
        description: Detailed description of the profile's purpose
        enabled_frameworks: List of framework names to evaluate against
        enabled_categories: Categories of rules to check (empty = all)
        min_severity: Minimum severity level to report
        system_classification: Classification of the AI system being evaluated
        custom_rules: Additional custom rule IDs to include
        excluded_rules: Rule IDs to exclude from evaluation
        metadata: Additional profile configuration
    """
    profile_id: str
    name: str
    description: str = ""
    enabled_frameworks: List[str] = field(default_factory=list)
    enabled_categories: List[str] = field(default_factory=list)
    min_severity: RiskLevel = RiskLevel.LOW
    system_classification: str = "standard"
    custom_rules: List[str] = field(default_factory=list)
    excluded_rules: List[str] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)

Configuration profile for compliance evaluation.

Attributes:

Attribute Type Default Description
profile_id str Required Unique identifier
name str Required Profile name
description str "" Profile description
enabled_frameworks List[str] [] Frameworks to evaluate
enabled_categories List[str] [] Categories to check
min_severity RiskLevel LOW Minimum severity to report
system_classification str "standard" System classification
custom_rules List[str] [] Additional rule IDs
excluded_rules List[str] [] Rules to skip
metadata Dict[str, Any] {} Additional config
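
For example, a profile can restrict evaluation to particular categories, raise the reporting threshold, and skip individual rules. A minimal sketch with illustrative values:

from rotalabs_comply.frameworks.base import ComplianceProfile, RiskLevel

profile = ComplianceProfile(
    profile_id="prod-eu",
    name="Production EU Profile",
    description="High-severity checks for the production assistant",
    enabled_frameworks=["EU AI Act"],
    enabled_categories=["transparency", "oversight"],  # empty list means all categories
    min_severity=RiskLevel.HIGH,   # rules below HIGH severity are skipped
    excluded_rules=["EUAI-004"],   # skip this rule entirely
)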

ComplianceViolation (Frameworks)

ComplianceViolation dataclass

Represents a single compliance violation detected during evaluation.

Violations are the output of rule checks that identify non-compliance with regulatory requirements.

Attributes:

Name Type Description
rule_id str ID of the rule that was violated
rule_name str Human-readable name of the violated rule
severity RiskLevel Severity level of the violation
description str Detailed description of what was violated
evidence str Specific evidence from the audit entry
remediation str Suggested steps to remediate the violation
entry_id str ID of the audit entry that triggered this violation
category str Category of the violated rule
framework str Name of the framework containing the rule

Source code in src/rotalabs_comply/frameworks/base.py
@dataclass
class ComplianceViolation:
    """
    Represents a single compliance violation detected during evaluation.

    Violations are the output of rule checks that identify non-compliance
    with regulatory requirements.

    Attributes:
        rule_id: ID of the rule that was violated
        rule_name: Human-readable name of the violated rule
        severity: Severity level of the violation
        description: Detailed description of what was violated
        evidence: Specific evidence from the audit entry
        remediation: Suggested steps to remediate the violation
        entry_id: ID of the audit entry that triggered this violation
        category: Category of the violated rule
        framework: Name of the framework containing the rule
    """
    rule_id: str
    rule_name: str
    severity: RiskLevel
    description: str
    evidence: str
    remediation: str
    entry_id: str
    category: str
    framework: str

A compliance violation detected during evaluation.

Attributes:

Attribute Type Description
rule_id str Violated rule ID
rule_name str Rule name
severity RiskLevel Violation severity
description str Rule description
evidence str Specific evidence
remediation str How to fix
entry_id str Entry that triggered
category str Rule category
framework str Framework name

ComplianceCheckResult (Frameworks)

ComplianceCheckResult dataclass

Result of a compliance check against an audit entry.

Contains all violations found, along with summary statistics about the compliance evaluation.

Attributes:

Name Type Description
entry_id str ID of the audit entry that was checked
framework str Name of the framework used for evaluation
framework_version str Version of the framework
timestamp datetime When the check was performed
violations List[ComplianceViolation] List of all violations found
rules_checked int Total number of rules evaluated
rules_passed int Number of rules that passed
is_compliant bool Whether the entry is fully compliant (no violations)
metadata Dict[str, Any] Additional check result metadata

Source code in src/rotalabs_comply/frameworks/base.py
@dataclass
class ComplianceCheckResult:
    """
    Result of a compliance check against an audit entry.

    Contains all violations found, along with summary statistics
    about the compliance evaluation.

    Attributes:
        entry_id: ID of the audit entry that was checked
        framework: Name of the framework used for evaluation
        framework_version: Version of the framework
        timestamp: When the check was performed
        violations: List of all violations found
        rules_checked: Total number of rules evaluated
        rules_passed: Number of rules that passed
        is_compliant: Whether the entry is fully compliant (no violations)
        metadata: Additional check result metadata
    """
    entry_id: str
    framework: str
    framework_version: str
    timestamp: datetime
    violations: List[ComplianceViolation] = field(default_factory=list)
    rules_checked: int = 0
    rules_passed: int = 0
    is_compliant: bool = True
    metadata: Dict[str, Any] = field(default_factory=dict)

    def __post_init__(self):
        """Update is_compliant based on violations."""
        self.is_compliant = len(self.violations) == 0

__post_init__

__post_init__()

Update is_compliant based on violations.

Source code in src/rotalabs_comply/frameworks/base.py
def __post_init__(self):
    """Update is_compliant based on violations."""
    self.is_compliant = len(self.violations) == 0

Result of a compliance check against an audit entry.

Attributes:

Attribute Type Description
entry_id str Checked entry ID
framework str Framework name
framework_version str Framework version
timestamp datetime Check time
violations List[ComplianceViolation] Violations found
rules_checked int Total rules evaluated
rules_passed int Rules that passed
is_compliant bool No violations found
metadata Dict[str, Any] Additional data
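
A typical consumer branches on is_compliant and reports each violation with its remediation guidance. A minimal sketch:

def summarize(result) -> None:
    # result is a ComplianceCheckResult returned by a framework's check()
    print(
        f"{result.framework} {result.framework_version}: "
        f"{result.rules_passed}/{result.rules_checked} rules passed"
    )
    for v in result.violations:
        print(f"  [{v.severity.value}] {v.rule_id} {v.rule_name}: {v.evidence}")
        print(f"    Remediation: {v.remediation}")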

EU AI Act Framework

EUAIActFramework

EU AI Act compliance framework.

Implements compliance checks based on the EU AI Act (2024) requirements for high-risk AI systems. The framework evaluates audit entries against the Act's requirements for transparency, human oversight, risk management, documentation, and security.

The EU AI Act classifies AI systems into risk categories:

- Unacceptable risk: Prohibited systems
- High-risk: Systems subject to strict requirements (this framework's focus)
- Limited risk: Systems with transparency obligations
- Minimal risk: Most AI systems with few requirements

This implementation focuses on high-risk system requirements as they represent the most comprehensive compliance obligations.

Example

>>> framework = EUAIActFramework()
>>> result = await framework.check(entry, profile)
>>> if not result.is_compliant:
...     for violation in result.violations:
...         print(f"{violation.rule_id}: {violation.description}")

Source code in src/rotalabs_comply/frameworks/eu_ai_act.py
class EUAIActFramework(BaseFramework):
    """
    EU AI Act compliance framework.

    Implements compliance checks based on the EU AI Act (2024) requirements
    for high-risk AI systems. The framework evaluates audit entries against
    the Act's requirements for transparency, human oversight, risk management,
    documentation, and security.

    The EU AI Act classifies AI systems into risk categories:
    - Unacceptable risk: Prohibited systems
    - High-risk: Systems subject to strict requirements (this framework's focus)
    - Limited risk: Systems with transparency obligations
    - Minimal risk: Most AI systems with few requirements

    This implementation focuses on high-risk system requirements as they
    represent the most comprehensive compliance obligations.

    Example:
        >>> framework = EUAIActFramework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    def __init__(self):
        """Initialize the EU AI Act framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="EU AI Act", version="2024", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all EU AI Act compliance rules.

        Returns:
            List of ComplianceRule objects representing EU AI Act requirements
        """
        return [
            ComplianceRule(
                rule_id="EUAI-001",
                name="Human Oversight Documentation",
                description=(
                    "High-risk AI systems shall be designed and developed in such a way "
                    "that they can be effectively overseen by natural persons during the "
                    "period in which they are in use. Human oversight shall aim to prevent "
                    "or minimise the risks to health, safety or fundamental rights that may "
                    "emerge when a high-risk AI system is used in accordance with its "
                    "intended purpose or under conditions of reasonably foreseeable misuse. "
                    "(Article 14)"
                ),
                severity=RiskLevel.HIGH,
                category="oversight",
                remediation=(
                    "Ensure human oversight mechanisms are in place and documented. "
                    "Implement 'human-in-the-loop', 'human-on-the-loop', or "
                    "'human-in-command' approaches as appropriate for the risk level."
                ),
                references=["EU AI Act Article 14", "Annex IV point 3"],
            ),
            ComplianceRule(
                rule_id="EUAI-002",
                name="Transparency - AI Interaction Notification",
                description=(
                    "Providers shall ensure that AI systems intended to interact directly "
                    "with natural persons are designed and developed in such a way that "
                    "the natural persons concerned are informed that they are interacting "
                    "with an AI system, unless this is obvious from the circumstances and "
                    "the context of use. (Article 50)"
                ),
                severity=RiskLevel.HIGH,
                category="transparency",
                remediation=(
                    "Implement clear notification mechanisms to inform users when they "
                    "are interacting with an AI system. This notification should be "
                    "provided before or at the start of the interaction."
                ),
                references=["EU AI Act Article 50(1)"],
            ),
            ComplianceRule(
                rule_id="EUAI-003",
                name="Risk Assessment for High-Risk Systems",
                description=(
                    "High-risk AI systems shall be subject to a risk management system "
                    "consisting of a continuous iterative process planned and run "
                    "throughout the entire lifecycle of a high-risk AI system, requiring "
                    "regular systematic updating. It shall include identification, "
                    "estimation, and evaluation of risks. (Article 9)"
                ),
                severity=RiskLevel.CRITICAL,
                category="risk_management",
                remediation=(
                    "Implement a comprehensive risk management system that identifies, "
                    "analyzes, estimates, and evaluates risks throughout the AI system's "
                    "lifecycle. Document all risk assessments and mitigation measures."
                ),
                references=["EU AI Act Article 9", "Annex IV point 2"],
            ),
            ComplianceRule(
                rule_id="EUAI-004",
                name="Technical Documentation Maintenance",
                description=(
                    "The technical documentation of a high-risk AI system shall be drawn "
                    "up before that system is placed on the market or put into service "
                    "and shall be kept up to date. Technical documentation shall contain "
                    "at minimum the elements set out in Annex IV. (Article 11)"
                ),
                severity=RiskLevel.HIGH,
                category="documentation",
                remediation=(
                    "Maintain comprehensive technical documentation including: general "
                    "description, detailed description of elements, development process, "
                    "monitoring and functioning information, and description of "
                    "appropriate human oversight measures."
                ),
                references=["EU AI Act Article 11", "Annex IV"],
            ),
            ComplianceRule(
                rule_id="EUAI-005",
                name="Data Governance - Training Data Documentation",
                description=(
                    "High-risk AI systems which make use of techniques involving the "
                    "training of AI models with data shall be developed on the basis of "
                    "training, validation and testing data sets that meet quality criteria. "
                    "Training data must be documented regarding data collection, "
                    "preparation, and assumptions. (Article 10)"
                ),
                severity=RiskLevel.HIGH,
                category="documentation",
                remediation=(
                    "Document all training, validation, and testing datasets including: "
                    "data collection processes, data preparation operations (annotation, "
                    "labeling, cleaning), relevant assumptions, prior assessment of "
                    "availability, quantity and suitability of datasets, and examination "
                    "of possible biases."
                ),
                references=["EU AI Act Article 10", "Annex IV point 2(d)"],
            ),
            ComplianceRule(
                rule_id="EUAI-006",
                name="Robustness - Error Handling",
                description=(
                    "High-risk AI systems shall be designed and developed in such a way "
                    "that they achieve an appropriate level of robustness and that they "
                    "can handle errors or inconsistencies during all lifecycle phases, "
                    "including interaction with other systems. (Article 15)"
                ),
                severity=RiskLevel.MEDIUM,
                category="risk_management",
                remediation=(
                    "Implement robust error handling mechanisms including: graceful "
                    "degradation, fallback procedures, and appropriate logging. Systems "
                    "should continue to operate safely even when errors occur."
                ),
                references=["EU AI Act Article 15(1)(2)"],
            ),
            ComplianceRule(
                rule_id="EUAI-007",
                name="Accuracy Monitoring",
                description=(
                    "High-risk AI systems shall be designed and developed in such a way "
                    "that they achieve an appropriate level of accuracy, robustness and "
                    "cybersecurity. Accuracy levels shall be specified in the accompanying "
                    "instructions of use and monitored throughout the system's lifecycle. "
                    "(Article 15)"
                ),
                severity=RiskLevel.MEDIUM,
                category="risk_management",
                remediation=(
                    "Implement accuracy monitoring systems that track system performance "
                    "over time. Document accuracy metrics in technical documentation and "
                    "instructions for use. Establish thresholds for acceptable accuracy."
                ),
                references=["EU AI Act Article 15(1)", "Annex IV point 2(g)"],
            ),
            ComplianceRule(
                rule_id="EUAI-008",
                name="Cybersecurity Measures",
                description=(
                    "High-risk AI systems shall be designed and developed in such a way "
                    "that they achieve an appropriate level of cybersecurity. The AI "
                    "system shall be resilient against attempts by unauthorized third "
                    "parties to alter its use, outputs or performance by exploiting "
                    "system vulnerabilities. (Article 15)"
                ),
                severity=RiskLevel.HIGH,
                category="security",
                remediation=(
                    "Implement comprehensive cybersecurity measures including: access "
                    "controls, input validation, adversarial robustness testing, and "
                    "regular security assessments. Document security measures in "
                    "technical documentation."
                ),
                references=["EU AI Act Article 15(4)(5)"],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single EU AI Act rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "EUAI-001":
            return self._check_human_oversight(entry, rule)
        elif rule.rule_id == "EUAI-002":
            return self._check_transparency(entry, rule)
        elif rule.rule_id == "EUAI-003":
            return self._check_risk_assessment(entry, rule)
        elif rule.rule_id == "EUAI-004":
            return self._check_technical_documentation(entry, rule)
        elif rule.rule_id == "EUAI-005":
            return self._check_data_governance(entry, rule)
        elif rule.rule_id == "EUAI-006":
            return self._check_robustness(entry, rule)
        elif rule.rule_id == "EUAI-007":
            return self._check_accuracy_monitoring(entry, rule)
        elif rule.rule_id == "EUAI-008":
            return self._check_cybersecurity(entry, rule)

        return None

    def _check_human_oversight(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-001: Human oversight documentation required for high-risk.

        High-risk AI operations must have human oversight documented.
        This is evaluated based on the risk_level and human_oversight flags.
        """
        # Only applies to high-risk operations
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        if not entry.human_oversight:
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented human oversight",
            )
        return None

    def _check_transparency(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-002: Users must know they're interacting with AI.

        User-facing interactions must include AI disclosure notification.
        """
        # Check if this is a user-facing interaction
        user_facing_events = {"inference", "chat", "completion", "interaction", "response"}
        if entry.event_type.lower() not in user_facing_events:
            return None

        if not entry.user_notified:
            return self._create_violation(
                entry,
                rule,
                f"User-facing AI interaction (type={entry.event_type}) performed "
                f"without notifying user of AI involvement",
            )
        return None

    def _check_risk_assessment(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-003: Risk assessment required for high-risk systems.

        High-risk operations must have risk assessment documentation.
        """
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        # Check for risk assessment documentation in metadata
        has_risk_assessment = entry.metadata.get("risk_assessment_documented", False)
        if not has_risk_assessment:
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented risk assessment",
            )
        return None

    def _check_technical_documentation(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-004: Technical documentation must be maintained.

        All operations should reference technical documentation.
        """
        # Only check for significant operations
        significant_events = {"deployment", "training", "fine_tuning", "model_update"}
        if entry.event_type.lower() not in significant_events:
            return None

        if not entry.documentation_ref:
            return self._create_violation(
                entry,
                rule,
                f"Significant operation (type={entry.event_type}) performed "
                f"without reference to technical documentation",
            )
        return None

    def _check_data_governance(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-005: Training data must be documented.

        Training-related operations must document data governance.
        """
        training_events = {"training", "fine_tuning", "data_preparation", "data_ingestion"}
        if entry.event_type.lower() not in training_events:
            return None

        has_data_governance = entry.metadata.get("data_governance_documented", False)
        if not has_data_governance:
            return self._create_violation(
                entry,
                rule,
                f"Training operation (type={entry.event_type}) performed "
                f"without documented data governance",
            )
        return None

    def _check_robustness(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-006: System must handle errors gracefully.

        Operations should demonstrate proper error handling.
        """
        if not entry.error_handled:
            return self._create_violation(
                entry,
                rule,
                f"Operation (type={entry.event_type}) indicates error was not "
                f"handled gracefully",
            )
        return None

    def _check_accuracy_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-007: Accuracy monitoring required.

        Inference operations should include accuracy monitoring metadata.
        """
        inference_events = {"inference", "prediction", "completion"}
        if entry.event_type.lower() not in inference_events:
            return None

        has_accuracy_monitoring = entry.metadata.get("accuracy_monitored", False)
        if not has_accuracy_monitoring:
            return self._create_violation(
                entry,
                rule,
                f"Inference operation (type={entry.event_type}) performed "
                f"without accuracy monitoring",
            )
        return None

    def _check_cybersecurity(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check EUAI-008: Cybersecurity measures required.

        Check for security-related metadata on operations.
        """
        # Only check for operations that could have security implications
        security_relevant_events = {
            "inference", "data_access", "model_access", "api_call",
            "authentication", "data_export"
        }
        if entry.event_type.lower() not in security_relevant_events:
            return None

        # Check for security metadata
        has_security_check = entry.metadata.get("security_validated", False)
        has_access_control = entry.metadata.get("access_controlled", False)

        if not (has_security_check or has_access_control):
            return self._create_violation(
                entry,
                rule,
                f"Security-relevant operation (type={entry.event_type}) performed "
                f"without documented cybersecurity validation",
            )
        return None

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the EU AI Act framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/eu_ai_act.py
def __init__(self):
    """Initialize the EU AI Act framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="EU AI Act", version="2024", rules=rules)

EU AI Act (2024) compliance framework.

Categories

Category Description
transparency User notification requirements
oversight Human oversight requirements
risk_management Risk assessment and handling
documentation Technical documentation
security Cybersecurity measures

Rules

Rule ID Name Severity Category
EUAI-001 Human Oversight Documentation HIGH oversight
EUAI-002 AI Interaction Notification HIGH transparency
EUAI-003 Risk Assessment CRITICAL risk_management
EUAI-004 Technical Documentation HIGH documentation
EUAI-005 Data Governance HIGH documentation
EUAI-006 Error Handling MEDIUM risk_management
EUAI-007 Accuracy Monitoring MEDIUM risk_management
EUAI-008 Cybersecurity Measures HIGH security

Usage

from rotalabs_comply.frameworks.eu_ai_act import EUAIActFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel
from datetime import datetime

framework = EUAIActFramework()

entry = AuditEntry(
    entry_id="test-001",
    timestamp=datetime.utcnow(),
    event_type="inference",
    actor="user@example.com",
    action="AI response",
    risk_level=RiskLevel.HIGH,
    user_notified=True,
    human_oversight=True,
    metadata={"risk_assessment_documented": True},
)

profile = ComplianceProfile(
    profile_id="eu-ai",
    name="EU AI Compliance",
)

result = await framework.check(entry, profile)

Key Requirements

High-risk operations require:

- human_oversight=True
- metadata["risk_assessment_documented"]=True

User-facing interactions require:

- user_notified=True

Inference events require:

- metadata["accuracy_monitored"]=True
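
Omitting those flags on a high-risk, user-facing inference entry produces the corresponding violations. A minimal sketch, reusing the framework and profile from the Usage example above:

non_compliant = AuditEntry(
    entry_id="test-002",
    timestamp=datetime.utcnow(),
    event_type="inference",
    actor="user@example.com",
    action="AI response",
    risk_level=RiskLevel.HIGH,
    # user_notified and human_oversight default to False; no compliance metadata
)

result = await framework.check(non_compliant, profile)
print(sorted(v.rule_id for v in result.violations))
# Expected: EUAI-001, EUAI-002, EUAI-003, EUAI-007, EUAI-008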


SOC2 Framework

SOC2Framework

SOC2 Type II compliance framework.

Implements compliance checks based on the AICPA Trust Service Criteria for SOC2 Type II reporting. This framework evaluates audit entries against the five trust service principles: Security, Availability, Processing Integrity, Confidentiality, and Privacy.

SOC2 Type II reports assess both the design and operating effectiveness of controls over a specified period. This implementation focuses on controls relevant to AI systems and their operational characteristics.

Trust Service Categories:

- CC (Common Criteria): Security-related controls
- A: Availability controls
- PI: Processing Integrity controls
- C: Confidentiality controls
- P: Privacy controls

Example

>>> framework = SOC2Framework()
>>> result = await framework.check(entry, profile)
>>> if not result.is_compliant:
...     for violation in result.violations:
...         print(f"{violation.rule_id}: {violation.description}")

Source code in src/rotalabs_comply/frameworks/soc2.py
class SOC2Framework(BaseFramework):
    """
    SOC2 Type II compliance framework.

    Implements compliance checks based on the AICPA Trust Service Criteria
    for SOC2 Type II reporting. This framework evaluates audit entries against
    the five trust service principles: Security, Availability, Processing
    Integrity, Confidentiality, and Privacy.

    SOC2 Type II reports assess both the design and operating effectiveness
    of controls over a specified period. This implementation focuses on
    controls relevant to AI systems and their operational characteristics.

    Trust Service Categories:
    - CC (Common Criteria): Security-related controls
    - A: Availability controls
    - PI: Processing Integrity controls
    - C: Confidentiality controls
    - P: Privacy controls

    Example:
        >>> framework = SOC2Framework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    def __init__(self):
        """Initialize the SOC2 Type II framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="SOC2 Type II", version="2017", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all SOC2 Type II compliance rules.

        Returns:
            List of ComplianceRule objects representing SOC2 Trust Service Criteria
        """
        return [
            # Security (Common Criteria)
            ComplianceRule(
                rule_id="SOC2-CC6.1",
                name="Logical Access Controls",
                description=(
                    "The entity implements logical access security software, "
                    "infrastructure, and architectures over protected information "
                    "assets to protect them from security events to meet the entity's "
                    "objectives. Logical access security measures restrict access to "
                    "information resources based on the user's identity, role, or other "
                    "criteria, and are designed to permit access only to authorized users."
                ),
                severity=RiskLevel.HIGH,
                category="security",
                remediation=(
                    "Implement role-based access control (RBAC) or attribute-based "
                    "access control (ABAC). Ensure all access to AI systems and data "
                    "is authenticated and authorized. Log all access attempts."
                ),
                references=[
                    "AICPA TSC CC6.1",
                    "NIST SP 800-53 AC-2, AC-3",
                ],
            ),
            ComplianceRule(
                rule_id="SOC2-CC6.2",
                name="System Boundary Definition",
                description=(
                    "Prior to issuing system credentials and granting system access, "
                    "the entity registers and authorizes new internal and external "
                    "users whose access is administered by the entity. For those users "
                    "whose access is administered by the entity, user system credentials "
                    "are removed when user access is no longer authorized."
                ),
                severity=RiskLevel.MEDIUM,
                category="security",
                remediation=(
                    "Maintain a clear inventory of system boundaries and authorized "
                    "users. Implement user provisioning and deprovisioning processes. "
                    "Conduct regular access reviews to ensure only authorized users "
                    "have access."
                ),
                references=[
                    "AICPA TSC CC6.2",
                    "NIST SP 800-53 AC-2",
                ],
            ),
            ComplianceRule(
                rule_id="SOC2-CC6.3",
                name="Change Management",
                description=(
                    "The entity authorizes, designs, develops or acquires, configures, "
                    "documents, tests, approves, and implements changes to "
                    "infrastructure, data, software, and procedures to meet its "
                    "objectives. Changes are authorized, documented, tested, and "
                    "approved before implementation."
                ),
                severity=RiskLevel.MEDIUM,
                category="security",
                remediation=(
                    "Establish formal change management procedures for AI systems. "
                    "Document all changes to models, configurations, and infrastructure. "
                    "Require approval before production deployment. Test changes in "
                    "non-production environments first."
                ),
                references=[
                    "AICPA TSC CC6.3",
                    "NIST SP 800-53 CM-3",
                ],
            ),
            ComplianceRule(
                rule_id="SOC2-CC7.1",
                name="System Monitoring",
                description=(
                    "To meet its objectives, the entity uses detection and monitoring "
                    "procedures to identify (1) changes to configurations that result "
                    "in the introduction of new vulnerabilities, and (2) susceptibilities "
                    "to newly discovered vulnerabilities. The entity monitors system "
                    "components for anomalies and investigates identified anomalies."
                ),
                severity=RiskLevel.HIGH,
                category="security",
                remediation=(
                    "Implement comprehensive monitoring for AI systems including: "
                    "performance metrics, error rates, drift detection, and security "
                    "events. Establish alerting thresholds and response procedures. "
                    "Review logs regularly for anomalies."
                ),
                references=[
                    "AICPA TSC CC7.1",
                    "NIST SP 800-53 AU-6, SI-4",
                ],
            ),
            ComplianceRule(
                rule_id="SOC2-CC7.2",
                name="Incident Response",
                description=(
                    "The entity monitors system components and the operation of those "
                    "components for anomalies that are indicative of malicious acts, "
                    "natural disasters, and errors affecting the entity's ability to "
                    "meet its objectives; anomalies are analyzed to determine whether "
                    "they represent security events."
                ),
                severity=RiskLevel.HIGH,
                category="security",
                remediation=(
                    "Establish an incident response plan specific to AI systems. "
                    "Define procedures for detecting, analyzing, containing, eradicating, "
                    "and recovering from incidents. Include procedures for model "
                    "rollback and bias/fairness incidents."
                ),
                references=[
                    "AICPA TSC CC7.2",
                    "NIST SP 800-53 IR-4, IR-5",
                ],
            ),

            # Availability
            ComplianceRule(
                rule_id="SOC2-CC8.1",
                name="Availability Monitoring",
                description=(
                    "The entity authorizes, designs, develops or acquires, implements, "
                    "operates, approves, maintains, and monitors environmental "
                    "protections, software, data backup processes, and recovery "
                    "infrastructure to meet its objectives. System availability is "
                    "monitored against service level commitments."
                ),
                severity=RiskLevel.MEDIUM,
                category="availability",
                remediation=(
                    "Implement availability monitoring for all AI system components. "
                    "Define and monitor SLAs for inference latency, throughput, and "
                    "uptime. Establish alerting for availability degradation. "
                    "Maintain redundancy for critical components."
                ),
                references=[
                    "AICPA TSC CC8.1",
                    "NIST SP 800-53 CP-2, CP-7",
                ],
            ),
            ComplianceRule(
                rule_id="SOC2-A1.1",
                name="Recovery Objectives Defined",
                description=(
                    "The entity maintains, monitors, and evaluates current processing "
                    "capacity and use of system components (infrastructure, data, and "
                    "software) to manage capacity demand and to enable the implementation "
                    "of additional capacity to help meet its objectives. Recovery time "
                    "objectives (RTO) and recovery point objectives (RPO) are defined."
                ),
                severity=RiskLevel.MEDIUM,
                category="availability",
                remediation=(
                    "Define and document RTO and RPO for AI systems. Implement backup "
                    "procedures for models, configurations, and data. Test recovery "
                    "procedures regularly. Ensure capacity planning considers peak loads."
                ),
                references=[
                    "AICPA TSC A1.1",
                    "NIST SP 800-53 CP-9, CP-10",
                ],
            ),

            # Processing Integrity
            ComplianceRule(
                rule_id="SOC2-PI1.1",
                name="Processing Integrity Validation",
                description=(
                    "The entity implements policies and procedures over system inputs "
                    "including controls over input processes that help ensure "
                    "completeness, accuracy, timeliness, and authorization of system "
                    "inputs. Processing integrity refers to the completeness, validity, "
                    "accuracy, timeliness, and authorization of system processing."
                ),
                severity=RiskLevel.MEDIUM,
                category="processing_integrity",
                remediation=(
                    "Implement input validation for all AI system inputs. Validate "
                    "data formats, ranges, and consistency. Log all inputs with "
                    "timestamps. Implement data quality checks and monitoring for "
                    "data drift."
                ),
                references=[
                    "AICPA TSC PI1.1",
                    "NIST SP 800-53 SI-10",
                ],
            ),

            # Confidentiality
            ComplianceRule(
                rule_id="SOC2-C1.1",
                name="Confidentiality Classification",
                description=(
                    "The entity identifies and maintains confidential information to "
                    "meet the entity's objectives related to confidentiality. "
                    "Information is classified by the entity according to its "
                    "sensitivity and is protected accordingly. Confidential information "
                    "is identified based on regulatory requirements, contractual "
                    "commitments, and business needs."
                ),
                severity=RiskLevel.HIGH,
                category="confidentiality",
                remediation=(
                    "Implement data classification for all data processed by AI systems. "
                    "Label data according to sensitivity levels (public, internal, "
                    "confidential, restricted). Apply appropriate protection measures "
                    "based on classification. Document handling procedures."
                ),
                references=[
                    "AICPA TSC C1.1",
                    "NIST SP 800-53 RA-2",
                ],
            ),

            # Privacy
            ComplianceRule(
                rule_id="SOC2-P1.1",
                name="Privacy Notice Provided",
                description=(
                    "The entity provides notice to data subjects about its privacy "
                    "practices to meet the entity's objectives related to privacy. "
                    "The notice is provided to data subjects at or before the time "
                    "their personal information is collected. The notice describes "
                    "the purposes for which personal information is collected, used, "
                    "retained, and disclosed."
                ),
                severity=RiskLevel.HIGH,
                category="privacy",
                remediation=(
                    "Provide clear privacy notices before collecting personal data "
                    "for AI processing. Document how personal data is used in AI "
                    "training and inference. Implement consent mechanisms where "
                    "required. Maintain records of privacy notices provided."
                ),
                references=[
                    "AICPA TSC P1.1",
                    "GDPR Article 13",
                ],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single SOC2 rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "SOC2-CC6.1":
            return self._check_logical_access(entry, rule)
        elif rule.rule_id == "SOC2-CC6.2":
            return self._check_system_boundary(entry, rule)
        elif rule.rule_id == "SOC2-CC6.3":
            return self._check_change_management(entry, rule)
        elif rule.rule_id == "SOC2-CC7.1":
            return self._check_system_monitoring(entry, rule)
        elif rule.rule_id == "SOC2-CC7.2":
            return self._check_incident_response(entry, rule)
        elif rule.rule_id == "SOC2-CC8.1":
            return self._check_availability_monitoring(entry, rule)
        elif rule.rule_id == "SOC2-A1.1":
            return self._check_recovery_objectives(entry, rule)
        elif rule.rule_id == "SOC2-PI1.1":
            return self._check_processing_integrity(entry, rule)
        elif rule.rule_id == "SOC2-C1.1":
            return self._check_confidentiality_classification(entry, rule)
        elif rule.rule_id == "SOC2-P1.1":
            return self._check_privacy_notice(entry, rule)

        return None

    def _check_logical_access(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC6.1: Logical access controls.

        All data access and model operations should have access controls.
        """
        access_events = {
            "data_access", "model_access", "api_call", "authentication",
            "inference", "training", "data_export"
        }
        if entry.event_type.lower() not in access_events:
            return None

        # Check for access control metadata
        has_authentication = bool(entry.actor and entry.actor != "anonymous")
        has_access_control = entry.metadata.get("access_controlled", False)

        if not has_authentication:
            return self._create_violation(
                entry,
                rule,
                f"Access event (type={entry.event_type}) performed by "
                f"unauthenticated or anonymous user",
            )

        if not has_access_control:
            return self._create_violation(
                entry,
                rule,
                f"Access event (type={entry.event_type}) performed without "
                f"documented access control validation",
            )

        return None

    def _check_system_boundary(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC6.2: System boundary definition.

        Users should be registered and authorized before access.
        """
        # Check for external access events
        external_events = {"api_call", "external_integration", "data_import", "data_export"}
        if entry.event_type.lower() not in external_events:
            return None

        # Check that system_id is defined (system boundary is known)
        if not entry.system_id:
            return self._create_violation(
                entry,
                rule,
                f"External event (type={entry.event_type}) performed without "
                f"defined system boundary (missing system_id)",
            )

        return None

    def _check_change_management(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC6.3: Change management.

        Changes should be authorized, documented, and tested.
        """
        change_events = {
            "deployment", "model_update", "config_change", "training",
            "fine_tuning", "rollback"
        }
        if entry.event_type.lower() not in change_events:
            return None

        has_change_approval = entry.metadata.get("change_approved", False)
        has_change_documentation = entry.documentation_ref is not None

        if not has_change_approval:
            return self._create_violation(
                entry,
                rule,
                f"Change event (type={entry.event_type}) performed without "
                f"documented change approval",
            )

        if not has_change_documentation:
            return self._create_violation(
                entry,
                rule,
                f"Change event (type={entry.event_type}) performed without "
                f"documentation reference",
            )

        return None

    def _check_system_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC7.1: System monitoring.

        System operations should be monitored for anomalies.
        """
        # All entries should have monitoring; check for monitoring metadata
        has_monitoring = entry.metadata.get("monitored", True)  # Default to true for basic entries

        # For significant operations, require explicit monitoring documentation
        significant_events = {"inference", "training", "deployment", "data_access"}
        if entry.event_type.lower() in significant_events:
            has_monitoring = entry.metadata.get("monitored", False)

            if not has_monitoring:
                return self._create_violation(
                    entry,
                    rule,
                    f"Significant operation (type={entry.event_type}) performed "
                    f"without documented monitoring",
                )

        return None

    def _check_incident_response(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC7.2: Incident response.

        Security events should trigger incident response procedures.
        """
        # Check for error or security events
        if entry.error_handled is False:
            has_incident_response = entry.metadata.get("incident_logged", False)

            if not has_incident_response:
                return self._create_violation(
                    entry,
                    rule,
                    f"Error event (type={entry.event_type}) occurred without "
                    f"incident response logging",
                )

        # Check for security-related events
        security_events = {"authentication_failure", "access_denied", "security_alert"}
        if entry.event_type.lower() in security_events:
            has_incident_response = entry.metadata.get("incident_logged", False)

            if not has_incident_response:
                return self._create_violation(
                    entry,
                    rule,
                    f"Security event (type={entry.event_type}) without "
                    f"incident response logging",
                )

        return None

    def _check_availability_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-CC8.1: Availability monitoring.

        System availability should be monitored against SLAs.
        """
        availability_events = {"health_check", "deployment", "scaling", "recovery"}
        if entry.event_type.lower() not in availability_events:
            return None

        has_sla_monitoring = entry.metadata.get("sla_monitored", False)

        if not has_sla_monitoring:
            return self._create_violation(
                entry,
                rule,
                f"Availability event (type={entry.event_type}) without "
                f"documented SLA monitoring",
            )

        return None

    def _check_recovery_objectives(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-A1.1: Recovery objectives defined.

        RTO and RPO should be defined for recovery operations.
        """
        recovery_events = {"backup", "restore", "recovery", "disaster_recovery"}
        if entry.event_type.lower() not in recovery_events:
            return None

        has_rto_defined = entry.metadata.get("rto_defined", False)
        has_rpo_defined = entry.metadata.get("rpo_defined", False)

        if not has_rto_defined or not has_rpo_defined:
            return self._create_violation(
                entry,
                rule,
                f"Recovery event (type={entry.event_type}) without defined "
                f"RTO/RPO objectives (rto={has_rto_defined}, rpo={has_rpo_defined})",
            )

        return None

    def _check_processing_integrity(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-PI1.1: Processing integrity validation.

        Data processing should validate input integrity.
        """
        processing_events = {"inference", "training", "data_processing", "data_transformation"}
        if entry.event_type.lower() not in processing_events:
            return None

        has_input_validation = entry.metadata.get("input_validated", False)

        if not has_input_validation:
            return self._create_violation(
                entry,
                rule,
                f"Processing event (type={entry.event_type}) without "
                f"documented input validation",
            )

        return None

    def _check_confidentiality_classification(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-C1.1: Confidentiality classification.

        Data should be classified according to sensitivity.
        """
        data_events = {"data_access", "data_processing", "inference", "training", "data_export"}
        if entry.event_type.lower() not in data_events:
            return None

        # Check if data classification is documented
        is_classified = entry.data_classification != "unclassified"

        if not is_classified:
            return self._create_violation(
                entry,
                rule,
                f"Data event (type={entry.event_type}) with unclassified data "
                f"(classification should be specified)",
            )

        return None

    def _check_privacy_notice(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check SOC2-P1.1: Privacy notice provided.

        Personal data collection should include privacy notice.
        """
        # Check for PII-related events
        pii_classifications = {"PII", "PHI", "personal", "sensitive"}
        if entry.data_classification.upper() not in {c.upper() for c in pii_classifications}:
            return None

        has_privacy_notice = entry.metadata.get("privacy_notice_provided", False)

        if not has_privacy_notice:
            return self._create_violation(
                entry,
                rule,
                f"Personal data event (type={entry.event_type}, "
                f"classification={entry.data_classification}) without "
                f"documented privacy notice",
            )

        return None

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the SOC2 Type II framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/soc2.py
def __init__(self):
    """Initialize the SOC2 Type II framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="SOC2 Type II", version="2017", rules=rules)

SOC2 Type II compliance framework.

Categories

Category TSC Description
security CC Common Criteria - Security controls
availability A System availability
processing_integrity PI Data processing accuracy
confidentiality C Confidential information protection
privacy P Personal information protection

Rules

Rule ID Name Severity Category
SOC2-CC6.1 Logical Access Controls HIGH security
SOC2-CC6.2 System Boundary Definition MEDIUM security
SOC2-CC6.3 Change Management MEDIUM security
SOC2-CC7.1 System Monitoring HIGH security
SOC2-CC7.2 Incident Response HIGH security
SOC2-CC8.1 Availability Monitoring MEDIUM availability
SOC2-A1.1 Recovery Objectives MEDIUM availability
SOC2-PI1.1 Processing Integrity MEDIUM processing_integrity
SOC2-C1.1 Confidentiality Classification HIGH confidentiality
SOC2-P1.1 Privacy Notice HIGH privacy
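If you need this rule set programmatically (for example, to render a table like the one above), it can be rebuilt with the framework's _create_rules helper shown in the source listing; note that this is a private helper, and a public accessor, if one exists, is not documented on this page.

# Hedged sketch: rebuild the SOC2 rule list via the private _create_rules helper.
framework = SOC2Framework()
for rule in framework._create_rules():
    print(f"{rule.rule_id:14} {rule.severity} {rule.category}")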

Usage

from datetime import datetime

from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile
from rotalabs_comply.frameworks.soc2 import SOC2Framework

framework = SOC2Framework()

entry = AuditEntry(
    entry_id="soc2-001",
    timestamp=datetime.utcnow(),
    event_type="data_access",
    actor="admin@company.com",
    action="Query database",
    data_classification="confidential",
    metadata={
        "access_controlled": True,
        "monitored": True,
    },
)

profile = ComplianceProfile(
    profile_id="soc2",
    name="SOC2 Compliance",
)

result = await framework.check(entry, profile)

Key Requirements

Access events require:

  • Authenticated actor (not "anonymous")
  • metadata["access_controlled"]=True

Change events require:

  • metadata["change_approved"]=True
  • documentation_ref set

Data events require:

  • data_classification not "unclassified"
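As a quick illustration of the change-management checks, the sketch below records a deployment without approval or a documentation reference; per the checks in the source above, this should surface SOC2-CC6.3 and, because deployments also fall under the monitoring checks, likely SOC2-CC7.1 and SOC2-CC8.1 as well. It reuses the framework and profile from the Usage snippet.

change_entry = AuditEntry(
    entry_id="soc2-002",
    timestamp=datetime.utcnow(),
    event_type="deployment",
    actor="release-bot@company.com",
    action="Deploy new model version",
    metadata={},  # no change_approved / monitored / sla_monitored flags
    # documentation_ref deliberately left unset
)

result = await framework.check(change_entry, profile)
print([v.rule_id for v in result.violations])
# Expected to include "SOC2-CC6.3" (plus the monitoring rules noted above).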


HIPAA Framework

HIPAAFramework

HIPAA compliance framework.

Implements compliance checks based on HIPAA Security Rule technical safeguards and Privacy Rule requirements. This framework evaluates audit entries for AI systems that process Protected Health Information (PHI) or electronic PHI (ePHI).

HIPAA requires covered entities and business associates to:

  • Ensure confidentiality, integrity, and availability of ePHI
  • Protect against anticipated threats and hazards
  • Protect against unauthorized uses or disclosures
  • Ensure workforce compliance

This implementation focuses on the technical safeguards (164.312), which are most relevant to AI system operations:

  • Access controls (164.312(a))
  • Audit controls (164.312(b))
  • Integrity controls (164.312(c))
  • Authentication (164.312(d))
  • Transmission security (164.312(e))

Example

framework = HIPAAFramework()
result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")

Source code in src/rotalabs_comply/frameworks/hipaa.py
class HIPAAFramework(BaseFramework):
    """
    HIPAA compliance framework.

    Implements compliance checks based on HIPAA Security Rule technical
    safeguards and Privacy Rule requirements. This framework evaluates
    audit entries for AI systems that process Protected Health Information
    (PHI) or electronic PHI (ePHI).

    HIPAA requires covered entities and business associates to:
    - Ensure confidentiality, integrity, and availability of ePHI
    - Protect against anticipated threats and hazards
    - Protect against unauthorized uses or disclosures
    - Ensure workforce compliance

    This implementation focuses on technical safeguards (164.312) which
    are most relevant to AI system operations:
    - Access controls (164.312(a))
    - Audit controls (164.312(b))
    - Integrity controls (164.312(c))
    - Authentication (164.312(d))
    - Transmission security (164.312(e))

    Example:
        >>> framework = HIPAAFramework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    # PHI-related data classifications
    PHI_CLASSIFICATIONS = {
        "PHI", "ePHI", "protected_health_information",
        "health_data", "medical", "clinical"
    }

    def __init__(self):
        """Initialize the HIPAA framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="HIPAA", version="1996/2013", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all HIPAA compliance rules.

        Returns:
            List of ComplianceRule objects representing HIPAA requirements
        """
        return [
            # Security Rule - Technical Safeguards
            ComplianceRule(
                rule_id="HIPAA-164.312(a)",
                name="Access Control",
                description=(
                    "Implement technical policies and procedures for electronic "
                    "information systems that maintain electronic protected health "
                    "information to allow access only to those persons or software "
                    "programs that have been granted access rights as specified in "
                    "164.308(a)(4). This includes: unique user identification, "
                    "emergency access procedures, automatic logoff, and encryption "
                    "and decryption mechanisms."
                ),
                severity=RiskLevel.CRITICAL,
                category="access_control",
                remediation=(
                    "Implement comprehensive access controls including: unique user "
                    "IDs for all users accessing ePHI, role-based access policies, "
                    "automatic session timeouts, emergency access procedures, and "
                    "encryption for ePHI at rest. Document all access control "
                    "policies and procedures."
                ),
                references=[
                    "45 CFR 164.312(a)(1)",
                    "45 CFR 164.312(a)(2)(i-iv)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.312(b)",
                name="Audit Controls",
                description=(
                    "Implement hardware, software, and/or procedural mechanisms that "
                    "record and examine activity in information systems that contain "
                    "or use electronic protected health information. Audit controls "
                    "must capture sufficient information to support review of system "
                    "activity, including who accessed what data and when."
                ),
                severity=RiskLevel.HIGH,
                category="audit",
                remediation=(
                    "Implement comprehensive audit logging for all systems containing "
                    "ePHI. Logs should capture: user identification, timestamp, type "
                    "of access, data accessed, and success/failure status. Implement "
                    "log retention policies and regular log review procedures."
                ),
                references=[
                    "45 CFR 164.312(b)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.312(c)",
                name="Integrity Controls",
                description=(
                    "Implement policies and procedures to protect electronic protected "
                    "health information from improper alteration or destruction. "
                    "Implement electronic mechanisms to corroborate that electronic "
                    "protected health information has not been altered or destroyed "
                    "in an unauthorized manner."
                ),
                severity=RiskLevel.HIGH,
                category="integrity",
                remediation=(
                    "Implement integrity controls including: checksums or digital "
                    "signatures for ePHI, change detection mechanisms, version "
                    "control for data modifications, and procedures for detecting "
                    "unauthorized changes. Document all integrity verification "
                    "procedures."
                ),
                references=[
                    "45 CFR 164.312(c)(1)",
                    "45 CFR 164.312(c)(2)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.312(d)",
                name="Person or Entity Authentication",
                description=(
                    "Implement procedures to verify that a person or entity seeking "
                    "access to electronic protected health information is the one "
                    "claimed. Authentication mechanisms should be appropriate for "
                    "the risk level of the systems and data being accessed."
                ),
                severity=RiskLevel.CRITICAL,
                category="authentication",
                remediation=(
                    "Implement strong authentication mechanisms for all ePHI access. "
                    "Consider multi-factor authentication for high-risk access. "
                    "Implement password policies meeting industry standards. "
                    "Document authentication procedures and verify identity before "
                    "granting access credentials."
                ),
                references=[
                    "45 CFR 164.312(d)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.312(e)",
                name="Transmission Security",
                description=(
                    "Implement technical security measures to guard against "
                    "unauthorized access to electronic protected health information "
                    "that is being transmitted over an electronic communications "
                    "network. This includes integrity controls and encryption for "
                    "data in transit."
                ),
                severity=RiskLevel.HIGH,
                category="transmission",
                remediation=(
                    "Implement encryption for all ePHI transmitted over networks "
                    "(TLS 1.2+ recommended). Use secure protocols for data transfer. "
                    "Implement integrity verification for transmitted data. "
                    "Document transmission security policies and procedures."
                ),
                references=[
                    "45 CFR 164.312(e)(1)",
                    "45 CFR 164.312(e)(2)(i-ii)",
                ],
            ),

            # Privacy Rule
            ComplianceRule(
                rule_id="HIPAA-164.502",
                name="Uses and Disclosures",
                description=(
                    "A covered entity or business associate may not use or disclose "
                    "protected health information, except as permitted or required. "
                    "The minimum necessary standard requires limiting PHI use, "
                    "disclosure, and requests to the minimum necessary to accomplish "
                    "the intended purpose. AI systems must respect these limitations."
                ),
                severity=RiskLevel.CRITICAL,
                category="privacy",
                remediation=(
                    "Implement minimum necessary controls for PHI access by AI "
                    "systems. Document the purpose for each PHI access. Limit data "
                    "exposure to only what is required for the specific use case. "
                    "Implement data masking or filtering where possible. Maintain "
                    "records of all PHI disclosures."
                ),
                references=[
                    "45 CFR 164.502",
                    "45 CFR 164.514(d)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.514",
                name="De-identification Standards",
                description=(
                    "Health information that does not identify an individual and "
                    "with respect to which there is no reasonable basis to believe "
                    "that the information can be used to identify an individual is "
                    "not individually identifiable health information. De-identification "
                    "may be achieved through expert determination or safe harbor methods."
                ),
                severity=RiskLevel.HIGH,
                category="privacy",
                remediation=(
                    "When using health data for AI training or analytics, implement "
                    "de-identification following HIPAA Safe Harbor (remove 18 "
                    "identifiers) or Expert Determination methods. Document "
                    "de-identification procedures and maintain records of "
                    "de-identification status for all datasets."
                ),
                references=[
                    "45 CFR 164.514(a)",
                    "45 CFR 164.514(b)",
                ],
            ),
            ComplianceRule(
                rule_id="HIPAA-164.530",
                name="Administrative Requirements",
                description=(
                    "A covered entity must maintain, until six years after the later "
                    "of the date of their creation or last effective date, its "
                    "privacy policies and procedures, its privacy practices notices, "
                    "disposition of complaints, and other actions, activities, and "
                    "designations that the Privacy Rule requires to be documented."
                ),
                severity=RiskLevel.MEDIUM,
                category="privacy",
                remediation=(
                    "Maintain comprehensive documentation of all privacy policies, "
                    "procedures, and practices related to AI systems processing PHI. "
                    "Retain all documentation for at least six years. Implement "
                    "procedures for responding to individual rights requests "
                    "(access, amendment, accounting of disclosures)."
                ),
                references=[
                    "45 CFR 164.530(j)",
                ],
            ),
        ]

    def _is_phi_related(self, entry: AuditEntry) -> bool:
        """
        Determine if an audit entry involves PHI.

        Args:
            entry: The audit entry to check

        Returns:
            True if the entry involves PHI, False otherwise
        """
        classification_upper = entry.data_classification.upper()
        return any(
            phi.upper() in classification_upper
            for phi in self.PHI_CLASSIFICATIONS
        )

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single HIPAA rule against an audit entry.

        HIPAA rules are only evaluated for entries involving PHI.
        Non-PHI entries are automatically compliant.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # HIPAA rules only apply to PHI-related entries
        if not self._is_phi_related(entry):
            return None

        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "HIPAA-164.312(a)":
            return self._check_access_control(entry, rule)
        elif rule.rule_id == "HIPAA-164.312(b)":
            return self._check_audit_controls(entry, rule)
        elif rule.rule_id == "HIPAA-164.312(c)":
            return self._check_integrity_controls(entry, rule)
        elif rule.rule_id == "HIPAA-164.312(d)":
            return self._check_authentication(entry, rule)
        elif rule.rule_id == "HIPAA-164.312(e)":
            return self._check_transmission_security(entry, rule)
        elif rule.rule_id == "HIPAA-164.502":
            return self._check_uses_and_disclosures(entry, rule)
        elif rule.rule_id == "HIPAA-164.514":
            return self._check_deidentification(entry, rule)
        elif rule.rule_id == "HIPAA-164.530":
            return self._check_administrative_requirements(entry, rule)

        return None

    def _check_access_control(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.312(a): Access control required.

        PHI access must have proper access controls including unique user ID,
        authorization validation, and encryption.
        """
        # Check for unique user identification
        has_unique_user = bool(entry.actor and entry.actor != "anonymous")
        if not has_unique_user:
            return self._create_violation(
                entry,
                rule,
                f"PHI access (type={entry.event_type}) performed without "
                f"unique user identification (actor={entry.actor})",
            )

        # Check for access control validation
        has_access_control = entry.metadata.get("access_controlled", False)
        if not has_access_control:
            return self._create_violation(
                entry,
                rule,
                f"PHI access (type={entry.event_type}) without documented "
                f"access control validation",
            )

        # For data access, check for encryption
        data_events = {"data_access", "data_export", "inference"}
        if entry.event_type.lower() in data_events:
            has_encryption = entry.metadata.get("encryption_enabled", False)
            if not has_encryption:
                return self._create_violation(
                    entry,
                    rule,
                    f"PHI data access (type={entry.event_type}) without "
                    f"encryption enabled",
                )

        return None

    def _check_audit_controls(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.312(b): Audit controls required.

        All PHI access must be logged with sufficient detail.
        """
        # Check that entry has required audit fields
        required_fields = [
            ("entry_id", entry.entry_id),
            ("timestamp", entry.timestamp),
            ("actor", entry.actor),
            ("event_type", entry.event_type),
            ("action", entry.action),
        ]

        missing_fields = [
            field_name for field_name, field_value in required_fields
            if not field_value
        ]

        if missing_fields:
            return self._create_violation(
                entry,
                rule,
                f"PHI event missing required audit fields: {', '.join(missing_fields)}",
            )

        # Check for audit logging confirmation
        has_audit_logged = entry.metadata.get("audit_logged", True)  # Assume logged if entry exists
        if not has_audit_logged:
            return self._create_violation(
                entry,
                rule,
                f"PHI event (type={entry.event_type}) without confirmation "
                f"of audit logging",
            )

        return None

    def _check_integrity_controls(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.312(c): Integrity controls.

        PHI modifications must have integrity verification.
        """
        modification_events = {
            "update", "modify", "write", "training", "data_transformation",
            "data_processing"
        }
        if entry.event_type.lower() not in modification_events:
            return None

        has_integrity_check = entry.metadata.get("integrity_verified", False)
        if not has_integrity_check:
            return self._create_violation(
                entry,
                rule,
                f"PHI modification (type={entry.event_type}) without "
                f"integrity verification controls",
            )

        return None

    def _check_authentication(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.312(d): Person authentication.

        PHI access requires verified authentication.
        """
        # Must have authenticated user
        if not entry.actor or entry.actor == "anonymous":
            return self._create_violation(
                entry,
                rule,
                f"PHI access (type={entry.event_type}) without authenticated "
                f"user identification",
            )

        # Check for authentication verification
        has_authentication = entry.metadata.get("authenticated", False)
        if not has_authentication:
            return self._create_violation(
                entry,
                rule,
                f"PHI access (type={entry.event_type}) without documented "
                f"authentication verification",
            )

        # For high-risk operations, check for strong authentication
        high_risk_events = {"data_export", "bulk_access", "admin_access"}
        if entry.event_type.lower() in high_risk_events:
            has_mfa = entry.metadata.get("mfa_verified", False)
            if not has_mfa:
                return self._create_violation(
                    entry,
                    rule,
                    f"High-risk PHI operation (type={entry.event_type}) without "
                    f"multi-factor authentication",
                )

        return None

    def _check_transmission_security(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.312(e): Transmission security.

        PHI transmission must be encrypted and secured.
        """
        transmission_events = {
            "data_transfer", "data_export", "api_call", "external_integration",
            "inference"  # May involve PHI transmission
        }
        if entry.event_type.lower() not in transmission_events:
            return None

        has_encryption = entry.metadata.get("transmission_encrypted", False)
        if not has_encryption:
            return self._create_violation(
                entry,
                rule,
                f"PHI transmission (type={entry.event_type}) without "
                f"documented encryption",
            )

        # Check for secure protocol
        protocol = entry.metadata.get("protocol", "")
        insecure_protocols = {"http", "ftp", "telnet"}
        if protocol.lower() in insecure_protocols:
            return self._create_violation(
                entry,
                rule,
                f"PHI transmission (type={entry.event_type}) using insecure "
                f"protocol ({protocol})",
            )

        return None

    def _check_uses_and_disclosures(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.502: Uses and disclosures.

        PHI use must be limited to minimum necessary and properly authorized.
        """
        # Check for documented purpose
        has_purpose = entry.metadata.get("purpose_documented", False)
        if not has_purpose:
            return self._create_violation(
                entry,
                rule,
                f"PHI use (type={entry.event_type}) without documented "
                f"purpose for access",
            )

        # Check for minimum necessary compliance
        has_minimum_necessary = entry.metadata.get("minimum_necessary_applied", False)
        if not has_minimum_necessary:
            return self._create_violation(
                entry,
                rule,
                f"PHI use (type={entry.event_type}) without minimum necessary "
                f"standard applied",
            )

        # For disclosures, check for authorization
        disclosure_events = {"data_export", "data_share", "external_integration"}
        if entry.event_type.lower() in disclosure_events:
            has_authorization = entry.metadata.get("disclosure_authorized", False)
            if not has_authorization:
                return self._create_violation(
                    entry,
                    rule,
                    f"PHI disclosure (type={entry.event_type}) without "
                    f"documented authorization",
                )

        return None

    def _check_deidentification(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.514: De-identification standards.

        Training and analytics should use de-identified data where possible.
        """
        # Check events where de-identification is typically required
        deidentification_events = {"training", "analytics", "research", "data_aggregation"}
        if entry.event_type.lower() not in deidentification_events:
            return None

        # Check if de-identification was applied
        is_deidentified = entry.metadata.get("deidentified", False)
        has_deidentification_exception = entry.metadata.get(
            "deidentification_exception_documented", False
        )

        if not is_deidentified and not has_deidentification_exception:
            return self._create_violation(
                entry,
                rule,
                f"PHI used for {entry.event_type} without de-identification "
                f"or documented exception",
            )

        return None

    def _check_administrative_requirements(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check HIPAA-164.530: Administrative requirements.

        PHI operations should reference documentation and policies.
        """
        # Check for policy compliance documentation
        has_policy_ref = entry.documentation_ref is not None
        has_policy_compliance = entry.metadata.get("policy_compliant", False)

        if not has_policy_ref and not has_policy_compliance:
            return self._create_violation(
                entry,
                rule,
                f"PHI operation (type={entry.event_type}) without reference "
                f"to privacy policies or documented compliance",
            )

        return None

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the HIPAA framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/hipaa.py
def __init__(self):
    """Initialize the HIPAA framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="HIPAA", version="1996/2013", rules=rules)

HIPAA compliance framework for PHI handling.

Categories

| Category | Rule Section | Description |
| --- | --- | --- |
| access_control | 164.312(a) | System and data access |
| audit | 164.312(b) | Audit controls |
| integrity | 164.312(c) | Data integrity |
| authentication | 164.312(d) | Entity authentication |
| transmission | 164.312(e) | Transmission security |
| privacy | 164.502/514/530 | Privacy rule |

Rules

| Rule ID | Name | Severity | Category |
| --- | --- | --- | --- |
| HIPAA-164.312(a) | Access Control | CRITICAL | access_control |
| HIPAA-164.312(b) | Audit Controls | HIGH | audit |
| HIPAA-164.312(c) | Integrity Controls | HIGH | integrity |
| HIPAA-164.312(d) | Authentication | CRITICAL | authentication |
| HIPAA-164.312(e) | Transmission Security | HIGH | transmission |
| HIPAA-164.502 | Uses and Disclosures | CRITICAL | privacy |
| HIPAA-164.514 | De-identification | HIGH | privacy |
| HIPAA-164.530 | Administrative Requirements | MEDIUM | privacy |

PHI Detection

Rules only apply when data_classification contains:

  • "PHI"
  • "ePHI"
  • "protected_health_information"
  • "health_data"
  • "medical"
  • "clinical"

Usage

from rotalabs_comply.frameworks.hipaa import HIPAAFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile
from datetime import datetime

framework = HIPAAFramework()

# PHI-related entry (rules apply)
entry = AuditEntry(
    entry_id="hipaa-001",
    timestamp=datetime.utcnow(),
    event_type="inference",
    actor="doctor@hospital.com",
    action="AI diagnostic",
    data_classification="PHI",
    metadata={
        "access_controlled": True,
        "encryption_enabled": True,
        "authenticated": True,
        "purpose_documented": True,
        "minimum_necessary_applied": True,
    },
)

profile = ComplianceProfile(
    profile_id="hipaa-profile",
    name="HIPAA Compliance",
)

result = await framework.check(entry, profile)
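
The returned result exposes is_compliant and a violations list (the same pattern the framework docstrings use); a brief sketch of inspecting it:

if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.evidence}")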

Key Requirements

All PHI access requires:

  • Authenticated actor
  • metadata["access_controlled"]=True
  • metadata["encryption_enabled"]=True

High-risk PHI operations require:

  • metadata["mfa_verified"]=True

PHI use requires:

  • metadata["purpose_documented"]=True
  • metadata["minimum_necessary_applied"]=True
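
As an illustration, an otherwise well-formed PHI entry that omits one of these flags is reported against the corresponding rule. The sketch below uses illustrative values and reuses the framework and profile from the Usage section; it drops purpose_documented, which the HIPAA-164.502 check flags:

entry = AuditEntry(
    entry_id="hipaa-002",
    timestamp=datetime.utcnow(),
    event_type="inference",
    actor="doctor@hospital.com",
    action="AI diagnostic",
    data_classification="PHI",
    metadata={
        "access_controlled": True,
        "encryption_enabled": True,
        "authenticated": True,
        # "purpose_documented" omitted: HIPAA-164.502 reports PHI use without documented purpose
        "minimum_necessary_applied": True,
    },
)

result = await framework.check(entry, profile)
# result.violations is expected to include a HIPAA-164.502 violation for the missing purpose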


GDPR Framework

GDPRFramework

GDPR compliance framework.

Implements compliance checks based on the General Data Protection Regulation (GDPR) requirements for processing personal data. The framework evaluates audit entries against the Regulation's requirements for data protection, consent, transparency, data subject rights, security, and accountability.

The GDPR applies to:

  • Organizations established in the EU processing personal data
  • Organizations outside the EU offering goods/services to EU residents
  • Organizations monitoring behavior of individuals in the EU

Key principles enforced:

  • Lawfulness, fairness, and transparency
  • Purpose limitation
  • Data minimization
  • Accuracy
  • Storage limitation
  • Integrity and confidentiality
  • Accountability

Example

framework = GDPRFramework()
result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")

Source code in src/rotalabs_comply/frameworks/gdpr.py
class GDPRFramework(BaseFramework):
    """
    GDPR compliance framework.

    Implements compliance checks based on the General Data Protection Regulation
    (GDPR) requirements for processing personal data. The framework evaluates
    audit entries against the Regulation's requirements for data protection,
    consent, transparency, data subject rights, security, and accountability.

    The GDPR applies to:
    - Organizations established in the EU processing personal data
    - Organizations outside the EU offering goods/services to EU residents
    - Organizations monitoring behavior of individuals in the EU

    Key principles enforced:
    - Lawfulness, fairness, and transparency
    - Purpose limitation
    - Data minimization
    - Accuracy
    - Storage limitation
    - Integrity and confidentiality
    - Accountability

    Example:
        >>> framework = GDPRFramework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    def __init__(self):
        """Initialize the GDPR framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="GDPR", version="2016/679", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all GDPR compliance rules.

        Returns:
            List of ComplianceRule objects representing GDPR requirements
        """
        return [
            ComplianceRule(
                rule_id="GDPR-Art5",
                name="Data Processing Principles",
                description=(
                    "Personal data shall be processed lawfully, fairly and in a transparent "
                    "manner in relation to the data subject ('lawfulness, fairness and "
                    "transparency'). Data must be collected for specified, explicit and "
                    "legitimate purposes and not further processed in a manner incompatible "
                    "with those purposes. Data shall be adequate, relevant and limited to "
                    "what is necessary ('data minimisation'), accurate and kept up to date, "
                    "kept for no longer than necessary ('storage limitation'), and processed "
                    "in a manner that ensures appropriate security ('integrity and "
                    "confidentiality'). The controller shall be responsible for, and be able "
                    "to demonstrate compliance with these principles ('accountability'). "
                    "(Article 5)"
                ),
                severity=RiskLevel.CRITICAL,
                category="data_protection",
                remediation=(
                    "Ensure all personal data processing adheres to GDPR principles: "
                    "1) Document the lawful basis for processing, 2) Limit data collection "
                    "to what is necessary, 3) Implement data accuracy checks, 4) Define "
                    "retention periods, 5) Apply appropriate security measures, and "
                    "6) Maintain records demonstrating compliance."
                ),
                references=["GDPR Article 5(1)(2)", "Recitals 39-47"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art6",
                name="Lawful Basis for Processing",
                description=(
                    "Processing shall be lawful only if and to the extent that at least one "
                    "of the following applies: (a) consent, (b) contract necessity, "
                    "(c) legal obligation, (d) vital interests, (e) public interest or "
                    "official authority, or (f) legitimate interests (except where "
                    "overridden by data subject's interests or fundamental rights). Each "
                    "processing activity must have a documented legal basis before "
                    "processing begins. (Article 6)"
                ),
                severity=RiskLevel.CRITICAL,
                category="legal_basis",
                remediation=(
                    "Identify and document the appropriate lawful basis for each processing "
                    "activity before processing begins. For consent, ensure it meets GDPR "
                    "requirements. For legitimate interests, conduct a balancing test. "
                    "Record the lawful basis in your processing records and privacy notices."
                ),
                references=["GDPR Article 6(1)", "Recitals 40-50"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art7",
                name="Conditions for Consent",
                description=(
                    "Where processing is based on consent, the controller shall be able to "
                    "demonstrate that the data subject has consented to processing. Consent "
                    "must be freely given, specific, informed and unambiguous. The request "
                    "for consent shall be presented in a manner clearly distinguishable from "
                    "other matters, in an intelligible and easily accessible form, using "
                    "clear and plain language. The data subject shall have the right to "
                    "withdraw consent at any time, and withdrawal must be as easy as giving "
                    "consent. (Article 7)"
                ),
                severity=RiskLevel.HIGH,
                category="consent",
                remediation=(
                    "Implement consent mechanisms that: 1) Require affirmative action "
                    "(no pre-ticked boxes), 2) Are specific to each processing purpose, "
                    "3) Provide clear information about data use, 4) Are separate from "
                    "other terms, 5) Allow easy withdrawal, and 6) Maintain consent records. "
                    "Regularly review and refresh consent where appropriate."
                ),
                references=["GDPR Article 7(1-4)", "Recitals 32, 42, 43"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art12",
                name="Transparent Information and Communication",
                description=(
                    "The controller shall take appropriate measures to provide any "
                    "information referred to in Articles 13 and 14 and any communication "
                    "under Articles 15 to 22 relating to processing to the data subject "
                    "in a concise, transparent, intelligible and easily accessible form, "
                    "using clear and plain language. Information shall be provided in "
                    "writing, or by other means including electronic means. The controller "
                    "shall facilitate the exercise of data subject rights. (Article 12)"
                ),
                severity=RiskLevel.HIGH,
                category="transparency",
                remediation=(
                    "Develop clear, accessible privacy notices using plain language. "
                    "Provide information through multiple channels (website, app, paper). "
                    "Establish procedures to respond to data subject requests within one "
                    "month. Train staff on handling requests. Use layered approaches for "
                    "complex information. Test readability of notices."
                ),
                references=["GDPR Article 12(1-6)", "Recitals 58-59"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art13",
                name="Information at Collection",
                description=(
                    "Where personal data are collected from the data subject, the controller "
                    "shall, at the time when personal data are obtained, provide the data "
                    "subject with: controller identity and contact details, DPO contact "
                    "details, purposes and legal basis for processing, legitimate interests "
                    "pursued, recipients or categories of recipients, intention to transfer "
                    "data to third countries, retention period, data subject rights, right "
                    "to withdraw consent, right to lodge complaint, whether provision is "
                    "statutory/contractual requirement, and existence of automated "
                    "decision-making including profiling. (Article 13)"
                ),
                severity=RiskLevel.HIGH,
                category="transparency",
                remediation=(
                    "Create comprehensive privacy notices that include all required "
                    "information under Article 13. Provide this information at the point "
                    "of data collection. For AI systems, clearly explain any automated "
                    "decision-making, profiling, and the logic involved. Update notices "
                    "when processing changes."
                ),
                references=["GDPR Article 13(1-3)", "Recitals 60-62"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art15",
                name="Right of Access",
                description=(
                    "The data subject shall have the right to obtain from the controller "
                    "confirmation as to whether or not personal data concerning him or her "
                    "are being processed, and, where that is the case, access to the personal "
                    "data and information including: purposes of processing, categories of "
                    "data, recipients, retention period, existence of rights (rectification, "
                    "erasure, restriction, objection), right to lodge complaint, source of "
                    "data, and existence of automated decision-making. The controller shall "
                    "provide a copy of the personal data undergoing processing. (Article 15)"
                ),
                severity=RiskLevel.HIGH,
                category="data_subject_rights",
                remediation=(
                    "Implement systems to: 1) Verify data subject identity, 2) Search and "
                    "retrieve all personal data across systems, 3) Generate comprehensive "
                    "response within one month, 4) Provide data in commonly used electronic "
                    "format, 5) Include all supplementary information required. Establish "
                    "processes for handling complex or repeated requests."
                ),
                references=["GDPR Article 15(1-4)", "Recitals 63-64"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art17",
                name="Right to Erasure (Right to be Forgotten)",
                description=(
                    "The data subject shall have the right to obtain from the controller the "
                    "erasure of personal data without undue delay where: data no longer "
                    "necessary for original purposes, consent withdrawn, data subject objects "
                    "and no overriding legitimate grounds, data unlawfully processed, legal "
                    "obligation requires erasure, or data collected in relation to offer of "
                    "information society services to a child. Where data has been made public, "
                    "the controller must take reasonable steps to inform other controllers "
                    "processing the data. Exceptions apply for legal claims, legal obligations, "
                    "public health, archiving, and research. (Article 17)"
                ),
                severity=RiskLevel.HIGH,
                category="data_subject_rights",
                remediation=(
                    "Implement erasure capabilities that: 1) Can identify all instances of "
                    "personal data, 2) Securely delete data from all systems including backups, "
                    "3) Notify third parties who received the data, 4) Document the erasure "
                    "process, and 5) Respond within one month. For AI systems, consider "
                    "whether data in training sets can be removed or models retrained."
                ),
                references=["GDPR Article 17(1-3)", "Recitals 65-66"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art20",
                name="Right to Data Portability",
                description=(
                    "The data subject shall have the right to receive personal data concerning "
                    "him or her, which he or she has provided to a controller, in a structured, "
                    "commonly used and machine-readable format and have the right to transmit "
                    "those data to another controller without hindrance where: processing is "
                    "based on consent or contract, and processing is carried out by automated "
                    "means. The data subject shall have the right to have data transmitted "
                    "directly from one controller to another, where technically feasible. "
                    "(Article 20)"
                ),
                severity=RiskLevel.MEDIUM,
                category="data_subject_rights",
                remediation=(
                    "Implement data export functionality that: 1) Provides data in structured, "
                    "machine-readable formats (JSON, CSV, XML), 2) Includes all data provided "
                    "by the data subject, 3) Allows direct transmission to other controllers "
                    "where feasible, 4) Responds within one month. Distinguish between data "
                    "'provided' by the subject and data 'derived' through processing."
                ),
                references=["GDPR Article 20(1-4)", "Recital 68"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art22",
                name="Automated Decision-Making and Profiling",
                description=(
                    "The data subject shall have the right not to be subject to a decision "
                    "based solely on automated processing, including profiling, which produces "
                    "legal effects concerning him or her or similarly significantly affects "
                    "him or her. This does not apply if the decision: is necessary for a "
                    "contract, is authorised by law, or is based on explicit consent. In "
                    "those cases, the controller shall implement suitable measures to safeguard "
                    "the data subject's rights and freedoms and legitimate interests, at least "
                    "the right to obtain human intervention, to express his or her point of "
                    "view and to contest the decision. (Article 22)"
                ),
                severity=RiskLevel.CRITICAL,
                category="data_subject_rights",
                remediation=(
                    "For AI systems making automated decisions: 1) Implement human review "
                    "mechanisms for decisions with legal or significant effects, 2) Provide "
                    "meaningful information about the logic involved, 3) Allow data subjects "
                    "to express their views and contest decisions, 4) Conduct DPIAs for "
                    "profiling activities, and 5) Document the necessity and safeguards. "
                    "Consider whether purely automated decisions can be avoided."
                ),
                references=["GDPR Article 22(1-4)", "Recitals 71-72"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art25",
                name="Data Protection by Design and Default",
                description=(
                    "The controller shall, both at the time of the determination of the means "
                    "for processing and at the time of the processing itself, implement "
                    "appropriate technical and organisational measures designed to implement "
                    "data-protection principles (such as data minimisation) in an effective "
                    "manner and to integrate the necessary safeguards into the processing. "
                    "The controller shall implement appropriate measures for ensuring that, "
                    "by default, only personal data which are necessary for each specific "
                    "purpose of the processing are processed. (Article 25)"
                ),
                severity=RiskLevel.HIGH,
                category="accountability",
                remediation=(
                    "Embed privacy into system design from the outset: 1) Conduct privacy "
                    "impact assessments during development, 2) Implement data minimisation "
                    "by default, 3) Use pseudonymisation and encryption, 4) Build in consent "
                    "mechanisms, 5) Design for data subject rights, 6) Limit access by default, "
                    "7) Document design decisions. Review and update as technology evolves."
                ),
                references=["GDPR Article 25(1-3)", "Recitals 78"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art30",
                name="Records of Processing Activities",
                description=(
                    "Each controller shall maintain a record of processing activities under "
                    "its responsibility. That record shall contain: name and contact details "
                    "of controller and DPO, purposes of processing, description of categories "
                    "of data subjects and personal data, categories of recipients, transfers "
                    "to third countries, retention periods, and description of technical and "
                    "organisational security measures. These records shall be in writing, "
                    "including electronic form, and made available to the supervisory "
                    "authority on request. (Article 30)"
                ),
                severity=RiskLevel.HIGH,
                category="accountability",
                remediation=(
                    "Create and maintain comprehensive records of all processing activities "
                    "(ROPA) that include all required elements. Review and update records "
                    "regularly. Ensure records cover all systems including AI/ML systems. "
                    "Use a consistent format that can be provided to supervisory authorities. "
                    "Train staff responsible for maintaining records."
                ),
                references=["GDPR Article 30(1-5)", "Recital 82"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art32",
                name="Security of Processing",
                description=(
                    "The controller and processor shall implement appropriate technical and "
                    "organisational measures to ensure a level of security appropriate to the "
                    "risk, including as appropriate: (a) pseudonymisation and encryption of "
                    "personal data, (b) ability to ensure ongoing confidentiality, integrity, "
                    "availability and resilience of systems, (c) ability to restore "
                    "availability and access to data in timely manner following an incident, "
                    "(d) process for regularly testing, assessing and evaluating effectiveness "
                    "of measures. The controller and processor shall take steps to ensure any "
                    "person acting under their authority with access to personal data processes "
                    "only on instructions. (Article 32)"
                ),
                severity=RiskLevel.CRITICAL,
                category="security",
                remediation=(
                    "Implement security measures appropriate to the risk: 1) Encrypt personal "
                    "data in transit and at rest, 2) Implement access controls and "
                    "authentication, 3) Maintain backup and recovery procedures, 4) Conduct "
                    "regular security testing and audits, 5) Train personnel on security "
                    "procedures, 6) Document security measures. For AI systems, also consider "
                    "model security and adversarial robustness."
                ),
                references=["GDPR Article 32(1-4)", "Recitals 83"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art33",
                name="Personal Data Breach Notification",
                description=(
                    "In the case of a personal data breach, the controller shall without "
                    "undue delay and, where feasible, not later than 72 hours after having "
                    "become aware of it, notify the personal data breach to the supervisory "
                    "authority, unless the breach is unlikely to result in a risk to rights "
                    "and freedoms. Where notification is not made within 72 hours, it shall "
                    "be accompanied by reasons for the delay. The notification shall describe: "
                    "nature of breach including categories and approximate numbers of data "
                    "subjects and records, DPO contact details, likely consequences, and "
                    "measures taken or proposed to address the breach. (Article 33)"
                ),
                severity=RiskLevel.CRITICAL,
                category="security",
                remediation=(
                    "Establish breach detection and response procedures: 1) Implement "
                    "monitoring to detect breaches quickly, 2) Create incident response plan "
                    "with clear escalation paths, 3) Prepare notification templates, "
                    "4) Document all breaches in a breach register, 5) Conduct post-incident "
                    "reviews, 6) Train staff on breach identification and reporting. "
                    "Ensure 72-hour notification capability is tested."
                ),
                references=["GDPR Article 33(1-5)", "Recitals 85-88"],
            ),
            ComplianceRule(
                rule_id="GDPR-Art35",
                name="Data Protection Impact Assessment",
                description=(
                    "Where a type of processing, in particular using new technologies, and "
                    "taking into account the nature, scope, context and purposes of the "
                    "processing, is likely to result in a high risk to the rights and freedoms "
                    "of natural persons, the controller shall, prior to the processing, carry "
                    "out an assessment of the impact of the envisaged processing operations "
                    "on the protection of personal data. A DPIA is required in particular for: "
                    "(a) systematic and extensive evaluation of personal aspects based on "
                    "automated processing, including profiling, (b) large scale processing of "
                    "special categories of data or criminal convictions data, (c) systematic "
                    "monitoring of a publicly accessible area on a large scale. (Article 35)"
                ),
                severity=RiskLevel.HIGH,
                category="accountability",
                remediation=(
                    "Conduct DPIAs for high-risk processing, especially AI/ML systems: "
                    "1) Describe processing operations and purposes, 2) Assess necessity "
                    "and proportionality, 3) Identify and assess risks to data subjects, "
                    "4) Identify measures to address risks, 5) Consult with DPO, "
                    "6) Review and update as processing changes. For new AI systems, "
                    "complete DPIA before deployment."
                ),
                references=["GDPR Article 35(1-11)", "Recitals 89-92"],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single GDPR rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "GDPR-Art5":
            return self._check_data_processing_principles(entry, rule)
        elif rule.rule_id == "GDPR-Art6":
            return self._check_lawful_basis(entry, rule)
        elif rule.rule_id == "GDPR-Art7":
            return self._check_consent(entry, rule)
        elif rule.rule_id == "GDPR-Art12":
            return self._check_transparent_communication(entry, rule)
        elif rule.rule_id == "GDPR-Art13":
            return self._check_information_at_collection(entry, rule)
        elif rule.rule_id == "GDPR-Art15":
            return self._check_right_of_access(entry, rule)
        elif rule.rule_id == "GDPR-Art17":
            return self._check_right_to_erasure(entry, rule)
        elif rule.rule_id == "GDPR-Art20":
            return self._check_data_portability(entry, rule)
        elif rule.rule_id == "GDPR-Art22":
            return self._check_automated_decision_making(entry, rule)
        elif rule.rule_id == "GDPR-Art25":
            return self._check_privacy_by_design(entry, rule)
        elif rule.rule_id == "GDPR-Art30":
            return self._check_processing_records(entry, rule)
        elif rule.rule_id == "GDPR-Art32":
            return self._check_security(entry, rule)
        elif rule.rule_id == "GDPR-Art33":
            return self._check_breach_notification(entry, rule)
        elif rule.rule_id == "GDPR-Art35":
            return self._check_dpia(entry, rule)

        return None

    def _check_data_processing_principles(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art5: Data processing must follow core GDPR principles.

        Operations involving personal data must demonstrate compliance with
        lawfulness, fairness, transparency, purpose limitation, data minimisation,
        accuracy, storage limitation, integrity, confidentiality, and accountability.
        """
        # Check if processing involves personal data
        pii_classifications = {"pii", "personal", "sensitive", "special_category"}
        if entry.data_classification.lower() not in pii_classifications:
            return None

        # Check for documented principles compliance
        has_lawful_basis = entry.metadata.get("lawful_basis_documented", False)
        has_purpose_limitation = entry.metadata.get("purpose_documented", False)

        if not (has_lawful_basis and has_purpose_limitation):
            return self._create_violation(
                entry,
                rule,
                f"Personal data processing (classification={entry.data_classification}) "
                f"without documented lawful basis or purpose limitation",
            )
        return None

    def _check_lawful_basis(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art6: Processing must have a documented lawful basis.

        Each processing operation must identify one of the six lawful bases:
        consent, contract, legal obligation, vital interests, public interest,
        or legitimate interests.
        """
        pii_classifications = {"pii", "personal", "sensitive", "special_category"}
        if entry.data_classification.lower() not in pii_classifications:
            return None

        lawful_basis = entry.metadata.get("lawful_basis")
        valid_bases = {
            "consent", "contract", "legal_obligation",
            "vital_interests", "public_interest", "legitimate_interests"
        }

        if not lawful_basis or lawful_basis.lower() not in valid_bases:
            return self._create_violation(
                entry,
                rule,
                f"Personal data processing (classification={entry.data_classification}) "
                f"without valid lawful basis. Provided: {lawful_basis or 'None'}",
            )
        return None

    def _check_consent(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art7: When consent is the lawful basis, it must meet requirements.

        Consent must be freely given, specific, informed, unambiguous, and
        demonstrable. The data subject must be able to withdraw consent easily.
        """
        # Only applies when consent is the lawful basis
        lawful_basis = entry.metadata.get("lawful_basis", "").lower()
        if lawful_basis != "consent":
            return None

        consent_recorded = entry.metadata.get("consent_recorded", False)
        consent_specific = entry.metadata.get("consent_specific", False)
        consent_informed = entry.metadata.get("consent_informed", False)

        if not all([consent_recorded, consent_specific, consent_informed]):
            missing = []
            if not consent_recorded:
                missing.append("recorded")
            if not consent_specific:
                missing.append("specific")
            if not consent_informed:
                missing.append("informed")
            return self._create_violation(
                entry,
                rule,
                f"Consent-based processing without valid consent. Missing: {', '.join(missing)}",
            )
        return None

    def _check_transparent_communication(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art12: Information must be provided transparently.

        Communications about data processing must be concise, transparent,
        intelligible, and easily accessible, using clear and plain language.
        """
        # Check for user-facing data collection events
        collection_events = {"data_collection", "registration", "signup", "form_submission"}
        if entry.event_type.lower() not in collection_events:
            return None

        privacy_notice_provided = entry.metadata.get("privacy_notice_provided", False)
        if not privacy_notice_provided:
            return self._create_violation(
                entry,
                rule,
                f"Data collection event (type={entry.event_type}) without "
                f"transparent privacy information provided to data subject",
            )
        return None

    def _check_information_at_collection(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art13: Required information must be provided at collection.

        When collecting personal data, data subjects must receive comprehensive
        information about the processing including controller identity, purposes,
        legal basis, rights, and retention periods.
        """
        collection_events = {"data_collection", "registration", "signup", "form_submission"}
        if entry.event_type.lower() not in collection_events:
            return None

        pii_classifications = {"pii", "personal", "sensitive", "special_category"}
        if entry.data_classification.lower() not in pii_classifications:
            return None

        disclosure_complete = entry.metadata.get("art13_disclosure_complete", False)
        if not disclosure_complete:
            return self._create_violation(
                entry,
                rule,
                f"Personal data collection (type={entry.event_type}) without "
                f"complete Article 13 information disclosure",
            )
        return None

    def _check_right_of_access(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art15: Data subject access requests must be handled properly.

        When a data subject requests access, the controller must provide
        confirmation of processing and a copy of personal data within one month.
        """
        access_events = {"data_subject_access_request", "dsar", "subject_access_request"}
        if entry.event_type.lower() not in access_events:
            return None

        response_within_deadline = entry.metadata.get("response_within_deadline", False)
        complete_response = entry.metadata.get("complete_response_provided", False)

        if not response_within_deadline:
            return self._create_violation(
                entry,
                rule,
                f"Data subject access request (type={entry.event_type}) not "
                f"responded to within the required timeframe",
            )

        if not complete_response:
            return self._create_violation(
                entry,
                rule,
                f"Data subject access request (type={entry.event_type}) response "
                f"incomplete - must include all personal data and required information",
            )
        return None

    def _check_right_to_erasure(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art17: Erasure requests must be handled properly.

        When a data subject requests erasure and it is valid, the controller
        must erase data without undue delay and notify third parties.
        """
        erasure_events = {"erasure_request", "deletion_request", "right_to_be_forgotten"}
        if entry.event_type.lower() not in erasure_events:
            return None

        erasure_complete = entry.metadata.get("erasure_complete", False)
        third_parties_notified = entry.metadata.get("third_parties_notified", True)  # Default True if N/A

        if not erasure_complete:
            return self._create_violation(
                entry,
                rule,
                f"Erasure request (type={entry.event_type}) not completed - "
                f"personal data must be erased from all systems",
            )

        if not third_parties_notified:
            return self._create_violation(
                entry,
                rule,
                f"Erasure request (type={entry.event_type}) - third party "
                f"recipients of data not notified of erasure requirement",
            )
        return None

    def _check_data_portability(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art20: Portability requests must provide data in machine-readable format.

        Data subjects have the right to receive their data in a structured,
        commonly used, and machine-readable format.
        """
        portability_events = {"portability_request", "data_export_request"}
        if entry.event_type.lower() not in portability_events:
            return None

        machine_readable_format = entry.metadata.get("machine_readable_format", False)
        if not machine_readable_format:
            return self._create_violation(
                entry,
                rule,
                f"Data portability request (type={entry.event_type}) - data not "
                f"provided in structured, machine-readable format",
            )
        return None

    def _check_automated_decision_making(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art22: Automated decisions with significant effects need safeguards.

        Data subjects have the right not to be subject to solely automated
        decisions with legal or similarly significant effects without safeguards.
        """
        automated_decision_events = {
            "automated_decision", "profiling", "scoring",
            "credit_decision", "hiring_decision", "eligibility_decision"
        }
        if entry.event_type.lower() not in automated_decision_events:
            return None

        # Check if decision has significant effects
        has_significant_effect = entry.metadata.get("significant_effect", False)
        if not has_significant_effect:
            return None

        # Check for required safeguards
        human_intervention_available = entry.metadata.get("human_intervention_available", False)
        right_to_contest_enabled = entry.metadata.get("right_to_contest_enabled", False)
        logic_explained = entry.metadata.get("logic_explained", False)

        if not human_intervention_available:
            return self._create_violation(
                entry,
                rule,
                f"Automated decision with significant effect (type={entry.event_type}) "
                f"without human intervention mechanism available",
            )

        if not right_to_contest_enabled:
            return self._create_violation(
                entry,
                rule,
                f"Automated decision with significant effect (type={entry.event_type}) "
                f"without right to contest the decision",
            )

        if not logic_explained:
            return self._create_violation(
                entry,
                rule,
                f"Automated decision with significant effect (type={entry.event_type}) "
                f"without meaningful information about the logic involved",
            )
        return None

    def _check_privacy_by_design(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art25: Systems must implement privacy by design and default.

        Controllers must implement technical and organisational measures
        to implement data protection principles and ensure minimal data
        processing by default.
        """
        design_events = {"system_deployment", "feature_launch", "processing_change"}
        if entry.event_type.lower() not in design_events:
            return None

        privacy_by_design_assessment = entry.metadata.get("privacy_by_design_assessment", False)
        data_minimisation_default = entry.metadata.get("data_minimisation_default", False)

        if not privacy_by_design_assessment:
            return self._create_violation(
                entry,
                rule,
                f"System deployment/change (type={entry.event_type}) without "
                f"documented privacy by design assessment",
            )

        if not data_minimisation_default:
            return self._create_violation(
                entry,
                rule,
                f"System deployment/change (type={entry.event_type}) without "
                f"data minimisation implemented by default",
            )
        return None

    def _check_processing_records(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art30: Records of processing activities must be maintained.

        Controllers must maintain written records of processing activities
        including purposes, data categories, recipients, and security measures.
        """
        pii_classifications = {"pii", "personal", "sensitive", "special_category"}
        if entry.data_classification.lower() not in pii_classifications:
            return None

        # Check for significant processing operations
        significant_events = {"data_processing", "data_transfer", "new_processing_activity"}
        if entry.event_type.lower() not in significant_events:
            return None

        ropa_entry_exists = entry.metadata.get("ropa_entry_exists", False)
        if not ropa_entry_exists:
            return self._create_violation(
                entry,
                rule,
                f"Processing activity (type={entry.event_type}) not recorded in "
                f"the Records of Processing Activities (ROPA)",
            )
        return None

    def _check_security(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art32: Appropriate security measures must be in place.

        Processing must implement security measures appropriate to the risk,
        including encryption, access controls, and regular security testing.
        """
        pii_classifications = {"pii", "personal", "sensitive", "special_category"}
        if entry.data_classification.lower() not in pii_classifications:
            return None

        # Check for security-relevant operations
        security_relevant_events = {
            "data_access", "data_transfer", "data_processing",
            "data_export", "api_call", "model_inference"
        }
        if entry.event_type.lower() not in security_relevant_events:
            return None

        encryption_applied = entry.metadata.get("encryption_applied", False)
        access_controlled = entry.metadata.get("access_controlled", False)

        if not encryption_applied:
            return self._create_violation(
                entry,
                rule,
                f"Personal data operation (type={entry.event_type}) without "
                f"appropriate encryption measures",
            )

        if not access_controlled:
            return self._create_violation(
                entry,
                rule,
                f"Personal data operation (type={entry.event_type}) without "
                f"appropriate access control measures",
            )
        return None

    def _check_breach_notification(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art33: Breaches must be notified within 72 hours.

        Personal data breaches must be notified to the supervisory authority
        within 72 hours of becoming aware, unless unlikely to result in risk.
        """
        breach_events = {"data_breach", "security_incident", "unauthorized_access"}
        if entry.event_type.lower() not in breach_events:
            return None

        # Check if this is a reportable breach
        risk_to_rights = entry.metadata.get("risk_to_rights_freedoms", True)
        if not risk_to_rights:
            return None  # No notification required if no risk

        notification_sent = entry.metadata.get("supervisory_authority_notified", False)
        notification_within_72h = entry.metadata.get("notification_within_72_hours", False)

        if not notification_sent:
            return self._create_violation(
                entry,
                rule,
                f"Personal data breach (type={entry.event_type}) not notified "
                f"to supervisory authority",
            )

        if not notification_within_72h:
            return self._create_violation(
                entry,
                rule,
                f"Personal data breach (type={entry.event_type}) notification "
                f"exceeded 72-hour requirement without documented justification",
            )
        return None

    def _check_dpia(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check GDPR-Art35: High-risk processing requires DPIA.

        A Data Protection Impact Assessment is required for processing
        likely to result in high risk, including profiling, large-scale
        special category processing, and systematic monitoring.
        """
        # Check for high-risk processing types
        high_risk_events = {
            "profiling", "automated_decision", "large_scale_processing",
            "systematic_monitoring", "special_category_processing",
            "new_technology_deployment", "ai_model_deployment"
        }
        if entry.event_type.lower() not in high_risk_events:
            return None

        # Also trigger for high-risk level entries
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        dpia_completed = entry.metadata.get("dpia_completed", False)
        dpia_reviewed = entry.metadata.get("dpia_reviewed_by_dpo", False)

        if not dpia_completed:
            return self._create_violation(
                entry,
                rule,
                f"High-risk processing (type={entry.event_type}) commenced "
                f"without completing Data Protection Impact Assessment",
            )

        if not dpia_reviewed:
            return self._create_violation(
                entry,
                rule,
                f"High-risk processing (type={entry.event_type}) DPIA not "
                f"reviewed by Data Protection Officer",
            )
        return None

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the GDPR framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/gdpr.py
def __init__(self):
    """Initialize the GDPR framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="GDPR", version="2016/679", rules=rules)

GDPR (EU General Data Protection Regulation 2016/679) compliance framework for processing personal data.

Categories

| Category | Description |
| --- | --- |
| data_protection | Core data protection principles (Article 5) |
| legal_basis | Lawful processing requirements (Article 6) |
| consent | Valid consent conditions (Article 7) |
| transparency | Information provision and communication (Articles 12-13) |
| data_subject_rights | Individual rights (Articles 15, 17, 20, 22) |
| security | Data security measures (Articles 32-33) |
| accountability | Demonstrating compliance (Articles 25, 30, 35) |

Rules

| Rule ID | Name | Category | Severity |
| --- | --- | --- | --- |
| GDPR-Art5 | Data Processing Principles | data_protection | CRITICAL |
| GDPR-Art6 | Lawful Basis for Processing | legal_basis | CRITICAL |
| GDPR-Art7 | Conditions for Consent | consent | HIGH |
| GDPR-Art12 | Transparent Information and Communication | transparency | HIGH |
| GDPR-Art13 | Information at Collection | transparency | HIGH |
| GDPR-Art15 | Right of Access | data_subject_rights | HIGH |
| GDPR-Art17 | Right to Erasure (Right to be Forgotten) | data_subject_rights | HIGH |
| GDPR-Art20 | Right to Data Portability | data_subject_rights | MEDIUM |
| GDPR-Art22 | Automated Decision-Making and Profiling | data_subject_rights | CRITICAL |
| GDPR-Art25 | Data Protection by Design and Default | accountability | HIGH |
| GDPR-Art30 | Records of Processing Activities | accountability | HIGH |
| GDPR-Art32 | Security of Processing | security | CRITICAL |
| GDPR-Art33 | Personal Data Breach Notification | security | CRITICAL |
| GDPR-Art35 | Data Protection Impact Assessment | accountability | HIGH |

Usage

from rotalabs_comply.frameworks.gdpr import GDPRFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel
from datetime import datetime

framework = GDPRFramework()

entry = AuditEntry(
    entry_id="gdpr-001",
    timestamp=datetime.utcnow(),
    event_type="data_processing",
    actor="analyst@company.eu",
    action="Process customer data",
    data_classification="pii",
    metadata={
        "lawful_basis_documented": True,
        "lawful_basis": "consent",
        "purpose_documented": True,
        "consent_recorded": True,
        "consent_specific": True,
        "consent_informed": True,
        "encryption_applied": True,
        "access_controlled": True,
    },
)

profile = ComplianceProfile(
    profile_id="gdpr-profile",
    name="GDPR Compliance",
)

result = await framework.check(entry, profile)
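
Once the check completes, the violation objects carry severity and category fields (see _create_violation above), so results can be aggregated for reporting; a small, hedged sketch:

from collections import Counter

violations_by_category = Counter(v.category for v in result.violations)
violations_by_severity = Counter(v.severity for v in result.violations)
print(violations_by_category)
print(violations_by_severity)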

Key Requirements

Personal data processing requires:
- data_classification set to "pii", "personal", "sensitive", or "special_category"
- metadata["lawful_basis_documented"]=True
- metadata["purpose_documented"]=True
- metadata["lawful_basis"] set to one of: "consent", "contract", "legal_obligation", "vital_interests", "public_interest", "legitimate_interests"

Consent-based processing requires:
- metadata["consent_recorded"]=True
- metadata["consent_specific"]=True
- metadata["consent_informed"]=True

Automated decisions with significant effects require:
- metadata["human_intervention_available"]=True
- metadata["right_to_contest_enabled"]=True
- metadata["logic_explained"]=True
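
To illustrate the automated decision-making requirements above, here is a minimal sketch. The entry_id, actor, action, and event_type values are hypothetical, and the exact event types that trigger GDPR-Art22 are defined in the framework source, so treat this as an illustration rather than the canonical trigger:

from rotalabs_comply.frameworks.gdpr import GDPRFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile
from datetime import datetime

framework = GDPRFramework()

# Hypothetical automated-decision event; the metadata keys mirror the
# Article 22 requirements listed above. "logic_explained" is left False,
# so the check should surface a GDPR-Art22 violation.
entry = AuditEntry(
    entry_id="gdpr-art22-001",
    timestamp=datetime.utcnow(),
    event_type="automated_decision",   # hypothetical event type
    actor="scoring-service",
    action="Automated credit decision",
    data_classification="pii",
    metadata={
        "lawful_basis_documented": True,
        "purpose_documented": True,
        "lawful_basis": "contract",
        "human_intervention_available": True,
        "right_to_contest_enabled": True,
        "logic_explained": False,
    },
)

profile = ComplianceProfile(profile_id="gdpr-profile", name="GDPR Compliance")

result = await framework.check(entry, profile)  # inside an async context
art22_violations = [v for v in result.violations if v.rule_id == "GDPR-Art22"]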


NIST AI RMF Framework

NISTAIRMFFramework

NIST AI Risk Management Framework compliance framework.

Implements compliance checks based on the NIST AI RMF 1.0 (January 2023) requirements for managing AI system risks. The framework evaluates audit entries against requirements for governance, context mapping, risk measurement, and risk management.

The NIST AI RMF is built on four core functions:

  1. GOVERN: Cross-cutting function that infuses the AI risk management culture into the organization. Establishes accountability structures, policies, and processes for AI risk management.

  2. MAP: Establishes the context for framing risks related to an AI system. Identifies and documents AI system characteristics, intended purposes, and potential impacts.

  3. MEASURE: Employs quantitative and qualitative methods to analyze, assess, and track AI risks and their impacts. Includes identification of appropriate metrics and evaluation methods.

  4. MANAGE: Allocates risk resources and implements responses to mapped and measured risks. Includes deployment decisions, post-deployment monitoring, and incident response.

The framework emphasizes trustworthy AI characteristics:
- Valid and Reliable
- Safe
- Secure and Resilient
- Accountable and Transparent
- Explainable and Interpretable
- Privacy-Enhanced
- Fair with Harmful Bias Managed

Example

framework = NISTAIRMFFramework()
result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")

Source code in src/rotalabs_comply/frameworks/nist_ai_rmf.py
class NISTAIRMFFramework(BaseFramework):
    """
    NIST AI Risk Management Framework compliance framework.

    Implements compliance checks based on the NIST AI RMF 1.0 (January 2023)
    requirements for managing AI system risks. The framework evaluates audit
    entries against requirements for governance, context mapping, risk
    measurement, and risk management.

    The NIST AI RMF is built on four core functions:

    1. GOVERN: Cross-cutting function that infuses the AI risk management
       culture into the organization. Establishes accountability structures,
       policies, and processes for AI risk management.

    2. MAP: Establishes the context for framing risks related to an AI system.
       Identifies and documents AI system characteristics, intended purposes,
       and potential impacts.

    3. MEASURE: Employs quantitative and qualitative methods to analyze,
       assess, and track AI risks and their impacts. Includes identification
       of appropriate metrics and evaluation methods.

    4. MANAGE: Allocates risk resources and implements responses to mapped
       and measured risks. Includes deployment decisions, post-deployment
       monitoring, and incident response.

    The framework emphasizes trustworthy AI characteristics:
    - Valid and Reliable
    - Safe
    - Secure and Resilient
    - Accountable and Transparent
    - Explainable and Interpretable
    - Privacy-Enhanced
    - Fair with Harmful Bias Managed

    Example:
        >>> framework = NISTAIRMFFramework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    def __init__(self):
        """Initialize the NIST AI RMF framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="NIST AI RMF", version="1.0", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all NIST AI RMF compliance rules.

        Returns:
            List of ComplianceRule objects representing NIST AI RMF requirements
        """
        return [
            # ================================================================
            # GOVERN Function - Organizational Governance
            # ================================================================
            ComplianceRule(
                rule_id="NIST-GOV-1",
                name="AI Risk Management Governance Structure",
                description=(
                    "Organizations should establish and maintain AI risk management "
                    "governance structures that define clear accountability, roles, "
                    "and decision-making processes. Governance includes policies, "
                    "processes, and procedures to manage AI risks and opportunities "
                    "throughout the AI lifecycle. Senior leadership should demonstrate "
                    "commitment to AI risk management through resource allocation and "
                    "organizational culture. (GOVERN 1.1, 1.2, 1.3)"
                ),
                severity=RiskLevel.HIGH,
                category="governance",
                remediation=(
                    "Establish an AI governance committee or designate responsible "
                    "leadership. Document AI governance policies and procedures. "
                    "Ensure governance structures are integrated with enterprise "
                    "risk management. Define escalation paths for AI-related decisions."
                ),
                references=[
                    "NIST AI RMF GOVERN 1.1",
                    "NIST AI RMF GOVERN 1.2",
                    "NIST AI RMF GOVERN 1.3",
                    "NIST AI 100-1 Section 3",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-GOV-2",
                name="Organizational AI Principles and Values",
                description=(
                    "Organizations should document and communicate AI principles and "
                    "values that guide AI development and deployment decisions. These "
                    "principles should address trustworthy AI characteristics including "
                    "fairness, accountability, transparency, privacy, safety, and "
                    "security. Principles should be operationalized through specific "
                    "policies and integrated into organizational processes. "
                    "(GOVERN 1.4, 1.5)"
                ),
                severity=RiskLevel.MEDIUM,
                category="governance",
                remediation=(
                    "Develop and document organizational AI principles aligned with "
                    "trustworthy AI characteristics. Communicate principles to all "
                    "stakeholders. Create mechanisms to operationalize principles in "
                    "AI development and deployment processes. Regularly review and "
                    "update principles based on evolving standards and learnings."
                ),
                references=[
                    "NIST AI RMF GOVERN 1.4",
                    "NIST AI RMF GOVERN 1.5",
                    "NIST AI 100-1 Appendix A",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-GOV-3",
                name="Roles and Responsibilities Defined",
                description=(
                    "Organizations should clearly define and document roles and "
                    "responsibilities for AI risk management across the AI lifecycle. "
                    "This includes designating individuals or teams responsible for "
                    "AI governance, risk assessment, monitoring, and incident response. "
                    "Responsibilities should span development, deployment, and "
                    "decommissioning phases. (GOVERN 2.1, 2.2)"
                ),
                severity=RiskLevel.HIGH,
                category="governance",
                remediation=(
                    "Document specific roles and responsibilities for AI risk management. "
                    "Assign accountability for each phase of the AI lifecycle. Ensure "
                    "cross-functional representation in AI governance. Define clear "
                    "escalation procedures and decision authority. Provide training "
                    "appropriate to assigned responsibilities."
                ),
                references=[
                    "NIST AI RMF GOVERN 2.1",
                    "NIST AI RMF GOVERN 2.2",
                    "NIST AI 100-1 Section 3",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-GOV-4",
                name="Third-Party AI Risk Management",
                description=(
                    "Organizations should establish processes to assess and manage "
                    "risks from third-party AI components, including AI services, "
                    "models, data, and infrastructure. Due diligence should be "
                    "conducted on third-party AI providers. Contracts should address "
                    "AI risk management requirements, and third-party risks should be "
                    "monitored throughout the relationship. (GOVERN 6.1, 6.2)"
                ),
                severity=RiskLevel.HIGH,
                category="governance",
                remediation=(
                    "Implement third-party AI risk assessment processes. Include AI "
                    "risk requirements in vendor contracts and SLAs. Conduct due "
                    "diligence on AI providers including model provenance and data "
                    "practices. Establish ongoing monitoring of third-party AI "
                    "performance and compliance. Maintain inventory of third-party "
                    "AI dependencies."
                ),
                references=[
                    "NIST AI RMF GOVERN 6.1",
                    "NIST AI RMF GOVERN 6.2",
                    "NIST AI 100-1 Section 3",
                ],
            ),
            # ================================================================
            # MAP Function - Context and Risk Identification
            # ================================================================
            ComplianceRule(
                rule_id="NIST-MAP-1",
                name="AI System Context Established",
                description=(
                    "Organizations should establish and document the context for "
                    "AI systems including the operating environment, stakeholders, "
                    "and potential impacts. Context includes organizational goals, "
                    "intended users, deployment environment, and societal context. "
                    "Understanding context is essential for identifying and assessing "
                    "AI risks appropriately. (MAP 1.1, 1.2, 1.3)"
                ),
                severity=RiskLevel.MEDIUM,
                category="context",
                remediation=(
                    "Document the AI system's intended operating environment and "
                    "deployment context. Identify all stakeholders including direct "
                    "users, affected individuals, and oversight bodies. Analyze "
                    "organizational, technical, and societal context factors. "
                    "Assess how context may change over the system lifecycle."
                ),
                references=[
                    "NIST AI RMF MAP 1.1",
                    "NIST AI RMF MAP 1.2",
                    "NIST AI RMF MAP 1.3",
                    "NIST AI 100-1 Section 4",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAP-2",
                name="AI Categorization and Intended Use Documented",
                description=(
                    "Organizations should categorize AI systems and document their "
                    "intended use, including the specific tasks the AI is designed "
                    "to perform, the target users, and the decision-making contexts. "
                    "Documentation should address potential misuse scenarios and "
                    "out-of-scope applications. Limitations and constraints should "
                    "be clearly specified. (MAP 2.1, 2.2, 2.3)"
                ),
                severity=RiskLevel.HIGH,
                category="context",
                remediation=(
                    "Create comprehensive documentation of AI system purpose and "
                    "intended use cases. Categorize the AI system based on risk "
                    "factors and application domain. Document known limitations, "
                    "constraints, and out-of-scope uses. Specify conditions under "
                    "which the AI should and should not be used."
                ),
                references=[
                    "NIST AI RMF MAP 2.1",
                    "NIST AI RMF MAP 2.2",
                    "NIST AI RMF MAP 2.3",
                    "NIST AI 100-1 Section 4",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAP-3",
                name="AI Benefits and Costs Assessed",
                description=(
                    "Organizations should assess and document the benefits and costs "
                    "of AI systems, including potential positive and negative impacts "
                    "on individuals, organizations, communities, and society. Assessment "
                    "should consider both intended outcomes and unintended consequences. "
                    "Trade-offs between benefits and risks should be analyzed and "
                    "documented. (MAP 3.1, 3.2)"
                ),
                severity=RiskLevel.MEDIUM,
                category="context",
                remediation=(
                    "Conduct benefit-cost analysis for AI systems including tangible "
                    "and intangible impacts. Document potential positive outcomes and "
                    "risks to different stakeholder groups. Analyze trade-offs and "
                    "document decision rationale. Consider long-term and systemic "
                    "effects. Re-evaluate periodically as context changes."
                ),
                references=[
                    "NIST AI RMF MAP 3.1",
                    "NIST AI RMF MAP 3.2",
                    "NIST AI 100-1 Section 4",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAP-4",
                name="Risks from Third-Party Components Mapped",
                description=(
                    "Organizations should identify and map risks arising from "
                    "third-party AI components including pre-trained models, datasets, "
                    "APIs, and cloud services. Risk mapping should address model "
                    "provenance, data quality, supply chain integrity, and dependency "
                    "risks. Organizations should understand how third-party components "
                    "affect overall system trustworthiness. (MAP 4.1, 4.2)"
                ),
                severity=RiskLevel.HIGH,
                category="risk_identification",
                remediation=(
                    "Maintain inventory of all third-party AI components and data "
                    "sources. Assess risks associated with each third-party dependency "
                    "including provenance, quality, and support continuity. Document "
                    "how third-party components affect system behavior and risk profile. "
                    "Establish processes for evaluating new third-party AI components."
                ),
                references=[
                    "NIST AI RMF MAP 4.1",
                    "NIST AI RMF MAP 4.2",
                    "NIST AI 100-1 Section 4",
                ],
            ),
            # ================================================================
            # MEASURE Function - Risk Analysis
            # ================================================================
            ComplianceRule(
                rule_id="NIST-MEAS-1",
                name="Appropriate Metrics Identified",
                description=(
                    "Organizations should identify and implement appropriate metrics "
                    "for measuring AI system performance, trustworthiness characteristics, "
                    "and risks. Metrics should be relevant to the AI system context, "
                    "measurable, and aligned with organizational goals. Measurement "
                    "approaches should be documented and validated for reliability. "
                    "(MEASURE 1.1, 1.2, 1.3)"
                ),
                severity=RiskLevel.MEDIUM,
                category="measurement",
                remediation=(
                    "Define metrics for each trustworthy AI characteristic relevant "
                    "to the system. Establish baselines and thresholds for acceptable "
                    "performance. Document measurement methodologies and their "
                    "limitations. Validate metrics are meaningful for the intended "
                    "context. Review and update metrics as system context evolves."
                ),
                references=[
                    "NIST AI RMF MEASURE 1.1",
                    "NIST AI RMF MEASURE 1.2",
                    "NIST AI RMF MEASURE 1.3",
                    "NIST AI 100-1 Section 5",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MEAS-2",
                name="AI Systems Evaluated for Trustworthy Characteristics",
                description=(
                    "Organizations should evaluate AI systems against trustworthy AI "
                    "characteristics including validity, reliability, safety, security, "
                    "resilience, accountability, transparency, explainability, "
                    "interpretability, privacy protection, and fairness. Evaluations "
                    "should be conducted throughout the AI lifecycle using appropriate "
                    "testing and assessment methods. (MEASURE 2.1, 2.2, 2.3)"
                ),
                severity=RiskLevel.HIGH,
                category="measurement",
                remediation=(
                    "Implement evaluation processes for trustworthy AI characteristics. "
                    "Conduct testing for accuracy, robustness, fairness, and other "
                    "relevant characteristics. Document evaluation results and track "
                    "trends over time. Use multiple evaluation methods appropriate to "
                    "each characteristic. Address identified gaps through system "
                    "improvements or risk mitigations."
                ),
                references=[
                    "NIST AI RMF MEASURE 2.1",
                    "NIST AI RMF MEASURE 2.2",
                    "NIST AI RMF MEASURE 2.3",
                    "NIST AI 100-1 Section 5",
                    "NIST AI 100-1 Appendix B",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MEAS-3",
                name="Mechanisms for Tracking Identified Risks",
                description=(
                    "Organizations should establish mechanisms for tracking identified "
                    "AI risks throughout the system lifecycle. Risk tracking should "
                    "include monitoring of risk indicators, documentation of risk "
                    "status changes, and communication of risk information to relevant "
                    "stakeholders. Risk tracking should be integrated with broader "
                    "organizational risk management processes. (MEASURE 3.1, 3.2, 3.3)"
                ),
                severity=RiskLevel.MEDIUM,
                category="measurement",
                remediation=(
                    "Implement risk tracking systems or integrate with existing risk "
                    "management tools. Define risk indicators and monitoring processes. "
                    "Establish regular risk review cadence. Document risk status and "
                    "changes over time. Create communication processes for risk "
                    "information sharing with relevant stakeholders."
                ),
                references=[
                    "NIST AI RMF MEASURE 3.1",
                    "NIST AI RMF MEASURE 3.2",
                    "NIST AI RMF MEASURE 3.3",
                    "NIST AI 100-1 Section 5",
                ],
            ),
            # ================================================================
            # MANAGE Function - Risk Treatment
            # ================================================================
            ComplianceRule(
                rule_id="NIST-MAN-1",
                name="AI Risks Prioritized and Responded To",
                description=(
                    "Organizations should prioritize AI risks based on their likelihood "
                    "and potential impact, and develop appropriate risk responses. "
                    "Risk responses may include risk avoidance, mitigation, transfer, "
                    "or acceptance. Resource allocation for risk treatment should align "
                    "with risk priorities. Risk response decisions should be documented "
                    "and communicated. (MANAGE 1.1, 1.2, 1.3)"
                ),
                severity=RiskLevel.HIGH,
                category="risk_treatment",
                remediation=(
                    "Establish risk prioritization criteria and processes. Document "
                    "risk response decisions including rationale. Allocate resources "
                    "proportionate to risk priority. Implement risk mitigation measures "
                    "and track effectiveness. Review and adjust risk responses based "
                    "on changing conditions and new information."
                ),
                references=[
                    "NIST AI RMF MANAGE 1.1",
                    "NIST AI RMF MANAGE 1.2",
                    "NIST AI RMF MANAGE 1.3",
                    "NIST AI 100-1 Section 6",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAN-2",
                name="AI System Deployment Decisions Documented",
                description=(
                    "Organizations should document deployment decisions for AI systems "
                    "including the criteria used, risks considered, and approval process. "
                    "Deployment decisions should consider whether risks have been "
                    "adequately addressed and whether appropriate safeguards are in "
                    "place. Staged deployment approaches should be considered for "
                    "high-risk systems. (MANAGE 2.1, 2.2)"
                ),
                severity=RiskLevel.HIGH,
                category="risk_treatment",
                remediation=(
                    "Establish deployment decision criteria and approval processes. "
                    "Document risk assessment results informing deployment decisions. "
                    "Implement staged deployment approaches where appropriate. Define "
                    "conditions for full deployment, limited deployment, or non-deployment. "
                    "Document deployment decisions and supporting rationale."
                ),
                references=[
                    "NIST AI RMF MANAGE 2.1",
                    "NIST AI RMF MANAGE 2.2",
                    "NIST AI 100-1 Section 6",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAN-3",
                name="Post-Deployment Monitoring in Place",
                description=(
                    "Organizations should implement post-deployment monitoring for "
                    "AI systems to detect performance degradation, emerging risks, "
                    "and unintended impacts. Monitoring should cover system performance, "
                    "user feedback, and environmental changes that may affect risk. "
                    "Monitoring findings should trigger appropriate review and response "
                    "processes. (MANAGE 3.1, 3.2)"
                ),
                severity=RiskLevel.HIGH,
                category="risk_treatment",
                remediation=(
                    "Implement monitoring systems for deployed AI applications. Define "
                    "metrics and thresholds for detecting performance issues. Establish "
                    "processes for collecting and analyzing user feedback. Monitor for "
                    "data drift, concept drift, and environmental changes. Create "
                    "escalation procedures for monitoring alerts."
                ),
                references=[
                    "NIST AI RMF MANAGE 3.1",
                    "NIST AI RMF MANAGE 3.2",
                    "NIST AI 100-1 Section 6",
                ],
            ),
            ComplianceRule(
                rule_id="NIST-MAN-4",
                name="Incident Response and Recovery Procedures",
                description=(
                    "Organizations should establish incident response and recovery "
                    "procedures for AI-related incidents including system failures, "
                    "security breaches, safety incidents, and harmful outputs. Procedures "
                    "should address incident detection, containment, investigation, "
                    "remediation, and communication. Lessons learned should inform "
                    "system improvements and risk management updates. (MANAGE 4.1, 4.2, 4.3)"
                ),
                severity=RiskLevel.CRITICAL,
                category="risk_treatment",
                remediation=(
                    "Develop AI-specific incident response procedures. Define incident "
                    "severity levels and response protocols. Establish incident "
                    "communication plans for internal and external stakeholders. "
                    "Implement procedures for system rollback or shutdown when needed. "
                    "Conduct post-incident reviews and update risk management based "
                    "on lessons learned."
                ),
                references=[
                    "NIST AI RMF MANAGE 4.1",
                    "NIST AI RMF MANAGE 4.2",
                    "NIST AI RMF MANAGE 4.3",
                    "NIST AI 100-1 Section 6",
                ],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single NIST AI RMF rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "NIST-GOV-1":
            return self._check_governance_structure(entry, rule)
        elif rule.rule_id == "NIST-GOV-2":
            return self._check_ai_principles(entry, rule)
        elif rule.rule_id == "NIST-GOV-3":
            return self._check_roles_responsibilities(entry, rule)
        elif rule.rule_id == "NIST-GOV-4":
            return self._check_third_party_governance(entry, rule)
        elif rule.rule_id == "NIST-MAP-1":
            return self._check_system_context(entry, rule)
        elif rule.rule_id == "NIST-MAP-2":
            return self._check_categorization_documented(entry, rule)
        elif rule.rule_id == "NIST-MAP-3":
            return self._check_benefits_costs_assessed(entry, rule)
        elif rule.rule_id == "NIST-MAP-4":
            return self._check_third_party_risks_mapped(entry, rule)
        elif rule.rule_id == "NIST-MEAS-1":
            return self._check_metrics_identified(entry, rule)
        elif rule.rule_id == "NIST-MEAS-2":
            return self._check_trustworthy_evaluation(entry, rule)
        elif rule.rule_id == "NIST-MEAS-3":
            return self._check_risk_tracking(entry, rule)
        elif rule.rule_id == "NIST-MAN-1":
            return self._check_risk_prioritization(entry, rule)
        elif rule.rule_id == "NIST-MAN-2":
            return self._check_deployment_decisions(entry, rule)
        elif rule.rule_id == "NIST-MAN-3":
            return self._check_post_deployment_monitoring(entry, rule)
        elif rule.rule_id == "NIST-MAN-4":
            return self._check_incident_response(entry, rule)

        return None

    # ========================================================================
    # GOVERN Function Checks
    # ========================================================================

    def _check_governance_structure(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-GOV-1: AI risk management governance structure required.

        High-risk AI operations must have documented governance oversight.
        This is evaluated based on the risk_level and governance metadata.
        """
        # Only applies to high-risk operations
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        has_governance = entry.metadata.get("governance_documented", False)
        has_approval = entry.metadata.get("governance_approval", False)

        if not (has_governance or has_approval):
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented AI governance structure or approval",
            )
        return None

    def _check_ai_principles(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-GOV-2: Organizational AI principles and values documented.

        Operations involving significant AI decisions should reference
        organizational AI principles.
        """
        # Check for significant decision-making operations
        decision_events = {"deployment", "model_selection", "training", "policy_update"}
        if entry.event_type.lower() not in decision_events:
            return None

        has_principles_ref = entry.metadata.get("ai_principles_aligned", False)
        if not has_principles_ref:
            return self._create_violation(
                entry,
                rule,
                f"AI decision operation (type={entry.event_type}) performed "
                f"without reference to organizational AI principles",
            )
        return None

    def _check_roles_responsibilities(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-GOV-3: Roles and responsibilities defined.

        High-risk operations must have clear accountability documented.
        """
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        has_owner = bool(entry.actor and entry.actor != "system")
        has_accountability = entry.metadata.get("accountability_documented", False)

        if not (has_owner or has_accountability):
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without clear accountability or responsible party documented",
            )
        return None

    def _check_third_party_governance(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-GOV-4: Third-party AI risk management.

        Operations involving third-party AI components must have
        appropriate risk governance.
        """
        # Check if this involves third-party components
        third_party_events = {
            "api_call", "external_model", "third_party_inference",
            "vendor_integration", "model_import"
        }
        if entry.event_type.lower() not in third_party_events:
            return None

        has_third_party_assessment = entry.metadata.get("third_party_assessed", False)
        has_vendor_agreement = entry.metadata.get("vendor_agreement_documented", False)

        if not (has_third_party_assessment or has_vendor_agreement):
            return self._create_violation(
                entry,
                rule,
                f"Third-party AI operation (type={entry.event_type}) performed "
                f"without documented third-party risk assessment",
            )
        return None

    # ========================================================================
    # MAP Function Checks
    # ========================================================================

    def _check_system_context(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAP-1: AI system context established.

        New deployments and significant system changes must have
        documented context.
        """
        context_events = {"deployment", "system_change", "environment_update"}
        if entry.event_type.lower() not in context_events:
            return None

        has_context = entry.metadata.get("system_context_documented", False)
        if not has_context:
            return self._create_violation(
                entry,
                rule,
                f"System operation (type={entry.event_type}) performed "
                f"without documented AI system context",
            )
        return None

    def _check_categorization_documented(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAP-2: AI categorization and intended use documented.

        Deployment and training operations must have categorization
        and intended use documentation.
        """
        significant_events = {"deployment", "training", "fine_tuning", "model_release"}
        if entry.event_type.lower() not in significant_events:
            return None

        has_categorization = entry.metadata.get("ai_categorization_documented", False)
        has_intended_use = entry.metadata.get("intended_use_documented", False)

        if not (has_categorization or has_intended_use or entry.documentation_ref):
            return self._create_violation(
                entry,
                rule,
                f"Significant operation (type={entry.event_type}) performed "
                f"without documented AI categorization or intended use",
            )
        return None

    def _check_benefits_costs_assessed(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAP-3: AI benefits and costs assessed.

        Deployment decisions should include benefit-cost assessment.
        """
        # Only check deployment-related events
        if entry.event_type.lower() != "deployment":
            return None

        has_assessment = entry.metadata.get("benefit_cost_assessed", False)
        has_impact_analysis = entry.metadata.get("impact_analysis_documented", False)

        if not (has_assessment or has_impact_analysis):
            return self._create_violation(
                entry,
                rule,
                f"Deployment operation performed without documented "
                f"benefit-cost or impact assessment",
            )
        return None

    def _check_third_party_risks_mapped(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAP-4: Risks from third-party components mapped.

        Operations using third-party AI must have risks identified.
        """
        third_party_events = {
            "api_call", "external_model", "third_party_inference",
            "vendor_integration", "model_import", "data_import"
        }
        if entry.event_type.lower() not in third_party_events:
            return None

        has_risk_mapping = entry.metadata.get("third_party_risks_mapped", False)
        has_component_inventory = entry.metadata.get("component_inventory_updated", False)

        if not (has_risk_mapping or has_component_inventory):
            return self._create_violation(
                entry,
                rule,
                f"Third-party operation (type={entry.event_type}) performed "
                f"without documented risk mapping for third-party components",
            )
        return None

    # ========================================================================
    # MEASURE Function Checks
    # ========================================================================

    def _check_metrics_identified(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MEAS-1: Appropriate metrics identified.

        Performance and risk-related operations should reference
        defined metrics.
        """
        metric_events = {
            "inference", "evaluation", "testing", "monitoring",
            "performance_review"
        }
        if entry.event_type.lower() not in metric_events:
            return None

        has_metrics = entry.metadata.get("metrics_documented", False)
        has_baseline = entry.metadata.get("baseline_established", False)

        if not (has_metrics or has_baseline):
            return self._create_violation(
                entry,
                rule,
                f"Measurement operation (type={entry.event_type}) performed "
                f"without reference to documented metrics",
            )
        return None

    def _check_trustworthy_evaluation(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MEAS-2: AI systems evaluated for trustworthy characteristics.

        Significant operations should include trustworthiness evaluation.
        """
        # Only applies to high-risk operations and significant events
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        evaluation_events = {
            "deployment", "inference", "training", "evaluation",
            "model_update", "testing"
        }
        if entry.event_type.lower() not in evaluation_events:
            return None

        has_trustworthy_eval = entry.metadata.get("trustworthiness_evaluated", False)
        has_fairness_check = entry.metadata.get("fairness_assessed", False)
        has_safety_check = entry.metadata.get("safety_evaluated", False)

        if not (has_trustworthy_eval or has_fairness_check or has_safety_check):
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (type={entry.event_type}) performed "
                f"without trustworthy AI characteristics evaluation",
            )
        return None

    def _check_risk_tracking(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MEAS-3: Mechanisms for tracking identified risks.

        High-risk operations should have risk tracking in place.
        """
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        has_risk_tracking = entry.metadata.get("risk_tracked", False)
        has_risk_registry = entry.metadata.get("risk_registry_updated", False)

        if not (has_risk_tracking or has_risk_registry):
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented risk tracking mechanism",
            )
        return None

    # ========================================================================
    # MANAGE Function Checks
    # ========================================================================

    def _check_risk_prioritization(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAN-1: AI risks prioritized and responded to.

        High-risk operations should have prioritized risk response.
        """
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        has_risk_response = entry.metadata.get("risk_response_documented", False)
        has_prioritization = entry.metadata.get("risk_prioritized", False)
        has_risk_assessment = entry.metadata.get("risk_assessment_documented", False)

        if not (has_risk_response or has_prioritization or has_risk_assessment):
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented risk prioritization or response",
            )
        return None

    def _check_deployment_decisions(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAN-2: AI system deployment decisions documented.

        Deployment operations must have documented decision rationale.
        """
        if entry.event_type.lower() != "deployment":
            return None

        has_decision_doc = entry.metadata.get("deployment_decision_documented", False)
        has_approval = entry.metadata.get("deployment_approved", False)

        if not (has_decision_doc or has_approval or entry.documentation_ref):
            return self._create_violation(
                entry,
                rule,
                f"Deployment operation performed without documented "
                f"deployment decision or approval",
            )
        return None

    def _check_post_deployment_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAN-3: Post-deployment monitoring in place.

        Production operations should have monitoring documented.
        """
        production_events = {
            "inference", "prediction", "completion", "production_query",
            "user_interaction"
        }
        if entry.event_type.lower() not in production_events:
            return None

        # Only check for operations that should be monitored
        has_monitoring = entry.metadata.get("monitoring_enabled", False)
        has_performance_tracking = entry.metadata.get("performance_tracked", False)

        if not (has_monitoring or has_performance_tracking):
            return self._create_violation(
                entry,
                rule,
                f"Production operation (type={entry.event_type}) performed "
                f"without documented post-deployment monitoring",
            )
        return None

    def _check_incident_response(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check NIST-MAN-4: Incident response and recovery procedures.

        Error and incident events must have response procedures.
        """
        incident_events = {
            "incident", "error", "failure", "security_event",
            "safety_incident", "model_failure", "system_error"
        }
        if entry.event_type.lower() not in incident_events:
            return None

        has_incident_response = entry.metadata.get("incident_response_followed", False)
        has_recovery_plan = entry.metadata.get("recovery_plan_executed", False)
        has_incident_documented = entry.metadata.get("incident_documented", False)

        if not (has_incident_response or has_recovery_plan or has_incident_documented):
            return self._create_violation(
                entry,
                rule,
                f"Incident event (type={entry.event_type}) without documented "
                f"incident response or recovery procedure",
            )
        return None

    # ========================================================================
    # Helper Methods
    # ========================================================================

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the NIST AI RMF framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/nist_ai_rmf.py
def __init__(self):
    """Initialize the NIST AI RMF framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="NIST AI RMF", version="1.0", rules=rules)

NIST AI Risk Management Framework (AI RMF 1.0, January 2023) compliance framework.

Categories

Category Function Description
governance GOVERN Organizational AI governance structures and accountability
context MAP AI system context, intended use, and stakeholder analysis
risk_identification MAP Identification of risks from AI systems and components
measurement MEASURE Metrics, evaluation, and tracking of AI characteristics
risk_treatment MANAGE Risk prioritization, response, and post-deployment monitoring

Rules

Rule ID Name Category Severity
NIST-GOV-1 AI Risk Management Governance Structure governance HIGH
NIST-GOV-2 Organizational AI Principles and Values governance MEDIUM
NIST-GOV-3 Roles and Responsibilities Defined governance HIGH
NIST-GOV-4 Third-Party AI Risk Management governance HIGH
NIST-MAP-1 AI System Context Established context MEDIUM
NIST-MAP-2 AI Categorization and Intended Use Documented context HIGH
NIST-MAP-3 AI Benefits and Costs Assessed context MEDIUM
NIST-MAP-4 Risks from Third-Party Components Mapped risk_identification HIGH
NIST-MEAS-1 Appropriate Metrics Identified measurement MEDIUM
NIST-MEAS-2 AI Systems Evaluated for Trustworthy Characteristics measurement HIGH
NIST-MEAS-3 Mechanisms for Tracking Identified Risks measurement MEDIUM
NIST-MAN-1 AI Risks Prioritized and Responded To risk_treatment HIGH
NIST-MAN-2 AI System Deployment Decisions Documented risk_treatment HIGH
NIST-MAN-3 Post-Deployment Monitoring in Place risk_treatment HIGH
NIST-MAN-4 Incident Response and Recovery Procedures risk_treatment CRITICAL

Usage

from rotalabs_comply.frameworks.nist_ai_rmf import NISTAIRMFFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel
from datetime import datetime

framework = NISTAIRMFFramework()

entry = AuditEntry(
    entry_id="nist-001",
    timestamp=datetime.utcnow(),
    event_type="deployment",
    actor="mlops@company.com",
    action="Deploy production model",
    risk_level=RiskLevel.HIGH,
    documentation_ref="DOC-DEPLOY-001",
    metadata={
        "governance_documented": True,
        "governance_approval": True,
        "system_context_documented": True,
        "ai_categorization_documented": True,
        "intended_use_documented": True,
        "benefit_cost_assessed": True,
        "deployment_decision_documented": True,
        "deployment_approved": True,
        "risk_assessment_documented": True,
    },
)

profile = ComplianceProfile(
    profile_id="nist-profile",
    name="NIST AI RMF Compliance",
)

result = await framework.check(entry, profile)

Key Requirements

High-risk operations require:
- metadata["governance_documented"]=True or metadata["governance_approval"]=True
- metadata["risk_assessment_documented"]=True or metadata["risk_prioritized"]=True
- metadata["risk_tracked"]=True or metadata["risk_registry_updated"]=True

Deployment operations require:
- metadata["deployment_decision_documented"]=True or metadata["deployment_approved"]=True
- documentation_ref set

Third-party AI operations require:
- metadata["third_party_assessed"]=True or metadata["vendor_agreement_documented"]=True
- metadata["third_party_risks_mapped"]=True or metadata["component_inventory_updated"]=True

Incident events require:
- metadata["incident_response_followed"]=True or metadata["recovery_plan_executed"]=True
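
As an illustration of the third-party requirements above, the following sketch builds a compliant third-party inference entry. The entry_id, actor, and action values are hypothetical; the event type and metadata flags correspond to the NIST-GOV-4 and NIST-MAP-4 checks shown in the source above:

from rotalabs_comply.frameworks.nist_ai_rmf import NISTAIRMFFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile
from datetime import datetime

framework = NISTAIRMFFramework()

# "third_party_inference" is one of the event types the NIST-GOV-4 and
# NIST-MAP-4 checks look for; the flags below satisfy both rules.
entry = AuditEntry(
    entry_id="nist-3p-001",   # hypothetical identifier
    timestamp=datetime.utcnow(),
    event_type="third_party_inference",
    actor="ml-platform@company.com",
    action="Call hosted third-party model API",
    metadata={
        "third_party_assessed": True,           # NIST-GOV-4
        "vendor_agreement_documented": True,    # NIST-GOV-4
        "third_party_risks_mapped": True,       # NIST-MAP-4
        "component_inventory_updated": True,    # NIST-MAP-4
    },
)

profile = ComplianceProfile(profile_id="nist-profile", name="NIST AI RMF Compliance")

result = await framework.check(entry, profile)  # inside an async context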


ISO/IEC 42001 Framework

ISO42001Framework

ISO/IEC 42001:2023 AI Management System compliance framework.

Implements compliance checks based on ISO 42001:2023 requirements for establishing, implementing, maintaining, and continually improving an AI management system. The framework evaluates audit entries against the standard's requirements across seven key areas.

ISO 42001 is structured around the Plan-Do-Check-Act (PDCA) cycle:
- Plan: Establish AIMS objectives and processes (Clauses 4-6)
- Do: Implement the AIMS and its processes (Clauses 7-8)
- Check: Monitor and evaluate performance (Clause 9)
- Act: Take actions to improve performance (Clause 10)

The standard emphasizes:
- Risk-based thinking throughout the AI lifecycle
- Responsible AI development and deployment
- Transparency and accountability
- Continual improvement

Example

framework = ISO42001Framework()
result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")
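
A minimal usage sketch mirroring the other frameworks in this module; the entry fields shown are placeholders, and the specific metadata flags each ISO 42001 rule expects are defined in the framework source:

from rotalabs_comply.frameworks.iso_42001 import ISO42001Framework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile
from datetime import datetime

framework = ISO42001Framework()

# Placeholder entry; documentation_ref and metadata values are hypothetical.
entry = AuditEntry(
    entry_id="iso-001",
    timestamp=datetime.utcnow(),
    event_type="deployment",
    actor="mlops@company.com",
    action="Deploy production model",
    documentation_ref="DOC-AIMS-001",
    metadata={},
)

profile = ComplianceProfile(profile_id="iso-profile", name="ISO 42001 Compliance")

result = await framework.check(entry, profile)  # inside an async context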

Source code in src/rotalabs_comply/frameworks/iso_42001.py
class ISO42001Framework(BaseFramework):
    """
    ISO/IEC 42001:2023 AI Management System compliance framework.

    Implements compliance checks based on ISO 42001:2023 requirements for
    establishing, implementing, maintaining, and continually improving an
    AI management system. The framework evaluates audit entries against
    the standard's requirements across seven key areas.

    ISO 42001 is structured around the Plan-Do-Check-Act (PDCA) cycle:
    - Plan: Establish AIMS objectives and processes (Clauses 4-6)
    - Do: Implement the AIMS and its processes (Clauses 7-8)
    - Check: Monitor and evaluate performance (Clause 9)
    - Act: Take actions to improve performance (Clause 10)

    The standard emphasizes:
    - Risk-based thinking throughout the AI lifecycle
    - Responsible AI development and deployment
    - Transparency and accountability
    - Continual improvement

    Example:
        >>> framework = ISO42001Framework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")
    """

    def __init__(self):
        """Initialize the ISO 42001 framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="ISO/IEC 42001", version="2023", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all ISO 42001 compliance rules.

        Returns:
            List of ComplianceRule objects representing ISO 42001 requirements
        """
        return [
            # =================================================================
            # Clause 4: Context of the Organization
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-4.1",
                name="Understanding Organization and Context",
                description=(
                    "The organization shall determine external and internal issues that "
                    "are relevant to its purpose and that affect its ability to achieve "
                    "the intended outcome(s) of its AI management system. This includes "
                    "understanding the organization's role as an AI provider, deployer, "
                    "or other relevant stakeholder, and the applicable legal, regulatory, "
                    "and contractual requirements. (Clause 4.1)"
                ),
                severity=RiskLevel.HIGH,
                category="context",
                remediation=(
                    "Document the organizational context including: internal factors "
                    "(governance structure, capabilities, culture), external factors "
                    "(legal/regulatory environment, technology trends, stakeholder "
                    "expectations), and the organization's role in the AI value chain."
                ),
                references=["ISO/IEC 42001:2023 Clause 4.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-4.2",
                name="Understanding Needs of Interested Parties",
                description=(
                    "The organization shall determine the interested parties that are "
                    "relevant to the AI management system, the relevant requirements of "
                    "these interested parties, and which of these requirements will be "
                    "addressed through the AIMS. Interested parties may include customers, "
                    "regulators, employees, AI system users, and affected communities. "
                    "(Clause 4.2)"
                ),
                severity=RiskLevel.HIGH,
                category="context",
                remediation=(
                    "Identify and document all relevant interested parties and their "
                    "requirements. Create a stakeholder register that includes: party "
                    "identification, their needs and expectations, relevance to AIMS, "
                    "and how requirements will be addressed."
                ),
                references=["ISO/IEC 42001:2023 Clause 4.2"],
            ),
            ComplianceRule(
                rule_id="ISO42001-4.3",
                name="Scope of AIMS Determined",
                description=(
                    "The organization shall determine the boundaries and applicability "
                    "of the AI management system to establish its scope. The scope shall "
                    "be available as documented information. When determining the scope, "
                    "the organization shall consider the internal and external issues, "
                    "requirements of interested parties, and interfaces with other "
                    "management systems. (Clause 4.3)"
                ),
                severity=RiskLevel.HIGH,
                category="context",
                remediation=(
                    "Define and document the AIMS scope including: organizational units "
                    "covered, AI systems included, physical locations, processes within "
                    "scope, and any exclusions with justification. Ensure the scope "
                    "statement is available to relevant interested parties."
                ),
                references=["ISO/IEC 42001:2023 Clause 4.3"],
            ),
            # =================================================================
            # Clause 5: Leadership
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-5.1",
                name="Leadership Commitment Demonstrated",
                description=(
                    "Top management shall demonstrate leadership and commitment to the "
                    "AI management system by ensuring the AI policy and objectives are "
                    "established and compatible with strategic direction, ensuring "
                    "integration into business processes, ensuring resources are available, "
                    "communicating importance of effective AIMS, and promoting continual "
                    "improvement. (Clause 5.1)"
                ),
                severity=RiskLevel.HIGH,
                category="leadership",
                remediation=(
                    "Document evidence of top management commitment including: meeting "
                    "minutes showing AIMS discussions, resource allocation decisions, "
                    "communication materials, and management review participation. "
                    "Leadership must actively champion responsible AI practices."
                ),
                references=["ISO/IEC 42001:2023 Clause 5.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-5.2",
                name="AI Policy Established",
                description=(
                    "Top management shall establish an AI policy that is appropriate to "
                    "the organization's purpose, provides a framework for setting AI "
                    "objectives, includes a commitment to satisfy applicable requirements, "
                    "includes a commitment to continual improvement, and addresses "
                    "responsible AI principles including transparency, fairness, and "
                    "accountability. (Clause 5.2)"
                ),
                severity=RiskLevel.CRITICAL,
                category="leadership",
                remediation=(
                    "Develop and publish an AI policy that: aligns with organizational "
                    "strategy, establishes responsible AI principles, commits to "
                    "compliance and improvement, is communicated throughout the "
                    "organization, and is available to interested parties as appropriate."
                ),
                references=["ISO/IEC 42001:2023 Clause 5.2"],
            ),
            ComplianceRule(
                rule_id="ISO42001-5.3",
                name="Roles and Responsibilities Assigned",
                description=(
                    "Top management shall ensure that the responsibilities and authorities "
                    "for relevant roles are assigned and communicated within the "
                    "organization. This includes assigning responsibility for ensuring "
                    "AIMS conformance to ISO 42001 and reporting on AIMS performance. "
                    "(Clause 5.3)"
                ),
                severity=RiskLevel.HIGH,
                category="leadership",
                remediation=(
                    "Define and document roles related to AI governance including: AIMS "
                    "owner/manager, AI ethics officer, risk owners, system owners, and "
                    "oversight committees. Create RACI matrices for AI-related processes "
                    "and communicate assignments to all relevant personnel."
                ),
                references=["ISO/IEC 42001:2023 Clause 5.3"],
            ),
            # =================================================================
            # Clause 6: Planning
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-6.1",
                name="AI Risk Assessment Conducted",
                description=(
                    "The organization shall plan and implement a process to identify, "
                    "analyze, and evaluate AI-related risks. The risk assessment shall "
                    "consider risks to the organization, to individuals, to groups, and "
                    "to society arising from AI system development and use. Risk criteria "
                    "shall be established and maintained. (Clause 6.1)"
                ),
                severity=RiskLevel.CRITICAL,
                category="planning",
                remediation=(
                    "Implement a comprehensive AI risk assessment process that: defines "
                    "risk criteria and acceptance thresholds, identifies AI-specific risks "
                    "(bias, safety, privacy, security), evaluates likelihood and impact, "
                    "documents risk treatment decisions, and maintains a risk register."
                ),
                references=["ISO/IEC 42001:2023 Clause 6.1", "Annex A"],
            ),
            ComplianceRule(
                rule_id="ISO42001-6.2",
                name="AI Objectives Established",
                description=(
                    "The organization shall establish AI objectives at relevant functions, "
                    "levels, and processes. Objectives shall be consistent with the AI "
                    "policy, measurable, take into account applicable requirements, be "
                    "monitored, communicated, and updated as appropriate. Plans to achieve "
                    "objectives shall define what will be done, resources required, "
                    "responsibilities, timelines, and evaluation methods. (Clause 6.2)"
                ),
                severity=RiskLevel.HIGH,
                category="planning",
                remediation=(
                    "Define measurable AI objectives that support the AI policy. For each "
                    "objective, document: target metrics, responsible parties, required "
                    "resources, implementation timeline, and progress monitoring approach. "
                    "Review and update objectives regularly."
                ),
                references=["ISO/IEC 42001:2023 Clause 6.2"],
            ),
            ComplianceRule(
                rule_id="ISO42001-6.3",
                name="AI Impact Assessment Performed",
                description=(
                    "The organization shall perform AI system impact assessments to "
                    "identify and evaluate the potential impacts of AI systems on "
                    "individuals, groups, and society. The assessment shall consider "
                    "impacts throughout the AI system lifecycle including development, "
                    "deployment, use, and decommissioning. (Clause 6.1.4)"
                ),
                severity=RiskLevel.CRITICAL,
                category="planning",
                remediation=(
                    "Conduct impact assessments for AI systems covering: intended use "
                    "cases and users, potential beneficial and harmful impacts, effects "
                    "on fundamental rights and freedoms, environmental considerations, "
                    "and cumulative effects. Document assessment results and mitigation "
                    "measures."
                ),
                references=["ISO/IEC 42001:2023 Clause 6.1.4", "Annex B"],
            ),
            # =================================================================
            # Clause 7: Support
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-7.1",
                name="Resources Provided",
                description=(
                    "The organization shall determine and provide the resources needed "
                    "for the establishment, implementation, maintenance, and continual "
                    "improvement of the AI management system. This includes human "
                    "resources, infrastructure, technology, and financial resources "
                    "appropriate for the scale and complexity of AI operations. (Clause 7.1)"
                ),
                severity=RiskLevel.HIGH,
                category="support",
                remediation=(
                    "Document resource requirements for AIMS implementation including: "
                    "personnel allocation, training budgets, technology infrastructure, "
                    "tool procurement, and ongoing operational support. Ensure resource "
                    "planning is part of organizational budgeting processes."
                ),
                references=["ISO/IEC 42001:2023 Clause 7.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-7.2",
                name="Competence Ensured",
                description=(
                    "The organization shall determine the necessary competence of persons "
                    "doing work under its control that affects AI management system "
                    "performance, ensure these persons are competent on the basis of "
                    "appropriate education, training, or experience, take actions to "
                    "acquire the necessary competence, and retain documented evidence "
                    "of competence. (Clause 7.2)"
                ),
                severity=RiskLevel.HIGH,
                category="support",
                remediation=(
                    "Establish competency requirements for AI-related roles covering: "
                    "technical skills, ethical considerations, risk management, and "
                    "regulatory awareness. Implement training programs, maintain competency "
                    "matrices, and retain evidence of qualifications and training completion."
                ),
                references=["ISO/IEC 42001:2023 Clause 7.2"],
            ),
            ComplianceRule(
                rule_id="ISO42001-7.3",
                name="Awareness Maintained",
                description=(
                    "Persons doing work under the organization's control shall be aware "
                    "of the AI policy, their contribution to the AIMS effectiveness, the "
                    "implications of not conforming to AIMS requirements, and the "
                    "importance of responsible AI practices. (Clause 7.3)"
                ),
                severity=RiskLevel.MEDIUM,
                category="support",
                remediation=(
                    "Implement an awareness program that communicates: the AI policy and "
                    "its relevance, individual responsibilities, consequences of non-"
                    "conformance, and channels for raising concerns. Use multiple formats "
                    "including onboarding, regular communications, and refresher training."
                ),
                references=["ISO/IEC 42001:2023 Clause 7.3"],
            ),
            ComplianceRule(
                rule_id="ISO42001-7.4",
                name="Communication Processes Established",
                description=(
                    "The organization shall determine the internal and external "
                    "communications relevant to the AI management system including what "
                    "to communicate, when, with whom, how, and who is responsible. "
                    "Communication shall address both routine and incident-related "
                    "notifications. (Clause 7.4)"
                ),
                severity=RiskLevel.MEDIUM,
                category="support",
                remediation=(
                    "Define communication processes covering: stakeholder identification, "
                    "communication channels, frequency, content requirements, approval "
                    "workflows, and records retention. Include both internal (employees, "
                    "management) and external (regulators, customers, public) communications."
                ),
                references=["ISO/IEC 42001:2023 Clause 7.4"],
            ),
            ComplianceRule(
                rule_id="ISO42001-7.5",
                name="Documented Information Controlled",
                description=(
                    "The AI management system shall include documented information "
                    "required by ISO 42001 and determined by the organization as necessary "
                    "for AIMS effectiveness. Documented information shall be controlled to "
                    "ensure availability, suitability, and adequate protection including "
                    "distribution, access, retrieval, storage, and preservation. (Clause 7.5)"
                ),
                severity=RiskLevel.HIGH,
                category="support",
                remediation=(
                    "Implement document control procedures covering: identification and "
                    "format requirements, review and approval, version control, access "
                    "controls, retention periods, and disposal. Maintain a document register "
                    "and ensure documents are available to those who need them."
                ),
                references=["ISO/IEC 42001:2023 Clause 7.5"],
            ),
            # =================================================================
            # Clause 8: Operation
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-8.1",
                name="Operational Planning and Control",
                description=(
                    "The organization shall plan, implement, and control the processes "
                    "needed to meet AI management system requirements. This includes "
                    "establishing criteria for processes, implementing control of processes "
                    "in accordance with criteria, maintaining documented information to "
                    "have confidence processes are carried out as planned, and controlling "
                    "planned changes. (Clause 8.1)"
                ),
                severity=RiskLevel.HIGH,
                category="operation",
                remediation=(
                    "Document operational procedures for AI processes including: process "
                    "objectives and criteria, input/output specifications, roles and "
                    "responsibilities, monitoring requirements, and change control "
                    "procedures. Implement controls appropriate to process criticality."
                ),
                references=["ISO/IEC 42001:2023 Clause 8.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-8.2",
                name="AI System Lifecycle Processes",
                description=(
                    "The organization shall establish, implement, and maintain processes "
                    "for AI system lifecycle management including: design and development, "
                    "verification and validation, deployment, operation and monitoring, "
                    "and retirement/decommissioning. Processes shall address data "
                    "management, model development, and responsible AI considerations "
                    "throughout the lifecycle. (Clause 8.2)"
                ),
                severity=RiskLevel.CRITICAL,
                category="operation",
                remediation=(
                    "Define lifecycle processes covering: requirements analysis, data "
                    "acquisition and preparation, model development and training, testing "
                    "and validation, deployment and release, monitoring and maintenance, "
                    "and decommissioning. Include stage gates and approval requirements."
                ),
                references=["ISO/IEC 42001:2023 Clause 8.2", "Annex A.6"],
            ),
            ComplianceRule(
                rule_id="ISO42001-8.3",
                name="Third-Party Considerations",
                description=(
                    "The organization shall determine and apply criteria for the evaluation, "
                    "selection, monitoring, and re-evaluation of external providers of "
                    "AI-related products and services. The organization shall ensure that "
                    "externally provided processes, products, and services conform to "
                    "AIMS requirements. (Clause 8.3)"
                ),
                severity=RiskLevel.HIGH,
                category="operation",
                remediation=(
                    "Establish third-party management processes including: vendor "
                    "qualification criteria, contractual requirements, due diligence "
                    "procedures, ongoing monitoring, and performance evaluation. Address "
                    "AI-specific considerations such as model provenance and data handling."
                ),
                references=["ISO/IEC 42001:2023 Clause 8.3", "Annex A.8"],
            ),
            ComplianceRule(
                rule_id="ISO42001-8.4",
                name="AI System Impact Assessment",
                description=(
                    "The organization shall perform and document AI system impact "
                    "assessments prior to deployment and periodically during operation. "
                    "The assessment shall evaluate actual and potential impacts on "
                    "stakeholders, identifying both intended benefits and unintended "
                    "consequences. Assessment results shall inform risk treatment and "
                    "system modifications. (Clause 8.4)"
                ),
                severity=RiskLevel.CRITICAL,
                category="operation",
                remediation=(
                    "Conduct operational impact assessments that: identify affected "
                    "stakeholders, evaluate impact severity and likelihood, assess "
                    "cumulative effects, compare actual vs. expected outcomes, and "
                    "trigger reviews when significant changes occur. Document findings "
                    "and resulting actions."
                ),
                references=["ISO/IEC 42001:2023 Clause 8.4", "Annex B"],
            ),
            # =================================================================
            # Clause 9: Performance Evaluation
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-9.1",
                name="Monitoring and Measurement",
                description=(
                    "The organization shall determine what needs to be monitored and "
                    "measured, the methods for monitoring, measurement, analysis, and "
                    "evaluation, when monitoring and measuring shall be performed, when "
                    "results shall be analyzed and evaluated, and who shall analyze and "
                    "evaluate. The organization shall retain documented evidence of "
                    "monitoring and measurement results. (Clause 9.1)"
                ),
                severity=RiskLevel.HIGH,
                category="performance",
                remediation=(
                    "Define monitoring and measurement program including: key performance "
                    "indicators for AIMS effectiveness, AI system performance metrics, "
                    "measurement methods and tools, frequency of measurement, analysis "
                    "procedures, and reporting requirements. Establish baselines and targets."
                ),
                references=["ISO/IEC 42001:2023 Clause 9.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-9.2",
                name="Internal Audit Conducted",
                description=(
                    "The organization shall conduct internal audits at planned intervals "
                    "to provide information on whether the AIMS conforms to the "
                    "organization's own requirements and ISO 42001 requirements, and is "
                    "effectively implemented and maintained. The organization shall define "
                    "audit criteria, scope, frequency, and methods, and ensure objectivity "
                    "and impartiality of the audit process. (Clause 9.2)"
                ),
                severity=RiskLevel.HIGH,
                category="performance",
                remediation=(
                    "Establish an internal audit program that: defines audit scope and "
                    "criteria based on ISO 42001, schedules audits considering process "
                    "importance and previous results, ensures auditor competence and "
                    "independence, documents findings and corrective actions, and reports "
                    "results to management."
                ),
                references=["ISO/IEC 42001:2023 Clause 9.2"],
            ),
            ComplianceRule(
                rule_id="ISO42001-9.3",
                name="Management Review",
                description=(
                    "Top management shall review the AI management system at planned "
                    "intervals to ensure its continuing suitability, adequacy, and "
                    "effectiveness. The review shall consider status of actions from "
                    "previous reviews, changes in issues and requirements, AIMS "
                    "performance including nonconformities, monitoring results, audit "
                    "results, and opportunities for improvement. (Clause 9.3)"
                ),
                severity=RiskLevel.HIGH,
                category="performance",
                remediation=(
                    "Conduct management reviews that address: AIMS performance trends, "
                    "audit findings and corrective actions, stakeholder feedback, "
                    "resource adequacy, risk treatment effectiveness, and improvement "
                    "opportunities. Document review inputs, discussions, and decisions "
                    "including required actions."
                ),
                references=["ISO/IEC 42001:2023 Clause 9.3"],
            ),
            # =================================================================
            # Clause 10: Improvement
            # =================================================================
            ComplianceRule(
                rule_id="ISO42001-10.1",
                name="Nonconformity and Corrective Action",
                description=(
                    "When a nonconformity occurs, the organization shall react to the "
                    "nonconformity and take action to control and correct it, evaluate "
                    "the need for action to eliminate causes, implement any action needed, "
                    "review effectiveness of corrective action, and make changes to the "
                    "AIMS if necessary. The organization shall retain documented "
                    "information as evidence. (Clause 10.1)"
                ),
                severity=RiskLevel.HIGH,
                category="improvement",
                remediation=(
                    "Implement a corrective action process that: captures nonconformities "
                    "from multiple sources (audits, incidents, feedback), performs root "
                    "cause analysis, defines and implements corrections, verifies "
                    "effectiveness, and updates processes/documentation as needed. "
                    "Maintain a corrective action log."
                ),
                references=["ISO/IEC 42001:2023 Clause 10.1"],
            ),
            ComplianceRule(
                rule_id="ISO42001-10.2",
                name="Continual Improvement",
                description=(
                    "The organization shall continually improve the suitability, adequacy, "
                    "and effectiveness of the AI management system. This shall include "
                    "consideration of the results of analysis and evaluation, and outputs "
                    "from management review, to determine opportunities for improvement. "
                    "The organization shall take actions to improve AIMS performance and "
                    "responsible AI practices. (Clause 10.2)"
                ),
                severity=RiskLevel.MEDIUM,
                category="improvement",
                remediation=(
                    "Establish improvement mechanisms including: systematic collection "
                    "of improvement opportunities, prioritization based on impact and "
                    "feasibility, implementation planning, and tracking of improvement "
                    "initiatives. Promote a culture of continuous improvement in AI "
                    "governance and responsible AI practices."
                ),
                references=["ISO/IEC 42001:2023 Clause 10.2"],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single ISO 42001 rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "ISO42001-4.1":
            return self._check_organizational_context(entry, rule)
        elif rule.rule_id == "ISO42001-4.2":
            return self._check_interested_parties(entry, rule)
        elif rule.rule_id == "ISO42001-4.3":
            return self._check_aims_scope(entry, rule)
        elif rule.rule_id == "ISO42001-5.1":
            return self._check_leadership_commitment(entry, rule)
        elif rule.rule_id == "ISO42001-5.2":
            return self._check_ai_policy(entry, rule)
        elif rule.rule_id == "ISO42001-5.3":
            return self._check_roles_responsibilities(entry, rule)
        elif rule.rule_id == "ISO42001-6.1":
            return self._check_risk_assessment(entry, rule)
        elif rule.rule_id == "ISO42001-6.2":
            return self._check_ai_objectives(entry, rule)
        elif rule.rule_id == "ISO42001-6.3":
            return self._check_impact_assessment(entry, rule)
        elif rule.rule_id == "ISO42001-7.1":
            return self._check_resources(entry, rule)
        elif rule.rule_id == "ISO42001-7.2":
            return self._check_competence(entry, rule)
        elif rule.rule_id == "ISO42001-7.3":
            return self._check_awareness(entry, rule)
        elif rule.rule_id == "ISO42001-7.4":
            return self._check_communication(entry, rule)
        elif rule.rule_id == "ISO42001-7.5":
            return self._check_documented_information(entry, rule)
        elif rule.rule_id == "ISO42001-8.1":
            return self._check_operational_planning(entry, rule)
        elif rule.rule_id == "ISO42001-8.2":
            return self._check_lifecycle_processes(entry, rule)
        elif rule.rule_id == "ISO42001-8.3":
            return self._check_third_party(entry, rule)
        elif rule.rule_id == "ISO42001-8.4":
            return self._check_system_impact(entry, rule)
        elif rule.rule_id == "ISO42001-9.1":
            return self._check_monitoring(entry, rule)
        elif rule.rule_id == "ISO42001-9.2":
            return self._check_internal_audit(entry, rule)
        elif rule.rule_id == "ISO42001-9.3":
            return self._check_management_review(entry, rule)
        elif rule.rule_id == "ISO42001-10.1":
            return self._check_corrective_action(entry, rule)
        elif rule.rule_id == "ISO42001-10.2":
            return self._check_continual_improvement(entry, rule)

        return None

    # =========================================================================
    # Clause 4: Context of the Organization
    # =========================================================================

    def _check_organizational_context(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-4.1: Organizational context must be documented.

        For system-level operations, verify that organizational context
        documentation exists.
        """
        system_events = {"system_registration", "deployment", "system_update", "configuration"}
        if entry.event_type.lower() not in system_events:
            return None

        has_context_documented = entry.metadata.get("organizational_context_documented", False)
        if not has_context_documented:
            return self._create_violation(
                entry,
                rule,
                f"System operation (type={entry.event_type}) performed without "
                f"documented organizational context",
            )
        return None

    def _check_interested_parties(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-4.2: Interested parties must be identified.

        For deployment and external-facing operations, verify stakeholder
        identification.
        """
        stakeholder_relevant_events = {"deployment", "release", "public_api", "data_sharing"}
        if entry.event_type.lower() not in stakeholder_relevant_events:
            return None

        has_stakeholders_identified = entry.metadata.get("stakeholders_identified", False)
        if not has_stakeholders_identified:
            return self._create_violation(
                entry,
                rule,
                f"External-facing operation (type={entry.event_type}) performed "
                f"without documented stakeholder identification",
            )
        return None

    def _check_aims_scope(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-4.3: AIMS scope must be defined.

        For system operations, verify that the system is within defined
        AIMS scope.
        """
        scope_relevant_events = {"system_registration", "deployment", "new_system", "expansion"}
        if entry.event_type.lower() not in scope_relevant_events:
            return None

        has_scope_defined = entry.metadata.get("aims_scope_defined", False)
        in_scope = entry.metadata.get("within_aims_scope", False)

        if not has_scope_defined or not in_scope:
            return self._create_violation(
                entry,
                rule,
                f"System operation (type={entry.event_type}) performed for system "
                f"without verified AIMS scope coverage",
            )
        return None

    # =========================================================================
    # Clause 5: Leadership
    # =========================================================================

    def _check_leadership_commitment(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-5.1: Leadership commitment must be demonstrated.

        For significant decisions and resource allocations, verify
        management approval.
        """
        leadership_events = {
            "policy_change", "resource_allocation", "strategic_decision",
            "deployment", "system_decommission"
        }
        if entry.event_type.lower() not in leadership_events:
            return None

        has_leadership_approval = entry.metadata.get("leadership_approved", False)
        if not has_leadership_approval:
            return self._create_violation(
                entry,
                rule,
                f"Significant operation (type={entry.event_type}) performed "
                f"without documented leadership approval",
            )
        return None

    def _check_ai_policy(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-5.2: AI policy must be established.

        All AI operations should reference compliance with the AI policy.
        """
        ai_events = {
            "inference", "training", "deployment", "model_update",
            "data_processing", "prediction"
        }
        if entry.event_type.lower() not in ai_events:
            return None

        has_policy_reference = entry.metadata.get("ai_policy_compliant", False)
        if not has_policy_reference:
            return self._create_violation(
                entry,
                rule,
                f"AI operation (type={entry.event_type}) performed without "
                f"verified AI policy compliance",
            )
        return None

    def _check_roles_responsibilities(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-5.3: Roles and responsibilities must be assigned.

        Verify that actors performing operations have defined roles.
        """
        # For critical operations, verify role assignment
        critical_events = {
            "deployment", "training", "model_update", "access_grant",
            "configuration_change", "incident_response"
        }
        if entry.event_type.lower() not in critical_events:
            return None

        has_role_defined = entry.metadata.get("role_defined", False)
        has_authorization = entry.metadata.get("authorized_role", False)

        if not (has_role_defined and has_authorization):
            return self._create_violation(
                entry,
                rule,
                f"Critical operation (type={entry.event_type}) performed by actor "
                f"without defined/authorized role: {entry.actor}",
            )
        return None

    # =========================================================================
    # Clause 6: Planning
    # =========================================================================

    def _check_risk_assessment(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-6.1: AI risk assessment must be conducted.

        High-risk operations must have documented risk assessments.
        """
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        has_risk_assessment = entry.metadata.get("risk_assessment_documented", False)
        if not has_risk_assessment:
            return self._create_violation(
                entry,
                rule,
                f"High-risk operation (level={entry.risk_level.value}) performed "
                f"without documented AI risk assessment",
            )
        return None

    def _check_ai_objectives(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-6.2: AI objectives must be established.

        Strategic and planning operations should align with AI objectives.
        """
        objective_relevant_events = {
            "project_initiation", "planning", "deployment",
            "system_design", "milestone_review"
        }
        if entry.event_type.lower() not in objective_relevant_events:
            return None

        has_objectives_alignment = entry.metadata.get("ai_objectives_aligned", False)
        if not has_objectives_alignment:
            return self._create_violation(
                entry,
                rule,
                f"Planning operation (type={entry.event_type}) performed without "
                f"documented alignment to AI objectives",
            )
        return None

    def _check_impact_assessment(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-6.3: AI impact assessment must be performed.

        Deployment and high-impact operations require impact assessments.
        """
        impact_events = {
            "deployment", "release", "model_update", "expansion",
            "new_use_case", "user_facing_change"
        }
        if entry.event_type.lower() not in impact_events:
            return None

        has_impact_assessment = entry.metadata.get("impact_assessment_documented", False)
        if not has_impact_assessment:
            return self._create_violation(
                entry,
                rule,
                f"Impact-relevant operation (type={entry.event_type}) performed "
                f"without documented AI impact assessment",
            )
        return None

    # =========================================================================
    # Clause 7: Support
    # =========================================================================

    def _check_resources(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-7.1: Resources must be provided.

        Resource-intensive operations should have resource allocation documented.
        """
        resource_events = {
            "training", "deployment", "infrastructure_change",
            "capacity_expansion", "project_initiation"
        }
        if entry.event_type.lower() not in resource_events:
            return None

        has_resources_allocated = entry.metadata.get("resources_allocated", False)
        if not has_resources_allocated:
            return self._create_violation(
                entry,
                rule,
                f"Resource-intensive operation (type={entry.event_type}) performed "
                f"without documented resource allocation",
            )
        return None

    def _check_competence(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-7.2: Competence must be ensured.

        Technical operations should be performed by competent personnel.
        """
        competence_required_events = {
            "training", "model_development", "deployment", "incident_response",
            "security_assessment", "audit"
        }
        if entry.event_type.lower() not in competence_required_events:
            return None

        has_competence_verified = entry.metadata.get("competence_verified", False)
        if not has_competence_verified:
            return self._create_violation(
                entry,
                rule,
                f"Technical operation (type={entry.event_type}) performed by actor "
                f"without verified competence: {entry.actor}",
            )
        return None

    def _check_awareness(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-7.3: Awareness must be maintained.

        User-initiated operations should have awareness acknowledgment.
        """
        awareness_events = {
            "user_onboarding", "access_grant", "training_completion",
            "policy_acknowledgment"
        }
        if entry.event_type.lower() not in awareness_events:
            return None

        has_awareness_confirmed = entry.metadata.get("awareness_confirmed", False)
        if not has_awareness_confirmed:
            return self._create_violation(
                entry,
                rule,
                f"Awareness-related operation (type={entry.event_type}) completed "
                f"without confirmed awareness acknowledgment",
            )
        return None

    def _check_communication(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-7.4: Communication processes must be established.

        External and stakeholder communications should follow defined processes.
        """
        communication_events = {
            "external_communication", "stakeholder_notification",
            "incident_notification", "regulatory_report", "public_disclosure"
        }
        if entry.event_type.lower() not in communication_events:
            return None

        has_communication_process = entry.metadata.get("communication_process_followed", False)
        if not has_communication_process:
            return self._create_violation(
                entry,
                rule,
                f"Communication operation (type={entry.event_type}) performed "
                f"without following established communication processes",
            )
        return None

    def _check_documented_information(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-7.5: Documented information must be controlled.

        Document-related operations should follow document control procedures.
        """
        document_events = {
            "document_creation", "document_update", "policy_change",
            "procedure_change", "record_creation"
        }
        if entry.event_type.lower() not in document_events:
            return None

        has_document_control = entry.metadata.get("document_control_applied", False)
        if not has_document_control:
            return self._create_violation(
                entry,
                rule,
                f"Document operation (type={entry.event_type}) performed without "
                f"following document control procedures",
            )
        return None

    # =========================================================================
    # Clause 8: Operation
    # =========================================================================

    def _check_operational_planning(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-8.1: Operational planning and control required.

        Significant operations should have documented planning.
        """
        operational_events = {
            "deployment", "release", "migration", "integration",
            "process_change", "configuration_change"
        }
        if entry.event_type.lower() not in operational_events:
            return None

        has_operational_plan = entry.metadata.get("operational_plan_documented", False)
        if not has_operational_plan:
            return self._create_violation(
                entry,
                rule,
                f"Operational activity (type={entry.event_type}) performed "
                f"without documented operational planning",
            )
        return None

    def _check_lifecycle_processes(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-8.2: AI system lifecycle processes required.

        Lifecycle events should follow defined processes.
        """
        lifecycle_events = {
            "design", "development", "training", "validation", "testing",
            "deployment", "monitoring", "maintenance", "decommission"
        }
        if entry.event_type.lower() not in lifecycle_events:
            return None

        has_lifecycle_process = entry.metadata.get("lifecycle_process_followed", False)
        if not has_lifecycle_process:
            return self._create_violation(
                entry,
                rule,
                f"Lifecycle operation (type={entry.event_type}) performed "
                f"without following defined lifecycle processes",
            )
        return None

    def _check_third_party(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-8.3: Third-party considerations required.

        Operations involving external parties should have due diligence.
        """
        third_party_events = {
            "vendor_engagement", "external_api_call", "model_import",
            "data_acquisition", "outsourcing", "third_party_integration"
        }
        if entry.event_type.lower() not in third_party_events:
            return None

        has_third_party_eval = entry.metadata.get("third_party_evaluated", False)
        if not has_third_party_eval:
            return self._create_violation(
                entry,
                rule,
                f"Third-party operation (type={entry.event_type}) performed "
                f"without documented third-party evaluation",
            )
        return None

    def _check_system_impact(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-8.4: AI system impact assessment required.

        Deployment and significant changes require impact assessment.
        """
        impact_events = {
            "deployment", "major_update", "user_expansion",
            "new_market", "feature_release"
        }
        if entry.event_type.lower() not in impact_events:
            return None

        has_system_impact_assessment = entry.metadata.get(
            "system_impact_assessment_documented", False
        )
        if not has_system_impact_assessment:
            return self._create_violation(
                entry,
                rule,
                f"System change (type={entry.event_type}) deployed without "
                f"documented system impact assessment",
            )
        return None

    # =========================================================================
    # Clause 9: Performance Evaluation
    # =========================================================================

    def _check_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-9.1: Monitoring and measurement required.

        Production systems should have monitoring in place.
        """
        monitoring_events = {"inference", "prediction", "production_operation"}
        if entry.event_type.lower() not in monitoring_events:
            return None

        has_monitoring = entry.metadata.get("monitoring_enabled", False)
        if not has_monitoring:
            return self._create_violation(
                entry,
                rule,
                f"Production operation (type={entry.event_type}) performed "
                f"without enabled monitoring and measurement",
            )
        return None

    def _check_internal_audit(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-9.2: Internal audit must be conducted.

        Audit-related operations should follow audit procedures.
        """
        audit_events = {"audit", "audit_finding", "compliance_check"}
        if entry.event_type.lower() not in audit_events:
            return None

        has_audit_procedure = entry.metadata.get("audit_procedure_followed", False)
        if not has_audit_procedure:
            return self._create_violation(
                entry,
                rule,
                f"Audit operation (type={entry.event_type}) performed "
                f"without following internal audit procedures",
            )
        return None

    def _check_management_review(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-9.3: Management review required.

        Management review operations should be documented.
        """
        review_events = {"management_review", "executive_briefing", "governance_meeting"}
        if entry.event_type.lower() not in review_events:
            return None

        has_review_documentation = entry.metadata.get("review_documented", False)
        if not has_review_documentation:
            return self._create_violation(
                entry,
                rule,
                f"Management review (type={entry.event_type}) conducted "
                f"without proper documentation of inputs and outputs",
            )
        return None

    # =========================================================================
    # Clause 10: Improvement
    # =========================================================================

    def _check_corrective_action(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-10.1: Nonconformity and corrective action required.

        Nonconformities should have corrective actions documented.
        """
        nonconformity_events = {
            "nonconformity", "incident", "audit_finding",
            "complaint", "failure", "error"
        }
        if entry.event_type.lower() not in nonconformity_events:
            return None

        has_corrective_action = entry.metadata.get("corrective_action_documented", False)
        if not has_corrective_action:
            return self._create_violation(
                entry,
                rule,
                f"Nonconformity (type={entry.event_type}) identified without "
                f"documented corrective action plan",
            )
        return None

    def _check_continual_improvement(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check ISO42001-10.2: Continual improvement required.

        Improvement opportunities should be captured and tracked.
        """
        improvement_events = {
            "improvement_opportunity", "lessons_learned",
            "process_optimization", "enhancement_request"
        }
        if entry.event_type.lower() not in improvement_events:
            return None

        has_improvement_tracking = entry.metadata.get("improvement_tracked", False)
        if not has_improvement_tracking:
            return self._create_violation(
                entry,
                rule,
                f"Improvement opportunity (type={entry.event_type}) identified "
                f"without being tracked in improvement register",
            )
        return None

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the ISO 42001 framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/iso_42001.py
def __init__(self):
    """Initialize the ISO 42001 framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="ISO/IEC 42001", version="2023", rules=rules)

ISO/IEC 42001:2023 AI Management System (AIMS) compliance framework.

Categories

Category Clause Description
context 4 Organizational context and AIMS scope
leadership 5 Leadership commitment, AI policy, and roles
planning 6 Risk assessment, objectives, and impact assessment
support 7 Resources, competence, awareness, communication, documentation
operation 8 Operational planning, lifecycle, third-party, impact
performance 9 Monitoring, internal audit, management review
improvement 10 Corrective action and continual improvement

Rules

Rule ID Name Category Severity
ISO42001-4.1 Understanding Organization and Context context HIGH
ISO42001-4.2 Understanding Needs of Interested Parties context HIGH
ISO42001-4.3 Scope of AIMS Determined context HIGH
ISO42001-5.1 Leadership Commitment Demonstrated leadership HIGH
ISO42001-5.2 AI Policy Established leadership CRITICAL
ISO42001-5.3 Roles and Responsibilities Assigned leadership HIGH
ISO42001-6.1 AI Risk Assessment Conducted planning CRITICAL
ISO42001-6.2 AI Objectives Established planning HIGH
ISO42001-6.3 AI Impact Assessment Performed planning CRITICAL
ISO42001-7.1 Resources Provided support HIGH
ISO42001-7.2 Competence Ensured support HIGH
ISO42001-7.3 Awareness Maintained support MEDIUM
ISO42001-7.4 Communication Processes Established support MEDIUM
ISO42001-7.5 Documented Information Controlled support HIGH
ISO42001-8.1 Operational Planning and Control operation HIGH
ISO42001-8.2 AI System Lifecycle Processes operation CRITICAL
ISO42001-8.3 Third-Party Considerations operation HIGH
ISO42001-8.4 AI System Impact Assessment operation CRITICAL
ISO42001-9.1 Monitoring and Measurement performance HIGH
ISO42001-9.2 Internal Audit Conducted performance HIGH
ISO42001-9.3 Management Review performance HIGH
ISO42001-10.1 Nonconformity and Corrective Action improvement HIGH
ISO42001-10.2 Continual Improvement improvement MEDIUM

Usage

from rotalabs_comply.frameworks.iso_42001 import ISO42001Framework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel
from datetime import datetime

framework = ISO42001Framework()

entry = AuditEntry(
    entry_id="iso-001",
    timestamp=datetime.utcnow(),
    event_type="deployment",
    actor="ai-engineer@company.com",
    action="Deploy AI system",
    risk_level=RiskLevel.HIGH,
    metadata={
        "organizational_context_documented": True,
        "stakeholders_identified": True,
        "aims_scope_defined": True,
        "within_aims_scope": True,
        "leadership_approved": True,
        "ai_policy_compliant": True,
        "role_defined": True,
        "authorized_role": True,
        "risk_assessment_documented": True,
        "impact_assessment_documented": True,
        "lifecycle_process_followed": True,
        "operational_plan_documented": True,
        "monitoring_enabled": True,
    },
)

profile = ComplianceProfile(
    profile_id="iso42001-profile",
    name="ISO 42001 Compliance",
)

result = await framework.check(entry, profile)

Key Requirements

All AI operations require:
- metadata["ai_policy_compliant"]=True

System deployments require:
- metadata["organizational_context_documented"]=True
- metadata["aims_scope_defined"]=True and metadata["within_aims_scope"]=True
- metadata["leadership_approved"]=True
- metadata["lifecycle_process_followed"]=True
- metadata["operational_plan_documented"]=True

High-risk operations require:
- metadata["risk_assessment_documented"]=True

Critical operations require:
- metadata["role_defined"]=True and metadata["authorized_role"]=True
- metadata["competence_verified"]=True
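
To make the requirements above concrete, here is a minimal sketch (using the same imports and check/result API as the usage example above) of an intentionally under-documented high-risk entry. The entry "iso-002" is illustrative; the exact violations raised depend on the rules enabled in the active profile.

from datetime import datetime
from rotalabs_comply.frameworks.iso_42001 import ISO42001Framework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel

framework = ISO42001Framework()
profile = ComplianceProfile(
    profile_id="iso42001-profile",
    name="ISO 42001 Compliance",
)

# High-risk deployment with almost none of the AIMS metadata documented --
# expected to surface violations such as ISO42001-6.1 (risk assessment)
# and ISO42001-8.1/8.2 (operational planning, lifecycle processes).
bad_entry = AuditEntry(
    entry_id="iso-002",
    timestamp=datetime.utcnow(),
    event_type="deployment",
    actor="ai-engineer@company.com",
    action="Deploy AI system without AIMS documentation",
    risk_level=RiskLevel.HIGH,
    metadata={"ai_policy_compliant": True},
)

result = await framework.check(bad_entry, profile)
for violation in result.violations:
    print(f"{violation.rule_id}: {violation.evidence}")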


MAS FEAT Framework

MASFramework

MAS (Monetary Authority of Singapore) AI governance compliance framework.

Implements compliance checks based on MAS FEAT principles and AI governance guidelines for financial institutions operating in Singapore. The framework evaluates audit entries against requirements for fairness, ethics, accountability, transparency, model risk management, data governance, and operational resilience.

The FEAT principles establish expectations for financial institutions to:

- Ensure AI-driven decisions are fair and do not result in unfair treatment
- Use data and AI in an ethical manner aligned with firm values
- Maintain clear accountability structures for AI decisions
- Provide transparency to customers about AI use and decision-making

Additionally, the framework incorporates MAS model risk management requirements and technology risk management guidelines relevant to AI systems.

Example

framework = MASFramework()
result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")

Note

This framework is specifically designed for financial institutions regulated by MAS. Organizations outside MAS jurisdiction should use other appropriate frameworks.
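
The sketch below shows a fuller check of a customer-facing credit decision. The metadata keys mirror the fields read by the checks in the source code that follows; it assumes AuditEntry also accepts human_oversight and user_notified as fields, since the MAS checks read entry.human_oversight and entry.user_notified.

from datetime import datetime
from rotalabs_comply.frameworks.mas import MASFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel

framework = MASFramework()

# Assumption: human_oversight and user_notified are AuditEntry fields, as the
# MAS-FEAT-A2 and MAS-FEAT-T2 checks read them directly from the entry.
entry = AuditEntry(
    entry_id="mas-001",
    timestamp=datetime.utcnow(),
    event_type="credit_decision",
    actor="credit-risk@bank.example",
    action="Score retail credit application",
    risk_level=RiskLevel.HIGH,
    human_oversight=True,      # MAS-FEAT-A2: oversight for material decisions
    user_notified=True,        # MAS-FEAT-T2: customer informed of AI use
    metadata={
        "fairness_assessed": True,                   # MAS-FEAT-F1
        "accountable_owner": "head-of-credit-risk",  # MAS-FEAT-A1
        "explanation_available": True,               # MAS-FEAT-T1
    },
)

profile = ComplianceProfile(
    profile_id="mas-feat-profile",
    name="MAS FEAT Compliance",
)

result = await framework.check(entry, profile)
if not result.is_compliant:
    for violation in result.violations:
        print(f"{violation.rule_id}: {violation.description}")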

Source code in src/rotalabs_comply/frameworks/mas.py
class MASFramework(BaseFramework):
    """
    MAS (Monetary Authority of Singapore) AI governance compliance framework.

    Implements compliance checks based on MAS FEAT principles and AI governance
    guidelines for financial institutions operating in Singapore. The framework
    evaluates audit entries against requirements for fairness, ethics, accountability,
    transparency, model risk management, data governance, and operational resilience.

    The FEAT principles establish expectations for financial institutions to:
    - Ensure AI-driven decisions are fair and do not result in unfair treatment
    - Use data and AI in an ethical manner aligned with firm values
    - Maintain clear accountability structures for AI decisions
    - Provide transparency to customers about AI use and decision-making

    Additionally, the framework incorporates MAS model risk management requirements
    and technology risk management guidelines relevant to AI systems.

    Example:
        >>> framework = MASFramework()
        >>> result = await framework.check(entry, profile)
        >>> if not result.is_compliant:
        ...     for violation in result.violations:
        ...         print(f"{violation.rule_id}: {violation.description}")

    Note:
        This framework is specifically designed for financial institutions
        regulated by MAS. Organizations outside MAS jurisdiction should
        use other appropriate frameworks.
    """

    def __init__(self):
        """Initialize the MAS framework with all defined rules."""
        rules = self._create_rules()
        super().__init__(name="MAS FEAT", version="2022", rules=rules)

    def _create_rules(self) -> List[ComplianceRule]:
        """
        Create all MAS compliance rules.

        Returns:
            List of ComplianceRule objects representing MAS FEAT principles
            and AI governance requirements
        """
        return [
            # ================================================================
            # FEAT Principles - Fairness
            # ================================================================
            ComplianceRule(
                rule_id="MAS-FEAT-F1",
                name="Fair AI-Driven Decisions",
                description=(
                    "Financial institutions should ensure that AI-driven decisions are "
                    "fair and do not systematically disadvantage individuals or groups. "
                    "AI systems used in customer-facing decisions (e.g., credit scoring, "
                    "insurance underwriting, fraud detection) must be designed to avoid "
                    "unfair discrimination based on protected attributes such as race, "
                    "gender, age, religion, or nationality, except where such attributes "
                    "are legitimate risk factors permitted by law."
                ),
                severity=RiskLevel.HIGH,
                category="fairness",
                remediation=(
                    "Implement fairness testing procedures that evaluate AI outcomes "
                    "across different demographic groups. Document fairness metrics and "
                    "thresholds, and conduct regular fairness audits. Consider using "
                    "fairness-aware algorithms and establish governance processes for "
                    "reviewing and addressing fairness concerns."
                ),
                references=[
                    "MAS FEAT Principles - Fairness",
                    "MAS Information Paper on FEAT (2018)",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-FEAT-F2",
                name="Bias Detection and Mitigation",
                description=(
                    "Financial institutions must implement measures to detect and mitigate "
                    "biases in AI systems throughout the model lifecycle. This includes "
                    "bias detection during model development, ongoing monitoring for "
                    "emergent biases, and corrective actions when biases are identified. "
                    "Institutions should assess training data for historical biases and "
                    "implement appropriate debiasing techniques where necessary."
                ),
                severity=RiskLevel.HIGH,
                category="fairness",
                remediation=(
                    "Establish bias detection processes including statistical analysis "
                    "of training data and model outputs. Implement bias monitoring "
                    "dashboards that track fairness metrics over time. Document bias "
                    "mitigation strategies and maintain records of debiasing actions "
                    "taken. Conduct periodic bias assessments and reviews."
                ),
                references=[
                    "MAS FEAT Principles - Fairness",
                    "MAS Veritas Framework for Responsible AI",
                ],
            ),
            # ================================================================
            # FEAT Principles - Ethics
            # ================================================================
            ComplianceRule(
                rule_id="MAS-FEAT-E1",
                name="Ethical Use of Data and AI",
                description=(
                    "Financial institutions must ensure that data and AI are used in an "
                    "ethical manner, respecting customer privacy, data protection requirements, "
                    "and legitimate customer expectations. AI systems should not be used "
                    "in ways that manipulate, deceive, or exploit customers. The use of "
                    "alternative data sources must be evaluated for ethical implications "
                    "and potential for unfair discrimination."
                ),
                severity=RiskLevel.HIGH,
                category="ethics",
                remediation=(
                    "Establish an AI ethics review process for new AI use cases. "
                    "Document ethical considerations in AI system design documents. "
                    "Implement data usage policies that ensure ethical data practices. "
                    "Create mechanisms for stakeholders to raise ethical concerns about "
                    "AI systems. Review alternative data sources for ethical implications."
                ),
                references=[
                    "MAS FEAT Principles - Ethics",
                    "MAS Personal Data Protection Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-FEAT-E2",
                name="AI Alignment with Firm's Ethical Standards",
                description=(
                    "AI systems must be developed and operated in alignment with the "
                    "financial institution's ethical standards, corporate values, and "
                    "professional codes of conduct. The use of AI should support, not "
                    "undermine, the institution's commitment to treating customers fairly "
                    "and maintaining market integrity."
                ),
                severity=RiskLevel.MEDIUM,
                category="ethics",
                remediation=(
                    "Document how AI systems align with the firm's ethical standards "
                    "and corporate values. Include ethics compliance as part of AI "
                    "system design reviews. Ensure AI development teams are trained "
                    "on the firm's ethical standards. Establish escalation procedures "
                    "for ethical concerns related to AI systems."
                ),
                references=[
                    "MAS FEAT Principles - Ethics",
                    "MAS Guidelines on Fair Dealing",
                ],
            ),
            # ================================================================
            # FEAT Principles - Accountability
            # ================================================================
            ComplianceRule(
                rule_id="MAS-FEAT-A1",
                name="Clear Accountability for AI Decisions",
                description=(
                    "Financial institutions must establish clear accountability structures "
                    "for AI-driven decisions. This includes identifying individuals or "
                    "committees responsible for AI system outcomes, ensuring appropriate "
                    "governance oversight, and maintaining documentation of decision-making "
                    "authority and responsibility. Senior management must be accountable "
                    "for material AI systems and their outcomes."
                ),
                severity=RiskLevel.HIGH,
                category="accountability",
                remediation=(
                    "Define and document clear ownership and accountability for each "
                    "AI system, including business owners, model owners, and technical "
                    "owners. Establish AI governance committees with appropriate "
                    "senior management representation. Create RACI matrices for AI "
                    "decision-making processes. Ensure accountability is traceable "
                    "in audit logs."
                ),
                references=[
                    "MAS FEAT Principles - Accountability",
                    "MAS Guidelines on Individual Accountability and Conduct",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-FEAT-A2",
                name="Human Oversight for Material AI Decisions",
                description=(
                    "Material AI-driven decisions must include appropriate human oversight. "
                    "Financial institutions should implement human-in-the-loop or "
                    "human-on-the-loop mechanisms for AI systems that significantly impact "
                    "customers or business operations. Humans must have the ability to "
                    "intervene, override, or stop AI system operations when necessary."
                ),
                severity=RiskLevel.CRITICAL,
                category="accountability",
                remediation=(
                    "Implement human oversight mechanisms appropriate to the risk level "
                    "of AI decisions. Define criteria for when human review is mandatory. "
                    "Provide tools and interfaces for humans to review, override, and "
                    "intervene in AI decisions. Document human oversight procedures and "
                    "ensure adequate training for personnel involved in oversight roles."
                ),
                references=[
                    "MAS FEAT Principles - Accountability",
                    "MAS Model Risk Management Guidelines",
                ],
            ),
            # ================================================================
            # FEAT Principles - Transparency
            # ================================================================
            ComplianceRule(
                rule_id="MAS-FEAT-T1",
                name="Explainable AI Decisions",
                description=(
                    "Financial institutions should ensure that AI-driven decisions can be "
                    "explained in a manner appropriate to the context and audience. "
                    "Explanations should be provided for material decisions affecting "
                    "customers, and internal stakeholders should have access to more "
                    "detailed technical explanations. The level of explainability should "
                    "be proportionate to the significance of the decision."
                ),
                severity=RiskLevel.HIGH,
                category="transparency",
                remediation=(
                    "Implement explainability mechanisms appropriate to each AI use case. "
                    "Use interpretable models where possible, or implement post-hoc "
                    "explanation techniques for complex models. Document the explanation "
                    "methodology and ensure explanations are understandable by the "
                    "intended audience. Maintain explanation logs for audit purposes."
                ),
                references=[
                    "MAS FEAT Principles - Transparency",
                    "MAS Information Paper on Responsible AI in Finance",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-FEAT-T2",
                name="Customer Notification of AI Use",
                description=(
                    "Customers should be informed when AI is used to make or significantly "
                    "influence decisions that affect them. Financial institutions should "
                    "communicate the role of AI in decision-making processes, the types "
                    "of data used, and how customers can seek recourse or human review "
                    "of AI-driven decisions. Notification should be clear, timely, and "
                    "accessible."
                ),
                severity=RiskLevel.HIGH,
                category="transparency",
                remediation=(
                    "Implement clear notification mechanisms to inform customers when AI "
                    "is involved in decisions affecting them. Include AI disclosure in "
                    "customer communications and terms of service. Establish processes "
                    "for customers to request human review of AI decisions. Maintain "
                    "records of customer notifications for audit purposes."
                ),
                references=[
                    "MAS FEAT Principles - Transparency",
                    "MAS Guidelines on Fair Dealing",
                ],
            ),
            # ================================================================
            # Model Risk Management
            # ================================================================
            ComplianceRule(
                rule_id="MAS-MRM-1",
                name="Model Development Standards",
                description=(
                    "Financial institutions must establish robust standards for AI/ML "
                    "model development. This includes documented development methodologies, "
                    "data quality requirements, feature engineering standards, model "
                    "selection criteria, and performance benchmarks. Development processes "
                    "should ensure models are fit for purpose and align with business "
                    "requirements."
                ),
                severity=RiskLevel.HIGH,
                category="model_risk",
                remediation=(
                    "Establish and document model development standards and methodologies. "
                    "Define data quality requirements for model development. Implement "
                    "version control for models and code. Document model assumptions, "
                    "limitations, and intended use cases. Ensure development processes "
                    "are reviewed and approved by appropriate stakeholders."
                ),
                references=[
                    "MAS Model Risk Management Guidelines",
                    "MAS Technology Risk Management Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-MRM-2",
                name="Model Validation Requirements",
                description=(
                    "All material AI/ML models must undergo independent validation before "
                    "deployment and periodically thereafter. Validation should assess "
                    "model conceptual soundness, data quality, model performance, and "
                    "outcome analysis. The validation function should be independent of "
                    "the model development function."
                ),
                severity=RiskLevel.HIGH,
                category="model_risk",
                remediation=(
                    "Establish an independent model validation function. Define validation "
                    "scope, methodology, and frequency based on model materiality. "
                    "Document validation findings and remediation actions. Ensure "
                    "validation coverage includes conceptual soundness, implementation, "
                    "and ongoing performance monitoring. Maintain validation records."
                ),
                references=[
                    "MAS Model Risk Management Guidelines",
                    "MAS Supervisory Expectations on Model Risk",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-MRM-3",
                name="Model Monitoring and Review",
                description=(
                    "Financial institutions must implement ongoing monitoring of AI/ML "
                    "models to detect performance degradation, data drift, concept drift, "
                    "and unexpected behaviors. Models should be subject to periodic review "
                    "and revalidation. Monitoring should include performance metrics, "
                    "stability metrics, and business outcome tracking."
                ),
                severity=RiskLevel.HIGH,
                category="model_risk",
                remediation=(
                    "Implement comprehensive model monitoring frameworks that track "
                    "performance metrics, input data distributions, and output patterns. "
                    "Define alert thresholds and escalation procedures. Establish "
                    "periodic review schedules based on model materiality. Document "
                    "monitoring results and actions taken in response to issues."
                ),
                references=[
                    "MAS Model Risk Management Guidelines",
                    "MAS Technology Risk Management Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-MRM-4",
                name="Model Inventory Maintained",
                description=(
                    "Financial institutions must maintain a comprehensive inventory of "
                    "all AI/ML models in use. The inventory should include model metadata, "
                    "risk classifications, ownership information, validation status, and "
                    "deployment details. The inventory enables effective governance and "
                    "risk management of the model portfolio."
                ),
                severity=RiskLevel.MEDIUM,
                category="model_risk",
                remediation=(
                    "Establish and maintain a centralized model inventory. Include "
                    "essential metadata such as model purpose, risk tier, owner, "
                    "validation status, and performance metrics. Implement processes "
                    "to keep the inventory up to date. Use the inventory for portfolio "
                    "risk assessment and resource allocation decisions."
                ),
                references=[
                    "MAS Model Risk Management Guidelines",
                    "MAS Technology Risk Management Guidelines",
                ],
            ),
            # ================================================================
            # Data Governance
            # ================================================================
            ComplianceRule(
                rule_id="MAS-DATA-1",
                name="Data Quality Standards",
                description=(
                    "Financial institutions must establish and maintain data quality "
                    "standards for AI systems. Data used in AI/ML models should be "
                    "accurate, complete, consistent, timely, and relevant. Data quality "
                    "should be assessed and documented, with processes in place to "
                    "address data quality issues."
                ),
                severity=RiskLevel.HIGH,
                category="data_governance",
                remediation=(
                    "Define data quality standards and metrics for AI use cases. "
                    "Implement data quality checks and validation procedures. "
                    "Document data quality assessments and remediation actions. "
                    "Establish data quality monitoring and alerting mechanisms. "
                    "Ensure data quality issues are escalated and resolved promptly."
                ),
                references=[
                    "MAS Data Management Guidelines",
                    "MAS Technology Risk Management Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-DATA-2",
                name="Data Lineage Documentation",
                description=(
                    "Financial institutions must maintain documentation of data lineage "
                    "for AI systems. This includes tracking data sources, transformations, "
                    "aggregations, and dependencies throughout the data pipeline. Data "
                    "lineage supports auditability, debugging, and impact analysis."
                ),
                severity=RiskLevel.MEDIUM,
                category="data_governance",
                remediation=(
                    "Implement data lineage tracking for AI data pipelines. Document "
                    "data sources, transformations, and dependencies. Use data lineage "
                    "tools or metadata management systems where appropriate. Ensure "
                    "data lineage is available for audit and investigation purposes. "
                    "Maintain lineage documentation for the data retention period."
                ),
                references=[
                    "MAS Data Management Guidelines",
                    "MAS Technology Risk Management Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-DATA-3",
                name="Data Privacy Compliance",
                description=(
                    "AI systems must comply with data privacy requirements including "
                    "Singapore's Personal Data Protection Act (PDPA) and MAS-specific "
                    "data protection requirements. This includes obtaining appropriate "
                    "consent, limiting data use to stated purposes, implementing data "
                    "minimization, and ensuring secure data handling."
                ),
                severity=RiskLevel.CRITICAL,
                category="data_governance",
                remediation=(
                    "Ensure AI systems comply with PDPA and MAS data protection requirements. "
                    "Implement appropriate consent mechanisms for data collection and use. "
                    "Apply data minimization principles - collect and retain only necessary "
                    "data. Implement access controls and encryption for personal data. "
                    "Conduct privacy impact assessments for AI use cases involving personal data."
                ),
                references=[
                    "Singapore Personal Data Protection Act (PDPA)",
                    "MAS Guidelines on Fair Dealing",
                    "MAS Data Management Guidelines",
                ],
            ),
            # ================================================================
            # Operational Resilience
            # ================================================================
            ComplianceRule(
                rule_id="MAS-OPS-1",
                name="AI System Resilience",
                description=(
                    "AI systems must be designed and operated with appropriate resilience "
                    "measures to ensure continued availability and performance. This "
                    "includes redundancy, failover mechanisms, capacity management, and "
                    "graceful degradation capabilities. Systems should be resilient to "
                    "input anomalies and adversarial inputs."
                ),
                severity=RiskLevel.HIGH,
                category="operations",
                remediation=(
                    "Design AI systems with appropriate redundancy and failover "
                    "capabilities. Implement input validation and anomaly detection. "
                    "Test system resilience through chaos engineering and stress testing. "
                    "Define and implement graceful degradation strategies. Document "
                    "resilience requirements and validate compliance during deployment."
                ),
                references=[
                    "MAS Technology Risk Management Guidelines",
                    "MAS Business Continuity Management Guidelines",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-OPS-2",
                name="Incident Management for AI Failures",
                description=(
                    "Financial institutions must have incident management procedures "
                    "specifically addressing AI system failures. This includes detection "
                    "mechanisms, escalation procedures, impact assessment, root cause "
                    "analysis, and communication protocols. AI-related incidents should "
                    "be reported to MAS where required."
                ),
                severity=RiskLevel.HIGH,
                category="operations",
                remediation=(
                    "Establish incident management procedures for AI systems. Define "
                    "AI-specific incident categories and severity classifications. "
                    "Implement monitoring and alerting for AI system failures. "
                    "Document escalation procedures and communication protocols. "
                    "Conduct post-incident reviews and implement lessons learned."
                ),
                references=[
                    "MAS Technology Risk Management Guidelines",
                    "MAS Notice on Cyber Hygiene",
                    "MAS Incident Reporting Requirements",
                ],
            ),
            ComplianceRule(
                rule_id="MAS-OPS-3",
                name="Business Continuity for AI Systems",
                description=(
                    "Financial institutions must include AI systems in their business "
                    "continuity planning. This includes identifying critical AI dependencies, "
                    "establishing recovery procedures, defining backup and fallback options, "
                    "and testing continuity plans. Business continuity plans should address "
                    "scenarios where AI systems are unavailable."
                ),
                severity=RiskLevel.MEDIUM,
                category="operations",
                remediation=(
                    "Include AI systems in business continuity planning and testing. "
                    "Identify critical AI system dependencies and recovery requirements. "
                    "Define fallback procedures for AI system unavailability (e.g., "
                    "manual processing, simplified models). Test business continuity "
                    "plans including AI failure scenarios. Document recovery time "
                    "objectives and recovery point objectives for AI systems."
                ),
                references=[
                    "MAS Technology Risk Management Guidelines",
                    "MAS Business Continuity Management Guidelines",
                ],
            ),
        ]

    def _check_rule(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check a single MAS rule against an audit entry.

        Evaluates the audit entry against the specific rule requirements
        and returns a violation if the entry does not comply.

        Args:
            entry: The audit entry to check
            rule: The rule to evaluate

        Returns:
            ComplianceViolation if the rule is violated, None otherwise
        """
        # Use custom check function if provided
        if rule.check_fn is not None:
            is_compliant = rule.check_fn(entry)
            if not is_compliant:
                return self._create_violation(entry, rule, "Custom check failed")
            return None

        # Framework-specific rule checks
        if rule.rule_id == "MAS-FEAT-F1":
            return self._check_fair_decisions(entry, rule)
        elif rule.rule_id == "MAS-FEAT-F2":
            return self._check_bias_mitigation(entry, rule)
        elif rule.rule_id == "MAS-FEAT-E1":
            return self._check_ethical_data_use(entry, rule)
        elif rule.rule_id == "MAS-FEAT-E2":
            return self._check_ethical_alignment(entry, rule)
        elif rule.rule_id == "MAS-FEAT-A1":
            return self._check_accountability(entry, rule)
        elif rule.rule_id == "MAS-FEAT-A2":
            return self._check_human_oversight(entry, rule)
        elif rule.rule_id == "MAS-FEAT-T1":
            return self._check_explainability(entry, rule)
        elif rule.rule_id == "MAS-FEAT-T2":
            return self._check_customer_notification(entry, rule)
        elif rule.rule_id == "MAS-MRM-1":
            return self._check_development_standards(entry, rule)
        elif rule.rule_id == "MAS-MRM-2":
            return self._check_model_validation(entry, rule)
        elif rule.rule_id == "MAS-MRM-3":
            return self._check_model_monitoring(entry, rule)
        elif rule.rule_id == "MAS-MRM-4":
            return self._check_model_inventory(entry, rule)
        elif rule.rule_id == "MAS-DATA-1":
            return self._check_data_quality(entry, rule)
        elif rule.rule_id == "MAS-DATA-2":
            return self._check_data_lineage(entry, rule)
        elif rule.rule_id == "MAS-DATA-3":
            return self._check_data_privacy(entry, rule)
        elif rule.rule_id == "MAS-OPS-1":
            return self._check_system_resilience(entry, rule)
        elif rule.rule_id == "MAS-OPS-2":
            return self._check_incident_management(entry, rule)
        elif rule.rule_id == "MAS-OPS-3":
            return self._check_business_continuity(entry, rule)

        return None

    # ========================================================================
    # FEAT Fairness Checks
    # ========================================================================

    def _check_fair_decisions(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-F1: AI-driven decisions must be fair and unbiased.

        Customer-impacting AI decisions should have fairness assessments documented.
        """
        # Customer-impacting decision events
        customer_decision_events = {
            "credit_decision", "underwriting", "pricing", "fraud_detection",
            "risk_assessment", "loan_approval", "insurance_decision",
            "customer_scoring", "eligibility_check"
        }
        if entry.event_type.lower() not in customer_decision_events:
            return None

        has_fairness_assessment = entry.metadata.get("fairness_assessed", False)
        if not has_fairness_assessment:
            return self._create_violation(
                entry,
                rule,
                f"Customer-impacting AI decision (type={entry.event_type}) performed "
                f"without documented fairness assessment",
            )
        return None

    def _check_bias_mitigation(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-F2: Bias detection and mitigation measures in place.

        Model training and deployment should include bias mitigation documentation.
        """
        model_lifecycle_events = {
            "training", "fine_tuning", "deployment", "model_update",
            "model_release", "model_promotion"
        }
        if entry.event_type.lower() not in model_lifecycle_events:
            return None

        has_bias_mitigation = entry.metadata.get("bias_mitigation_documented", False)
        if not has_bias_mitigation:
            return self._create_violation(
                entry,
                rule,
                f"Model lifecycle event (type={entry.event_type}) performed "
                f"without documented bias detection and mitigation measures",
            )
        return None

    # ========================================================================
    # FEAT Ethics Checks
    # ========================================================================

    def _check_ethical_data_use(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-E1: Ethical use of data and AI.

        Data processing and AI operations should comply with ethical data use policies.
        """
        data_use_events = {
            "data_ingestion", "data_processing", "feature_engineering",
            "training", "data_access", "data_export"
        }
        if entry.event_type.lower() not in data_use_events:
            return None

        has_ethics_review = entry.metadata.get("ethics_reviewed", False)
        if not has_ethics_review:
            return self._create_violation(
                entry,
                rule,
                f"Data/AI operation (type={entry.event_type}) performed "
                f"without documented ethical review",
            )
        return None

    def _check_ethical_alignment(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-E2: AI aligns with firm's ethical standards.

        AI deployments should document alignment with firm ethical standards.
        """
        deployment_events = {"deployment", "model_release", "go_live", "production_release"}
        if entry.event_type.lower() not in deployment_events:
            return None

        has_ethics_alignment = entry.metadata.get("ethics_aligned", False)
        if not has_ethics_alignment:
            return self._create_violation(
                entry,
                rule,
                f"AI deployment (type={entry.event_type}) performed "
                f"without documented alignment with firm's ethical standards",
            )
        return None

    # ========================================================================
    # FEAT Accountability Checks
    # ========================================================================

    def _check_accountability(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-A1: Clear accountability for AI decisions.

        AI operations should have documented accountability and ownership.
        """
        # All material AI events should have accountability
        material_events = {
            "inference", "prediction", "decision", "credit_decision",
            "underwriting", "fraud_detection", "risk_assessment",
            "deployment", "model_update"
        }
        if entry.event_type.lower() not in material_events:
            return None

        has_accountability = (
            entry.metadata.get("accountable_owner", "") != "" or
            entry.metadata.get("accountability_documented", False)
        )
        if not has_accountability:
            return self._create_violation(
                entry,
                rule,
                f"Material AI operation (type={entry.event_type}) performed "
                f"without documented accountability structure",
            )
        return None

    def _check_human_oversight(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-A2: Human oversight for material AI decisions.

        High-risk and material AI decisions require human oversight.
        """
        # Only applies to high-risk and critical operations
        if entry.risk_level not in (RiskLevel.HIGH, RiskLevel.CRITICAL):
            return None

        if not entry.human_oversight:
            return self._create_violation(
                entry,
                rule,
                f"Material AI operation (level={entry.risk_level.value}, "
                f"type={entry.event_type}) performed without human oversight",
            )
        return None

    # ========================================================================
    # FEAT Transparency Checks
    # ========================================================================

    def _check_explainability(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-T1: AI decisions are explainable.

        Customer-impacting decisions should have explanations available.
        """
        decision_events = {
            "inference", "prediction", "decision", "credit_decision",
            "underwriting", "pricing", "fraud_detection", "risk_assessment"
        }
        if entry.event_type.lower() not in decision_events:
            return None

        has_explanation = (
            entry.metadata.get("explanation_available", False) or
            entry.metadata.get("explainability_method", "") != ""
        )
        if not has_explanation:
            return self._create_violation(
                entry,
                rule,
                f"AI decision (type={entry.event_type}) performed "
                f"without explainability mechanism documented",
            )
        return None

    def _check_customer_notification(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-FEAT-T2: Customers informed of AI use.

        Customer-facing AI interactions should include notification of AI involvement.
        """
        customer_facing_events = {
            "inference", "chat", "interaction", "response", "recommendation",
            "credit_decision", "underwriting", "customer_service"
        }
        if entry.event_type.lower() not in customer_facing_events:
            return None

        if not entry.user_notified:
            return self._create_violation(
                entry,
                rule,
                f"Customer-facing AI operation (type={entry.event_type}) performed "
                f"without notifying customer of AI involvement",
            )
        return None

    # ========================================================================
    # Model Risk Management Checks
    # ========================================================================

    def _check_development_standards(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-MRM-1: Model development standards.

        Model development activities should follow documented standards.
        """
        development_events = {
            "training", "fine_tuning", "model_development", "feature_engineering",
            "model_selection"
        }
        if entry.event_type.lower() not in development_events:
            return None

        has_development_standards = (
            entry.metadata.get("development_standards_followed", False) or
            entry.documentation_ref is not None
        )
        if not has_development_standards:
            return self._create_violation(
                entry,
                rule,
                f"Model development activity (type={entry.event_type}) performed "
                f"without reference to development standards documentation",
            )
        return None

    def _check_model_validation(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-MRM-2: Model validation requirements.

        Model deployments should have validation documentation.
        """
        deployment_events = {"deployment", "model_release", "go_live", "production_release"}
        if entry.event_type.lower() not in deployment_events:
            return None

        has_validation = entry.metadata.get("validation_completed", False)
        if not has_validation:
            return self._create_violation(
                entry,
                rule,
                f"Model deployment (type={entry.event_type}) performed "
                f"without documented model validation",
            )
        return None

    def _check_model_monitoring(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-MRM-3: Model monitoring and review.

        Inference operations should have monitoring in place.
        """
        inference_events = {"inference", "prediction", "scoring", "decision"}
        if entry.event_type.lower() not in inference_events:
            return None

        has_monitoring = (
            entry.metadata.get("monitoring_enabled", False) or
            entry.metadata.get("performance_tracked", False)
        )
        if not has_monitoring:
            return self._create_violation(
                entry,
                rule,
                f"Model inference (type={entry.event_type}) performed "
                f"without documented monitoring configuration",
            )
        return None

    def _check_model_inventory(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-MRM-4: Model inventory maintained.

        Model operations should reference the model inventory.
        """
        model_events = {
            "deployment", "inference", "training", "model_update",
            "model_release", "model_retirement"
        }
        if entry.event_type.lower() not in model_events:
            return None

        has_inventory_ref = (
            entry.metadata.get("model_inventory_id", "") != "" or
            entry.metadata.get("model_registered", False)
        )
        if not has_inventory_ref:
            return self._create_violation(
                entry,
                rule,
                f"Model operation (type={entry.event_type}) performed "
                f"without reference to model inventory",
            )
        return None

    # ========================================================================
    # Data Governance Checks
    # ========================================================================

    def _check_data_quality(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-DATA-1: Data quality standards.

        Data operations should meet data quality standards.
        """
        data_events = {
            "data_ingestion", "data_processing", "training", "fine_tuning",
            "feature_engineering", "data_preparation"
        }
        if entry.event_type.lower() not in data_events:
            return None

        has_quality_check = entry.metadata.get("data_quality_validated", False)
        if not has_quality_check:
            return self._create_violation(
                entry,
                rule,
                f"Data operation (type={entry.event_type}) performed "
                f"without documented data quality validation",
            )
        return None

    def _check_data_lineage(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-DATA-2: Data lineage documented.

        Data transformations should have lineage documentation.
        """
        data_transformation_events = {
            "data_processing", "feature_engineering", "data_transformation",
            "data_aggregation", "data_preparation"
        }
        if entry.event_type.lower() not in data_transformation_events:
            return None

        has_lineage = (
            entry.metadata.get("lineage_documented", False) or
            entry.metadata.get("data_lineage_id", "") != ""
        )
        if not has_lineage:
            return self._create_violation(
                entry,
                rule,
                f"Data transformation (type={entry.event_type}) performed "
                f"without documented data lineage",
            )
        return None

    def _check_data_privacy(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-DATA-3: Data privacy compliance.

        Operations involving personal data should comply with privacy requirements.
        """
        # Check if personal data is involved
        personal_data_classifications = {"pii", "personal", "customer_data", "sensitive"}
        if entry.data_classification.lower() not in personal_data_classifications:
            return None

        has_privacy_compliance = (
            entry.metadata.get("privacy_compliant", False) or
            entry.metadata.get("consent_obtained", False)
        )
        if not has_privacy_compliance:
            return self._create_violation(
                entry,
                rule,
                f"Operation involving personal data (classification={entry.data_classification}) "
                f"performed without documented privacy compliance",
            )
        return None

    # ========================================================================
    # Operational Resilience Checks
    # ========================================================================

    def _check_system_resilience(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-OPS-1: AI system resilience.

        AI systems should demonstrate resilience measures.
        """
        # Check for error handling on all operations
        if not entry.error_handled:
            return self._create_violation(
                entry,
                rule,
                f"AI operation (type={entry.event_type}) indicates error was not "
                f"handled gracefully, suggesting resilience gap",
            )
        return None

    def _check_incident_management(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-OPS-2: Incident management for AI failures.

        Error events should trigger incident management procedures.
        """
        error_events = {"error", "failure", "exception", "timeout", "degradation"}
        if entry.event_type.lower() not in error_events:
            return None

        has_incident_management = (
            entry.metadata.get("incident_logged", False) or
            entry.metadata.get("incident_id", "") != ""
        )
        if not has_incident_management:
            return self._create_violation(
                entry,
                rule,
                f"AI error event (type={entry.event_type}) occurred "
                f"without documented incident management response",
            )
        return None

    def _check_business_continuity(
        self, entry: AuditEntry, rule: ComplianceRule
    ) -> Optional[ComplianceViolation]:
        """
        Check MAS-OPS-3: Business continuity for AI systems.

        Critical AI operations should have business continuity documentation.
        """
        # Only check critical operations
        if entry.risk_level != RiskLevel.CRITICAL:
            return None

        has_bcp = (
            entry.metadata.get("bcp_documented", False) or
            entry.metadata.get("fallback_available", False)
        )
        if not has_bcp:
            return self._create_violation(
                entry,
                rule,
                f"Critical AI operation (type={entry.event_type}) performed "
                f"without documented business continuity provisions",
            )
        return None

    # ========================================================================
    # Helper Methods
    # ========================================================================

    def _create_violation(
        self, entry: AuditEntry, rule: ComplianceRule, evidence: str
    ) -> ComplianceViolation:
        """
        Create a compliance violation object.

        Args:
            entry: The audit entry that triggered the violation
            rule: The rule that was violated
            evidence: Specific evidence describing the violation

        Returns:
            ComplianceViolation object
        """
        return ComplianceViolation(
            rule_id=rule.rule_id,
            rule_name=rule.name,
            severity=rule.severity,
            description=rule.description,
            evidence=evidence,
            remediation=rule.remediation,
            entry_id=entry.entry_id,
            category=rule.category,
            framework=self._name,
        )

__init__

__init__()

Initialize the MAS framework with all defined rules.

Source code in src/rotalabs_comply/frameworks/mas.py
def __init__(self):
    """Initialize the MAS framework with all defined rules."""
    rules = self._create_rules()
    super().__init__(name="MAS FEAT", version="2022", rules=rules)

MAS (Monetary Authority of Singapore) FEAT principles and AI governance framework for financial institutions.

Categories

Category Focus Description
fairness FEAT-F Ensuring AI decisions are fair and unbiased
ethics FEAT-E Ethical use of data and AI alignment with firm standards
accountability FEAT-A Clear accountability and human oversight
transparency FEAT-T Explainability and customer notification
model_risk MRM Model development, validation, and monitoring
data_governance Data Data quality, lineage, and privacy compliance
operations Ops System resilience and incident management

Rules

Rule ID Name Category Severity
MAS-FEAT-F1 Fair AI-Driven Decisions fairness HIGH
MAS-FEAT-F2 Bias Detection and Mitigation fairness HIGH
MAS-FEAT-E1 Ethical Use of Data and AI ethics HIGH
MAS-FEAT-E2 AI Alignment with Firm's Ethical Standards ethics MEDIUM
MAS-FEAT-A1 Clear Accountability for AI Decisions accountability HIGH
MAS-FEAT-A2 Human Oversight for Material AI Decisions accountability CRITICAL
MAS-FEAT-T1 Explainable AI Decisions transparency HIGH
MAS-FEAT-T2 Customer Notification of AI Use transparency HIGH
MAS-MRM-1 Model Development Standards model_risk HIGH
MAS-MRM-2 Model Validation Requirements model_risk HIGH
MAS-MRM-3 Model Monitoring and Review model_risk HIGH
MAS-MRM-4 Model Inventory Maintained model_risk MEDIUM
MAS-DATA-1 Data Quality Standards data_governance HIGH
MAS-DATA-2 Data Lineage Documentation data_governance MEDIUM
MAS-DATA-3 Data Privacy Compliance data_governance CRITICAL
MAS-OPS-1 AI System Resilience operations HIGH
MAS-OPS-2 Incident Management for AI Failures operations HIGH
MAS-OPS-3 Business Continuity for AI Systems operations MEDIUM
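
The rule set can also be inspected programmatically, for example to group rules by category. A minimal sketch, assuming the framework exposes its rule list as a rules attribute (the base class receives rules= in __init__, but the attribute name is an assumption, not confirmed API):

from collections import defaultdict

from rotalabs_comply.frameworks.mas import MASFramework

framework = MASFramework()

# Group rule IDs by category; assumes a `rules` attribute holding ComplianceRule objects.
by_category = defaultdict(list)
for rule in framework.rules:
    by_category[rule.category].append(f"{rule.rule_id} ({rule.severity.value})")

for category, rule_ids in sorted(by_category.items()):
    print(f"{category}: {', '.join(rule_ids)}")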

Usage

from rotalabs_comply.frameworks.mas import MASFramework
from rotalabs_comply.frameworks.base import AuditEntry, ComplianceProfile, RiskLevel
from datetime import datetime

framework = MASFramework()

entry = AuditEntry(
    entry_id="mas-001",
    timestamp=datetime.utcnow(),
    event_type="credit_decision",
    actor="credit-officer@bank.sg",
    action="AI credit scoring",
    risk_level=RiskLevel.HIGH,
    data_classification="customer_data",
    user_notified=True,
    human_oversight=True,
    error_handled=True,
    metadata={
        "fairness_assessed": True,
        "bias_mitigation_documented": True,
        "accountable_owner": "credit-risk-team",
        "explanation_available": True,
        "monitoring_enabled": True,
        "model_inventory_id": "MODEL-CS-001",
        "privacy_compliant": True,
    },
)

profile = ComplianceProfile(
    profile_id="mas-profile",
    name="MAS FEAT Compliance",
)

result = await framework.check(entry, profile)
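
The result can then be inspected for any violations found. A minimal sketch, assuming the check result exposes a violations list of ComplianceViolation objects (the fields shown are those populated by _create_violation; the violations attribute name itself is an assumption):

for violation in result.violations:
    print(f"[{violation.severity.value}] {violation.rule_id}: {violation.rule_name}")
    print(f"  Evidence: {violation.evidence}")
    print(f"  Remediation: {violation.remediation}")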

Key Requirements

Customer-facing AI decisions require:
- metadata["fairness_assessed"]=True
- metadata["explanation_available"]=True or metadata["explainability_method"] set
- user_notified=True

High-risk operations require:
- human_oversight=True

Model lifecycle events require:
- metadata["bias_mitigation_documented"]=True
- metadata["validation_completed"]=True (for deployments)
- metadata["model_inventory_id"] set or metadata["model_registered"]=True

Personal data operations (data_classification of "pii", "personal", "customer_data", or "sensitive") require:
- metadata["privacy_compliant"]=True or metadata["consent_obtained"]=True

All operations require:
- error_handled=True (for resilience)
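
For contrast, an entry that omits these fields is expected to be flagged. A minimal sketch reusing the framework and profile from the Usage example above (field values are illustrative; the rule IDs in the comments follow from the checks documented in the source code):

non_compliant = AuditEntry(
    entry_id="mas-002",
    timestamp=datetime.utcnow(),
    event_type="credit_decision",
    actor="credit-officer@bank.sg",
    action="AI credit scoring",
    risk_level=RiskLevel.HIGH,
    data_classification="customer_data",
    user_notified=False,     # expect MAS-FEAT-T2 (customer notification)
    human_oversight=False,   # expect MAS-FEAT-A2 (human oversight for high-risk operations)
    error_handled=True,
    metadata={},             # no explainability or privacy evidence: expect MAS-FEAT-T1, MAS-DATA-3
)

result = await framework.check(non_compliant, profile)
# The fairness and accountability checks (MAS-FEAT-F1, MAS-FEAT-A1) would likely flag this
# entry as well, since no fairness assessment or accountable owner is recorded in metadata.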