Core Module¶
The rotalabs_audit.core module provides the foundational data types, configuration classes, and exceptions used throughout the rotalabs-audit package for reasoning chain capture and decision transparency.
Types¶
Core data structures for representing reasoning chains, decision traces, and analysis results.
ReasoningType¶
Classification enumeration for different kinds of reasoning steps.
ReasoningType¶
Bases: str, Enum
Classification of reasoning step types.
Used to categorize different kinds of reasoning that appear in AI model outputs, enabling analysis of reasoning patterns and detection of specific behaviors like evaluation awareness or strategic reasoning.
Attributes:

| Name | Description |
|---|---|
| EVALUATION_AWARE | References to testing, evaluation, or monitoring context. Indicates the model may be aware it's being evaluated. |
| GOAL_REASONING | Goal-directed reasoning where the model explicitly considers objectives and how to achieve them. |
| DECISION_MAKING | Explicit decision points where the model chooses between alternatives. |
| FACTUAL_KNOWLEDGE | Factual statements or knowledge retrieval without significant inference. |
| UNCERTAINTY | Expressions of uncertainty, hedging, or acknowledgment of limitations. |
| META_REASONING | Meta-cognitive statements like "I think" or "I believe" that reflect on the reasoning process itself. |
| INCENTIVE_REASONING | Consideration of rewards, penalties, or other incentive structures. |
| CAUSAL_REASONING | Cause-and-effect reasoning, analyzing why things happen or predicting consequences. |
| HYPOTHETICAL | Counterfactual or "what if" reasoning exploring alternative scenarios. |
| UNKNOWN | Reasoning that doesn't fit other categories or couldn't be classified. |
ConfidenceLevel¶
Discrete confidence levels that map to numeric confidence scores.
ConfidenceLevel¶
Bases: str, Enum
Discrete confidence levels for reasoning assessments.
Provides human-readable confidence categories that map to numeric confidence scores, useful for reporting and thresholding.
Attributes:

| Name | Description |
|---|---|
| VERY_LOW | Confidence score < 0.2. Very uncertain assessment. |
| LOW | Confidence score 0.2-0.4. Uncertain assessment. |
| MEDIUM | Confidence score 0.4-0.6. Moderate confidence. |
| HIGH | Confidence score 0.6-0.8. Confident assessment. |
| VERY_HIGH | Confidence score > 0.8. Highly confident assessment. |
from_score(score) classmethod¶
Convert a numeric confidence score to a discrete level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| score | float | Confidence score between 0 and 1. | required |

Returns:

| Type | Description |
|---|---|
| ConfidenceLevel | The corresponding ConfidenceLevel. |

Raises:

| Type | Description |
|---|---|
| ValueError | If score is not between 0 and 1. |
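The score-to-level mapping is easiest to see in a short loop. A minimal sketch, assuming ConfidenceLevel is importable directly from rotalabs_audit.core:

```python
from rotalabs_audit.core import ConfidenceLevel

# Map raw scores onto discrete levels for reporting.
for score in (0.15, 0.35, 0.55, 0.75, 0.95):
    print(score, "->", ConfidenceLevel.from_score(score).name)

# Out-of-range scores are rejected rather than clamped.
try:
    ConfidenceLevel.from_score(1.5)
except ValueError as exc:
    print("rejected:", exc)
```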
ReasoningStep¶
A single step in a reasoning chain with classification and confidence.
ReasoningStep dataclass¶
A single step in a reasoning chain.
Represents an atomic unit of reasoning extracted from model output, including its classification, confidence assessment, and supporting evidence.
Attributes:

| Name | Type | Description |
|---|---|---|
| content | str | The text content of this reasoning step. |
| reasoning_type | ReasoningType | Classification of what kind of reasoning this represents. |
| confidence | float | Model's confidence in this step (0-1 scale). |
| index | int | Position of this step in the reasoning chain (0-indexed). |
| evidence | Dict[str, List[str]] | Dictionary mapping evidence types to lists of pattern matches or other supporting information. |
| causal_importance | float | How important this step is to the final decision (0-1 scale). Higher values indicate steps that significantly influenced the outcome. |
| metadata | Dict[str, Any] | Additional arbitrary metadata about this step. |
Example
```python
>>> step = ReasoningStep(
...     content="Since the user asked for Python, I should use Python syntax",
...     reasoning_type=ReasoningType.GOAL_REASONING,
...     confidence=0.85,
...     index=0,
...     causal_importance=0.7
... )
```
ReasoningChain¶
A complete chain of reasoning steps from an AI model.
ReasoningChain dataclass¶
A complete chain of reasoning steps.
Represents a full reasoning trace from an AI model, potentially parsed into discrete steps with classifications. The chain maintains both the raw text and structured representation.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | Unique identifier for this chain. |
| steps | List[ReasoningStep] | List of parsed reasoning steps in order. |
| raw_text | str | Original unparsed text of the reasoning. |
| model | Optional[str] | Name/identifier of the model that produced this reasoning. |
| timestamp | datetime | When this reasoning was captured. |
| parsing_confidence | float | Confidence in the quality of step parsing (0-1). |
| metadata | Dict[str, Any] | Additional arbitrary metadata about the chain. |
Example
```python
>>> chain = ReasoningChain(
...     id="chain-001",
...     steps=[step1, step2, step3],
...     raw_text="First, I consider... Then, I decide...",
...     model="gpt-4"
... )
>>> chain.is_structured
True
>>> chain.step_count
3
```
is_structured property¶
Check if this chain has been successfully parsed into steps.
Returns:

| Type | Description |
|---|---|
| bool | True if the chain contains at least one parsed step. |

step_count property¶
Get the number of reasoning steps in this chain.
Returns:

| Type | Description |
|---|---|
| int | The count of parsed reasoning steps. |

average_confidence property¶
Calculate the average confidence across all steps.
Returns:

| Type | Description |
|---|---|
| float | Mean confidence score, or 0.0 if no steps exist. |
__post_init__()¶
Validate field values after initialization.
get_steps_by_type(reasoning_type)¶
Filter steps by their reasoning type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reasoning_type | ReasoningType | The type of reasoning to filter for. | required |

Returns:

| Type | Description |
|---|---|
| List[ReasoningStep] | List of steps matching the specified type, in order. |

get_high_importance_steps(threshold=0.5)¶
Get steps with high causal importance.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| threshold | float | Minimum causal importance to include (default 0.5). | 0.5 |

Returns:

| Type | Description |
|---|---|
| List[ReasoningStep] | List of steps with causal importance >= threshold, in order. |
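Once a chain is built or parsed, the filtering helpers above isolate specific behaviours. A minimal sketch, assuming the types are importable from rotalabs_audit.core and that fields not shown fall back to their dataclass defaults, as in the constructor examples above:

```python
from rotalabs_audit.core import ReasoningChain, ReasoningStep, ReasoningType

steps = [
    ReasoningStep(
        content="This looks like a test scenario",
        reasoning_type=ReasoningType.EVALUATION_AWARE,
        confidence=0.9,
        index=0,
        causal_importance=0.8,
    ),
    ReasoningStep(
        content="The user wants a Python answer",
        reasoning_type=ReasoningType.GOAL_REASONING,
        confidence=0.7,
        index=1,
        causal_importance=0.4,
    ),
]
chain = ReasoningChain(id="chain-042", steps=steps, raw_text="...")

aware_steps = chain.get_steps_by_type(ReasoningType.EVALUATION_AWARE)  # 1 step
key_steps = chain.get_high_importance_steps(threshold=0.6)             # 1 step
print(len(aware_steps), len(key_steps), round(chain.average_confidence, 2))  # 1 1 0.8
```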
DecisionTrace¶
Trace of a single decision point with context and rationale.
DecisionTrace dataclass¶
Trace of a single decision point.
Captures a specific decision made by an AI system, including the context, reasoning, alternatives considered, and potential consequences.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | Unique identifier for this decision. |
| decision | str | The actual decision that was made (text description). |
| timestamp | datetime | When this decision was made. |
| context | Dict[str, Any] | Dictionary of contextual information relevant to the decision. |
| reasoning_chain | Optional[ReasoningChain] | Optional full reasoning chain leading to this decision. |
| alternatives_considered | List[str] | List of alternative decisions that were considered. |
| rationale | str | Explanation for why this decision was made. |
| confidence | float | Confidence in the decision (0-1 scale). |
| reversible | bool | Whether this decision can be undone. |
| consequences | List[str] | List of known or predicted consequences. |
| metadata | Dict[str, Any] | Additional arbitrary metadata. |
Example
```python
>>> trace = DecisionTrace(
...     id="decision-001",
...     decision="Use caching for API responses",
...     timestamp=datetime.utcnow(),
...     context={"request_volume": "high", "latency_requirement": "low"},
...     alternatives_considered=["No caching", "CDN caching"],
...     rationale="High volume requires low latency responses",
...     confidence=0.8
... )
```
confidence_level property¶
Get the discrete confidence level for this decision.
has_reasoning property¶
Check if this decision has an associated reasoning chain.
alternatives_count property¶
Get the number of alternatives that were considered.
__post_init__()¶
Validate field values after initialization.
DecisionPath¶
A sequence of related decisions tracking progress toward a goal.
DecisionPath dataclass¶
A sequence of related decisions.
Represents a series of connected decisions made in pursuit of a goal, enabling analysis of decision trajectories and identification of failure points.
Attributes:

| Name | Type | Description |
|---|---|---|
| id | str | Unique identifier for this path. |
| decisions | List[DecisionTrace] | Ordered list of decisions in the path. |
| goal | str | The objective these decisions were working toward. |
| success | Optional[bool] | Whether the goal was achieved (None if unknown/ongoing). |
| failure_point | Optional[DecisionTrace] | The decision where things went wrong, if applicable. |
| metadata | Dict[str, Any] | Additional arbitrary metadata. |
Example
```python
>>> path = DecisionPath(
...     id="path-001",
...     decisions=[decision1, decision2, decision3],
...     goal="Complete user request accurately",
...     success=True
... )
>>> path.length
3
```
length property¶
Get the number of decisions in this path.
is_complete property¶
Check if the path has a known outcome.
has_failure property¶
Check if this path has an identified failure point.
total_confidence property¶
Calculate the product of all decision confidences.
Represents the overall confidence in the path, assuming independence between decisions.
Returns:

| Type | Description |
|---|---|
| float | Product of confidences, or 1.0 if no decisions. |
get_decision_by_id(decision_id)¶
Find a decision in the path by its ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| decision_id | str | The ID of the decision to find. | required |

Returns:

| Type | Description |
|---|---|
| Optional[DecisionTrace] | The matching DecisionTrace, or None if not found. |
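Putting the pieces together, a path can be inspected for its aggregate confidence and individual decisions. A minimal sketch, assuming the types are importable from rotalabs_audit.core and that fields not shown keep their dataclass defaults:

```python
from datetime import datetime
from rotalabs_audit.core import DecisionPath, DecisionTrace

d1 = DecisionTrace(
    id="d-1",
    decision="Cache API responses",
    timestamp=datetime.utcnow(),
    context={"request_volume": "high"},
    alternatives_considered=["No caching"],
    rationale="High request volume",
    confidence=0.9,
)
d2 = DecisionTrace(
    id="d-2",
    decision="Invalidate cache hourly",
    timestamp=datetime.utcnow(),
    context={"data_freshness": "relaxed"},
    alternatives_considered=["Invalidate on write"],
    rationale="Data changes slowly",
    confidence=0.8,
)
path = DecisionPath(id="path-7", decisions=[d1, d2], goal="Reduce latency", success=None)

print(path.length)                               # 2
print(round(path.total_confidence, 2))           # 0.72 (0.9 * 0.8, assuming independence)
print(path.is_complete)                          # False: success is still None
print(path.get_decision_by_id("d-2").decision)   # "Invalidate cache hourly"
print(d1.confidence_level, d1.has_reasoning)     # discrete level; False (no chain attached)
```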
AwarenessAnalysis¶
Result of evaluation awareness detection analysis.
AwarenessAnalysis dataclass¶
Result of evaluation awareness detection.
Contains the results of analyzing whether an AI system appears to be aware that it's being evaluated, monitored, or tested, and whether it's adapting its behavior accordingly.
Attributes:

| Name | Type | Description |
|---|---|---|
| is_evaluation_aware | bool | Whether the system shows signs of evaluation awareness. |
| awareness_score | float | Numeric score (0-1) indicating degree of awareness. |
| awareness_indicators | List[str] | Specific textual indicators of awareness found. |
| evaluation_signals | List[str] | References to evaluation/testing context detected. |
| behavioral_adaptation_indicators | List[str] | Signs the system is adapting behavior based on perceived evaluation context. |
| strategic_reasoning_score | float | Score (0-1) for strategic/game-theoretic reasoning. |
| confidence | float | Confidence in this analysis (0-1). |
Example
```python
>>> analysis = AwarenessAnalysis(
...     is_evaluation_aware=True,
...     awareness_score=0.75,
...     awareness_indicators=["reference to 'test'", "mention of 'evaluator'"],
...     evaluation_signals=["this appears to be a test scenario"],
...     behavioral_adaptation_indicators=["adjusted response style"],
...     strategic_reasoning_score=0.6,
...     confidence=0.85
... )
```
QualityMetrics¶
Quality assessment metrics for reasoning chains.
QualityMetrics dataclass¶
Quality assessment of reasoning.
Provides a comprehensive quality assessment of a reasoning chain, evaluating multiple dimensions of reasoning quality.
Attributes:

| Name | Type | Description |
|---|---|---|
| clarity | float | How clear and understandable the reasoning is (0-1). |
| completeness | float | Whether all necessary steps are explained (0-1). |
| consistency | float | Absence of contradictions in the reasoning (0-1). |
| logical_validity | float | Whether inferences are logically sound (0-1). |
| evidence_support | float | Whether claims are backed by evidence (0-1). |
| overall_score | float | Composite quality score (0-1). |
| depth | int | Number of reasoning steps (indicates depth of analysis). |
| breadth | int | Number of alternatives considered. |
| issues | List[str] | List of identified quality issues. |
| recommendations | List[str] | Suggestions for improving reasoning quality. |
Example
```python
>>> metrics = QualityMetrics(
...     clarity=0.8,
...     completeness=0.7,
...     consistency=0.9,
...     logical_validity=0.85,
...     evidence_support=0.6,
...     overall_score=0.77,
...     depth=5,
...     breadth=3,
...     issues=["Some claims lack supporting evidence"],
...     recommendations=["Add citations for factual claims"]
... )
```
quality_level property¶
Get the discrete overall quality level.
has_issues property¶
Check if any quality issues were identified.
issue_count property¶
Get the number of identified issues.
__post_init__()¶
Validate field values after initialization.
to_summary_dict()¶
Create a summary dictionary of the metrics.
Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with all metric values and counts. |
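Together with the properties above, the summary dictionary is convenient for logging or dashboards. A minimal sketch reusing the values from the example above; the exact keys of the returned dictionary are not specified here, so it is treated as an opaque mapping:

```python
metrics = QualityMetrics(
    clarity=0.8,
    completeness=0.7,
    consistency=0.9,
    logical_validity=0.85,
    evidence_support=0.6,
    overall_score=0.77,
    depth=5,
    breadth=3,
    issues=["Some claims lack supporting evidence"],
    recommendations=["Add citations for factual claims"],
)

print(metrics.quality_level)          # discrete level derived from overall_score
if metrics.has_issues:
    print(f"{metrics.issue_count} issue(s) found")
summary = metrics.to_summary_dict()   # plain dict, suitable for JSON export or logging
```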
Configuration¶
Configuration classes for controlling parser, analysis, and tracing behavior.
ParserConfig¶
Configuration for reasoning chain parsing.
ParserConfig dataclass¶
Configuration for reasoning chain parsing.
Controls how raw model outputs are parsed into structured reasoning chains, including pattern matching settings and output constraints.
Attributes:

| Name | Type | Description |
|---|---|---|
| patterns_file | Optional[Path] | Path to a YAML/JSON file containing custom patterns for identifying reasoning steps. If None, uses built-in patterns. |
| min_confidence | float | Minimum confidence threshold for including a parsed step. Steps below this threshold are discarded. |
| max_steps | int | Maximum number of steps to extract from a single chain. Prevents runaway parsing on very long outputs. |
| step_separator_patterns | List[str] | List of regex patterns that indicate boundaries between reasoning steps. |
| include_raw_evidence | bool | Whether to include raw pattern match strings in the evidence field of parsed steps. |
| normalize_whitespace | bool | Whether to normalize whitespace in parsed content (collapse multiple spaces, trim, etc.). |
| preserve_formatting | bool | Whether to preserve markdown/code formatting in step content. |
| timeout_seconds | float | Maximum time allowed for parsing a single chain. Prevents hanging on adversarial inputs. |
Example
```python
>>> config = ParserConfig(
...     min_confidence=0.6,
...     max_steps=20,
...     include_raw_evidence=True
... )
```
__post_init__()¶
Validate configuration values.
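Custom step boundaries are configured through step_separator_patterns. A sketch, assuming the entries are standard Python regular expressions applied to the raw text and that unlisted fields keep their defaults; the specific patterns shown are hypothetical:

```python
from rotalabs_audit.core import ParserConfig

config = ParserConfig(
    min_confidence=0.5,
    max_steps=50,
    # Hypothetical boundary markers: numbered items ("1. ", "2. ") and "Step N:" prefixes.
    step_separator_patterns=[r"^\d+\.\s", r"^Step \d+:"],
    normalize_whitespace=True,
    timeout_seconds=10.0,
)
```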
AnalysisConfig¶
Configuration for reasoning chain analysis features.
AnalysisConfig dataclass¶
Configuration for reasoning chain analysis.
Controls which analysis features are enabled and their parameters. Analysis includes evaluation awareness detection, quality assessment, and counterfactual reasoning analysis.
Attributes:

| Name | Type | Description |
|---|---|---|
| enable_counterfactual | bool | Whether to perform counterfactual analysis, examining what would happen if certain reasoning steps were different. |
| enable_awareness | bool | Whether to detect evaluation awareness, checking if the model appears to know it's being tested. |
| enable_quality | bool | Whether to compute quality metrics for reasoning chains. |
| awareness_threshold | float | Score threshold above which a chain is considered evaluation-aware. |
| quality_weights | Dict[str, float] | Dictionary of weights for different quality dimensions when computing the overall score. |
| counterfactual_depth | int | How many alternative branches to explore in counterfactual analysis. |
| use_llm_analysis | bool | Whether to use an LLM for deeper analysis (requires API credentials). |
| llm_model | Optional[str] | Model identifier for LLM-based analysis. |
| cache_results | bool | Whether to cache analysis results for repeated calls on the same input. |
| parallel_analysis | bool | Whether to run independent analyses in parallel. |
Example
```python
>>> config = AnalysisConfig(
...     enable_awareness=True,
...     awareness_threshold=0.7,
...     enable_quality=True
... )
```
__post_init__()¶
Validate configuration values.
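The relative importance of the quality dimensions can be adjusted via quality_weights. A sketch under the assumption that the weight keys mirror the QualityMetrics dimensions; the actual key names accepted by the package may differ:

```python
from rotalabs_audit.core import AnalysisConfig

config = AnalysisConfig(
    enable_quality=True,
    enable_awareness=True,
    awareness_threshold=0.7,
    # Assumed to mirror the QualityMetrics dimensions; weights chosen to sum to 1.0.
    quality_weights={
        "clarity": 0.2,
        "completeness": 0.2,
        "consistency": 0.2,
        "logical_validity": 0.25,
        "evidence_support": 0.15,
    },
)
```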
TracingConfig¶
Configuration for decision tracing and persistence.
TracingConfig dataclass¶
Configuration for decision tracing.
Controls how decisions are captured, tracked, and organized into decision paths during AI system operation.
Attributes:

| Name | Type | Description |
|---|---|---|
| capture_alternatives | bool | Whether to capture alternative decisions that were considered but not taken. |
| max_trace_depth | int | Maximum depth of nested decision traces. Prevents unbounded recursion in complex decision trees. |
| max_path_length | int | Maximum number of decisions in a single path. |
| capture_context | bool | Whether to capture contextual information at each decision point. |
| context_keys | Optional[List[str]] | Specific context keys to capture (if None, captures all). |
| include_reasoning_chain | bool | Whether to parse and include the full reasoning chain for each decision. |
| track_consequences | bool | Whether to track predicted and actual consequences of decisions. |
| enable_timestamps | bool | Whether to record precise timestamps for each decision. |
| persistence_backend | Optional[str] | Backend for persisting traces ("memory", "sqlite", "postgres", or None for no persistence). |
| persistence_path | Optional[str] | Path or connection string for the persistence backend. |
| auto_flush_interval | float | Seconds between automatic flushes to persistence (0 disables auto-flush). |
Example
```python
>>> config = TracingConfig(
...     capture_alternatives=True,
...     max_trace_depth=10,
...     include_reasoning_chain=True,
...     persistence_backend="sqlite",
...     persistence_path="./traces.db"
... )
```
__post_init__()¶
Validate configuration values.
AuditConfig¶
Master configuration combining all audit-related settings.
AuditConfig dataclass¶
Master configuration combining all audit-related settings.
Provides a unified configuration object that contains parser, analysis, and tracing configurations, along with global settings.
Attributes:

| Name | Type | Description |
|---|---|---|
| parser | ParserConfig | Configuration for reasoning chain parsing. |
| analysis | AnalysisConfig | Configuration for reasoning analysis. |
| tracing | TracingConfig | Configuration for decision tracing. |
| debug | bool | Whether to enable debug mode with verbose logging. |
| log_level | str | Logging level ("DEBUG", "INFO", "WARNING", "ERROR"). |
| metadata | Dict[str, Any] | Additional global metadata to include in all outputs. |
Example
```python
>>> config = AuditConfig(
...     parser=ParserConfig(min_confidence=0.6),
...     analysis=AnalysisConfig(enable_awareness=True),
...     tracing=TracingConfig(capture_alternatives=True),
...     debug=True
... )
```
__post_init__()¶
Validate configuration values.
from_dict(data) classmethod¶
Create an AuditConfig from a dictionary.
Useful for loading configuration from files or environment.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data | Dict[str, Any] | Dictionary containing configuration values. | required |

Returns:

| Type | Description |
|---|---|
| AuditConfig | Configured AuditConfig instance. |

to_dict()¶
Convert this configuration to a dictionary.
Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary representation of the configuration. |
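Together, to_dict and from_dict allow a configuration to be persisted and rebuilt. A sketch, assuming the types are importable from rotalabs_audit.core and that the dictionary produced by to_dict() is JSON-serialisable for the values used here:

```python
import json
from rotalabs_audit.core import AnalysisConfig, AuditConfig, ParserConfig, TracingConfig

config = AuditConfig(
    parser=ParserConfig(min_confidence=0.6),
    analysis=AnalysisConfig(enable_awareness=True),
    tracing=TracingConfig(capture_alternatives=True),
    log_level="INFO",
)

# Round-trip the configuration through a JSON file.
with open("audit_config.json", "w") as fh:
    json.dump(config.to_dict(), fh, indent=2)

with open("audit_config.json") as fh:
    restored = AuditConfig.from_dict(json.load(fh))

# Expected to hold if from_dict rebuilds the nested config objects.
print(restored.parser.min_confidence)  # 0.6
```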
Exceptions¶
Custom exceptions for handling errors in parsing, analysis, tracing, and integrations.
AuditError¶
Base exception for all rotalabs-audit errors.
AuditError¶
Bases: Exception
Base exception for all rotalabs-audit errors.
All exceptions raised by this package inherit from AuditError, allowing callers to catch all audit-related errors with a single except clause if desired.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| details | Optional dictionary of additional error context. |
| cause | Optional underlying exception that caused this error. |
Example
```python
>>> try:
...     raise AuditError("Something went wrong", details={"code": 42})
... except AuditError as e:
...     print(f"Error: {e.message}, Details: {e.details}")
Error: Something went wrong, Details: {'code': 42}
```
__init__(message, details=None, cause=None)¶
Initialize an AuditError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| details | Optional[Dict[str, Any]] | Optional dictionary of additional error context. | None |
| cause | Optional[Exception] | Optional underlying exception that caused this error. | None |
__repr__()¶
Return a detailed string representation.
ParsingError¶
Exception raised when parsing reasoning chains fails.
ParsingError¶
Bases: AuditError
Exception raised when parsing reasoning chains fails.
This exception is raised when the parser encounters problems extracting structured reasoning steps from raw model output, such as malformed input, unrecognized patterns, or timeout.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| raw_text | The original text that failed to parse. |
| partial_steps | Any steps that were successfully parsed before failure. |
| position | Character position in raw_text where parsing failed. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise ParsingError(
...         "Unexpected token",
...         raw_text="malformed input...",
...         position=15
...     )
... except ParsingError as e:
...     print(f"Parsing failed at position {e.position}")
```
__init__(message, raw_text=None, partial_steps=None, position=None, details=None, cause=None)¶
Initialize a ParsingError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| raw_text | Optional[str] | The original text that failed to parse. | None |
| partial_steps | Optional[List[Any]] | Any steps successfully parsed before failure. | None |
| position | Optional[int] | Character position where parsing failed. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |
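Because partial_steps carries whatever was extracted before the failure, callers can degrade gracefully instead of discarding the whole chain. A sketch using only the documented attributes; parser.parse is a hypothetical parsing call standing in for whatever entry point produced the error:

```python
from rotalabs_audit.core import ParsingError

def parse_with_fallback(parser, raw_text):
    """Return whatever steps could be parsed, even if parsing ultimately fails."""
    try:
        return parser.parse(raw_text)  # hypothetical parser API
    except ParsingError as exc:
        # partial_steps holds everything extracted before the failure (may be None).
        salvaged = exc.partial_steps or []
        print(f"Parsing failed at position {exc.position}; kept {len(salvaged)} step(s)")
        return salvaged
```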
AnalysisError¶
Exception raised when reasoning analysis fails.
AnalysisError¶
Bases: AuditError
Exception raised when reasoning analysis fails.
This exception is raised when analyzing a reasoning chain encounters problems, such as invalid chain structure, analysis timeout, or failures in quality assessment or awareness detection.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| chain_id | Identifier of the chain being analyzed. |
| analysis_type | Type of analysis that failed (e.g., "quality", "awareness"). |
| partial_results | Any results computed before failure. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise AnalysisError(
...         "Quality assessment timed out",
...         chain_id="chain-123",
...         analysis_type="quality"
...     )
... except AnalysisError as e:
...     print(f"Analysis '{e.analysis_type}' failed for {e.chain_id}")
```
__init__(message, chain_id=None, analysis_type=None, partial_results=None, details=None, cause=None)¶
Initialize an AnalysisError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| chain_id | Optional[str] | Identifier of the chain being analyzed. | None |
| analysis_type | Optional[str] | Type of analysis that failed. | None |
| partial_results | Optional[Dict[str, Any]] | Any results computed before failure. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |
TracingError¶
Exception raised when decision tracing fails.
TracingError¶
Bases: AuditError
Exception raised when decision tracing fails.
This exception is raised when capturing, storing, or retrieving decision traces encounters problems, such as persistence failures, depth limit exceeded, or invalid trace structure.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| trace_id | Identifier of the trace involved. |
| operation | The operation that failed (e.g., "capture", "store", "retrieve"). |
| depth | Current trace depth when the error occurred. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise TracingError(
...         "Maximum trace depth exceeded",
...         trace_id="trace-456",
...         operation="capture",
...         depth=25
...     )
... except TracingError as e:
...     print(f"Tracing failed during {e.operation} at depth {e.depth}")
```
__init__(message, trace_id=None, operation=None, depth=None, details=None, cause=None)¶
Initialize a TracingError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| trace_id | Optional[str] | Identifier of the trace involved. | None |
| operation | Optional[str] | The operation that failed. | None |
| depth | Optional[int] | Current trace depth when the error occurred. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |
IntegrationError¶
Exception raised when integration with external systems fails.
IntegrationError¶
Bases: AuditError
Exception raised when integration with external systems fails.
This exception is raised when interacting with external components like LLM APIs, databases, or other rotalabs packages encounters problems.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| integration_name | Name of the integration that failed. |
| endpoint | API endpoint or connection string involved. |
| request_data | Data that was being sent (sanitized). |
| response_data | Any response received before failure. |
| status_code | HTTP status code if applicable. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise IntegrationError(
...         "API request failed",
...         integration_name="openai",
...         endpoint="https://api.openai.com/v1/chat",
...         status_code=429
...     )
... except IntegrationError as e:
...     print(f"Integration '{e.integration_name}' failed: {e.status_code}")
```
is_rate_limited property¶
Check if this error indicates rate limiting (HTTP 429).
is_auth_error property¶
Check if this error indicates authentication failure (HTTP 401/403).
is_server_error property¶
Check if this error indicates a server-side failure (HTTP 5xx).
__init__(message, integration_name=None, endpoint=None, request_data=None, response_data=None, status_code=None, details=None, cause=None)¶
Initialize an IntegrationError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| integration_name | Optional[str] | Name of the integration that failed. | None |
| endpoint | Optional[str] | API endpoint or connection string involved. | None |
| request_data | Optional[Dict[str, Any]] | Data that was being sent (should be sanitized). | None |
| response_data | Optional[Dict[str, Any]] | Any response received before failure. | None |
| status_code | Optional[int] | HTTP status code if applicable. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |
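The status-code properties make it straightforward to distinguish retryable failures from fatal ones. A sketch using only the documented properties; make_request is a hypothetical callable that issues the underlying API request:

```python
import time
from rotalabs_audit.core import IntegrationError

def call_with_backoff(make_request, attempts=3, base_delay=1.0):
    """Retry a callable when the integration reports rate limiting."""
    for attempt in range(attempts):
        try:
            return make_request()  # hypothetical callable issuing the API request
        except IntegrationError as exc:
            if exc.is_auth_error:
                raise  # retrying will not fix bad credentials
            if exc.is_rate_limited and attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))  # simple exponential backoff
                continue
            raise
```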
ValidationError¶
Exception raised when input validation fails.
ValidationError¶
Bases: AuditError
Exception raised when input validation fails.
This exception is raised when input data doesn't meet expected constraints, such as invalid configuration values, malformed data structures, or out-of-range parameters.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| field_name | Name of the field that failed validation. |
| expected | Description of the expected value/format. |
| actual | The actual value that was provided. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise ValidationError(
...         "Confidence out of range",
...         field_name="confidence",
...         expected="0.0 to 1.0",
...         actual=1.5
...     )
... except ValidationError as e:
...     print(f"Field '{e.field_name}': expected {e.expected}, got {e.actual}")
```
__init__(message, field_name=None, expected=None, actual=None, details=None, cause=None)¶
Initialize a ValidationError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| field_name | Optional[str] | Name of the field that failed validation. | None |
| expected | Optional[str] | Description of the expected value/format. | None |
| actual | Optional[Any] | The actual value that was provided. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |
TimeoutError¶
Exception raised when an operation times out.
TimeoutError¶
Bases: AuditError
Exception raised when an operation times out.
This exception is raised when parsing, analysis, or other operations exceed their configured time limits.
Attributes:

| Name | Description |
|---|---|
| message | Human-readable error description. |
| operation | The operation that timed out. |
| timeout_seconds | The timeout limit that was exceeded. |
| elapsed_seconds | How long the operation ran before timing out. |
| details | Additional error context. |
| cause | Underlying exception if applicable. |
Example
```python
>>> try:
...     raise TimeoutError(
...         "Parsing timed out",
...         operation="parse_chain",
...         timeout_seconds=30.0,
...         elapsed_seconds=30.5
...     )
... except TimeoutError as e:
...     print(f"Operation '{e.operation}' exceeded {e.timeout_seconds}s limit")
```
__init__(message, operation=None, timeout_seconds=None, elapsed_seconds=None, details=None, cause=None)¶
Initialize a TimeoutError.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Human-readable error description. | required |
| operation | Optional[str] | The operation that timed out. | None |
| timeout_seconds | Optional[float] | The timeout limit that was exceeded. | None |
| elapsed_seconds | Optional[float] | How long the operation ran before timeout. | None |
| details | Optional[Dict[str, Any]] | Additional error context. | None |
| cause | Optional[Exception] | Underlying exception if applicable. | None |