Core Concepts¶
This document explains the fundamental concepts behind Trust3 IQ and how its Universal Context Engine transforms natural language into actionable data intelligence.
Understanding Trust3 IQ¶
Trust3 IQ is an intelligent data platform that bridges the gap between how people think about their business and how data is technically structured. At its core, Trust3 IQ builds a Universal Context Engine that understands not just what your data is, but what it means in your business context.
The Context Engine is the brain of Trust3 IQ, combining multiple layers of intelligence to understand your data landscape:
What Makes It Universal:
- Semantic Understanding: Goes beyond metadata to capture business meaning and relationships
- Continuous Learning: Adapts from user feedback, query patterns, and data changes
- Multi-Source Integration: Unifies context across all your data sources into a coherent understanding
- Business-Aware: Understands your company's terminology, metrics, and domain logic
The Three Pillars¶
**Data Ingestion & Enrichment**

- Connects to your data sources with read-only access
- Extracts metadata (structure, schemas, relationships)
- Performs data profiling and sampling
- Applies semantic typing and entity recognition
- Enriches with business context and domain knowledge

**Real-Time Intelligence**

- Processes natural language queries through specialized AI agents
- Orchestrates complex query resolution workflows
- Generates optimized SQL for your specific data engines
- Analyzes results in business context
- Delivers insights in intuitive formats

**Continuous Learning Pipeline**

- Incorporates feedback from SMEs and business users
- Monitors data lineage and schema changes
- Tracks usage patterns and metric popularity
- Updates semantic cache with validated queries
- Refines understanding of business semantics
Core Components¶
Semantic Models¶
Semantic models are rich, living representations of your data that capture both technical structure and business meaning.
**Beyond Traditional Metadata**

Traditional metadata tells you:

- `customer_table` has columns `id`, `name`, `email`
- `orders_table` has a foreign key to `customer_table`
Semantic models understand:
- "Customer" is a core business entity representing individuals or organizations
- Email addresses are contact information, subject to privacy rules
- Customers have a lifecycle (prospect → active → churned)
- "Revenue" means different things in different contexts (booking vs. recognized vs. collected)
- Certain tables are authoritative sources while others are derived
The key characteristics of semantic models are:
- Entity Recognition: Identifies business objects across your data landscape
- Relationship Mapping: Understands how entities connect, both explicitly (foreign keys) and implicitly (business logic)
- Attribute Semantics: Knows what each field represents in business terms
- Temporal Awareness: Understands time dimensions (fiscal calendars, effective dates, SCD patterns)
- Metric Definitions: Captures how KPIs are calculated and what they measure
- Domain Context: Recognizes industry-specific concepts and terminology
Semantic models aren't static; they evolve over time and get richer through:

- User interactions and feedback
- New data source connections
- Schema changes and lineage tracking
- Discovered relationships and patterns
- Business rule refinements
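To make the idea concrete, a semantic model can be thought of as a layer of business meaning on top of physical tables. The sketch below is a simplified, hypothetical representation; the class and field names are illustrative, not Trust3 IQ's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str            # physical column name
    semantic_type: str   # business meaning, e.g. "email" or "currency"
    pii: bool = False    # subject to privacy rules?

@dataclass
class Entity:
    name: str   # business object, e.g. "Customer"
    table: str  # authoritative physical table
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class Metric:
    name: str
    definition: str      # how the KPI is calculated
    context: str         # e.g. "booking" vs "recognized" vs "collected"

# "Customer" as a business entity, not just a table with columns
customer = Entity(
    name="Customer",
    table="customer_table",
    attributes=[
        Attribute("id", "identifier"),
        Attribute("email", "email", pii=True),  # contact info, privacy-sensitive
    ],
)

# "Revenue" pinned to one of its possible business contexts
recognized_revenue = Metric(
    name="Revenue",
    definition="SUM(order_amount) WHERE recognition_date IS NOT NULL",
    context="recognized",
)
```

The point of the structure is that the same physical column can carry different meanings in different models, and a metric like "Revenue" is only well-defined once its business context is attached.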
Context Spaces or IQ Spaces¶
Context Spaces are logical containers that organize your data, queries, and intelligence into manageable, secure domains.
Think of Context Spaces as:

- Departments or business units (Sales, Finance, Marketing)
- Projects or initiatives (Q4 Campaign Analysis, Customer 360 View)
- Data domains (Customer Analytics, Supply Chain Operations)
- Security boundaries (Executive Dashboard, Team Analytics)
Context Spaces have the following characteristics:
**Isolation & Organization**

- Each space has its own semantic model and context
- Queries stay within their space unless explicitly cross-referenced
- Different spaces can have different access controls
- Prevents context confusion between unrelated domains

**Linked Intelligence**

- Spaces can reference and relate to each other
- Share common entities and definitions where appropriate
- Cross-space queries when business logic requires it
- Maintain consistency of core business concepts

**Customization**

- Tailor semantic models to specific use cases
- Configure data sources relevant to each space
- Define space-specific metrics and KPIs
- Set appropriate security and governance policies
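For illustration only, a Context Space definition might look like the sketch below. Every key name here is hypothetical and does not reflect Trust3 IQ's real configuration format; it simply shows how scoped sources, space-specific metrics, access controls, and cross-space links fit together.

```python
# Hypothetical Context Space definition; key names are illustrative,
# not Trust3 IQ's actual configuration schema.
sales_space = {
    "name": "Sales",
    "data_sources": ["warehouse_sales", "crm_db"],         # sources scoped to this space
    "metrics": {
        # space-specific KPI definitions
        "win_rate": "COUNT_IF(stage = 'won') / COUNT_IF(stage IN ('won', 'lost'))",
    },
    "access": {"roles": ["sales_analyst", "sales_lead"]},  # security boundary
    "linked_spaces": ["Finance"],                          # explicit cross-space references
}
```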
The IQ Multi-Agent Framework¶
The IQ Multi-Agent Framework is the orchestration layer that transforms natural language into accurate, context-aware results through specialized AI collaboration.
The framework is composed of the following agents:

**IQ Agent (Orchestrator)**

The central coordinator that:
- Receives and interprets user queries
- Delegates tasks to specialized agents
- Maintains context throughout the resolution process
- Synthesizes results into coherent answers
- Ensures consistency and accuracy
**Natural Language Processing Agent**

- Parses user queries for intent and structure
- Extracts key entities, metrics, and filters
- Handles ambiguity and clarification
- Understands context from conversation history

**Semantic Analysis Agent**

- Maps natural language to semantic model concepts
- Resolves entity and relationship references
- Applies business logic and rules
- Identifies relevant data sources

**Entity Identification Agent**

- Recognizes business entities in queries
- Disambiguates similar or related entities
- Finds entity instances across data sources
- Handles entity hierarchies and classifications

**Data Engine Agent (Text-to-SQL)**

- Translates semantic queries to SQL
- Optimizes for specific database dialects
- Handles complex joins and aggregations
- Generates efficient, performant queries
- Adapts to different data warehouse technologies

**Query Execution Agent**

- Executes SQL against appropriate data sources
- Manages connections and authentication
- Handles query timeout and error scenarios
- Retrieves result sets efficiently

**Analytical Insights Agent**

- Performs statistical analysis on results
- Identifies trends, anomalies, and patterns
- Contextualizes findings with business meaning
- Suggests follow-up questions or drill-downs

**Visualization Agent**

- Determines optimal presentation format
- Creates charts, tables, and dashboards
- Applies visual best practices
- Ensures accessibility and clarity
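The division of labor above can be sketched as a pipeline in which an orchestrator passes a shared context through specialized agents. This is a minimal illustration of the pattern only; the agent interfaces and the stubbed outputs are assumptions, not the framework's actual API.

```python
# Minimal sketch of orchestrator-style delegation. Agent names, method
# signatures, and stubbed results are assumptions for illustration.
class Agent:
    def run(self, context: dict) -> dict:
        raise NotImplementedError

class NLPAgent(Agent):
    def run(self, context):
        # Parse the raw question into intent and entities (stubbed)
        context["intent"] = "aggregate"
        context["entities"] = ["Customer", "Revenue"]
        return context

class SQLAgent(Agent):
    def run(self, context):
        # Translate the semantic plan into engine-specific SQL (stubbed)
        context["sql"] = "SELECT SUM(amount) FROM orders"
        return context

class Orchestrator:
    """Coordinates specialized agents and carries context between them."""
    def __init__(self, agents):
        self.agents = agents

    def answer(self, question: str) -> dict:
        context = {"question": question}
        for agent in self.agents:   # delegate in sequence, accumulating context
            context = agent.run(context)
        return context

result = Orchestrator([NLPAgent(), SQLAgent()]).answer("What is total revenue?")
```

The key design point the sketch captures is that context accumulates as it flows: each agent reads what earlier agents produced and adds its own contribution, so the orchestrator can synthesize a coherent answer at the end.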
Data Sources¶
Data sources are the foundation where your business data lives: your databases, warehouses, lakes, and applications.
**Secure Integration**

- Read-only access by default
- Encrypted connections (TLS/SSL)
- Credential management and rotation
- Support for SSO and federated identity
- VPC/private network connectivity options

**Supported Source Types**

- Cloud data warehouses
- Traditional databases
- Data lakes
- Analytics platforms
- Business applications
**Data Discovery Process**

1. **Metadata Extraction**
    - Catalogs all databases, schemas, tables, and views
    - Captures column definitions, data types, constraints
    - Maps foreign key relationships and indexes
    - Identifies partitioning and clustering schemes
2. **Schema Discovery**
    - Understands table structures and hierarchies
    - Maps relationships between entities
    - Identifies fact and dimension tables
    - Recognizes common data modeling patterns (star, snowflake, vault)
3. **Data Profiling**
    - Samples data to understand content and quality
    - Analyzes value distributions and cardinality
    - Identifies null rates and data completeness
    - Detects patterns, formats, and anomalies
    - Measures data freshness and update frequency
4. **Semantic Classification**
    - Applies semantic types (email, phone, currency, etc.)
    - Recognizes business entities (customer, product, order)
    - Identifies PII and sensitive data
    - Tags columns with business meaning
    - Categorizes by domain (sales, finance, operations)
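The profiling and classification steps can be illustrated with a toy column profiler: it computes null rate and cardinality from a sample, then applies a naive semantic type. The email pattern and the "all values must match" rule are simplifications for illustration, not Trust3 IQ's actual classification logic.

```python
import re

# Naive email pattern; real classifiers use richer rules and thresholds
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def profile(values):
    """Profile a sampled column: completeness, cardinality, semantic type."""
    non_null = [v for v in values if v is not None]
    is_email = bool(non_null) and all(EMAIL.match(str(v)) for v in non_null)
    return {
        "null_rate": 1 - len(non_null) / len(values),  # data completeness
        "cardinality": len(set(non_null)),             # distinct non-null values
        "semantic_type": "email" if is_email else "unknown",
    }

# Profile a sample from a hypothetical contact column
stats = profile(["a@x.com", "b@y.org", None, "c@z.io"])
```

A classification like `"email"` then feeds the later steps: the column can be tagged as contact information and flagged as PII.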
**Performance Optimization**
- Intelligent caching of metadata and statistics
- Query result caching for common patterns
- Parallel execution across multiple sources
- Query pushdown for warehouse-native optimization
- Incremental updates rather than full scans
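The metadata-caching idea can be sketched with a memoized lookup: the first call stands in for a slow catalog scan, and repeated calls are served from cache. The function and table names here are illustrative.

```python
from functools import lru_cache

calls = {"count": 0}   # tracks how many real catalog scans happen

@lru_cache(maxsize=128)
def table_metadata(table: str) -> tuple:
    calls["count"] += 1              # stands in for an expensive catalog scan
    return ("id", "name", "email")   # column list for the table

cols = table_metadata("customer_table")
cols_again = table_metadata("customer_table")   # served from cache, no rescan
```

In practice the cache must also be invalidated on schema change, which is why the discovery pipeline favors incremental updates over full rescans.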
Workflow Concepts¶
Query Lifecycle¶
- Query Breakdown: The user query is broken down into semantic units
- Semantic Augmentation: Several NLP techniques are applied to identify intent
- Domain-Specific Analysis: Semantic entities are identified based on the domain(s) in which the query is framed
- Dynamic Relationship Generation: Entities are dynamically correlated at their physical data layer
- SQL Query Generation: Engine-specific agents build the SQL query (or queries) required to answer the user's question
- Execution: Runs the queries against the required data engines and retrieves the data
- Result Analysis: Performs data analysis on the result set, contextualizing it with the user's goal
- Presentation: Shows the results in an intuitive format
Data Flow¶
Next Steps¶
- Ask Questions
- Integration Options
- Operational Guides