
Trust3 Client Integration Guide

This guide shows how to integrate the Trust3 client into your Python applications for access control when the Trust3 server is deployed as a Snowflake Native App, providing data governance and security for AI/LLM interactions.

Prerequisites

  • Python 3.11+
  • LLM provider API access (any provider)
  • Trust3 server endpoint and credentials
  • Required Python packages (see Installation)

Installation

Bash
pip install trust3-client

Configuration

1. Trust3 Server Configuration

You'll need the following credentials for Trust3 integration:

Python
# Trust3 Configuration
TRUST3_SERVER_BASE_URL = "https://your-trust3-server.com"
SNOWFLAKE_PAT_TOKEN = "your-snowflake-pat-token"
TRUST3_AI_APP_API_KEY = "your-trust3-app-api-key"

How to obtain these credentials:

TRUST3_SERVER_BASE_URL:

  1. In your Snowflake account, go to your installed Trust3 application
  2. Launch the application; this opens the Streamlit app for managing the Trust3 application server
  3. Click the "Refresh" button to obtain the Trust3 server base URL

SNOWFLAKE_PAT_TOKEN:

This is a programmatic access token used to authenticate with Snowflake; every native app endpoint uses Snowflake authentication as its first layer.

  1. Generate your PAT token following the steps mentioned in the Snowflake Documentation

TRUST3_AI_APP_API_KEY:

  1. Log in to the Trust3 server using the TRUST3_SERVER_BASE_URL
  2. Navigate to the AI Application you have set up
  3. Go to the "API Keys" tab
  4. Generate a new API Key for your application integration
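
Rather than hard-coding these values, you can load them from environment variables. A minimal sketch, assuming the variable names mirror the configuration constants above (an illustrative convention, not one mandated by Trust3):

```python
import os

def load_trust3_config():
    """Load Trust3 credentials from environment variables.

    The variable names match the constants shown above; they are an
    assumed convention, so adapt them to your deployment.
    """
    config = {
        "TRUST3_SERVER_BASE_URL": os.environ.get("TRUST3_SERVER_BASE_URL", ""),
        "SNOWFLAKE_PAT_TOKEN": os.environ.get("SNOWFLAKE_PAT_TOKEN", ""),
        "TRUST3_AI_APP_API_KEY": os.environ.get("TRUST3_AI_APP_API_KEY", ""),
    }
    # Fail fast with a clear message if any setting is absent
    missing = [name for name, value in config.items() if not value]
    if missing:
        raise RuntimeError(f"Missing Trust3 settings: {', '.join(missing)}")
    return config
```

This keeps secrets out of source control and makes the failure mode explicit when a credential is missing.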

2. Basic Setup

Python
import uuid
from trust3_client import client as trust3_shield_client
from trust3_client.model import ConversationType
import trust3_client.exception

# Initialize Trust3 client
trust3_shield_client.setup(frameworks=[])

# Setup your application with Trust3
app = trust3_shield_client.setup_app(
    endpoint=TRUST3_SERVER_BASE_URL,
    application_config_api_key=TRUST3_AI_APP_API_KEY,
    snowflake_pat_token=SNOWFLAKE_PAT_TOKEN
)

Implementation Pattern

1. Shield Context Pattern

Always wrap your AI interactions and prompt/response validation within a shield context:

Python
# Replace with actual username or service account
user = "your-username"

with trust3_shield_client.create_shield_context(application=app, username=user):
    # Your AI logic/prompt/response validation goes here
    pass

2. Prompt Validation

Validate user prompts before sending them to the LLM:

Python
try:
    # Generate unique thread ID for conversation tracking
    thread_id = str(uuid.uuid4())

    # Original user prompt
    user_prompt = "User's input text here"

    # Validate prompt with Trust3
    validated_prompt = trust3_shield_client.check_access(
        text=user_prompt,
        conversation_type=ConversationType.PROMPT,
        thread_id=thread_id
    )

    # Extract the validated text
    safe_prompt = validated_prompt[0].response_text

except trust3_client.exception.AccessControlException as e:
    print(f"Prompt blocked by access control: {e}")
    # Handle blocked prompt appropriately

3. LLM Response Validation

Validate LLM responses before returning them to users:

Python
try:
    # Get response from LLM
    llm_response = "LLM generated response"

    # Validate response with Trust3
    validated_response = trust3_shield_client.check_access(
        text=llm_response,
        conversation_type=ConversationType.REPLY,
        thread_id=thread_id  # Same thread_id used for prompt
    )

    # Extract the validated response
    safe_response = validated_response[0].response_text

except trust3_client.exception.AccessControlException as e:
    print(f"Response blocked by access control: {e}")
    # Handle blocked response appropriately

4. Vector Database Filter

Get vector database filter expressions for implementing data-level access control:

Python
import ast

try:
    # Generate unique thread ID for conversation tracking
    thread_id = str(uuid.uuid4())

    with trust3_shield_client.create_shield_context(application=app, username=user):
        # Get vector database filter expression
        filter_response = trust3_shield_client.get_vector_db_filter_expression(
            thread_id=thread_id
        )

        # By default, filter_response is a string. Convert it to a
        # dictionary if your vector database requires one.
        filter_dict = ast.literal_eval(filter_response)

        # Pass this filter to your vector database API call
        # Example: results = vector_db.search(query, filter=filter_dict)
        print(f"Vector DB Filter: {filter_dict}")

        # To audit vector database operations, pass the filter information back to Trust3
        # Get the current vector database information for auditing
        filter_response_dict = trust3_shield_client.get_current("vectorDBInfo")

        # Create a new shield context with vector database info for subsequent operations
        with trust3_shield_client.create_shield_context(
            application=app, 
            username=user, 
            vectorDBInfo=filter_response_dict
        ):
            # Perform your prompt/response validation with vector DB audit trail
            # This ensures Trust3 can track which data was accessed from the vector database
            pass
except trust3_client.exception.AccessControlException as e:
    print(f"Filter access denied: {e}")
    # Handle filter access denial appropriately

Complete Integration Example Using OpenAI

Python
from openai import OpenAI
import uuid
from trust3_client import client as trust3_shield_client
from trust3_client.model import ConversationType
import trust3_client.exception

# Initialize Trust3 client
trust3_shield_client.setup(frameworks=[])

TRUST3_SERVER_BASE_URL = "<your-trust3-server-base-url>"
SNOWFLAKE_PAT_TOKEN = "<your-snowflake-pat-token>"
TRUST3_AI_APP_API_KEY = "<your-trust3-ai-app-api-key>"

# Setup your application with Trust3
app = trust3_shield_client.setup_app(
    endpoint=TRUST3_SERVER_BASE_URL,
    application_config_api_key=TRUST3_AI_APP_API_KEY,
    snowflake_pat_token=SNOWFLAKE_PAT_TOKEN
)

def secure_ai_chat(user_prompt, username="testuser"):
    """
    Secure AI chat function with Trust3 integration
    """
    try:
        # Generate conversation thread ID
        thread_id = str(uuid.uuid4())

        # Create shield context for the user
        with trust3_shield_client.create_shield_context(
            application=app,
            username=username
        ):
            print(f"Original prompt: {user_prompt}")

            # 1. Validate user prompt
            validated_prompt = trust3_shield_client.check_access(
                text=user_prompt,
                conversation_type=ConversationType.PROMPT,
                thread_id=thread_id
            )

            safe_prompt = validated_prompt[0].response_text
            print(f"Validated prompt: {safe_prompt}")

            # 2. Send to LLM (example with OpenAI)
            openai_client = OpenAI()  # Ensure OPENAI_API_KEY is set

            response = openai_client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": safe_prompt}],
                temperature=0
            )

            llm_response = response.choices[0].message.content
            print(f"LLM response: {llm_response}")

            # 3. Validate LLM response
            validated_response = trust3_shield_client.check_access(
                text=llm_response,
                conversation_type=ConversationType.REPLY,
                thread_id=thread_id
            )

            safe_response = validated_response[0].response_text
            print(f"Final response: {safe_response}")

            return safe_response

    except trust3_client.exception.AccessControlException as e:
        print(f"Access denied: {e}")
        return "I'm sorry, I cannot process this request due to security policies."
    except Exception as e:
        print(f"Error: {e}")
        return "An error occurred while processing your request."

# Usage
if __name__ == "__main__":
    user_question = "What is my email address if it's abc@gmail.com?"
    response = secure_ai_chat(user_question)
    print(f"Bot: {response}")

Best Practices

1. Error Handling

Always handle AccessControlException gracefully:

Python
import logging

logger = logging.getLogger(__name__)

try:
    # Trust3 operations
    pass
except trust3_client.exception.AccessControlException as e:
    # Log the violation for audit purposes
    logger.warning(f"Access control violation: {e}")
    # Return appropriate user-friendly message
    return "Request cannot be processed due to security policies."

2. Thread ID Management

  • Use unique thread IDs for each conversation
  • Use the same thread ID for related prompt-response pairs
  • Consider using user session IDs or conversation IDs as thread IDs
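
The guidance above can be sketched as a small helper that derives a stable thread ID from your application's session ID. The in-memory dictionary is a stand-in for whatever session store you actually use:

```python
import uuid

# Stand-in for a real session store; maps session IDs to thread IDs
_session_threads: dict[str, str] = {}

def thread_id_for_session(session_id: str) -> str:
    """Return the same thread ID for every call within one session.

    Pass this ID to check_access() for both the PROMPT and the REPLY
    of an exchange so Trust3 can correlate them.
    """
    if session_id not in _session_threads:
        _session_threads[session_id] = str(uuid.uuid4())
    return _session_threads[session_id]
```

With this, related prompt/response pairs in one conversation automatically share a thread ID, while separate sessions stay isolated.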

3. Username Management

  • Use actual usernames for audit trails
  • For service accounts, use descriptive service names
  • Ensure usernames are consistent across your application

What Next?