
Our Approach to AI Safety in Enterprise Business Intelligence

Skopx Team
April 7, 2026
11 min read

AI safety in enterprise BI is different from AI safety in consumer chatbots. The risks are not about generating offensive text. They are about generating wrong numbers that drive wrong decisions, executing unintended actions on production systems, and leaking sensitive data across organizational boundaries.

This article describes the specific safety challenges we face at Skopx and how we address them.

Challenge 1: Numerical Accuracy

When an AI says "revenue was $2.3M last quarter," that number must be correct. A 5% error in a consumer chatbot is a minor annoyance. A 5% error in a financial report can trigger wrong decisions worth millions.

Our approach:

  • Every number comes from a verifiable SQL query or API call
  • Users can expand tool calls to see the exact query and raw data
  • The AI is instructed never to fabricate or estimate numbers
  • Self-correction: if a query fails, the AI retries with adjusted SQL up to 3 times
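The self-correction step can be sketched as a bounded retry loop. This is an illustrative sketch, not Skopx's implementation: `run_query` and `revise_sql` are hypothetical stand-ins for the real query executor and the model call that rewrites failing SQL.

```python
MAX_RETRIES = 3  # matches the "up to 3 times" policy above

def execute_with_retries(sql, run_query, revise_sql, max_retries=MAX_RETRIES):
    """Run a model-generated query, asking the model to adjust it on failure."""
    last_error = None
    for _ in range(1 + max_retries):  # first attempt plus up to 3 retries
        try:
            return run_query(sql)
        except Exception as exc:
            last_error = exc
            sql = revise_sql(sql, str(exc))  # model rewrites the failing SQL
    raise RuntimeError(f"query failed after {max_retries} retries: {last_error}")
```

Bounding the retries matters: an unbounded loop on a query that can never succeed would burn API budget, which ties into the cost controls discussed later in this article.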

Challenge 2: Action Safety

Skopx agents can send emails, create tickets, modify calendar events, and post to Slack. Each of these actions has real consequences, and some, like sending an email or posting a message, cannot be undone.

Our approach:

  • Actions are only taken when the user explicitly requests them
  • The AI never takes proactive actions without confirmation
  • Terminal actions (send email, post message) trigger an early exit from the tool loop to prevent unnecessary follow-up API calls
  • User identity (name and email) is injected into the system prompt to prevent the AI from fabricating sender names
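The confirmation gate and terminal-action early exit can be sketched together. This is a minimal illustration; the tool names and the `confirmed` flag are assumptions, not Skopx's actual schema.

```python
# Irreversible actions that should end the tool loop once they succeed.
TERMINAL_ACTIONS = {"send_email", "post_message"}

def dispatch(action, confirmed, execute):
    """Run an action only with explicit user confirmation, and report
    whether the tool loop should exit early after a terminal action."""
    if not confirmed:
        return {"status": "blocked", "reason": "user confirmation required"}
    result = execute(action)
    return {
        "status": "done",
        "result": result,
        "exit_loop": action["name"] in TERMINAL_ACTIONS,
    }
```

The design choice here is that confirmation is checked in the dispatch layer, outside the model's control, so no prompt content can skip it.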

Challenge 3: Prompt Injection

A malicious user could craft inputs designed to make the AI ignore its instructions, reveal system prompts, or execute unauthorized actions.

Our approach:

  • System prompts use explicit boundary markers
  • Tool inputs are sanitized before execution
  • The AI cannot access other users' data regardless of prompt content (enforced by Row-Level Security at the database level)
  • Rate limiting prevents abuse
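Boundary markers work by fencing untrusted text so the model can tell instructions from data. The sketch below is illustrative only; the marker strings are hypothetical, not Skopx's actual delimiters, and real sanitization covers more than marker removal.

```python
# Hypothetical boundary markers for untrusted user input.
BEGIN, END = "<<<USER_INPUT>>>", "<<<END_USER_INPUT>>>"

def wrap_untrusted(text):
    """Strip marker-lookalikes from user text, then fence it so injected
    text cannot 'close' the user-input region early and pose as instructions."""
    cleaned = text.replace(BEGIN, "").replace(END, "")
    return f"{BEGIN}\n{cleaned}\n{END}"
```

Note that markers alone are a mitigation, not a guarantee, which is why the list above pairs them with database-level Row-Level Security: even a successful injection cannot read data the session is not authorized to see.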

Challenge 4: Data Leakage

In a multi-tenant system, data from one organization must never appear in another organization's context.

Our approach:

  • Row-Level Security on all database tables
  • Source ownership tracking for connected databases
  • Memory isolation per user ID
  • Token encryption with per-token random salt
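The memory-isolation invariant can be illustrated with a toy in-process store. In production the list above says this is enforced in the database via Row-Level Security; this sketch (class and method names are hypothetical) only demonstrates the invariant that every read is scoped to one tenant.

```python
class MemoryStore:
    """Toy per-user memory store keyed by (org_id, user_id)."""

    def __init__(self):
        self._store = {}  # (org_id, user_id) -> list of memories

    def add(self, org_id, user_id, memory):
        self._store.setdefault((org_id, user_id), []).append(memory)

    def get(self, org_id, user_id):
        # Reads are scoped to the caller's own (org, user) pair; there is
        # no API that returns another tenant's rows.
        return list(self._store.get((org_id, user_id), []))
```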

Challenge 5: Cost Control

AI API calls are expensive. A runaway tool loop can burn through a user's API budget in seconds.

Our approach:

  • Tool loop limited to 10 iterations
  • Early exit after successful terminal actions
  • Essential tool filtering (14 Gmail tools instead of 62)
  • Smart toolkit detection (load only relevant tools per message)
  • Prompt caching (90% savings on repeated system prompts)
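The first two controls, the iteration cap and the terminal-action early exit, combine into a single bounded loop. This is a hedged sketch under assumed names: `call_model` and `run_tool` stand in for the real model API and tool executor.

```python
MAX_ITERATIONS = 10  # hard cap on tool-loop iterations
TERMINAL_ACTIONS = {"send_email", "post_message"}

def tool_loop(call_model, run_tool):
    """Run the agent loop with a hard iteration cap and an early exit
    once a terminal (irreversible) action succeeds."""
    transcript = []
    for _ in range(MAX_ITERATIONS):
        step = call_model(transcript)
        if step["type"] == "final_answer":
            return step["text"], transcript
        result = run_tool(step)
        transcript.append((step, result))
        if step["name"] in TERMINAL_ACTIONS and result.get("ok"):
            # Skip the follow-up API call the model would otherwise make.
            return "action completed", transcript
    return "iteration limit reached", transcript
```

The cap bounds the worst case at 10 model calls, while the early exit trims the common case: once the email is sent, there is nothing left for the model to do.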

What We Are Still Working On

AI safety is not a destination; it is a continuous process. We are actively working on:

  • Automated testing for prompt injection vulnerabilities
  • Anomaly detection on AI behavior (detecting when the model behaves unusually)
  • User-configurable action permissions (e.g., "allow email sending but block calendar modifications")
  • Formal verification of data isolation properties
