Research

Why We Built a Learning Engine That Gets Smarter With Every Query

Skopx Team
April 18, 2026
11 min read


Traditional business intelligence is static. You build a dashboard, and it shows the same metrics forever. The tool does not learn that you check revenue every Monday, that you prefer bar charts over pie charts, or that when you say "the main database" you mean the PostgreSQL instance, not the analytics warehouse.

Skopx is different. Every interaction teaches the system something about your preferences, your workflows, and your business. Over time, the AI becomes a better analyst, one that understands your context without you having to repeat it.

The Inspiration: Autoresearch

Our learning engine is inspired by the autoresearch paradigm in machine learning: experiment, measure, keep what works, discard what does not, and accumulate improvements over time.

In traditional ML, this means training runs, loss curves, and hyperparameter sweeps. In Skopx, it means tracking which AI responses users find helpful, which patterns emerge across conversations, and which approaches consistently produce better outcomes.

How the Learning Engine Works

The engine operates on three levels:

Level 1: Explicit Feedback

When users click thumbs-up or thumbs-down on an AI response, we record the feedback along with the full context: the question asked, the tools used, the response generated, and the conversation history. This creates a labeled dataset of good and bad responses.
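As a rough sketch, a labeled record of this kind might look like the following. The field and class names here are illustrative, not Skopx's actual schema.

```python
from dataclasses import dataclass


@dataclass
class FeedbackRecord:
    """One labeled example: an AI response plus the context it was produced in."""
    question: str          # the question the user asked
    tools_used: list[str]  # tools invoked while answering
    response: str          # the response the AI generated
    history: list[str]     # prior turns in the conversation
    label: int             # +1 for thumbs-up, -1 for thumbs-down


record = FeedbackRecord(
    question="What was revenue last Monday?",
    tools_used=["sql_query"],
    response="Revenue on Monday was $42,310.",
    history=[],
    label=1,
)
```

Accumulating records like these yields exactly the labeled dataset of good and bad responses described above.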

Level 2: Implicit Signals

Beyond explicit feedback, we track behavioral signals that indicate satisfaction or frustration:

  • Follow-up questions: If a user asks the same question differently, the first response probably was not helpful.
  • Session length: Longer sessions after a response suggest the user found it valuable and kept working.
  • Action completion: If the AI sent an email and the user did not correct or resend it, the action was probably correct.
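The signals above can be folded into a single rough satisfaction estimate. This is a minimal sketch; the weights and thresholds are illustrative assumptions, not production values.

```python
def implicit_score(rephrased: bool,
                   session_minutes_after: float,
                   action_corrected: bool) -> float:
    """Combine behavioral signals into a rough satisfaction score in [0, 1].

    Weights are illustrative only.
    """
    score = 0.5
    if rephrased:                    # user asked the same question differently
        score -= 0.3
    if session_minutes_after > 10:   # user kept working after the response
        score += 0.2
    if action_corrected:             # user corrected or redid the AI's action
        score -= 0.3
    return max(0.0, min(1.0, score))
```

A response that was never rephrased, preceded a long working session, and whose actions stood uncorrected scores well above the 0.5 prior; a rephrased, corrected one scores near zero.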

Level 3: Pattern Discovery

Every 24 hours, the learning engine analyzes accumulated feedback and discovers patterns:

  • Which prompt strategies produce the best responses for this user
  • Which data visualization formats they prefer
  • Which tools they use most frequently for which types of questions
  • Which query styles produce the most accurate SQL

These patterns are scored using an exponential moving average (EMA) with debiasing, similar to how training loss is smoothed in deep learning. Patterns that consistently perform well are promoted; patterns that produce mixed results are gradually retired.
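A debiased EMA of this kind works like the bias-corrected moving averages in the Adam optimizer: early estimates are divided by a correction factor so they are not dragged toward the zero initialization. A minimal sketch (the decay rate is an assumption):

```python
from typing import Iterable, Iterator


def debiased_ema(scores: Iterable[float], beta: float = 0.9) -> Iterator[float]:
    """Exponential moving average with bias correction.

    Without the division by (1 - beta**t), early averages would be
    biased toward the zero initialization.
    """
    ema, t = 0.0, 0
    for s in scores:
        t += 1
        ema = beta * ema + (1 - beta) * s
        yield ema / (1 - beta ** t)  # debiased estimate
```

Feeding a constant stream of perfect scores yields a debiased estimate of exactly that constant from the very first step, which is precisely what the correction buys you.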

Adaptive Features

The learning engine adapts several aspects of the AI's behavior:

  • Anomaly threshold: If a user dismisses too many anomaly alerts (false positives), the detection threshold tightens automatically. If they acknowledge all alerts, the threshold stays sensitive.
  • Memory relevance: The engine learns which types of conversation context are worth remembering and which are noise.
  • Follow-up style: Some users prefer proactive follow-up suggestions after every response. Others find them annoying. The engine adapts.
  • Visualization preferences: Bar vs. line vs. table. The engine learns which formats each user prefers for different data types.
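The anomaly-threshold adaptation in the first bullet can be sketched as a simple feedback rule. Everything here (step size, cap, the 50% dismiss-rate cutoff) is an illustrative assumption:

```python
def adapt_threshold(threshold: float,
                    dismissed: int,
                    acknowledged: int,
                    step: float = 0.1,
                    max_threshold: float = 5.0) -> float:
    """Raise the anomaly threshold when alerts are mostly dismissed
    (false positives); leave it sensitive when the user finds alerts useful.
    """
    total = dismissed + acknowledged
    if total == 0:
        return threshold              # no evidence yet: leave unchanged
    dismiss_rate = dismissed / total
    if dismiss_rate > 0.5:            # mostly false positives: alert less often
        return min(threshold + step, max_threshold)
    return threshold                  # alerts are being acknowledged: stay put
```

The same shape of rule, with different signals, covers the other adaptive features: observe a preference signal, nudge a parameter, cap the nudge.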

Safety Mechanisms

A learning system that optimizes for user satisfaction could go wrong in obvious ways. We built three safety mechanisms:

  1. Crash detection: If user satisfaction drops more than 30% over a rolling window, the engine reverts to the previous stable configuration. This is analogous to NaN loss detection in training.

  2. Simplicity criterion: The engine prefers simpler patterns over complex ones. A pattern that applies to 80% of cases with 3 rules is preferred over one that applies to 95% of cases with 15 rules.

  3. Warmdown for demoted patterns: When a pattern stops performing well, it is not deleted immediately. It is gradually retired over several days, giving the system time to confirm the demotion is not a false signal.
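The crash-detection mechanism can be sketched as a rolling-window check against a stable baseline. The window size and the class API are assumptions; only the 30% drop criterion comes from the text above.

```python
from collections import deque


class CrashGuard:
    """Signal a revert to the last stable configuration when average
    satisfaction over a rolling window falls more than 30% below baseline.
    """

    def __init__(self, baseline: float, window: int = 50):
        self.baseline = baseline
        self.scores: deque[float] = deque(maxlen=window)

    def record(self, score: float) -> bool:
        """Record one satisfaction score; return True if a revert is due."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False              # not enough evidence yet
        avg = sum(self.scores) / len(self.scores)
        return avg < 0.7 * self.baseline  # more than a 30% drop
```

Like NaN-loss detection in a training loop, the guard never tries to diagnose *why* satisfaction collapsed; it just rolls back and lets the engine re-learn from the stable configuration.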

Results

After deploying the learning engine to production users:

  • Response satisfaction improved by 23% over each user's first 30 days
  • Follow-up question rates (a proxy for "the first answer was not good enough") decreased by 34%
  • Average tokens per conversation decreased by 18% as the engine learned to give more concise responses to users who prefer brevity

The Cost

The learning engine costs approximately $0.001 per pattern discovery cycle (using Claude Haiku for analysis). All scoring and adaptation logic runs locally with zero API calls. For a typical user, the engine runs one discovery cycle per day, costing about $0.03 per month.

Open Questions

We are actively researching several extensions:

  • Cross-user learning: Can patterns discovered for one user benefit similar users without compromising privacy?
  • Active experimentation: Should the engine occasionally try approaches it has not seen, to discover potentially better strategies?
  • Meta-learning: Can the engine learn how to learn faster for new users by analyzing onboarding patterns across the user base?

These are hard problems with real privacy and safety implications. We will publish our findings as we make progress.
