Originally published on Kepler
AI models can produce numbers that are incorrect, untraceable, and unverifiable. Asking the same question twice can yield different answers, and there is no way to distinguish right outputs from wrong ones. This unreliability poses serious risks in industries such as healthcare, legal, insurance, government, and finance, where numerical accuracy directly impacts outcomes.
Through conversations with 137 financial firms, including private equity shops, hedge funds, and investment banks, a consistent pattern emerged: everyone wants to use AI, but nobody trusts it. As one managing director put it, “I can’t put a number in front of a client if I can’t show where it came from.”
The problem is fundamental: AI hallucination can’t be solved simply by making the model smarter. Better training data and guardrails reduce errors but don’t eliminate them.
Kepler’s solution is distinct: prevent AI from producing numbers altogether. When users ask Kepler questions, AI interprets intent while deterministic code retrieves data and performs calculations. AI and code operate in separate lanes, eliminating the possibility of AI-generated numerical errors. Every figure traces to its source; every answer is reproducible.
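To make the two-lane idea concrete, here is a minimal sketch of the pattern in Python. The names (`QueryIntent`, `interpret`, `SOURCE_DATA`) are hypothetical illustrations, not Kepler’s actual API: the point is that the AI lane may only emit a structured intent, while every number and its provenance come from deterministic code.

```python
from dataclasses import dataclass

# Hypothetical structured intent: the only artifact the AI lane is allowed
# to produce. It contains references to known metrics, never raw numbers.
@dataclass(frozen=True)
class QueryIntent:
    metric: str   # e.g. "revenue"
    entity: str   # e.g. "ACME Corp"
    period: str   # e.g. "2023-Q4"

# Deterministic lane: governed source data. A real system would query a
# warehouse; an in-memory table stands in for it here.
SOURCE_DATA = {
    ("revenue", "ACME Corp", "2023-Q4"): {
        "value": 12_400_000,
        "source": "ledger:acme/q4",   # provenance travels with the value
    },
}

def interpret(question: str) -> QueryIntent:
    """AI lane (stand-in): map a natural-language question to a structured
    intent. A production system would call an LLM here, constrained to this
    schema -- the model never emits a free-form figure."""
    # Toy stand-in for a real model call.
    return QueryIntent(metric="revenue", entity="ACME Corp", period="2023-Q4")

def answer(question: str) -> dict:
    intent = interpret(question)  # AI lane: intent only
    row = SOURCE_DATA[(intent.metric, intent.entity, intent.period)]  # code lane
    return {
        "question": question,
        "value": row["value"],    # the number comes from data, not the model
        "source": row["source"],  # every figure traces to its source
    }

print(answer("What was ACME Corp's revenue in Q4 2023?"))
```

Because the lookup and any arithmetic happen in ordinary code, running the same question twice returns the same value with the same source reference, which is what makes the answer reproducible and auditable.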
The team draws on experience from Palantir, where I spent seven years building data infrastructure, and Citadel, where co-founder John developed user-facing data products. Team members come from Meta, Bloomberg, and Stanford, and the company is backed by the founders of MotherDuck and dbt, along with figures from Facebook AI Research and OpenAI.
The name invokes Johannes Kepler, whose groundbreaking discoveries rested on trusted astronomical data; the parallel positions trustworthy data infrastructure as the foundation for AI advancement.