What we're thinking about.
The Context Wall
The naive approach to grounded, source-cited reasoning hits four walls in sequence: latency, cost, prompt-following degradation, and a hard context limit. Breaking through them forced us to rethink what a conversation actually is.
Equity Research in the Age of AI: What Actually Needs to Change
Equity research is built on precision, not just insight. AI doesn't need to replace judgment to be transformative — it needs to remove operational drag while keeping every number verifiably correct.
Agent Ontology: What It Actually Takes to Build AI You Can Trust
Building AI you can trust isn't a prompting problem or a model selection problem—it's an architecture problem. Here's what building Kepler's agent ontology has taught us.
Context Is the Easy Part
Everyone's talking about context engineering. But context engineering isn't a context problem. It's an engineering problem.
Trust in the Age of AI
What does it mean to trust AI? Accuracy, security, and credulity are three interrelated challenges, and forcing AI to show its work changes the nature of the output itself.
The AI Era of Finance - A Practitioner's View
Large language models excel at summarizing ideas, but struggle with the one thing finance can’t compromise on: numerical truth. Without grounding, traceability, and verification, AI risks accelerating errors rather than insight.
Why I Joined Kepler
From Meta to four seed-stage startups, Susannah has learned what actually matters when building from zero to one. In this post, Kepler’s Founding Engineer shares why she joined Kepler and why building fully traceable AI systems is a challenge she couldn’t pass up.
Introducing Kepler
We're building the infrastructure that gives AI a foundation you can build on, so everyone can work from a platform they truly trust: verified data, traceable to its source, precise enough to follow wherever it leads.