Aspect Code: Giving AI Assistants the Context They're Missing

A lot of people are excited about AI coding tools.
A lot of people are also annoyed by them.

The complaints are pretty consistent.

That's the backdrop for Aspect Code.

When I looked at my own experience and what devs were posting, I kept coming back to three categories of pain:

  1. Breaking existing things — changes that pass at first glance but quietly break tests, invariants, or edge cases.
  2. Going off the rails — big, unnecessary refactors when you really just needed a small, local change.
  3. Long-term technical debt — code that "works" but is inefficient or spaghetti-like, and doesn't match the rest of the project.

All three point at the same root issue:
AI tools usually don't have any structured understanding of your codebase. They see tokens in a window, not the architecture you've actually built.


LLMs are good at patterns, bad at structure

Humans have a rough split between fast, pattern-based thinking and slower, more deliberate reasoning. LLMs are extremely strong at the first one. They're great at "this looks like that," and they've seen a lot of code.

What they don't have is a grounded model of your system: which modules depend on which, which invariants have to hold, and which parts are actually safe to change.

We're also probably not going to eliminate hallucinations entirely. Bigger models help, better prompts help, but the underlying mechanism is still statistical prediction.

So Aspect Code doesn't try to turn an LLM into something it's not. Instead, the idea is:

Give the model a structured description of your codebase, and let it use its pattern-matching ability on top of that.

Not "make the LLM symbolic," but "give it a symbolic map it can read."


A note on knowledge bases

I briefly worked at Cycorp, which builds a large, logic-based knowledge base about the world. The goal there is to encode facts and rules in a form that supports real reasoning: if X and Y are true, Z must follow.

Aspect Code is not that, and it's worth being clear about the difference.

That's a compromise, but a deliberate one.

So Aspect Code builds a richer internal model (dependency graph, symbol graph), then exports that into a form that:

  1. Fits how current agents are built, and
  2. Is still usable directly by humans in an editor.
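To make the export step concrete, here's a toy sketch of what "turn a graph into something an agent can read" could look like. The format and function name are hypothetical illustrations, not Aspect Code's actual output:

```python
def export_kb(graph: dict[str, set[str]]) -> str:
    """Render a dependency graph as a markdown list that both a
    human in an editor and an LLM agent can read directly.
    (Hypothetical format, for illustration only.)"""
    lines = ["# Dependency map", ""]
    for module in sorted(graph):
        deps = ", ".join(sorted(graph[module])) or "(none)"
        lines.append(f"- `{module}` depends on: {deps}")
    return "\n".join(lines)

# Tiny example graph: module name -> modules it imports.
kb = export_kb({"app": {"auth", "db"}, "auth": {"db"}})
```

The point of the design is that the richer internal model stays in the engine, while the exported artifact is deliberately plain: no custom tooling is needed to consume it.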

If the infrastructure for agents evolves to support more native KB integration, there's room to push in that direction. For now, "high-quality descriptions + consistent structure" already moves the needle a lot.


What Aspect Code actually does

Under the hood there are two closely connected halves:
the engine (analysis + findings) and the surfaces (KB files + VS Code UI + agent prompts).

Engine: analysis and findings

Aspect Code parses your repo (Python, TypeScript/JavaScript, Java, and C# to start) and builds a dependency graph, a symbol graph, and a set of findings about the code.

These findings aren't just a side feature. They're important for two reasons:

  1. They show you where the codebase diverges from its own patterns or expectations. Those are the places an LLM is most likely to get confused and break something.
  2. They feed into the knowledge base we generate. The KB doesn't just say "file A calls file B"; it can also say "this file is a critical dependency" or "this bit of code is risky."
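As a rough illustration of the kind of analysis involved (not Aspect Code's implementation), a minimal import graph for Python sources can be built with the standard ast module, and fan-in over that graph is one simple signal for "critical dependency":

```python
import ast

def module_imports(source: str) -> set[str]:
    """Return the top-level modules imported by a Python source string."""
    imports = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imports.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.add(node.module.split(".")[0])
    return imports

# Hypothetical three-module project, inlined as strings for the example.
sources = {
    "app": "import auth\nfrom db import session",
    "auth": "import db",
    "db": "import sqlite3",
}
graph = {name: module_imports(src) for name, src in sources.items()}

# A module imported by many others is a candidate "critical dependency".
fan_in = {m: sum(m in deps for deps in graph.values()) for m in sources}
```

Here `db` has the highest fan-in, so a finding might mark it as the file an assistant should touch most carefully.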

As a concrete example:
If you have a critical authorization check in one module, and several other modules call around it or reimplement parts of it, Aspect Code can surface that divergence as a finding and mark the check as a critical dependency in the KB.

That's useful for you when refactoring, and useful for an LLM trying not to accidentally skip security checks.
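A sketch of how such a rule might be expressed, with entirely hypothetical module names: given a dependency graph, flag any module that reaches the data layer without going through the auth layer.

```python
# Hypothetical project: "reports" talks to the data layer directly,
# skipping the module that holds the authorization check.
graph = {
    "app": {"auth", "db"},
    "reports": {"db"},
    "auth": {"db"},
}

# Finding: modules that depend on "db" but not on "auth".
findings = sorted(
    name for name, deps in graph.items()
    if "db" in deps and "auth" not in deps and name != "auth"
)
```

A real analysis would work at the call-graph and symbol level rather than whole modules, but the shape of the check is the same.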


Surfaces: KB files, VS Code UI, and agents

From that engine, Aspect Code generates three surfaces: the .aspect/ knowledge base, a VS Code extension, and agent instruction files.

.aspect/ knowledge base

The KB lives in .aspect/ and contains three focused files:

1. architecture.md — The Guardrail

A defensive guide to project structure and "do not break" zones.

2. map.md — The Context

A dense symbol index for complex edits.

3. context.md — The Flow

How data and requests move through the system.

All three files are generated by the engine, live in your repo, and are readable by humans as well as agents.

VS Code extension

There's also an interactive extension that sits on top of the same data.

The point of the Agent button is that you shouldn't have to open yet another chat, remember all the right context, and hand-craft a great prompt. The extension already has the graph and the findings; it can assemble that into something useful for your assistant.
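The assembly step the Agent button does can be sketched as a plain function. Everything here (names, parameters, the example findings) is hypothetical; the point is only that the prompt starts from structure the engine already has, not from a blank chat box:

```python
def build_agent_prompt(task: str, files: list[str], findings: list[str]) -> str:
    """Assemble a prompt from the user's task plus the files and
    findings the analysis already knows are relevant.
    (Hypothetical shape, for illustration only.)"""
    parts = [f"Task: {task}", "", "Relevant files:"]
    parts += [f"- {f}" for f in files]
    parts += ["", "Known risks (from analysis):"]
    parts += [f"- {w}" for w in findings]
    return "\n".join(parts)

prompt = build_agent_prompt(
    "Rename the session helper",
    files=["app.py", "auth.py"],
    findings=["db.session is a critical dependency (2 importers)"],
)
```

The assistant then starts with the same guardrails a careful human reviewer would bring to the change.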

Agent integrations

On top of that, Aspect Code generates instruction files for the AI coding assistants you already use.

Those instructions tell the assistant, in plain language, how the project is structured and where the fragile parts are.

Again, the engine is the same; I'm just exposing it in different forms: markdown for agents, UI and graphs for humans.


Why this approach (and not something bigger or smaller)

There's a long-term vision here:
a structural layer that any AI agent can rely on to understand and safely modify a codebase.

But right now, Aspect Code is intentionally scoped to a single repo.

The bet is that a good, consistent representation of a single repo can address all three categories of pain: fewer accidental breakages, fewer runaway refactors, less quiet technical debt.

And we can do that without asking you to rebuild your workflow from scratch.


Next steps

Aspect Code is aimed at people who are already using AI to write code.

If you recognize the patterns at the top of this post — broken tests, over-eager refactors, weird technical debt introduced by your assistant — then you're basically the target audience.

The short-term focus is simple.

Enter your email on the home page to be notified of releases, or check out the most recent benchmark.