your agent reads
6,000 tokens.
it needs 180.

Local code intelligence that fits in a context window.

// tree-sitter AST → SQLite graph → one CLI call → structured signal.

bash        $ curl -fsSL https://inari.hermestech.uk/install.sh | sh
powershell  > irm https://inari.hermestech.uk/install.ps1 | iex
-32% cost | -35% tokens | +67% edits | 2x navigation

Open source | MIT | v1.1.1 | View on GitHub

The problem

Your agent greps. It cats. It reads the whole file to find one function signature.

Without structural context, every refactoring task starts with the agent blindly loading source files. 8,000-12,000 tokens burned on navigation before a single edit. That's your money evaporating into context windows.

inari parses your code with tree-sitter, stores the result in a local SQLite graph, and lets agents query call hierarchies, type relationships, and dependency chains in ~200 tokens. No server. No API key. Just a binary.

Without inari -- 25:1 navigation-to-edit ratio

$0.58 per task | 2.4 edits

With inari -- 12:1 navigation-to-edit ratio

$0.39 per task | 4.0 edits

32% cheaper | 67% more edits | every read counts

See it in action

180 tokens. Class structure, caller counts, type signatures.

inari sketch PaymentService returns everything an agent needs to make correct edits -- without reading a single source file. The agent knows who calls what, what depends on what, and what's internal.

$ inari sketch PaymentService
PaymentService  class   src/payments/service.ts:12-89
---------------------------------------------------------------------
deps:     StripeClient, UserRepository, Logger, PaymentConfig
extends:  BaseService
impl:     IPaymentService
methods:
  processPayment  (amount: Decimal, userId: string) Promise<PaymentResult> [11 callers]
  refundPayment   (txId: string, reason?: string) Promise<boolean> [3 callers]
  validateCard    (card: CardDetails) ValidationResult [internal]
  getTransaction  (id: string) Transaction | null [2 callers]
fields:
  private readonly client: StripeClient
  private          repo:   UserRepository
  private          logger: Logger
// ~180 tokens | source file is 6,200 tokens
// agent reasoning -- based on this output:
claude code (claude-sonnet-4-6) -- agent decisions:
processPayment has 11 callers -- will run inari refs processPayment before touching the signature.
depends on StripeClient -- uses the correct import without reading the imports file.
validateCard is internal -- correctly excludes it from the public interface refactor.
did not read service.ts -- the 6,200-token source file was never loaded. saved ~6,020 tokens on this single command.
Tokens saved vs loading source directly: ~6,020

Command reference

18 commands. Each one replaces a dozen file reads.

Every output is structured for LLM consumption -- labelled fields, consistent separators, --json on everything. No parsing needed.

inari sketch <symbol>

Compressed structural overview. Methods with caller counts, deps, type signatures. ~200 tokens instead of ~4,000.

Use before: reading source or editing a class.

inari refs <symbol> [--kind]

Every reference grouped by kind -- instantiations, imports, type annotations, call sites -- with file and line.

Use before: changing a function signature.

inari callers <symbol> [--depth]

Direct and transitive callers. At depth 1, shows who calls this. At depth 2+, shows the blast radius -- which entry points are ultimately affected.

Use before: any refactor that changes a public API.
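Put together, a pre-refactor check might chain sketch, refs, and callers. A sketch of that sequence, with hypothetical symbol names, assuming the project is already indexed:

```shell
# Illustrative workflow before changing a public signature.
# Symbol names are hypothetical; assumes `inari index` has been run.
inari sketch PaymentService              # structure + caller counts first
inari refs processPayment                # every reference, grouped by kind
inari callers processPayment --depth 2   # transitive blast radius
```

Three commands, three structured outputs, zero source files loaded.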

inari find "<intent>" [--kind]

Semantic search by what code does, not what it's named. Local embeddings -- no API key, works offline.

Use when: navigating unfamiliar code by intent.

inari deps <symbol> [--depth]

What does this symbol depend on? Direct imports, called functions, extended classes. Transitive with --depth.

Use when: understanding what must exist first.

inari trace <symbol> [--depth]

Show how requests reach a symbol. Traces the call graph backward from target to entry points. Use for debugging -- see how a bug is triggered.

Use when: debugging, or following how a request reaches its target.

inari map [--json]

Full repository overview in ~500-1000 tokens. Entry points, core symbols ranked by caller count, architecture layers. Start here for complex tasks.

Use when: starting work on an unfamiliar codebase.

inari entrypoints [--json]

List API controllers, workers, and event handlers. Symbols with zero incoming calls -- the starting points for every request flow.

Use when: understanding how requests enter the system.
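map, entrypoints, and find combine into an orientation pass on a fresh clone. A sketch, assuming an indexed project; the search query is illustrative:

```shell
# Illustrative first contact with an unfamiliar codebase.
inari map                      # architecture + core symbols, ~500-1000 tokens
inari entrypoints              # where requests enter the system
inari find "password hashing"  # hypothetical intent query, local embeddings
```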

inari index [--full] [--watch]

Build or refresh the code index. Incremental by default. --full rebuilds from scratch. --watch auto re-indexes on file changes.

Use: once on setup. --watch during development.

inari rdeps <symbol> [--depth]

What depends on this symbol? Reverse dependency traversal. Know the blast radius before deleting or renaming anything.

Use before: deleting or renaming a symbol.

inari status [--json]

Index health check. Symbol count, file count, last indexed time, stale files. Know if your index is fresh before range-based edits.

Use when: checking if the index is stale.

inari update [--check] [--json]

Self-update from GitHub Releases. Downloads and replaces the binary in-place. --check to see available version without installing.

Use: when a new version is available.

Setup

curl. init. index. Done.

Everything lives in .inari/ -- a SQLite DB and a TOML config. No daemon, no Docker, no cloud dependency. Gitignore it and forget it.
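The ignore rule mentioned above, as a fragment for your repo root:

```
# .gitignore -- keep the local index out of version control
.inari/
```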

1. Install
   $ curl -fsSL https://inari.hermestech.uk/install.sh | sh
2. Index your project
   $ inari init && inari index
3. Add to your agent's config
CLAUDE.md, .cursorrules, copilot-instructions.md -- paste the snippet. Agents use it automatically.
CLAUDE.md
## Code Navigation

This project uses Inari for code intelligence.
Start with `inari map`, then drill down.

**Orientation:**
- `inari map` -- full repo overview (~500-1000 tokens)
- `inari entrypoints` -- API controllers, workers

**Before editing:**
- `inari sketch <symbol>` -- structural overview
- `inari refs <symbol>` -- all references with file + line
- `inari callers <symbol>` -- blast radius

**Finding code and flow:**
- `inari find "<query>"` -- search by intent
- `inari trace <symbol>` -- entry-point-to-target paths

// always `inari sketch` before reading source.
// auto re-index: `inari index --watch`

Scale up

Monorepo? Polyglot? Both?

Workspaces federate queries across independent project graphs. Your TypeScript frontend, Rust backend, and Python ML service -- one --workspace flag queries them all. Each keeps its own .inari/ index.

~/platform/
├── web-app/       [TS]  4,102 sym   live
├── api-gateway/   [Go]  2,341 sym   live
└── user-service/  [Rs]  1,823 sym   live
$ inari refs PaymentService --workspace
→ found 23 refs across 3 projects
inari workspace init [--name]

Discover projects and create a workspace manifest. Each project keeps its own .inari/ index.

Use: once per workspace.

inari map --workspace [--json]

Unified map across all members. Entry points, core symbols, architecture -- tagged by project.

Use: orienting across repos.

inari workspace index --watch

Watch all members. One watcher per project, one command to rule them all. Ctrl+C stops everything.

Use: during development.
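The three workspace commands compose into one setup pass. A sketch, with the directory layout assumed from the example above:

```shell
# Illustrative workspace setup (paths hypothetical).
cd ~/platform
inari workspace init           # discover projects, write the manifest
inari map --workspace          # unified cross-repo orientation
inari workspace index --watch  # one watcher per member project
```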

Language support

tree-sitter does the parsing. You add a .scm file.

Each language is a plugin: grammar + two query files (symbols.scm, edges.scm). TypeScript, C#, Python, Rust, Go, and Java ship built-in. Adding a new language is ~200 lines of Go.

TypeScript | C# | Python | Rust | Go | Java -- all ready