How Call Graphs Gave Our LLM Superhuman Context
Suppose two unrelated modules each define a `compute()` function. Ctags will happily index both definitions. If our code calls `compute()`, a naive lookup in the Ctags index would return both candidate definitions. Showing the wrong one (say, a `compute()` function from another module) would be misleading and could derail the review.
Ctags gives us the first half of the answer: a fast, language-agnostic index of where `compute()`, `process()`, or any other function might be defined.
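Building that index is mostly plumbing. Here is a minimal sketch, assuming Universal Ctags is on the `PATH` (its JSON output emits one tag object per line); grouping tags by name is what makes the ambiguity visible:

```python
import json
import subprocess
from collections import defaultdict

def build_symbol_index(repo_root: str) -> dict[str, list[dict]]:
    """Index every definition Ctags can find, grouped by symbol name."""
    proc = subprocess.run(
        ["ctags", "--output-format=json", "--fields=+n", "-R", "-f", "-", repo_root],
        capture_output=True, text=True, check=True,
    )
    index: dict[str, list[dict]] = defaultdict(list)
    for line in proc.stdout.splitlines():
        tag = json.loads(line)
        if tag.get("_type") == "tag":  # skip pseudo-tag metadata lines
            index[tag["name"]].append(tag)
    return index

index = build_symbol_index(".")
# If payment/utils.py and analytics/metrics.py both define compute(),
# this naive lookup returns *both* candidates:
for tag in index["compute"]:
    print(tag["path"], tag.get("line"))
```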
Tree-sitter supplies the other half: real syntax trees. When it sees `result = compute(100)`, tree-sitter knows this is a function call, not a variable assignment or a string.
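To see the parse side in action, here is a minimal sketch using the `py-tree-sitter` bindings (the `tree-sitter` and `tree-sitter-python` packages are assumed; older versions of the bindings construct the `Parser` differently):

```python
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

parser = Parser(Language(tspython.language()))
tree = parser.parse(b"result = compute(100)")

def calls(node):
    """Yield every call-expression node in the parse tree."""
    if node.type == "call":
        yield node
    for child in node.children:
        yield from calls(child)

for call in calls(tree.root_node):
    name = call.child_by_field_name("function")
    print(name.text.decode())  # -> compute
```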
Put the two together and the ambiguity becomes concrete: a lookup for `compute()` might return three different candidates from completely different modules. Without further analysis, we're looking at a three-way ambiguity.
This is exactly what breaks naive approaches. Some tools would just pick the first match or show all three. But in code review, precision matters: showing the wrong `compute()` function could lead to completely incorrect review feedback.
Import resolution is what breaks the tie, and each language has its own rules. In Python, when we see `from payment.utils import compute`, we know exactly which `compute()` wins. Even with relative imports like `from ..utils import compute`, we resolve the path relative to the current file's location.
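A stripped-down resolver might look like the sketch below. It is a sketch only: real resolution must also consider `sys.path`, namespace packages, and re-exports through `__init__.py`:

```python
from pathlib import Path

def resolve_python_import(importing_file: Path, module: str, level: int = 0) -> Path | None:
    """Map an import to a source file.

    `level` is the number of leading dots, as in
    `from ..utils import compute` -> module="utils", level=2.
    """
    if level:  # relative import: walk up from the importing file's package
        base = importing_file.parent
        for _ in range(level - 1):
            base = base.parent
    else:      # absolute import: resolve from the repo root (assumed cwd)
        base = Path(".")

    candidate = base.joinpath(*module.split("."))
    for path in (candidate.with_suffix(".py"), candidate / "__init__.py"):
        if path.exists():
            return path
    return None

# from payment.utils import compute, inside src/app.py:
print(resolve_python_import(Path("src/app.py"), "payment.utils"))
# from ..utils import compute, inside payment/handlers/charge.py:
print(resolve_python_import(Path("payment/handlers/charge.py"), "utils", level=2))
```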
In JavaScript and TypeScript, when we see `import { compute } from './payment/utils'` or `const { compute } = require('./analytics/metrics')`, we trace through the module system. We handle default exports, named exports, and even barrel exports that re-export from other files.
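The resolution logic itself can live in Python even though it reasons about JS paths. Here is a sketch covering only relative specifiers; bare package imports, `node_modules`, and `tsconfig` path aliases are deliberately out of scope:

```python
from pathlib import Path

EXTENSIONS = (".ts", ".tsx", ".js", ".jsx")

def resolve_js_import(importing_file: Path, specifier: str) -> Path | None:
    """Resolve a relative specifier the way Node and bundlers do:
    exact file, then appended extensions, then a directory index file."""
    if not specifier.startswith("."):
        return None  # bare specifiers (npm packages) are out of scope here
    base = importing_file.parent / specifier
    candidates = [base] if base.suffix else []
    candidates += [base.with_name(base.name + ext) for ext in EXTENSIONS]
    candidates += [base / f"index{ext}" for ext in EXTENSIONS]
    for path in candidates:
        if path.is_file():
            return path  # a barrel file still needs its re-exports parsed
    return None

# import { compute } from './payment/utils', inside src/app.ts:
print(resolve_js_import(Path("src/app.ts"), "./payment/utils"))
```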
In Java, fully qualified calls like `com.payment.Utils.compute()` are obvious, but we also resolve simple `compute()` calls by checking the `import com.payment.Utils;` statements. We even handle wildcard imports, though they require checking each potential match.
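A sketch of that lookup, where `known_classes` stands in for a hypothetical project-wide index of fully qualified class names:

```python
import re

IMPORT_RE = re.compile(r"^import\s+([\w.]+?)(\.\*)?\s*;", re.MULTILINE)

def resolve_java_class(source: str, receiver: str, known_classes: set[str]) -> list[str]:
    """Map a bare class name like `Utils` in `Utils.compute()` back to its
    fully qualified class via the file's import statements."""
    matches = []
    for qualified, wildcard in IMPORT_RE.findall(source):
        if wildcard:
            # import com.analytics.*; -- every class in that package is a
            # potential match, so each one is checked against the index
            guess = f"{qualified}.{receiver}"
            if guess in known_classes:
                matches.append(guess)
        elif qualified.rsplit(".", 1)[-1] == receiver:
            matches.append(qualified)
    return matches

source = "import com.payment.Utils;\nimport com.analytics.*;"
print(resolve_java_class(source, "Utils", {"com.payment.Utils", "com.analytics.Utils"}))
# -> ['com.payment.Utils', 'com.analytics.Utils']
```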
Once each call is resolved to a single definition, the payoff is immediate: when reviewing a change to `compute()`, the LLM can see every call site and understand the specific context of each usage.
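Packaging that context for the model is straightforward. A hypothetical glue function (the snippet format here is illustrative, not our actual prompt) might pull a few lines around each resolved call site:

```python
from pathlib import Path

def call_site_context(sites: list[tuple[Path, int]], radius: int = 3) -> str:
    """Turn resolved call sites (file, 1-based line) into a context block
    for the review prompt."""
    chunks = []
    for path, lineno in sites:
        lines = path.read_text().splitlines()
        lo = max(0, lineno - 1 - radius)
        snippet = "\n".join(lines[lo : lineno + radius])
        chunks.append(f"### {path}:{lineno}\n{snippet}")
    return "\n\n".join(chunks)
```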
The beauty of this approach is its scalability. By building on Ctags’ language support, we immediately work with any language it supports — over 40 and counting. The import-resolution logic, while language-specific, follows common patterns that we can implement incrementally as needed.