AI Integration

Tern uses AI for planning migrations, executing workflows, and learning from results.

AI for Planning

Discovery work is tedious but necessary. You need to know what breaking changes exist, which files use deprecated APIs, and how patterns vary across the codebase. Reading changelogs and searching manually takes hours. Asking an LLM in chat gives you answers but no structure to act on.

Chat with Tern to build a migration plan. Tell it “upgrade to React 18” and it searches your codebase, reads framework docs, identifies breaking changes, and creates a structured task hierarchy. Each task has a search query (finds all instances), notes for transformation logic, and a place in the overall plan.
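
As a sketch, that task hierarchy might be modeled like this, using hypothetical TypeScript types rather than Tern's actual schema:

```typescript
// Hypothetical shape for the plan Tern builds; names are illustrative.
interface MigrationTask {
  title: string;
  searchQuery: string;        // finds all instances of the pattern
  notes: string;              // transformation logic to apply at each instance
  subtasks: MigrationTask[];  // place in the overall plan
}

const plan: MigrationTask = {
  title: "Upgrade to React 18",
  searchQuery: "",
  notes: "Top-level migration; the work happens in the subtasks.",
  subtasks: [
    {
      title: "Move to the createRoot API",
      searchQuery: "ReactDOM.render(",
      notes: "Replace ReactDOM.render(el, container) with createRoot(container).render(el).",
      subtasks: [],
    },
  ],
};

console.log(`${plan.title}: ${plan.subtasks.length} task(s)`);
```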

The output isn’t just information - it’s a plan you can execute. AI handles systematic discovery across hundreds of files. You handle the judgment calls about scope and approach.

AI for Execution

Applying the same transformation across hundreds of files reveals variance. Each file has different patterns, edge cases, and surrounding context. Prompting an LLM individually for each one means variance in your prompts too - you forget details, phrase things differently, include different examples.

Write your transformation instructions once. Tern executes them as a grid: one AI task per file, per step. 500 files with 5 steps = 2,500 parallelized executions. Each gets the same instructions but with its own context - file path, stored variables from earlier steps, validation results.
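
As a mental model, a sketch of that grid with assumed names (not Tern's real API) might look like this; files run in parallel while steps within a file run in order, so later steps can read variables stored earlier:

```typescript
// Sketch of the grid model: one cell per (file, step) pair.
type Status = "pending" | "success" | "failed";

interface Cell {
  file: string;
  step: number;
  context: Record<string, string>; // variables stored by earlier steps
  status: Status;
}

async function runGrid(
  files: string[],
  stepCount: number,
  execute: (cell: Cell) => Promise<Status>, // one AI task per cell
): Promise<Cell[]> {
  // Files run in parallel; steps within a file run in order so each
  // step can read what the steps before it stored in the shared context.
  const perFile = await Promise.all(
    files.map(async (file) => {
      const context: Record<string, string> = {};
      const cells: Cell[] = [];
      for (let step = 0; step < stepCount; step++) {
        const cell: Cell = { file, step, context, status: "pending" };
        cell.status = await execute(cell);
        cells.push(cell);
      }
      return cells;
    }),
  );
  return perFile.flat(); // 500 files x 5 steps = 2,500 cells
}
```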

The AI reads each file, understands its specific patterns, applies your transformation logic, validates results (tests, syntax checks), stores outcomes. You watch the grid fill with status badges. Green = success, red = needs attention. When something fails, you improve the instructions once and re-run all failures with better logic.
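
A single cell of that grid might do something like the sketch below, where the transform callback stands in for the AI call and validation is a type check via tsc; both are assumptions, not Tern's actual implementation:

```typescript
import { execFileSync } from "node:child_process";
import { readFileSync, writeFileSync } from "node:fs";

type Status = "success" | "failed";

// Sketch of one grid cell: read the file, let the AI apply the shared
// instructions, validate the result, and report a status for the badge.
async function runCell(
  file: string,
  instructions: string,
  transform: (source: string, instructions: string) => Promise<string>,
): Promise<Status> {
  const source = readFileSync(file, "utf8");
  writeFileSync(file, await transform(source, instructions));
  try {
    // Validation step: a syntax/type check here; real runs might run tests.
    execFileSync("npx", ["tsc", "--noEmit", file], { stdio: "ignore" });
    return "success"; // green badge
  } catch {
    return "failed"; // red badge: needs attention
  }
}
```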

This shifts where you work. Instead of prompting 500 times and getting 500 slightly different approaches, you refine one set of instructions and execute systematically. The transformation logic compounds - each improvement applies everywhere.

AI for Learning

Your first pass rarely gets everything right. Instructions that work for 400 files fail on 100 because you missed an edge case, weren’t specific enough, or the pattern varied more than you expected. With individual prompting, you’d debug each failure separately - same work, 100 times.

After a workflow runs, Tern analyzes patterns in the failures. It identifies commonalities: “These 20 files all failed at step 3 because your instructions said ‘remove all decorators’ but didn’t specify which ones.” It suggests specific improvements: “Change line 15 to: ‘Remove @observable, @action, and @computed decorators.’”

You review the suggestions, apply what makes sense, re-run on failures. Your instructions get more precise. Success rate improves. The key difference: you’re iterating at the instruction level, not the execution level. One improvement fixes all similar failures. Your understanding of the transformation deepens. The logic compounds.
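
A sketch of that loop, with assumed names rather than Tern's API: cluster failures by step and error so one revised instruction covers the whole cluster, then re-run only the red cells:

```typescript
type Status = "success" | "failed";

interface Cell {
  file: string;
  step: number;
  status: Status;
  error?: string;
}

// Cluster failed cells by (step, error) so one instruction fix can be
// applied to every similar failure at once.
function groupFailures(cells: Cell[]): Map<string, Cell[]> {
  const groups = new Map<string, Cell[]>();
  for (const cell of cells) {
    if (cell.status !== "failed") continue;
    const key = `step ${cell.step}: ${cell.error ?? "unknown"}`;
    groups.set(key, [...(groups.get(key) ?? []), cell]);
  }
  return groups;
}

// Re-run one cluster with the revised instruction; successful files stay
// untouched, so each iteration only spends effort on the remaining reds.
async function rerunCluster(
  cluster: Cell[],
  revisedInstruction: string,
  execute: (cell: Cell, instruction: string) => Promise<Status>,
): Promise<void> {
  await Promise.all(
    cluster.map(async (cell) => {
      cell.status = await execute(cell, revisedInstruction);
    }),
  );
}
```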

This is why systematic beats ad-hoc. Start with rough instructions, see what breaks, improve based on real results, re-execute with better logic. Each iteration teaches you about the problem and improves future runs.

Bring Your Own Key

Tern is BYOK (bring your own key). Your API keys stay local in the CLI and are never sent to Tern’s servers; requests are proxied through the CLI to your chosen provider. See AI Models for configuration.