Task Instructions
Code transformations at scale need consistent execution. Prompting an LLM once per file gives you variance - results differ based on how you phrase each prompt, what context you include, and whether you remember edge cases. When you're working across hundreds of files, that variance compounds.
Task instructions solve this by encoding your transformation logic once, then executing it systematically. You define the steps, the context gathering, the validation. Tern runs it the same way on every file.
Why this matters: Complex transformations need multiple passes. Read the file, understand patterns, store that information, apply changes using what you learned, validate results. Doing this 500 times by prompting individually means 500 chances for inconsistency. Encoding it means you improve the logic in one place and every execution benefits.
How it works: When you execute a task, Tern runs the instructions as a grid: one AI execution per file, per step. 5 steps on 500 files = 2,500 parallelized tasks. Each gets the same instructions but with its own context ({file}, stored variables from earlier steps). When something fails, you improve the instructions and re-run. Your logic compounds instead of starting fresh each time.
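For instance, three steps over three files produce nine tasks, one per cell (step names and file paths here are illustrative):
                  src/Button.test.tsx   src/Modal.test.tsx   src/Form.test.tsx
Gather Context    task 1                task 2               task 3
Migrate Test      task 4                task 5               task 6
Run Tests         task 7                task 8               task 9
Task 4 gets the same "Migrate Test" instructions as tasks 5 and 6, but with {file} set to src/Button.test.tsx and with whatever the Gather Context step stored for that file.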
Writing Instructions
Break your transformation into steps using # headers. Each step can have three parts:
# Step Name
Instructions for the AI
## Store
Context to save for later steps
## Validation
run: command to validate this step
AI validation instructions
Instructions
The content between the # header and the first ## subsection. This is what the AI reads to understand what to do. Include explanations, code examples (before/after), and references to template variables.
# Migrate FormModel to makeObservable pattern
Replace decorators with makeObservable pattern.
Update {file} to remove @observable, @action, @computed decorators.
Store
Mark what context to capture for later steps. Later steps can reference this stored data.
## Store
Output: { "testFile": "path", "lineCount": number }Validation
Validation
Two types: template commands (lines starting with run:) execute shell commands; LLM instructions (any other text) define AI validation. Both can be present - commands run first, then the LLM.
## Validation
run: npm test {file}
run: ! grep -q "mountWithTheme" {file} # shell operators work too
Verify all test cases pass and there are no TypeScript errors.
Template Variables
{file} - Current file being processed
# Fix imports in {file}
Update the import statements to use the new API.
## Validation
run: eslint {file}
{base_commit} - Base commit hash for comparisons
# Gather context
Compare {file} at {base_commit} with current version.
{subtasks} - Placeholder for nested subtasks
# Migrate all tests
For each test file:
{subtasks}
File Includes
Reference external files in your instructions using @file syntax. Tern resolves these at execution time and substitutes the file contents inline. This keeps your instructions DRY and maintainable - update the guidelines once, all tasks use the latest version.
@filename - Include file from current repo (must have extension or path)
# Update component
Follow the patterns in @migration-guide.md
Update {file} according to these guidelines.
@repo:filename - Include file from specific repo
# Apply API changes
Reference the API contract in @backend:api-spec.yaml
Update {file} to match the new API structure.
@org/repo:filename - Full repo specification
# Update integration
Follow @myorg/standards:coding-guidelines.md
Ensure {file} adheres to our standards.
What Gets Matched
The @file syntax is strict to avoid false matches:
- ✅ @config.yaml - has extension
- ✅ @docs/guide.md - has path separator
- ✅ @repo:file.txt - has repo prefix
- ❌ @username - no extension or path (looks like social handle)
- ❌ support@example.com - not preceded by whitespace (email address)
- ❌ @ - bare @ symbol
File Requirements
- Files must be text (not binary)
- Referenced repo must be included in the migration
- File path is relative to repo root
- For @file without a repo prefix, Tern uses the first repo in multi-repo setups
Storage in Runs
When you execute a task, Tern resolves all @file references and stores the complete instructions with file contents substituted. This means:
- Task runs are self-contained and reproducible
- You can review exactly what instructions were used
- Changes to referenced files don’t affect historical runs
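As a rough sketch (the placeholder in angle brackets stands in for the substituted text; the exact formatting Tern stores may differ), the earlier example
# Update component
Follow the patterns in @migration-guide.md
Update {file} according to these guidelines.
would be recorded with the run as something like:
# Update component
Follow the patterns in <full contents of migration-guide.md at execution time>
Update {file} according to these guidelines.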
Complete Example
Here’s a real set of instructions from a React 18 migration:
# Gather Context
Find the following information at the git sha {base_commit}:
1. Component source file (follow imports from test file)
2. The original test file (for comparison during migration)
3. A test file that has already been migrated to RTL
4. Helper functions used - both names and file paths
5. HOCs used by the component
Output as JSON using @ syntax for file references.
## Store
Cache this context for reuse.
# Migrate Test
Using the cached context, convert this Enzyme test to React Testing Library.
**Key conversions**:
- `mountWithTheme(<Component />)` → `renderWithTheme(<Component />)`
- Remove Enzyme imports, add RTL imports
{subtasks}
## Validation
run: ! grep -q "mountWithTheme" {file}
# Run Tests
Get the test to pass.
DO NOT RUN npm test directly. It will hang.
INSTEAD, run: CI=true yarn test {file}
## Validation
run: CI=true yarn test {file}
# Fix Lint
Fix ESLint violations in the migrated file.
## Validation
run: yarn eslint {file}
# Verify Types
Ensure TypeScript types are correct for this file.
## Validation
run: yarn tsc -p config/tsconfig.build.json | grep -q {file} && exit 1 || exit 0
Common Mistakes
Use singular forms:
✅ ## Validation
❌ ## Validations
✅ ## Store
❌ ## Stores
Stick to known template variables:
✅ {file}, {base_commit}, {subtasks}
❌ {custom_variable} # won't be substituted
JSON in instructions works fine - the parser ignores JSON when extracting template variables:
# Transform config
Change from: { "old": "format" }
To: { "new": "format" }File includes need proper syntax:
✅ @guidelines.md # has extension
✅ @docs/setup.md # has path
✅ @backend:api.yaml # has repo prefix
❌ @guidelines # no extension/path (won't match)
Tips
Keep steps focused - one transformation per step. Validate incrementally, not just at the end. Store context early in the workflow. Use specific shell commands that return clear pass/fail signals. Provide before/after examples so the AI understands the transformation.
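For example, a single focused step that follows these tips might look like this (a sketch only - the helper names and grep target come from the Enzyme-to-RTL example above and should be adapted to your codebase):
# Replace mountWithTheme with renderWithTheme
Convert the render call in {file}.
Before: mountWithTheme(<Button />)
After: renderWithTheme(<Button />)
## Validation
run: ! grep -q "mountWithTheme" {file}
run: CI=true yarn test {file}
Confirm the assertions use RTL queries rather than Enzyme wrapper methods.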
