# AI Agent Operations Templates
This page collects small templates for operating AI coding agents without turning project instructions into long control-plane documents. Use them as starting points, then remove anything that does not apply to the repository.
## Minimal AGENTS.md

```markdown
# AGENTS.md

## Repository Purpose
- Describe what this repository builds and operates in 2-4 lines.
- Name the domain boundaries the agent must not misunderstand.

## First Places To Inspect
- List files and directories to read before editing.
- Example: README.md, package.json, src/, tests/, docs/

## Working Rules
- Name boundaries such as URLs, public APIs, database schemas, and generated files.
- Prefer existing repository patterns.
- Make large refactors explicit instead of accidental.

## Verification
- List verification commands by change type.
- Require skipped checks to be reported with reasons.

## Completion Checklist
- Changed files
- Verification commands and results
- Remaining risks
- Follow-up candidates
```
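A Verification section like the one above is easiest to act on when it maps change types to concrete commands. A minimal sketch, where the script names (`npm test`, `npm run lint`, and so on) are placeholders for whatever the real repository uses:

```shell
#!/bin/sh
# Sketch only: map a change type to the verification commands it requires.
# The npm script names below are hypothetical; substitute the repository's own.
verify() {
  case "$1" in
    code)   echo "run: npm test && npm run lint" ;;
    docs)   echo "run: npm run lint:links" ;;
    config) echo "run: npm run build" ;;
    *)      echo "unknown change type: $1" >&2; return 1 ;;
  esac
}

verify code
verify docs
```

Keeping the mapping in one place makes it easy for both the agent and the reviewer to see which checks a given change type was supposed to trigger.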
## Minimal CLAUDE.md

```markdown
# CLAUDE.md

## Project
- Repository purpose:
- Main code paths:
- Test paths:
- Documentation paths:

## Commands
- Build:
- Test:
- Lint:
- Format:

## Always-On Rules
- Keep only rules needed every session.
- Move path-specific rules to rules files.
- Move repeated procedures to skills.
- Move enforceable boundaries to settings, permissions, hooks, or CI.

## Done Means
- Summarize changes.
- Report verification results.
- Do not hide skipped checks or risks.
```
## Codex Task Request Prompt

```markdown
Goal:
- State the change in one sentence.

Scope:
- Files/directories Codex may edit:
- Files/directories Codex should not edit:

Constraints:
- Note URL, public API, data format, compatibility, performance, or security boundaries.

Working Method:
- Inspect the structure first, then make the smallest useful change.
- Do not revert existing user changes.
- Add dependencies only when necessary and explain why.

Verification:
- Commands to run:
- Pages or artifacts to inspect:
- What to report if verification fails:

Done Means:
- Changed files
- Purpose of each change
- Verification results
- Remaining risks
```
## Agent Output Review Checklist
- Does the diff solve the requested goal?
- Were URLs, public APIs, filenames, or data schemas changed unintentionally?
- Did the agent avoid reverting existing user work?
- If a dependency was added, is the reason clear?
- Were the needed build, test, lint, or link checks run?
- Are skipped checks reported?
- Is the diff small enough to review?
- Were secrets, credentials, sensitive files, or internal logs exposed?
- Were required docs or operating checklists updated?
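Several of the checks above can be run mechanically before a human looks at the diff. A sketch in a throwaway repository (the branch names, file name, and secret patterns are illustrative, not exhaustive):

```shell
set -e
# Build a disposable repo containing one agent commit to review.
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main
git -c user.email=a@example.com -c user.name=demo commit -q --allow-empty -m init
git checkout -qb agent-branch
echo 'API_KEY=abc123' > config.txt
git add config.txt
git -c user.email=a@example.com -c user.name=demo commit -qm "agent change"

# Is the diff small enough to review?
git diff --stat main...agent-branch

# Crude scan for leaked secrets in the diff (extend the pattern list as needed).
git diff main...agent-branch | grep -icE 'api_key|secret|password|token'
```

A scan like this is only a first pass: it catches obvious string matches, while the judgment calls on the checklist (unintended API changes, reverted user work) still need a human reading the diff.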
## Claude Code Permissions/Settings Checklist

- Does `CLAUDE.md` contain only short always-on guidance?
- Did long procedures move into skills or separate docs?
- Did path-specific rules move into scoped rule files?
- Should reads of `.env`, secrets, credentials, or production config be denied?
- Can destructive commands, forced pushes, or credential output be blocked by permissions or hooks?
- Was the data exposed through each MCP connection reviewed?
- Is each hook an advisory check or an enforceable guardrail?
- Does the permission mode match the team’s approval policy?
- Were allowed and denied tool calls tested after changing settings?
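The enforceable items on this checklist usually belong in a settings file rather than in prose. A sketch of deny rules in the shape Claude Code's `.claude/settings.json` uses; treat the exact rule patterns as an assumption and verify them against the current Claude Code documentation:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)",
      "Bash(git push --force:*)",
      "Bash(rm -rf:*)"
    ]
  }
}
```

Rules like these turn the checklist's "should this be denied?" answers into guardrails the agent cannot talk its way around, which is why the last item above recommends testing allowed and denied tool calls after every settings change.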