# Development Workflows

Audience: AI Agents

Step-by-step processes for common development tasks.
## Mental Model Loading

The toolkit's architectural thinking is layered:

- Default: Read `core/architecture-thinking.md` (always, at session start)
- Override: Read `architecture-thinking.local.md` in the project root (if it exists)
  - Sections with matching headings replace the default
  - New sections are added
  - Sections listed under `## Skip` are ignored from the default

This allows projects to customize how the agent thinks about architecture without forking the toolkit. See `templates/architecture-thinking.override.template.md` for the override template.
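The file-resolution order above can be sketched in shell. This is only a demo of which files are read and in what order, run in a throwaway directory so it is safe anywhere; the heading-level merge (replace, add, skip) is performed by the agent, not by a script, and the file contents here are stand-ins.

```shell
#!/bin/sh
# Demo of the override resolution order in a throwaway directory.
# The heading-level merge (replace/add/skip) is done by the agent;
# this only shows which files are read, and in what order.
cd "$(mktemp -d)"
mkdir -p core
echo "default thinking"  > core/architecture-thinking.md
echo "project override"  > architecture-thinking.local.md

cat core/architecture-thinking.md        # default: always read at session start
if [ -f architecture-thinking.local.md ]; then
  cat architecture-thinking.local.md     # override: merged by heading
fi
```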
## Pre-Work Checklist

Run this before starting any development task:

- Check if git is initialized
  - If not initialized, run `git init`
  - Add remote origin if needed
- Run `git status` to check workspace state
  - If there are uncommitted changes, inform the user
  - Ask user if they want to stash, commit, or continue with dirty workspace
- Run test suite to verify current state
  - If tests fail, inform user before proceeding
  - Document which tests are failing
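The git portion of the checklist can be sketched as follows. The sketch runs in a throwaway directory so it is self-contained; in practice the checks run in the project workspace, and the test-suite step uses whatever test command the project defines.

```shell
#!/bin/sh
# Sketch of the git portion of the pre-work checklist, demonstrated in
# a throwaway directory so it is safe to run anywhere.
cd "$(mktemp -d)"

if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git init -q                                  # not initialized: create repo
fi

if [ -n "$(git status --porcelain)" ]; then    # uncommitted changes?
  echo "dirty workspace: ask user to stash, commit, or continue"
else
  echo "workspace clean"
fi
```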
## Autonomous Development Loop
Use this loop when implementing features or fixes:
REPEAT until all todo items are complete (max 10 retries per item):
1. Pick next todo item, mark as in_progress
2. Implement the change
3. Write/update tests for the change
4. Run test suite
- If tests fail: fix issues and re-run (increment retry counter)
- If retry limit reached: stop, inform user, document blockers
- If tests pass: reset retry counter, continue
5. Verify implementation meets requirements
6. Mark todo item as completed
7. Commit the code with descriptive message
8. Return to step 1
Exit conditions:

- All todo items are marked as completed
- All tests are passing
- Code is committed

Failure conditions (stop and inform user):

- Retry limit reached on any item
- Unresolvable dependency or blocker encountered
- Tests require infrastructure not available
Rollback procedure:

- If implementation fails mid-way, use `git stash` to save work
- Document what was attempted and why it failed
- Reset to last known good state if needed
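The retry-bounded loop for a single todo item can be sketched in shell; `implement_item` and `run_tests` below are hypothetical stand-ins for the real implementation and test-suite steps.

```shell
#!/bin/sh
# Sketch of the retry-bounded loop for a single todo item.
# implement_item and run_tests are hypothetical stand-ins.
MAX_RETRIES=10

implement_item() { :; }          # stand-in: apply the code change
run_tests()      { return 0; }   # stand-in: run the project test suite

retries=0
while [ "$retries" -lt "$MAX_RETRIES" ]; do
  if implement_item && run_tests; then
    echo "item complete: commit and pick next item"
    break                        # tests pass: reset counter, continue
  fi
  retries=$((retries + 1))       # tests failed: count a retry
done
if [ "$retries" -ge "$MAX_RETRIES" ]; then
  echo "retry limit reached: stop, inform user, document blockers"
fi
```

The same bound applies per item, so one stubborn failure cannot stall the whole loop indefinitely.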
## Todo-Driven Development

An invokable workflow for autonomous and semi-autonomous development.

Invoke with: "Use todo workflow" | "Use todo workflow, review code" | "Use todo workflow --dry-run"

Full documentation: `skills/core/todo-workflow/`
## Feature Development
- Pre-work: Run Pre-Work Checklist (above)
- Read project context for current project state
- Create a feature branch: `git checkout -b feature/<name>`
- Create todo list for implementation tasks
- Execute: Run Autonomous Development Loop for each todo
- Ensure all tests pass
- Update project context if significant changes made
- Create pull request using PR template
## Bug Fixing
- Pre-work: Run Pre-Work Checklist (above)
- Reproduce the issue
- Identify root cause
- Create todo list for fix tasks
- Execute: Run Autonomous Development Loop
- Add regression test
- Verify fix resolves issue
- Commit with reference to bug/issue
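A commit that references its issue can look like the following; the message, file, and issue number are illustrative, and the demo runs in a throwaway repository so it is self-contained.

```shell
#!/bin/sh
# Demo of a bug-fix commit that references its issue, in a throwaway
# repo; the message and issue number (#123) are illustrative.
cd "$(mktemp -d)" && git init -q
echo "fix" > bug.txt && git add bug.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Fix: handle empty input in parser (fixes #123)"
git log --grep="#123" --oneline   # later: find the fix by issue number
```

Referencing the issue in the message lets `git log --grep` (and most issue trackers) link the fix back to the report.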
## Hotfix (Urgent Production Fix)
Use for critical production issues that need immediate resolution:
- Pre-work: Run Pre-Work Checklist
- Create hotfix branch from main/production: `git checkout -b hotfix/<issue>`
- Identify and document the issue
- Implement minimal fix (avoid scope creep)
- Write regression test
- Run full test suite
- Commit with `[HOTFIX]` prefix in message
- Create PR for expedited review
- After merge, backport to development branch if needed
- Update project context with post-mortem notes
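The branch and commit conventions above can be sketched as follows; the branch name, fix, and issue number are illustrative, and the demo runs in a throwaway repository.

```shell
#!/bin/sh
# Demo of the hotfix branch and [HOTFIX] commit conventions in a
# throwaway repo; branch name and issue number are illustrative.
cd "$(mktemp -d)" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "last good release"
git checkout -q -b hotfix/session-timeout     # branch from production state
echo "fix" > patch.txt && git add patch.txt   # minimal fix, no scope creep
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "[HOTFIX] Cap session refresh at 30s (fixes #481)"
git log --oneline -1
```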
## Refactoring
Use when improving code structure without changing functionality:
- Pre-work: Run Pre-Work Checklist
- Document current behavior with tests (if not already covered)
- Create refactoring branch: `git checkout -b refactor/<area>`
- Create todo list for refactoring steps
- Execute: Run Autonomous Development Loop
- Critical: Tests must pass after each step
- No functionality changes allowed
- Verify all existing tests still pass
- Update documentation if APIs changed
- Create pull request
## Code Review
- Check for coding standard compliance
- Review security implications
- Verify test coverage
- Check for performance issues
- Ensure no scope creep beyond PR description
- Verify documentation is updated
## Context Update
When to update project context:
- After completing a significant feature
- When architectural decisions are made
- When new dependencies are added
- When known issues are discovered or resolved
- When environment setup changes
What to update:

- Current State section with progress
- Key Components if new modules added
- Known Issues if bugs discovered
- Dependencies if new ones added
- Tech Stack if tools/frameworks change
## Skill Discovery
An invokable workflow for listing capabilities and offering elaboration.
### Triggers
- "What skills do you have?"
- "List your capabilities"
- "What can you do?"
- "Show me your skills"
- After toolkit installation or update
### When to Use

- **Initial onboarding** - After installing the toolkit, introduce capabilities
- **Refresh** - User wants a reminder of available skills
- **After update** - Toolkit was updated, show what's new or changed
- **Exploration** - User is deciding which skill to use
### Workflow
1. READ toolkit skill index (skills/_index.md)
2. READ TOGAF index (skills/optional/togaf/_index.md)
3. READ analysis outputs index (skills/optional/analysis-outputs/_index.md)
4. PRESENT skills organized by category:
**Analysis Skills** (understand codebases)
- codebase-analysis: Base analysis engine
- arch-analysis: 8-phase architecture documentation
- security-analysis: Security + compliance (OWASP, NIST, CIS, ISO, NIS 2)
- nonfunctional-analysis: Testing, config, performance, health
- architecture-synthesis: From diagrams to architecture model
- fitness-functions: Evolutionary architecture fitness
**Architecture & Modeling** (enterprise patterns)
- structurizr: C4 modeling with Structurizr DSL
- TOGAF ADM: Full cycle (Preliminary + Phases A-H)
**Development Workflows** (coding practices)
- git-workflow: Commits, branching, PRs
- todo-workflow: Autonomous task-based development
- software-design: Patterns and principles
- tech-stack-decisions: Technology evaluation, ADRs
- code-conventions: Style guides
- presentation: Slide generation (PPTX, PDF)
**Output Formats** (export options)
- core-architecture, architecture-docs, coding-context
- product-spec, structurizr, archimate
5. PRESENT invokable commands:
- "Analyze the architecture"
- "Analyze security"
- "Analyze code quality"
- "Create C4 model"
- "Apply TOGAF"
- "Use todo workflow"
- "Generate presentation"
- etc.
6. OFFER to elaborate:
"Would you like me to explain any of these skills in more detail?
Just name the skill or category you're interested in."
7. IF user requests elaboration:
- Read the skill's README.md
- Summarize key concepts and use cases
- Show example invocations
- Offer to demonstrate or run the skill
### Output Format
## What I Can Do
I've learned the AI Architect Toolbox. Here's what I can help you with:
### Analysis Skills
| Skill | What It Does | Try It |
|-------|--------------|--------|
| arch-analysis | Document your codebase architecture | "Analyze the architecture" |
| security-analysis | Security assessment + compliance | "Analyze security" |
| ... | ... | ... |
### Architecture & Modeling
...
### Development Workflows
...
---
**Want details on any skill?** Just ask, and I'll explain what it does and show examples.
### Elaboration Response
When user asks for details on a specific skill: