The workflow of software development is being transformed by the growing popularity of Large Language Model (LLM) coding assistants such as Claude Code and GitHub Copilot. These tools have dramatically shortened the path from idea to realization, allowing projects once shelved as too time-consuming to move forward quickly.
However, AI-generated code often suffers from correctness or efficiency problems. The key to success is establishing a clear, efficient set of collaboration norms. Providing the AI with an explicit set of development guidelines is one of the most effective ways to ensure it produces high-quality, consistent, and maintainable code.
This guide serves as a "global profile" that provides context for every interaction with the AI, steering it toward specific development philosophies, processes, and technical standards.
Below is a personal "Global Agent Guide" that you can save and use directly. It is recommended to store it at `~/.claude/CLAUDE.md` so it can be picked up whenever needed.
Personal "global" agent guide
This document is the core guideline for collaborating with AI coding assistants.
# Development Guidelines
## Philosophy
### Core Beliefs
- **Incremental progress over big bangs** - Small changes that compile and pass tests
- **Learning from existing code** - Study and plan before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Be boring and obvious
### Simplicity Means
- Single responsibility per function/class
- Avoid premature abstractions
- No clever tricks - choose the boring solution
- If you need to explain it, it's too complex
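To make "boring and obvious" concrete, here is a small Python sketch (the function names and data shape are illustrative, not part of the guide) contrasting a clever one-liner with the plain version a reviewer can scan at a glance:

```python
# Clever: dense, needs decoding - and it hides a bug (it filters
# for active users only AFTER slicing the top three).
def top_emails_clever(users):
    return [u["email"] for u in sorted(users, key=lambda u: -u["score"])[:3] if u.get("active")]

# Boring: one responsibility per step, obvious at a glance.
def top_emails(users, limit=3):
    """Return emails of the highest-scoring active users."""
    active = [u for u in users if u.get("active")]
    ranked = sorted(active, key=lambda u: u["score"], reverse=True)
    return [u["email"] for u in ranked[:limit]]
```

The boring version needs no comment to explain itself, and the bug in the clever version would be hard to even see.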
## Process
### 1. Planning & Staging
Break complex work into 3-5 stages. Document in `IMPLEMENTATION_PLAN.md`:
```markdown
## Stage N: [Name]
**Goal**: [Specific deliverable]
**Success Criteria**: [Testable outcomes]
**Tests**: [Specific test cases]
**Status**: [Not Started|In Progress|Complete]
```
- Update status as you progress
- Remove file when all stages are done
### 2. Implementation Flow
1. **Understand** - Study existing patterns in codebase
2. **Test** - Write test first (red)
3. **Implement** - Minimal code to pass (green)
4. **Refactor** - Clean up with tests passing
5. **Commit** - With clear message linking to plan
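Steps 2–3 can be sketched in Python with `unittest` (the `slugify` function is a made-up example): the test is written first and fails (red), then the minimal implementation makes it pass (green).

```python
import unittest

# Step 2 - Test first (red): these fail until slugify exists and behaves.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Step 3 - Implement (green): the minimal code that passes.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
```

Run with `python -m unittest`; refactor only once both tests are green.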
### 3. When Stuck (After 3 Attempts)
**CRITICAL**: Maximum 3 attempts per issue, then STOP.
1. **Document what failed**:
- What you tried
- Specific error messages
- Why you think it failed
2. **Research alternatives**:
- Find 2-3 similar implementations
- Note different approaches used
3. **Question fundamentals**:
- Is this the right abstraction level?
- Can this be split into smaller problems?
- Is there a simpler approach entirely?
4. **Try different angle**:
- Different library/framework feature?
- Different architectural pattern?
- Remove abstraction instead of adding?
## Technical Standards
### Architecture Principles
- **Composition over inheritance** - Use dependency injection
- **Interfaces over singletons** - Enable testing and flexibility
- **Explicit over implicit** - Clear data flow and dependencies
- **Test-driven when possible** - Never disable tests, fix them
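In Python, for instance, these principles can be sketched with a `Protocol` as the interface and constructor injection in place of a global singleton (all class names here are illustrative):

```python
import time
from typing import Protocol

class Clock(Protocol):
    """Interface: anything with now() qualifies - no singleton needed."""
    def now(self) -> float: ...

class SystemClock:
    """Production implementation, injected at the composition root."""
    def now(self) -> float:
        return time.time()

class FixedClock:
    """Test double: injected in tests for deterministic behavior."""
    def __init__(self, t: float) -> None:
        self._t = t
    def now(self) -> float:
        return self._t
    def advance(self, dt: float) -> None:
        self._t += dt

class SessionTimer:
    """Depends on the Clock interface via constructor injection."""
    def __init__(self, clock: Clock) -> None:
        self._clock = clock
        self._start = clock.now()
    def elapsed(self) -> float:
        return self._clock.now() - self._start
```

Production code passes `SystemClock()`; tests pass `FixedClock(...)` and control time explicitly, which is exactly the testability and flexibility the interface buys.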
### Code Quality
- **Every commit must**:
- Compile successfully
- Pass all existing tests
- Include tests for new functionality
- Follow project formatting/linting
- **Before committing**:
- Run formatters/linters
- Self-review changes
- Ensure commit message explains "why"
### Error Handling
- Fail fast with descriptive messages
- Include context for debugging
- Handle errors at appropriate level
- Never silently swallow exceptions
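A minimal Python sketch of all four rules (the config loader is hypothetical): fail fast with a descriptive message, attach debugging context, handle at the level that understands the failure, and chain the cause rather than swallowing it.

```python
import json

class ConfigError(Exception):
    """Raised at the level that knows how to describe the failure."""

def load_config(path: str) -> dict:
    # Fail fast: validate the input before doing any work.
    if not path.endswith(".json"):
        raise ConfigError(f"expected a .json config file, got {path!r}")
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError as e:
        # Include context and chain the cause - never swallow it.
        raise ConfigError(f"config file not found: {path!r}") from e
    except json.JSONDecodeError as e:
        raise ConfigError(f"invalid JSON in {path!r} at line {e.lineno}") from e
```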
## Decision Framework
When multiple valid approaches exist, choose based on:
1. **Testability** - Can I easily test this?
2. **Readability** - Will someone understand this in 6 months?
3. **Consistency** - Does this match project patterns?
4. **Simplicity** - Is this the simplest solution that works?
5. **Reversibility** - How hard to change later?
## Project Integration
### Learning the Codebase
- Find 3 similar features/components
- Identify common patterns and conventions
- Use same libraries/utilities when possible
- Follow existing test patterns
### Tooling
- Use project's existing build system
- Use project's test framework
- Use project's formatter/linter settings
- Don't introduce new tools without strong justification
## Quality Gates
### Definition of Done
- [ ] Tests written and passing
- [ ] Code follows project conventions
- [ ] No linter/formatter warnings
- [ ] Commit messages are clear
- [ ] Implementation matches plan
- [ ] No TODOs without issue numbers
### Test Guidelines
- Test behavior, not implementation
- One assertion per test when possible
- Clear test names describing scenario
- Use existing test utilities/helpers
- Tests should be deterministic
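The "behavior, not implementation" rule can be illustrated in Python (the `Cart` class is a made-up example): the test asserts the observable total, not the internal list the class happens to keep.

```python
import unittest

class Cart:
    """Toy example: only total() is public behavior."""
    def __init__(self) -> None:
        self._items: list[tuple[str, int]] = []  # internal detail
    def add(self, name: str, price_cents: int) -> None:
        self._items.append((name, price_cents))
    def total(self) -> int:
        return sum(price for _, price in self._items)

class TestCartTotal(unittest.TestCase):
    def test_total_sums_added_item_prices(self):
        cart = Cart()
        cart.add("tea", 300)
        cart.add("mug", 1200)
        # Behavior: one assertion on the observable result.
        self.assertEqual(cart.total(), 1500)
        # Avoid asserting on cart._items - that couples the test to an
        # implementation detail that may change under refactoring.
```

If `Cart` later switches `_items` to a dict, this test still passes; a test pinned to the internal list would break for no behavioral reason.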
## Important Reminders
**NEVER**:
- Use `--no-verify` to bypass commit hooks
- Disable tests instead of fixing them
- Commit code that doesn't compile
- Make assumptions - verify with existing code
**ALWAYS**:
- Commit working code incrementally
- Update plan documentation as you go
- Learn from existing implementations
- Stop after 3 failed attempts and reassess
Why is this guide so effective?
Providing these guidelines to the AI as context is equivalent to setting clear "rules of the game" for it.
- Setting the development philosophy: The "Core Beliefs" and the definition of "Simplicity" at the start of the guide steer the AI toward simple, pragmatic, and maintainable code rather than over-engineered, "showy" code.
- Standardizing the workflow: From drafting `IMPLEMENTATION_PLAN.md`, through the test-driven development (TDD) loop, to the strategies for getting unstuck, this process gives the AI clear action steps and keeps it from getting sidetracked on complex tasks. In particular, the "stop after three attempts" rule prevents the AI from wasting large amounts of time and compute on the wrong path.
- Establishing technical standards: The sections on Architecture Principles, Code Quality, and Error Handling set hard constraints for the AI. This ensures that the code it generates not only compiles but also meets the project's overall architectural and quality requirements.
- Providing a basis for decision-making: The "Decision Framework" section forces the AI to weigh options along dimensions such as testability and readability when multiple implementation paths exist, and to justify its choice. This lets developers quickly review the AI's technical decisions instead of confronting an inscrutable black box.
- Emphasizing final quality: The "Definition of Done" checklist serves as a quality gate, ensuring that each deliverable is fully tested and validated against project specifications.
Ultimately, developers remain responsible for every line of AI-generated code, and human review is an indispensable last line of defense. The guide's greatest value is that it brings the AI's output much closer to the mindset and engineering standards of human experts, greatly simplifying review and integration.