AI-Assisted Coding & Development
Measures adoption of AI pair-programming tools, code completion, and other AI-augmented development practices.
Sample assessment questions for each level:
- Level -1: “Are AI coding assistants explicitly banned from the development environment?”
- Level 0: “Are AI coding tools used individually without organisational standards?”
- Level 1: “Has the organisation evaluated specific AI coding assistants for team use?”
- Level 2: “Are AI code assistants (e.g., Copilot) integrated into dev workflows?”
- Level 3: “Is AI used to auto-complete, refactor, or generate boilerplate code?”
- Level 4: “Does AI provide contextual documentation or example usage patterns?”
- Level 5: “Are domain-specific code models fine-tuned internally for better support?”
Key metrics to track:
- Developer efficiency: Percentage change in code output (e.g., merged changes) with AI assistance
- Code quality: Change in defect rates for AI-assisted code vs. traditional development
- Knowledge accessibility: Time saved accessing contextual documentation through AI
- AI suggestion acceptance rate: Percentage of AI code suggestions accepted by developers
- Learning curve reduction: Time for new developers to become productive with AI assistance
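As a concrete illustration, the two most directly computable metrics above, suggestion acceptance rate and the change in defect rates, might be derived from tool telemetry roughly as follows. This is a minimal sketch: the event schema, field names, and defect-counting convention (defects per 1,000 lines) are assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    """One AI code suggestion shown to a developer (hypothetical schema)."""
    accepted: bool

def acceptance_rate(events: list[SuggestionEvent]) -> float:
    """Percentage of AI suggestions that developers accepted."""
    if not events:
        return 0.0
    accepted = sum(1 for e in events if e.accepted)
    return 100.0 * accepted / len(events)

def defect_rate_delta(ai_defects: int, ai_loc: int,
                      baseline_defects: int, baseline_loc: int) -> float:
    """Defects per 1,000 lines for AI-assisted code minus the baseline rate.

    A negative result means AI-assisted code had fewer defects per KLOC.
    """
    ai_rate = 1000.0 * ai_defects / ai_loc
    baseline_rate = 1000.0 * baseline_defects / baseline_loc
    return ai_rate - baseline_rate

# Example with made-up numbers:
events = [SuggestionEvent(True), SuggestionEvent(True),
          SuggestionEvent(False), SuggestionEvent(True)]
print(acceptance_rate(events))                            # → 75.0
print(round(defect_rate_delta(4, 10_000, 6, 10_000), 2))  # → -0.2
```

In practice these inputs would come from assistant telemetry (for acceptance events) and the issue tracker plus repository attribution (for defect counts), so the comparison is only as sound as the labelling of code as AI-assisted vs. traditional.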