Commit Intelligence
Every commit scored for engineering quality, assessed for risk, classified by type, and checked for AI assistance.
What It Does
Every commit that enters a connected repository is run through an AI scoring pipeline. This is not line-count heuristics or diff-size guesses. The system reads the actual code changes, detects the engineering context, and produces a structured analysis covering quality, risk, classification, effort, and AI tool usage.
The pipeline assembles a prompt from communication style policy, engineering quality policy, a context-specific engineering profile, risk assessment policy, feature context, and an output format spec. A lean batch variant strips verbose sections for throughput when processing at scale.
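The assembly step above can be sketched as follows. This is a hypothetical illustration: the section names, their ordering, and which sections the batch variant keeps are assumptions, not the actual implementation.

```python
# Hypothetical prompt assembly. Section names are illustrative, taken from
# the prose description; the real pipeline's internals are not documented here.
FULL_SECTIONS = [
    "communication_style_policy",
    "engineering_quality_policy",
    "engineering_profile",       # context-specific, chosen per commit
    "risk_assessment_policy",
    "feature_context",
    "output_format_spec",
]

# The lean batch variant strips verbose sections for throughput (assumed subset).
BATCH_SECTIONS = [
    "engineering_profile",
    "risk_assessment_policy",
    "output_format_spec",
]

def assemble_prompt(sections: dict[str, str], batch: bool = False) -> str:
    """Concatenate the policy sections that make up the scoring prompt."""
    names = BATCH_SECTIONS if batch else FULL_SECTIONS
    return "\n\n".join(sections[n] for n in names if n in sections)
```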
Engineering Quality Score
Each commit receives an Engineering Quality Score from 0 to 100. This measures code quality -- SOLID principles, DRY adherence, cohesion, patterns, and context-specific engineering signals. It is not based on size, risk, or complexity. The baseline starts at 75 and adjusts up or down based on quality signals detected in the code.
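The baseline-and-adjust mechanic can be sketched as below. The signal names and weights are invented for illustration; only the 75-point baseline and the 0-100 range come from the description above.

```python
# Illustrative only: the real quality signals and their weights are not public.
BASELINE = 75

def quality_score(signals: dict[str, int]) -> int:
    """Start at the 75-point baseline and adjust by detected quality signals.

    Positive values reward signals like SOLID or DRY adherence; negative
    values penalize violations. The result is clamped to the 0-100 range.
    """
    score = BASELINE + sum(signals.values())
    return max(0, min(100, score))
```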
Review Risk Level
Risk is scored independently from quality. A commit's risk level measures blast radius, data sensitivity, and production impact -- not whether the code is well-written.
- LOW -- limited blast radius, no sensitive data, minimal production impact
- MEDIUM -- moderate scope, some infrastructure or data touchpoints
- HIGH -- broad impact, touches sensitive systems or data paths
- CRITICAL -- database migrations, auth changes, payment logic, production infrastructure
The key insight: A+ quality code can be CRITICAL risk. A perfectly written database migration still has massive blast radius. F quality code can be LOW risk if it only touches a test helper. These dimensions are intentionally orthogonal.
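A minimal sketch of risk tiering that is independent of quality, per the orthogonality point above. The path patterns are illustrative examples of each tier, not the system's actual rules:

```python
# Hedged sketch: these patterns are example stand-ins for each risk tier.
# Note that code quality is deliberately not an input to this function.
RISK_PATTERNS = [
    ("CRITICAL", ("/migrations/", "/auth/", "/payments/")),
    ("HIGH",     ("/api/", "/db/")),
    ("MEDIUM",   ("/services/",)),
]

def risk_level(paths: list[str]) -> str:
    """Return the highest risk tier matched by any touched file path."""
    for level, patterns in RISK_PATTERNS:
        if any(p in path for path in paths for p in patterns):
            return level
    return "LOW"  # limited blast radius by default, e.g. a test helper
```

A perfectly written file under `/migrations/` still scores CRITICAL here, while a poorly written test helper stays LOW.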
Classification
Every commit is classified into one of six types:
- Feature -- new functionality or capabilities
- Bugfix -- corrections to existing behavior
- Refactoring -- structural improvements, no behavior change
- Hotfix -- urgent production fixes
- Chore -- maintenance, dependencies, config
- Docs -- documentation changes
Classification feeds into KPI work type distribution, letting you see the ratio of feature work to maintenance to bug fixing across your organization.
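The work type distribution that classification feeds can be computed as a simple ratio, sketched here with assumed type labels:

```python
# Illustrative aggregation of classified commits into a work type distribution.
from collections import Counter

COMMIT_TYPES = {"feature", "bugfix", "refactoring", "hotfix", "chore", "docs"}

def work_type_distribution(classified: list[str]) -> dict[str, float]:
    """Ratio of each commit type, e.g. feature work vs maintenance vs bug fixing."""
    counts = Counter(t for t in classified if t in COMMIT_TYPES)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {t: counts[t] / total for t in sorted(COMMIT_TYPES)}
```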
Context-Aware Profiles
The scoring pipeline does not apply the same quality rubric to every commit. It detects the engineering context from file paths and selects the appropriate profile with domain-specific quality signals.
- Backend -- .java, .kt, .go, .py, .rb, .rs, /domain/, /api/
- Frontend -- .tsx, .jsx, .vue, .svelte, /components/, /hooks/
- Mobile -- .swift, .dart, /ios/, /android/, .kt (Android paths)
- Infrastructure -- .tf, Dockerfile, /k8s/, /.github/, /helm/, /ansible/
- Data -- .sql, .ipynb, /dbt/, /dags/, /airflow/, /etl/
- Knowledge -- .md, /docs/, /wiki/, /meetings/, /adr/, /rfcs/
When more than 30% of files in a commit belong to a second context, the system enters mixed mode and combines quality signals from both profiles. A commit touching both React components and Terraform configs gets evaluated against both frontend and infrastructure standards.
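Context detection and the 30% mixed-mode threshold can be sketched as below, using an abbreviated subset of the markers from the table above (the real detector's logic and precedence rules are assumptions here):

```python
# Abbreviated sketch of context detection; markers are a subset of the
# table above and the matching logic is an assumption for illustration.
from collections import Counter

CONTEXT_MARKERS = {
    "backend":        (".java", ".go", ".py", ".rb", ".rs", "/domain/", "/api/"),
    "frontend":       (".tsx", ".jsx", ".vue", ".svelte", "/components/", "/hooks/"),
    "infrastructure": (".tf", "Dockerfile", "/k8s/", "/helm/", "/ansible/"),
    "data":           (".sql", ".ipynb", "/dbt/", "/dags/", "/etl/"),
    "knowledge":      (".md", "/docs/", "/adr/", "/rfcs/"),
}

def detect_contexts(paths: list[str], mixed_threshold: float = 0.30) -> list[str]:
    """Return the primary context, plus a second when >30% of files match it."""
    counts: Counter[str] = Counter()
    for path in paths:
        for ctx, markers in CONTEXT_MARKERS.items():
            if any(m in path for m in markers):
                counts[ctx] += 1
                break  # each file contributes to one context
    if not counts:
        return []
    ranked = counts.most_common(2)
    result = [ranked[0][0]]
    if len(ranked) > 1 and ranked[1][1] / len(paths) > mixed_threshold:
        result.append(ranked[1][0])  # mixed mode: combine both profiles
    return result
```

A commit with two React components and one Terraform file would return both `frontend` and `infrastructure`, matching the mixed-mode example above.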
Execution Effort
Each commit receives an effort estimation that captures both the human and AI-assisted dimensions of the work:
- Human Story Points -- the raw effort this work would require without AI assistance
- AI-Adjusted Story Points -- actual effort accounting for AI tooling
- AI Leverage Percentage -- the efficiency gain from AI assistance
- Time Estimate -- estimated time in minutes
- Rationale -- explanation of the estimation reasoning
Effort is displayed as a compact summary like "2 pts | ~30m" for human effort, with a separate AI-adjusted figure when AI tools are detected.
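The fields above can be modeled as a small record; the field names and the leverage formula are assumptions consistent with the description, not the actual schema:

```python
# Hypothetical effort record; field names and the leverage derivation are
# illustrative assumptions based on the prose above.
from dataclasses import dataclass

@dataclass
class EffortEstimate:
    human_points: int        # raw effort without AI assistance
    ai_adjusted_points: int  # actual effort accounting for AI tooling
    minutes: int             # time estimate in minutes
    rationale: str           # explanation of the estimation reasoning

    @property
    def ai_leverage_pct(self) -> float:
        """Efficiency gain from AI assistance, as a percentage of human effort."""
        if self.human_points == 0:
            return 0.0
        return 100 * (self.human_points - self.ai_adjusted_points) / self.human_points

    def summary(self) -> str:
        """Compact display form, e.g. '2 pts | ~30m'."""
        return f"{self.human_points} pts | ~{self.minutes}m"
```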
AI Attribution Detection
The system detects when AI tools were used to produce the code in a commit. Detection works through three attribution types:
- Explicit -- stated directly in the commit message or metadata
- Implicit -- detected from code patterns characteristic of AI generation
- Heuristic -- inferred from code style signals
Detected tool families: Claude, Copilot, Cursor, ChatGPT, Cody, Gemini, and Tabnine.
Each detection includes a confidence score and a source signal, and the level of AI assistance is classified along a spectrum.
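An attribution record and the simplest of the three detection paths (explicit mentions) can be sketched as follows; the field names, marker list handling, and confidence value are assumptions for illustration:

```python
# Sketch of an attribution record. The explicit-marker check and the 0.95
# confidence value are illustrative assumptions, not the real detector.
from dataclasses import dataclass
from typing import Optional

TOOL_MARKERS = {
    "claude": "Claude", "copilot": "Copilot", "cursor": "Cursor",
    "chatgpt": "ChatGPT", "cody": "Cody", "gemini": "Gemini",
    "tabnine": "Tabnine",
}

@dataclass
class AIAttribution:
    tool: str          # detected tool family
    kind: str          # "explicit" | "implicit" | "heuristic"
    confidence: float  # 0.0-1.0
    signal: str        # the source signal that triggered detection

def detect_explicit(commit_message: str) -> Optional[AIAttribution]:
    """Explicit attribution: the tool is named in the commit message itself."""
    msg = commit_message.lower()
    for marker, tool in TOOL_MARKERS.items():
        if marker in msg:
            return AIAttribution(tool, "explicit", 0.95, f"'{marker}' in message")
    return None
```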
AI contribution data is tracked and aggregated into adoption statistics across your organization, feeding into the Commit Intelligence pillar evidence.
Initiative Linking
Commits are linked to initiatives through ownership patterns. When a commit matches an initiative's repository and contributor ownership rules, it is automatically associated. This connection lets you trace individual code changes back to strategic goals -- visible in the commit list as a linked initiative name and canonical ID. You can filter commits by initiative, or view only untracked commits that have not been linked to any initiative.
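The ownership-rule matching described above can be sketched as below. The rule schema (repository and contributor sets) is a hypothetical simplification; the actual ownership pattern format is not documented here:

```python
# Hypothetical ownership-rule matching for initiative linking.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Initiative:
    canonical_id: str
    name: str
    repos: set[str]         # repository ownership rules (assumed shape)
    contributors: set[str]  # contributor ownership rules (assumed shape)

def link_commit(repo: str, author: str,
                initiatives: list[Initiative]) -> Optional[Initiative]:
    """Return the first initiative whose repo and contributor rules both match.

    Commits matching no initiative remain 'untracked'.
    """
    for ini in initiatives:
        if repo in ini.repos and author in ini.contributors:
            return ini
    return None
```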
How to Use It
Commit Intelligence feeds directly into Cognis. You can query it naturally: