The Lens·10 Standards·Concordance Labs

The Velocity Governance Lens

10 standards from the Concordance Framework where AI integration materially changes the risk profile. Not new standards — sharper requirements on existing ones.

Each standard below shows the baseline expectation, how AI integration raises the stakes, the signal Concordance looks for when scoring, and a realistic failure scenario. These are the canary standards. If they're weak, the rest of the picture is usually worse.

PhaseDesignDevelopmentTestingRelease
2.2

Architecture Decision Records

Design
Baseline

Document significant architectural decisions with context, options considered, and rationale.

Why AI raises the stakes

Model selection is an architectural decision. When you choose an LLM provider, pin a version, or change a prompt strategy, that carries organisational risk — cost implications, behaviour changes, vendor lock-in. Without ADRs, these decisions accumulate invisibly. When something breaks, there is no trail.

Signal

Missing ADRs for AI adoption decisions are the single most common governance gap in AI-active codebases.

Failure scenario

Team switches model provider under cost pressure. Behaviour changes in edge cases. No record of what was changed, why, or what alternatives were considered.

2.6

Dependency Management

Design
Baseline

Pin dependencies, use lockfiles, scan for vulnerabilities.

Why AI raises the stakes

AI SDKs are not like typical dependencies. Model behaviour can change with a patch version bump — not because of a bug fix, but because the underlying model was updated. openai@4.x to openai@5.x isn't just an API change. Unpinned AI SDK dependencies introduce non-determinism that your test suite cannot catch.

Signal

Floating AI SDK versions in package.json are a higher-severity finding than most teams recognise.

Failure scenario

CI passes. Production behaves differently than staging. Root cause: AI SDK auto-updated, model behaviour shifted. No one noticed until a customer complained.

3.1

Branch Protection

Development
Baseline

Require PR reviews before merging to protected branches.

Why AI raises the stakes

AI coding tools increase PR volume substantially — some teams report 2x–3x. If branch protection was already inconsistently enforced, higher volume doesn't help. It creates the conditions for AI-generated code to bypass review entirely, either through direct pushes or rubber-stamp approvals on PRs that are too large to actually review.

Signal

Branch protection enforcement drops in correlation with AI adoption on teams that were already inconsistent.

Failure scenario

Developer merges AI-generated feature directly to main under deadline pressure. Protection rules not enforced. Bug ships. Audit finds no review record.

3.2

PR Review Quality

Development
Baseline

Reviews should be substantive — catching logic errors, security issues, and architectural drift.

Why AI raises the stakes

When PR volume doubles, review time per PR compresses. The result is reviews that check style and syntax but miss logic. Prompt changes and model configuration changes — which can fundamentally alter system behaviour — are often treated as non-code and receive no review at all.

Signal

Prompt files (.md, .txt) committed to repos almost never show review activity in PR history.

Failure scenario

Prompt template updated to handle new use case. Nobody reviews the security implications. Prompt injection vector introduced. Not discovered until penetration test.

3.6

Code Ownership

Development
Baseline

Define clear owners for each area of the codebase — someone accountable for changes and incidents.

Why AI raises the stakes

AI-generated code can span multiple domains in a single PR. Ownership boundaries blur. More critically: when model behaviour changes unexpectedly in production, you need a clear owner to investigate. If the AI integration layer has no designated owner, incidents become everyone's problem and no one's responsibility.

Signal

CODEOWNERS files rarely include AI integration directories, prompt libraries, or LLM config.

Failure scenario

Model starts returning unexpected formats. On-call engineer doesn't own the AI layer. Escalation chain unclear. Resolution takes 4x longer than it should.

3.9

Secrets Management

Development
Baseline

Never commit secrets. Use environment variables. Rotate regularly.

Why AI raises the stakes

LLM API keys are not like database passwords. They grant access to expensive, rate-limited, metered endpoints. A leaked OpenAI key is a financial exposure, not just a security one. The blast radius is larger and more immediate than most secrets. Additionally, many AI integrations are built quickly by developers unfamiliar with secrets hygiene.

Signal

API key exposure in public repos has increased since AI coding tools lowered the barrier to building integrations.

Failure scenario

Developer prototypes AI feature. Commits .env file to public repo by mistake. Key scraped by bot within minutes. $12,000 API bill arrives before anyone notices.

4.1

CI Pipeline

Testing
Baseline

Automated tests run on every PR. Build must pass before merge.

Why AI raises the stakes

CI pipelines were built for deterministic code. AI outputs are non-deterministic. Prompt injection testing, output validation, and model behaviour regression testing are new test categories that most pipelines simply do not include. A CI that passes consistently tells you nothing about whether your AI integration is safe.

Signal

Less than 5% of AI-active repos have any form of prompt injection testing in CI.

Failure scenario

All tests pass. PR merges. Adversarial input discovered in production that causes model to ignore system prompt. No automated gate existed to catch it.

4.6

Security Analysis

Testing
Baseline

Run static analysis and dependency scanning on every build.

Why AI raises the stakes

SAST tools were not built for prompt injection. Standard scanners check for SQL injection, XSS, insecure deserialization — not for patterns that manipulate LLM behaviour through crafted input. AI-specific security analysis is an emerging capability, and most teams have a gap between their confidence in their security posture and the actual coverage.

Signal

Existing SAST tools give a false sense of security for AI-active codebases.

Failure scenario

Security scan passes. Pentest identifies three prompt injection vectors. All exploitable. None detected by existing tooling.

5.7

Rollback Capability

Release
Baseline

Be able to roll back any release quickly and cleanly.

Why AI raises the stakes

Model drift is a new category of production incident — behaviour changes without a deployment happening. Your rollback process was built around reverting code. Rolling back model behaviour means either switching providers, switching model versions, or rolling back prompts. If that capability does not exist and is not tested, you have no response plan when a model update changes how your system behaves.

Signal

Most AI rollback capabilities are theoretical — documented but never tested under incident conditions.

Failure scenario

Model provider updates base model. Customer-facing AI feature starts producing incorrect responses. No tested rollback path. Incident takes 48 hours to resolve.

5.8

Feature Flagging

Release
Baseline

Use feature flags to control rollout and enable instant kill-switches.

Why AI raises the stakes

Feature flags are the only safe response to model drift that does not require a full deployment. If your AI feature cannot be disabled in under 5 minutes without a code change, you are not in control of your production behaviour. For AI features specifically, this is not a nice-to-have — it is the minimum viable safety control.

Signal

The question "can you disable your AI feature without a deployment?" has a surprising failure rate even in otherwise mature engineering organisations.

Failure scenario

Model behaviour drifts overnight. Feature causes data quality issues. Team cannot disable without a full release. 3-hour deployment window. Damage compounds.


These aren't self-assessments. Concordance scores each of these standards from direct evidence in your repositories — commit history, branch configuration, CI definitions, dependency files, CODEOWNERS, PR review activity. No surveys. No manual input.

The Velocity Governance lens activates automatically via AI Sentinel when AI integration is detected in your codebase. Connect your repositories and see exactly where you stand across all 10 standards.

The Concept
Why this matters
The Lens
You are here
AI Sentinel
See it in action

See your scores across all 10.

Free for one team. Evidence-based, not survey-based.

Get Started FreeSee AI Sentinel →
A Concordance Labs concept · © 2026
FrameworkMappingsPricing