Clearbridge AI Readiness Index · 2026 Edition

Benchmark your organisation against AI Leaders.

30 questions. 6 weighted dimensions. A tier rating, leader-benchmark gap, and a generated 90-day action plan — so you know exactly what to do on Monday.

30 questions
6 dimensions
~12 minutes
90-day action plan

Why 2026 Edition

AI readiness has changed. Your assessment should too.

Differentiator

Agentic, not chat-based

The ROI conversation has moved from “do people use ChatGPT?” to “do your agents take actions?” We measure agentic maturity as a first-class axis.

Differentiator

Weighted for what matters

Technology & Agentic and Strategy & Value carry the most weight — the dimensions that actually separate Proficient from Leader in 2026 benchmarks.

Differentiator

Action plan, not just a score

Every report includes a 30/60/90-day plan generated from your weakest signals, mapped to named initiatives — not generic advice.

What we measure

6 dimensions of AI readiness, weighted for 2026

Each dimension contributes a different share to your overall score. Technology & Agentic carries 25% because it's the sharpest differentiator we see in real organisations.

D1

20% weight

Strategy & Value

How AI investment ties to measurable business outcomes, executive alignment, and operating-model redesign.

D2

15% weight

Data & Knowledge

Retrievability of institutional knowledge, data quality, protection of proprietary information, and build-vs-buy-vs-orchestrate discipline.

D3

15% weight

Talent & Fluency

Role-specific AI fluency across the workforce, modern tooling adoption by engineers, and workforce redesign in response to AI leverage.

D4

10% weight

Culture & Adoption

Whether the organisation shares AI wins, rewards AI leverage in reviews, and has an explicit position on how AI changes roles.

D5

15% weight

Governance & Risk

Ownership of AI governance, pre-deployment evaluation, acceptable-use policy, agentic-system risk management, and regulatory readiness.

D6

25% weight

Technology & Agentic

Breadth of model portfolio, production use of agents, system integration via MCP/gateways, LLMOps observability, and time-to-production for new use cases.

Scoring

Four readiness tiers

Leader
81–100

Agentic workflows in production, governed model portfolio, operating-model redesign complete.

Proficient
61–80

Production use cases, basic governance, role-specific fluency. Next: agents and LLMOps.

Developing
41–60

Pockets of adoption and early wins. Needs retrievable knowledge, fluency plans, AI council.

Beginner
0–40

Starting line. Priorities: name an owner, approve tools, ship one measured use case.
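The weights and tier bands above fully determine how a report is scored. A minimal sketch of that calculation, assuming the overall score is a simple weighted average of per-dimension scores (the weights and tier thresholds are from this page; the aggregation method itself is an illustrative assumption, not Clearbridge's published formula):

```python
# Dimension weights as listed above (sum to 1.0).
WEIGHTS = {
    "Strategy & Value": 0.20,
    "Data & Knowledge": 0.15,
    "Talent & Fluency": 0.15,
    "Culture & Adoption": 0.10,
    "Governance & Risk": 0.15,
    "Technology & Agentic": 0.25,
}

# Tier bands as listed above: (lower bound, tier name), checked top-down.
TIERS = [
    (81, "Leader"),
    (61, "Proficient"),
    (41, "Developing"),
    (0, "Beginner"),
]

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

def tier(score: float) -> str:
    """Map a 0-100 overall score to its readiness tier."""
    for lower, name in TIERS:
        if score >= lower:
            return name
    return "Beginner"
```

For example, an organisation scoring 80 on Strategy & Value, 90 on Technology & Agentic, 60 on Data, Talent, and Governance, and 50 on Culture lands at 70.5 overall, in the Proficient band. Because Technology & Agentic carries 25%, a weak agentic score drags the overall result down faster than any other dimension.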

How it works

From registration to action plan in one session

01

Register

Work email only. Your results automatically group with colleagues from your organisation.

02

Take the assessment

30 weighted questions across 6 dimensions. 10–15 minutes.

03

Review your report

Tier rating, weighted dimension scores, and a leader-benchmark gap.

04

Execute the plan

Auto-generated 30/60/90-day action plan mapped to named Clearbridge services.

Get started

Know where you stand. Know what to do next.

Every registered organisation gets a rolled-up report across all employees — anonymised individual scores, per-dimension benchmarks, and a prioritised service-gap view for leadership.

Start the assessment →

clearbridge.ca