An AI Maturity Framework
by Druce Vertes
A 12-dimension assessment of your company’s AI maturity and readiness, and a roadmap for developing an AI strategy
Each dimension below is scored against the same four maturity levels: Crawl, Walk, Run, and Fly.
Dimension 1: Leadership & Strategy
Level 1: Crawl
• Are individual employees experimenting with AI tools on their own initiative?
• Has leadership acknowledged AI as a topic of interest or potential?
• Are there ad-hoc discussions about AI in leadership meetings?
Level 2: Walk
• Has leadership defined formal AI objectives and priorities?
• Is there a designated executive sponsor or AI steering committee?
• Has the organization allocated budget specifically for AI initiatives?
• Are AI goals included in strategic planning documents?
Level 3: Run
• Does leadership have a comprehensive AI strategy with clear ROI targets?
• Are AI objectives integrated into departmental and individual performance goals?
• Is there regular executive review of AI initiative progress and outcomes?
• Has leadership championed organization-wide AI adoption?
Level 4: Fly
• Does the board regularly review AI strategy and competitive positioning?
• Are AI capabilities central to the organization's competitive advantage?
• Is there a Chief AI Officer or equivalent C-level position?
• Does leadership actively shape industry AI standards and best practices?
Dimension 2: Data
Level 1: Crawl
• Are employees using publicly available or manually gathered data?
• Is data primarily stored in siloed spreadsheets or local files?
• Is data quality managed informally or ad-hoc?
Level 2: Walk
• Is there a centralized data warehouse or lake?
• Are data quality standards documented and monitored?
• Is data lineage tracked for critical datasets?
• Are there established data governance policies?
Level 3: Run
• Is real-time or near-real-time data available for AI applications?
• Are data pipelines automated with monitoring and alerting?
• Is metadata comprehensively managed across all data assets?
• Are data quality metrics consistently above 95% for critical datasets? (See the sketch after this list.)
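If you can answer yes to the pipeline questions above, the 95% bar becomes something you can enforce in code rather than aspire to. A minimal sketch, assuming pandas and an illustrative customer table (the file name and columns are made up):

```python
import pandas as pd

# Illustrative dataset and column names; swap in your own critical tables.
df = pd.read_parquet("customers.parquet")

# Each check returns the fraction of rows passing, in [0, 1].
checks = {
    "non_null_email": df["email"].notna().mean(),
    "no_future_signup": (df["signup_date"] <= pd.Timestamp.now()).mean(),
    "unique_customer_id": float(df["customer_id"].is_unique),
}

THRESHOLD = 0.95  # the "consistently above 95%" bar from the checklist
failures = {name: score for name, score in checks.items() if score < THRESHOLD}
if failures:
    # In a real pipeline this would page on-call or fail the DAG run.
    raise ValueError(f"Data quality below {THRESHOLD:.0%}: {failures}")
```

Run on a schedule against every critical dataset, this is the difference between "documented and monitored" at Level 2 and "consistently above 95%" at Level 3.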
Level 4: Fly
• Do automated systems continuously validate and improve data quality?
• Is synthetic data generation used to augment training datasets?
• Are data pipelines self-healing with automatic anomaly detection? (A toy check follows this list.)
• Is data provenance fully tracked with automated compliance verification?
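"Self-healing" starts with something humbler than it sounds: a check that notices when a load looks wrong before anything downstream consumes it. A toy volume check with illustrative numbers; a production system would use a proper anomaly-detection service or library:

```python
import statistics

# Recent daily row counts for one pipeline; values are illustrative.
history = [10_250, 10_480, 9_990, 10_120, 10_300, 10_410, 10_180]

def is_anomalous(row_count: int, history: list[int], z: float = 3.0) -> bool:
    """Flag a load whose row count sits more than z standard deviations
    from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(row_count - mean) > z * stdev

print(is_anomalous(4_000, history))  # True: quarantine the load, rerun, or alert
```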
Dimension 3: Technology & Infrastructure
Level 1: Crawl
• Are employees using consumer-grade AI tools (ChatGPT, Claude, etc.)?
• Is AI experimentation happening on personal devices or accounts?
• Is there no formal AI infrastructure in place?
Level 2: Walk
• Has the organization deployed enterprise AI platforms (Microsoft Copilot, etc.)?
• Is there secure API access to AI models for development teams?
• Are development and production environments separated?
• Is there basic version control for AI-related code?
Level 3: Run
• Is there a complete MLOps platform with CI/CD pipelines?
• Are model training, testing, and deployment automated? (A deployment-gate sketch follows this list.)
• Is infrastructure scalable with cloud or hybrid capabilities?
• Are there established sandboxes for experimentation?
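One concrete piece of an MLOps pipeline is a deployment gate: a test that CI runs before promoting a candidate model. A minimal sketch assuming scikit-learn and a held-out evaluation set; the metric, margin, and function names are illustrative:

```python
# Minimal CI evaluation gate: block deployment if the candidate model
# underperforms the current production model on a held-out set.
from sklearn.metrics import accuracy_score

def evaluate_gate(candidate, production, X_holdout, y_holdout, margin=0.01):
    """Return True only if the candidate is at least as good as production
    (within `margin`) on the held-out evaluation set."""
    cand_acc = accuracy_score(y_holdout, candidate.predict(X_holdout))
    prod_acc = accuracy_score(y_holdout, production.predict(X_holdout))
    print(f"candidate={cand_acc:.4f} production={prod_acc:.4f}")
    return cand_acc >= prod_acc - margin
```

Wired into the CI/CD pipeline, a failed gate stops the promotion and leaves the champion in place.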
Level 4: Fly
• Are AI agents deployed on auto-scaling infrastructure?
• Is there a comprehensive AI orchestration platform managing multiple agents?
• Are models automatically retrained based on performance degradation? (See the sketch after this list.)
• Is infrastructure self-optimizing for cost and performance?
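Automatic retraining need not be exotic. One common pattern watches a rolling accuracy window and fires a training job when it degrades; here is a sketch in which launch_training_job is a stand-in for whatever your orchestrator exposes, and all thresholds are illustrative:

```python
from collections import deque

WINDOW, BASELINE, TOLERANCE = 500, 0.92, 0.03  # illustrative values
recent = deque(maxlen=WINDOW)

def launch_training_job(reason: str):
    # Stand-in for an orchestrator call (Airflow DAG trigger, Kubeflow run, ...).
    print(f"retraining triggered: {reason}")

def record_outcome(correct: bool):
    """Call once per scored prediction; fires a retrain when rolling
    accuracy falls past the tolerance band."""
    recent.append(correct)
    if len(recent) == WINDOW:
        rolling_acc = sum(recent) / WINDOW
        if rolling_acc < BASELINE - TOLERANCE:
            launch_training_job(f"rolling accuracy {rolling_acc:.3f}")
            recent.clear()  # avoid re-firing on the same window
```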
Dimension 4: Monitoring & Measurement
Level 1: Crawl
• Is AI usage largely untracked or unmonitored?
• Are there no formal logs of AI tool usage?
• Is success measured anecdotally, if at all?
Level 2: Walk
• Are basic usage metrics collected (number of queries, users, etc.)?
• Is there logging of AI system inputs and outputs? (A logging sketch follows this list.)
• Are cost and performance metrics tracked in dashboards?
• Are there alerts for system failures or anomalies?
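Logging inputs and outputs can be as simple as one structured record per model call, which dashboards then aggregate. A sketch using only the standard library; note that it logs sizes rather than raw text, since prompts may contain sensitive data:

```python
import json
import logging
import time
import uuid

log = logging.getLogger("ai_usage")
logging.basicConfig(level=logging.INFO)

def log_interaction(user_id: str, model: str, prompt: str, response: str,
                    latency_ms: float, cost_usd: float):
    # One structured record per call; dashboards aggregate these later.
    log.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "ts": time.time(),
        "user_id": user_id,
        "model": model,
        "prompt_chars": len(prompt),      # log sizes, not raw content,
        "response_chars": len(response),  # if prompts may hold sensitive data
        "latency_ms": latency_ms,
        "cost_usd": cost_usd,
    }))
```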
Level 3: Run
• Is model performance continuously monitored against KPIs?
• Are comprehensive audit trails maintained for compliance?
• Is there real-time monitoring of model drift and data quality? (A drift sketch follows this list.)
• Are user satisfaction and business impact metrics tracked?
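Drift monitoring often starts with the Population Stability Index, which compares the live distribution of a feature against the distribution the model was trained on. A small NumPy implementation (the 0.2 threshold is a common rule of thumb, not a law):

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference (training) sample
    and a live sample of one feature. PSI > 0.2 is a common rule of
    thumb for drift worth investigating."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))
```

Run per feature on a schedule, PSI values land on the same dashboard as accuracy and cost, turning "drift" from a vague worry into a tracked number.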
Level 4: Fly
• Do monitoring systems automatically trigger remediation actions?
• Is there predictive alerting based on trend analysis?
• Are agents self-monitoring with automatic performance optimization?
• Is there comprehensive observability across the entire AI agent ecosystem?
Dimension 5: Governance & Risk Management
Level 1: Crawl
• Are there no formal AI usage policies or guidelines?
• Is risk management for AI informal or non-existent?
• Are employees unaware of potential AI-related risks?
Level 2: Walk
• Are acceptable use policies documented and communicated?
• Is there a risk assessment framework for AI projects?
• Are there approval processes for new AI initiatives?
• Is regulatory compliance reviewed for AI applications?
Level 3: Run
• Is there a formal AI governance board with regular meetings?
• Are all AI projects subject to ethical review and risk assessment?
• Is IP protection systematically managed for AI-generated content?
• Are third-party AI vendors assessed for compliance and risk?
Level 4: Fly
• Are governance policies automatically enforced through technical controls? (A sketch follows this list.)
• Is there continuous compliance monitoring with automated reporting?
• Does the organization contribute to industry AI governance standards?
• Are AI agents subject to automated ethical guardrails and override mechanisms?
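"Automatically enforced through technical controls" can mean something as plain as a default-deny action gate that an agent framework consults before executing any tool call. A sketch; the action names and categories are illustrative:

```python
# Default-deny policy gate an agent framework could call before any action.
ALLOWED_ACTIONS = {"search_docs", "draft_email", "summarize"}
REQUIRES_HUMAN = {"send_email", "execute_trade"}

def authorize(action: str) -> str:
    if action in ALLOWED_ACTIONS:
        return "allow"
    if action in REQUIRES_HUMAN:
        return "escalate"   # the human override path from the checklist
    return "deny"           # default-deny anything unrecognized

assert authorize("send_email") == "escalate"
assert authorize("delete_database") == "deny"
```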
Dimension 6: Security
Level 1: Crawl
• Are employees potentially sharing sensitive data with public AI tools?
• Is there no awareness of AI-specific security threats?
• Are AI tools used without security review?
Level 2: Walk
• Are prompt injection and data leakage risks understood and documented?
• Is sensitive data prohibited from being shared with external AI tools? (A redaction sketch follows this list.)
• Are enterprise AI tools deployed with security configurations?
• Is there basic training on AI security risks?
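A first line of defense against data leakage is scrubbing prompts before they leave the firm. A toy redaction pass with illustrative regexes; a real deployment would lean on a DLP service or a maintained library rather than a handful of patterns:

```python
import re

# Illustrative patterns only; real deployments use a DLP service or
# a maintained library rather than hand-rolled regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely-sensitive substrings before the prompt leaves the firm."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Email jane@example.com, key sk-abcdefABCDEF1234567890"))
```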
Level 3: Run
• Are AI assistants deployed with comprehensive security controls?
• Is there continuous monitoring for AI-specific attack vectors?
• Are security assessments mandatory for all AI implementations?
• Is AI being used to enhance cybersecurity capabilities?
Level 4: Fly
• Do AI agents operate within zero-trust security architectures?
• Are AI-powered security tools autonomously detecting and responding to threats?
• Is there real-time threat intelligence specifically for AI systems?
• Are agents secured with a hardware root of trust and attestation?
Dimension 7: Talent & Organization
Level 1: Crawl
• Are individuals using AI tools without coordination?
• Is there no dedicated AI talent or roles?
• Is AI expertise limited to a few enthusiasts?
Level 2: Walk
• Has the organization hired or designated AI specialists?
• Are there cross-functional AI working groups that meet regularly?
• Is there a forum for sharing AI knowledge and use cases?
• Are job descriptions updated to include AI-related responsibilities?
Level 3: Run
• Is there a dedicated AI team or center of excellence?
• Are AI champions embedded across business units?
• Is there a clear career path for AI roles?
• Are retention strategies in place for AI talent?
Level 4: Fly
• Is AI expertise distributed throughout the organization?
• Are teams structured around AI agent development and orchestration?
• Is there a talent pipeline through academic partnerships and internships?
• Does organizational design facilitate rapid AI innovation and deployment?
Dimension 8: Training & Education
Level 1: Crawl
• Is AI training limited to individual self-learning?
• Are employees discovering AI tools on their own?
• Is there no formal AI education program?
Level 2: Walk
• Is there firm-wide AI literacy training for all employees?
• Are role-specific AI training programs developed?
• Is there documentation of AI best practices and guidelines?
• Are employees aware of available enterprise AI tools?
Level 3: Run
• Is there continuous learning with regular AI training updates?
• Are employees certified in AI tool usage for their roles?
• Is there a knowledge management system for AI practices?
• Are advanced courses available for power users and developers?
Level 4: Fly
• Are external certifications and advanced degrees supported?
• Is there a formal AI academy or university partnership?
• Do employees contribute to AI research and publications?
• Is training personalized based on role, skill level, and usage patterns?
Dimension 9: Use Cases & Business Value
Level 1: Crawl
• Are AI applications limited to personal productivity?
• Is AI use opportunistic without clear business objectives?
• Are success metrics undefined or not tracked?
Level 2: Walk
• Are use cases identified and prioritized by business function?
• Is there a documented methodology for selecting AI projects?
• Are pilot projects launched with defined success criteria?
• Is ROI tracked for AI initiatives?
Level 3: Run
• Are AI assistants deployed across multiple business functions?
• Is there a portfolio approach to managing AI projects?
• Are use cases regularly evaluated and optimized?
• Is learning from projects systematically captured and applied?
Level 4: Fly
• Are autonomous agents handling end-to-end business processes?
• Is there continuous identification of new automation opportunities?
• Are agents self-optimizing based on performance data?
• Is AI integrated into every major business function with measurable impact?
Dimension 10: Analytics & Model Development
Level 1: Crawl
• Is analytics limited to using pre-built AI tools?
• Is there no custom AI model development?
• Are capabilities limited to prompting existing models?
Level 2: Walk
• Are analytics teams experimenting with model fine-tuning?
• Is there basic prompt engineering and RAG implementation? (A RAG skeleton follows this list.)
• Are pre-built models integrated into business applications?
• Is there a development environment for AI experimentation?
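Basic RAG is less mysterious than the acronym suggests: retrieve the most relevant internal documents, then prepend them to the prompt. A toy skeleton where TF-IDF stands in for an embedding model and vector store, and the documents are made up:

```python
# Toy retrieval-augmented generation (RAG) skeleton: retrieve the most
# relevant internal documents, then prepend them to the model prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Expense reports are due by the 5th business day of each month.",
    "VPN access requires a hardware token issued by IT.",
    "Client data may not be uploaded to external AI tools.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)

def build_prompt(question: str, k: int = 2) -> str:
    sims = cosine_similarity(vectorizer.transform([question]), doc_vecs)[0]
    context = "\n".join(docs[i] for i in sims.argsort()[::-1][:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are expense reports due?"))
```

A production version swaps TF-IDF for embeddings and a vector database, and returns source citations alongside the answer.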
Level 3: Run
• Are custom models developed and deployed for specific use cases?
• Is there a complete MLOps pipeline from development to production?
• Are A/B testing and champion/challenger evaluations implemented? (A routing sketch follows this list.)
• Is model performance continuously evaluated and improved?
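Champion/challenger can start as a simple traffic split: route a small fixed share of requests to the challenger and record which arm served each one, so the two can be compared later. A sketch with illustrative parameters and stand-in models:

```python
import random

CHALLENGER_SHARE = 0.10  # illustrative: 10% of traffic to the challenger

def route(request, champion, challenger):
    """Serve each request from one arm and record which arm served it."""
    arm = "challenger" if random.random() < CHALLENGER_SHARE else "champion"
    prediction = (challenger if arm == "challenger" else champion)(request)
    # Persist (arm, request id, prediction, eventual outcome) for comparison.
    return arm, prediction

# Toy usage with stand-in models:
print(route({"amount": 100}, lambda r: "approve", lambda r: "review"))
```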
Level 4: Fly
• Are AI agents autonomously developing and deploying models?
• Is there automated feature engineering and model selection?
• Do systems automatically optimize across multiple objectives?
• Is there a self-improving AI development pipeline?
Dimension 11: Vendors & Partnerships
Level 1: Crawl
• Are vendor relationships limited to consumer SaaS subscriptions?
• Is there no formal engagement with AI vendors or partners?
• Is external AI expertise not being leveraged?
Level 2: Walk
• Are enterprise agreements established with major AI providers?
• Is the organization participating in industry AI forums?
• Are consultants or implementation partners engaged for specific projects?
• Is there evaluation of specialized AI vendors by use case?
Level 3: Run
• Are strategic partnerships established with key technology providers?
• Is the organization involved in industry consortia and working groups?
• Are academic partnerships in place for research collaboration?
• Are data partnerships established to enhance AI capabilities?
Level 4: Fly
• Is the organization co-developing AI solutions with partners?
• Are there joint ventures or investments in AI startups?
• Does the organization influence partner roadmaps and standards?
• Is there an ecosystem of partners supporting the AI agent infrastructure?
Dimension 12: Process Integration
Level 1: Crawl
• Are AI tools used informally without process integration?
• Are workflows unchanged despite AI availability?
• Is there no documentation of how AI fits into processes?
Level 2: Walk
• Are business workflows documented to identify AI opportunities?
• Is there a change management process for AI implementations?
• Are employees trained on new AI-enhanced processes?
• Are processes updated to incorporate enterprise AI tools?
Level 3: Run
• Are workflows optimized around AI assistant capabilities?
• Is there continuous process improvement driven by AI insights?
• Are standard operating procedures consistently followed?
• Is change management sophisticated with user adoption tracking?
Level 4: Fly
• Are processes fully automated with AI agents handling exceptions?
• Is there dynamic process optimization based on real-time data?
• Are agents autonomously improving workflows?
• Is the organization constantly innovating processes using AI capabilities?