Building the builders.

AI-Native Org Evolution
Assess and advance your organization's capability to build, experiment, and learn using AI.

A framework for AI-native product organizations. The matrix below summarizes each dimension across the five maturity levels:

Levels: 1 Initial (ad-hoc experimentation) · 2 Developing (early standardization) · 3 Defined (integrated workflows) · 4 Advanced (optimized usage) · 5 Leading (innovation & leadership)

Prototyping & Design Velocity
  1. Rare prototypes, weeks to create
  2. Some AI prototyping, hours possible
  3. Standard practice, integrated workflows
  4. High velocity, measurable impact
  5. Core to development, PM/PD submit PRs

📊 Data & Insights Automation
  1. Manual analysis, weeks to insights
  2. Early AI tools, some automation
  3. AI integrated, 3-7 day cycles
  4. High automation, 1-3 day cycles
  5. AI-native, reusable frameworks

🛠️ Tool Fluency & Infrastructure
  1. Ad-hoc usage, no standardization
  2. Some teams experimenting
  3. Standardized toolset, regular training
  4. High proficiency, custom integrations
  5. Innovation, tool development

🤝 Cross-Functional Builder Culture
  1. Teams in silos, limited collaboration
  2. Some cross-functional projects
  3. Standard workflows, shared practices
  4. Deep collaboration, optimized
  5. Seamless collaboration, innovation

🧪 Learning & Experimentation
  1. Limited experimentation, siloed
  2. Some experimentation, early sharing
  3. Regular events, systematic learning
  4. Continuous experimentation, optimized
  5. Learning-driven innovation

📈 Career Ladder
  1. No AI skills in career frameworks
  2. AI mentioned but not defined
  3. Clear AI competencies per level
  4. AI fluency tied to advancement
  5. AI leadership expected at senior levels

💼 Interviewing
  1. No AI-related interview questions
  2. Basic AI awareness assessed
  3. AI tool proficiency evaluated
  4. AI problem-solving emphasized
  5. AI innovation and leadership assessed

01. Overview

The AI Product Org Maturity Model helps product organizations assess and advance their capability to build, experiment, and learn using AI-powered tools. It measures how effectively teams leverage AI to accelerate product development, from initial concept to validated insights.

This maturity model is specifically designed for Product Management, Product Design, Data Science, and User Research teams. Each dimension includes role-specific indicators to help you understand where your organization stands and how to advance.

02. Core Dimensions

The AI Product Org Maturity Model assesses five core dimensions that determine an organization's ability to build AI-native products (the summary matrix above also tracks two supporting talent dimensions, Career Ladder and Interviewing):

1. Prototyping & Design Velocity

How quickly Product and Design teams can go from idea to interactive prototype

2. Data & Insights Automation

How effectively Data Science and Research teams use AI to accelerate analysis and discovery

3. Tool Fluency & Infrastructure

The depth and breadth of AI tool adoption across all product teams

4. Cross-Functional Builder Culture

How well PM, Design, Data, and Research collaborate using AI tools

5. Learning & Experimentation

The organization's ability to learn from experiments and continuously improve

03. Maturity Levels

All dimensions are assessed across five levels. Your organization may be at different levels across different dimensions—this is normal and helps prioritize improvement efforts.

Level  Name        Description
1      Initial     Ad-hoc, individual experimentation
2      Developing  Some teams experimenting, early standardization
3      Defined     Standardized practices, integrated workflows
4      Advanced    Optimized usage, continuous improvement
5      Leading     Innovation and thought leadership

04. Dimension 1: Prototyping & Design Velocity

Definition: The speed and quality with which Product and Design teams can transform ideas into interactive prototypes using AI-powered tools.

Level 1: Initial

State: Prototypes are rare, time-intensive, and primarily created by specialists.

Product Management:

  • Prototypes are requested from Design/Engineering, not created by PMs
  • No AI tools used for rapid concept exploration
  • Product concepts communicated via documents and static mockups

Product Design:

  • Prototypes take days or weeks to create
  • Limited use of AI design tools (Figma Make, etc.)
  • High-fidelity prototypes require significant design time
  • Prototypes are polished before sharing

Indicators:

  • <10% of product concepts have prototypes
  • Average prototype creation time: 1-2 weeks
  • Prototypes created only by Design team
  • No AI-assisted prototyping tools in use

Level 2: Developing

State: Some teams experimenting with AI prototyping tools, early wins visible.

Product Management:

  • PMs creating low-fidelity prototypes for concept validation
  • Using AI tools (Cursor, Figma Make) for quick explorations
  • Prototypes used in early discovery conversations

Product Design:

  • Designers experimenting with Figma Make and AI code assistants
  • Some prototypes created in hours instead of days
  • Mix of low and high-fidelity prototypes based on need
  • AI tools used for initial explorations, then refined manually

Indicators:

  • 25-40% of product concepts have prototypes
  • Average prototype creation time: 2-5 days
  • PMs creating 10-20% of prototypes
  • Basic training on AI prototyping tools available

Level 3: Defined

State: Prototyping is standard practice, AI tools integrated into workflows.

Product Management:

  • PMs regularly create low-to-medium fidelity prototypes
  • Prototypes included in standard project kickoff templates
  • AI tools used for opportunity exploration and solution ideation
  • Prototypes shared early in discovery phase

Product Design:

  • Designers proficient with AI prototyping tools (Figma Make, Cursor)
  • Prototype fidelity matched to phase (discovery = low, design = medium, build = high)
  • AI tools used throughout design process, not just exploration
  • Prototypes updated frequently with daily/weekly iterations

Indicators:

  • 60-75% of product concepts have prototypes
  • Average prototype creation time: 4-8 hours for low-fidelity, 1-2 days for medium
  • PMs creating 30-40% of prototypes
  • Quarterly training on prototyping tools and best practices
  • Prototypes integrated into standard project workflows

Level 4: Advanced

State: High-velocity prototyping, optimized workflows, measurable impact.

Product Management:

  • PMs create prototypes for most concepts before formal design work
  • Prototypes used for stakeholder alignment and user validation
  • AI tools used to explore multiple solution directions quickly
  • Prototype-to-decision time measured and optimized

Product Design:

  • Designers create prototypes in hours, not days
  • AI tools used for rapid iteration and variant exploration
  • Custom workflows and automations for common prototype patterns
  • Prototypes serve as primary communication artifact
  • Design system components integrated into AI-generated prototypes

Indicators:

  • 85%+ of product concepts have prototypes
  • Average prototype creation time: 1-4 hours for low-fidelity, 4-8 hours for medium
  • PMs creating 50%+ of initial prototypes
  • Prototype usage tracked and measured
  • Custom templates and workflows for common patterns
  • Prototype-to-value metrics established

Level 5: Leading

State: Prototyping is core to product development, continuous innovation.

Product Management:

  • PMs are prototyping experts, teaching others
  • Prototypes used for all major decisions
  • AI tools used to explore edge cases and complex scenarios
  • Contributing to tool development and best practices
  • PMs submitting pull requests and contributing code directly
  • PMs generating Python notebooks for data analysis and prototyping

Product Design:

  • Designers pioneering new prototyping techniques
  • AI tools used to create production-quality prototypes
  • Sharing prototyping best practices externally
  • Prototypes often become production code or inform architecture
  • Product Designers submitting pull requests and contributing code
  • Designers generating Python notebooks for analysis and automation

Indicators:

  • 95%+ of concepts prototyped before formal design
  • Average prototype creation time: <1 hour for low-fidelity
  • PMs creating 60%+ of initial prototypes
  • PMs and Product Designers submitting pull requests regularly
  • PMs and Product Designers generating Python notebooks for analysis
  • Thought leadership on prototyping (blog posts, talks, tool contributions)
  • Prototypes directly inform or become production features
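To make the "PMs and Product Designers generating Python notebooks" indicator concrete, a notebook cell at this level might look like the minimal sketch below. The feature names and numbers are invented stand-ins; in practice the data would come from the team's analytics warehouse.

```python
import pandas as pd

# Stand-in data: invented for illustration. A real notebook would load this
# from the product analytics warehouse or an exported CSV.
events = pd.DataFrame({
    "feature": ["search", "search", "export", "export", "export"],
    "session_minutes": [3.0, 5.0, 12.0, 8.0, 10.0],
})

# Typical notebook-style question: which features drive the longest sessions?
usage = (
    events.groupby("feature")["session_minutes"]
    .agg(["count", "mean"])
    .sort_values("mean", ascending=False)
)
print(usage)
```

The point is not the analysis itself but that a PM or Designer can produce and iterate on a cell like this directly, with an AI code assistant filling in the pandas idioms.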

05. Dimension 2: Data & Insights Automation

Definition: How effectively Data Science and User Research teams use AI to accelerate analysis, automate synthesis, and generate actionable insights.

Level 1: Initial

State: Manual analysis, limited AI assistance, insights take weeks.

Data Science:

  • All analysis done manually with traditional tools
  • No AI-assisted data exploration or pattern recognition
  • Reports and dashboards created manually
  • Limited automation in data pipelines

User Research:

  • Research synthesis done manually
  • Interview transcripts analyzed by hand
  • No AI tools for pattern recognition in qualitative data
  • Insights compiled in documents and presentations

Indicators:

  • Average time from data collection to insights: 2-4 weeks
  • <10% of analysis uses AI tools
  • No automated data synthesis
  • Research reports created manually

Level 2: Developing

State: Early experimentation with AI analysis tools, some automation.

Data Science:

  • Data scientists experimenting with AI code assistants (Cursor) for analysis
  • Some automated data exploration using AI
  • AI used to generate initial hypotheses from data
  • Basic automation in data cleaning and transformation

User Research:

  • Researchers using AI to transcribe and summarize interviews
  • AI tools used for initial theme identification
  • Some automated synthesis of research findings
  • AI-assisted report generation

Indicators:

  • Average time from data to insights: 1-2 weeks
  • 25-40% of analysis uses AI tools
  • Basic AI transcription and summarization in use
  • Initial automation of repetitive analysis tasks

Level 3: Defined

State: AI tools integrated into standard analysis workflows.

Data Science:

  • AI code assistants used for most data analysis work
  • Automated pattern recognition in datasets
  • AI used to generate analysis scripts and visualizations
  • Standard workflows include AI-assisted exploration
  • AI helps identify anomalies and trends

User Research:

  • AI transcription and summarization standard for all interviews
  • AI tools used for theme extraction and pattern recognition
  • Automated synthesis of research findings across studies
  • AI-assisted report generation with human review
  • Research insights database maintained with AI assistance

Indicators:

  • Average time from data to insights: 3-7 days
  • 60-75% of analysis uses AI tools
  • All interviews transcribed and summarized with AI
  • Standard research workflows include AI tools
  • Quarterly training on AI analysis tools

Level 4: Advanced

State: High automation, optimized workflows, measurable impact on speed.

Data Science:

  • AI tools used for advanced analysis and modeling
  • Custom AI workflows for common analysis patterns
  • Automated insight generation and report creation
  • AI used to identify opportunities and anomalies proactively
  • Analysis velocity measured and optimized

User Research:

  • AI used for real-time research synthesis
  • AI tools identify patterns across multiple research studies
  • Automated research insight extraction and sharing
  • AI-assisted user journey mapping and persona development
  • Research insights automatically integrated into product decisions

Indicators:

  • Average time from data to insights: 1-3 days
  • 85%+ of analysis uses AI tools
  • Custom AI workflows for research synthesis
  • Research insights automatically surfaced to product teams
  • Analysis velocity metrics tracked and improved

Level 5: Leading

State: AI-native analysis, continuous innovation, thought leadership.

Data Science:

  • Data scientists pioneering new AI analysis techniques
  • Custom AI models and tools developed internally
  • AI used for predictive insights and proactive recommendations
  • Contributing to AI analysis tool development
  • Sharing best practices externally
  • Building reusable analysis frameworks and libraries used across the org
  • Creating internal tools and platforms that democratize data analysis
  • Contributing to open source AI analysis tools

User Research:

  • Researchers using AI for advanced qualitative analysis
  • AI-powered research synthesis across multiple data sources
  • Automated insight generation and recommendation systems
  • Thought leadership on AI-assisted research
  • Research insights drive product strategy proactively
  • Building reusable research frameworks and automation pipelines
  • Creating internal tools that enable self-service research analysis
  • Contributing to open source research tools and methodologies

Indicators:

  • Average time from data to insights: <1 day for standard analysis
  • 95%+ of analysis uses AI tools
  • Custom AI tools developed for research needs
  • Reusable frameworks and libraries created by Data Science/Research teams
  • Internal tools built that enable other teams to do analysis
  • Thought leadership (blog posts, talks, tool contributions)
  • Research insights automatically inform product roadmap
  • Open source contributions to AI analysis or research tools
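As one possible shape for a "reusable analysis framework", the sketch below shows a single well-documented entry point that any team can call on its own metric series, instead of each team rewriting the same summary logic. The function name and fields are illustrative, not an existing internal library.

```python
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class MetricSummary:
    """The standard summary every team reports for a metric series."""
    n: int
    mean: float
    median: float

def summarize(values: list[float]) -> MetricSummary:
    """Compute the shared summary so teams don't reimplement it ad hoc."""
    return MetricSummary(n=len(values), mean=mean(values), median=median(values))

# Usage: a research team summarizing task-completion times (invented data).
print(summarize([12.0, 9.5, 14.0, 11.0]))
```

Packaging even small helpers like this behind a stable, documented interface is what lets other teams do self-service analysis without depending on the Data Science team for every question.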

06. Dimension 3: Tool Fluency & Infrastructure

Definition: The depth and breadth of AI tool adoption, proficiency, and infrastructure support across Product, Design, Data, and Research teams.

Level 1: Initial

State: Ad-hoc tool usage, no standardization, limited access.

Indicators:

  • <10% of product org uses AI tools regularly
  • No approved tool list
  • No tool budget or procurement process
  • No training programs
  • Tools accessed individually (personal accounts)

Level 2: Developing

State: Some teams experimenting, early standardization beginning.

Indicators:

  • 25-40% adoption in key teams
  • Initial tool evaluation framework
  • Basic training sessions offered (quarterly or ad-hoc)
  • Some tools procured organizationally
  • Tool champions emerging in each discipline

Level 3: Defined

State: Standardized toolset, integrated workflows, regular training.

Product-Specific Tools:

  • AI code editors (Cursor) for prototyping
  • AI design tools (Figma Make) for rapid design
  • AI analysis platforms for data insights

Indicators:

  • 60-75% adoption across product org
  • Standardized tool stack per discipline
  • Quarterly training cadence
  • Tools in standard project kickoff templates
  • Tool usage metrics tracked
  • New team members onboarded to tools

Level 4: Advanced

State: High proficiency, custom integrations, optimized usage.

Indicators:

  • 85%+ adoption with high proficiency
  • Custom workflows and automations
  • Tool ROI measured and reported
  • Internal tool communities and knowledge sharing
  • Advanced training and certification programs
  • Tools integrated with core product infrastructure

Level 5: Leading

State: Innovation, thought leadership, tool development.

Indicators:

  • 95%+ adoption with expert-level proficiency
  • Tool partnerships and co-development
  • Industry speaking/thought leadership
  • Internal tool development projects
  • Open source contributions or tool improvements
  • Tool usage drives competitive advantage

07. Dimension 4: Cross-Functional Builder Culture

Definition: How effectively Product Management, Design, Data Science, and User Research collaborate using AI tools to build, experiment, and learn together.

Level 1: Initial

State: Teams work in silos, limited collaboration, no shared AI practices.

Indicators:

  • Teams use different tools and processes
  • Collaboration happens through formal handoffs
  • No shared AI tool knowledge or practices
  • Limited cross-functional prototyping or analysis
  • Silos between PM, Design, Data, Research

Level 2: Developing

State: Some cross-functional collaboration, early shared practices.

Indicators:

  • 25-40% of projects involve cross-functional AI tool usage
  • Some shared tool training sessions
  • Informal communities forming around AI tools
  • PM-Design or Data-Research collaboration increasing
  • Early examples of cross-functional prototypes

Level 3: Defined

State: Standard cross-functional workflows, shared practices, regular collaboration.

Product-Design Collaboration:

  • PMs and Designers co-creating prototypes
  • Shared understanding of prototyping tools and practices
  • Joint exploration of product concepts

Data-Research Collaboration:

  • Data Scientists and Researchers using AI tools together
  • Shared analysis workflows and insights
  • Collaborative research synthesis

Cross-Discipline Collaboration:

  • PM-Data collaboration on insights and analysis
  • Design-Research collaboration on user understanding
  • All disciplines contributing to product decisions using AI tools

Indicators:

  • 60-75% of projects involve cross-functional AI collaboration
  • Standard cross-functional workflows established
  • Quarterly cross-functional training sessions
  • Shared tool documentation and best practices
  • Cross-functional AI tool communities active

Level 4: Advanced

State: Deep collaboration, optimized workflows, measurable impact.

Indicators:

  • 85%+ of projects involve cross-functional AI collaboration
  • Custom workflows for cross-functional patterns
  • Collaboration metrics tracked and improved
  • Cross-functional AI tool expertise across all teams
  • Collaboration drives measurable product outcomes

Level 5: Leading

State: Seamless collaboration, innovation, thought leadership.

Indicators:

  • 95%+ of projects involve cross-functional AI collaboration
  • Thought leadership on cross-functional AI collaboration
  • Custom tools developed for cross-functional needs
  • Collaboration practices shared externally
  • Cross-functional AI usage drives innovation

08. Dimension 5: Learning & Experimentation

Definition: The organization's ability to learn from experiments, share knowledge, and continuously improve AI-native practices.

Level 1: Initial

State: Limited experimentation, knowledge siloed, no systematic learning.

Indicators:

  • <10% of teams regularly experiment with AI tools
  • No systematic capture of learnings
  • Limited knowledge sharing about AI tools
  • No experimentation framework or process
  • Learnings not applied to future work

Level 2: Developing

State: Some experimentation, early knowledge sharing, basic learning processes.

Indicators:

  • 25-40% of teams regularly experiment
  • Informal knowledge sharing (Slack, ad-hoc sessions)
  • Basic experimentation frameworks (Builder Day, hackathons)
  • Some documentation of learnings
  • Early communities and champions emerging

Level 3: Defined

State: Regular experimentation, systematic learning, knowledge sharing processes.

Indicators:

  • 60-75% of teams regularly experiment
  • Quarterly experimentation events (e.g., Builder Day)
  • Systematic knowledge sharing (documentation, forums, sessions)
  • Best practices documented and accessible
  • Regular learning reviews and retrospectives
  • Active communities around AI tools

Level 4: Advanced

State: Continuous experimentation, optimized learning, measurable improvement.

Indicators:

  • 85%+ of teams continuously experiment
  • Experimentation integrated into standard workflows
  • Learning metrics tracked and improved
  • Knowledge sharing optimized and automated where possible
  • Learnings drive measurable improvements in practices
  • Innovation cycles accelerated

Level 5: Leading

State: Learning-driven innovation, thought leadership, external sharing.

Indicators:

  • 95%+ of teams continuously experiment and innovate
  • Thought leadership on AI-native learning practices
  • External sharing of learnings and best practices
  • Learning culture recognized externally
  • Experimentation drives measurable competitive advantage
  • Innovation cycles fastest in industry

09. Assessment Tool

Use this assessment to identify your organization's current maturity level across each dimension.

How to Use This Assessment

  1. Assess Each Dimension Independently - Your organization may be at different levels across dimensions
  2. Involve Multiple Perspectives - Get input from PM, Design, Data, and Research teams
  3. Be Honest - The goal is to identify where you are, not where you want to be
  4. Focus on Evidence - Use the indicators to guide your assessment, not just opinions

Scoring Your Assessment

  1. For each dimension, identify the highest level where you answer "Yes" to all questions
  2. Your maturity level for that dimension is the highest level you fully meet
  3. If you meet some criteria for a higher level but not all, note it as "approaching" that level
  4. Create a maturity profile showing your level across all five dimensions
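The scoring steps above can be sketched as a small script. The dimension names and answers are illustrative; `levels_met[n]` is True only if all questions for level n were answered "Yes".

```python
def score_dimension(levels_met: dict[int, bool]) -> int:
    """Return the highest level where every level up to and including it
    is fully met (step 2 above). Higher levels partially met would be
    noted separately as 'approaching'."""
    level = 0
    for lvl in range(1, 6):
        if levels_met.get(lvl, False):
            level = lvl
        else:
            break
    return level

# Illustrative answers for two of the five dimensions.
answers = {
    "Prototyping & Design Velocity": {1: True, 2: True, 3: True, 4: False},
    "Data & Insights Automation": {1: True, 2: True, 3: False},
}

# Step 4: the maturity profile across dimensions.
profile = {dim: score_dimension(met) for dim, met in answers.items()}
print(profile)
# {'Prototyping & Design Velocity': 3, 'Data & Insights Automation': 2}
```

The profile, not any single number, is the output of the assessment: it shows where the organization is uneven and therefore where to focus.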

Interpreting Your Results

Uneven Maturity is Normal: Most organizations will be at different levels across dimensions. This is expected and helps prioritize where to focus improvement efforts.

Focus Areas:

  • If Level 1-2 across most dimensions: Focus on foundational tool adoption and training
  • If Level 2-3 across most dimensions: Focus on standardization and workflow integration
  • If Level 3-4 across most dimensions: Focus on optimization and advanced use cases
  • If Level 4-5 across most dimensions: Focus on innovation and thought leadership
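The focus-area guidance above amounts to a simple mapping from the typical level across dimensions to a theme. One possible encoding, with illustrative thresholds on the average level:

```python
def focus_area(levels: list[int]) -> str:
    """Map per-dimension maturity levels to the focus theme suggested
    above. The averaging and cutoffs are one reasonable interpretation
    of 'across most dimensions', not a prescribed formula."""
    avg = sum(levels) / len(levels)
    if avg < 2.5:
        return "foundational tool adoption and training"
    if avg < 3.5:
        return "standardization and workflow integration"
    if avg < 4.5:
        return "optimization and advanced use cases"
    return "innovation and thought leadership"

# Mostly Level 2-3 across the five dimensions:
print(focus_area([3, 2, 3, 3, 3]))
# → standardization and workflow integration
```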

Quick Wins:

  • Identify dimensions where you're close to the next level (e.g., Level 2 approaching Level 3)
  • These are good candidates for focused improvement efforts
  • Small investments can yield significant progress

Self-Assessment Questions

For each dimension, review the indicators in the dimension sections above and assess which level best describes your organization. Use the detailed level descriptions to guide your assessment.

Note: A full checklist of assessment questions for each dimension and level is available in the detailed maturity model documentation. Use the dimension sections above as your primary assessment guide.

10. Roadmap: Advancing Your Maturity

Use this roadmap to identify specific actions you can take to advance from your current level to the next level in each dimension.

From Level 1 to Level 2

Key Focus: Start experimenting, build early wins

  • Prototyping & Design Velocity: Introduce AI prototyping tools (Figma Make, Cursor) to one product team. Run pilot training session for PMs and Designers. Set goal: Create 3-5 prototypes using AI tools in next quarter.
  • Data & Insights Automation: Pilot AI transcription tools for user research interviews. Experiment with AI code assistants (Cursor) for data analysis. Set goal: Use AI tools for 25% of analysis work in next quarter.
  • Tool Fluency & Infrastructure: Identify tool champions in each discipline. Establish basic tool evaluation process. Procure organizational licenses for 2-3 key tools.
  • Cross-Functional Builder Culture: Run one cross-functional workshop on AI tools. Create shared Slack channel for AI tool discussions. Set goal: One cross-functional project using AI tools.
  • Learning & Experimentation: Plan first Builder Day or hackathon event. Create basic documentation template for learnings. Set goal: Capture learnings from 3-5 experiments.

From Level 2 to Level 3

Key Focus: Standardize practices, integrate workflows

  • Prototyping & Design Velocity: Establish standard prototyping tool stack. Integrate prototypes into project kickoff templates. Set quarterly training cadence. Set goal: 60% of concepts have prototypes, PMs create 30%.
  • Data & Insights Automation: Standardize AI tools for all interviews and analysis. Create standard workflows for AI-assisted analysis. Set goal: 60% of analysis uses AI tools, 3-7 day insight cycle.
  • Tool Fluency & Infrastructure: Finalize approved tool stack per discipline. Establish quarterly training program. Create tool documentation and best practices. Set goal: 60% adoption across product org.
  • Cross-Functional Builder Culture: Establish standard cross-functional workflows. Create shared tool documentation. Set quarterly cross-functional training. Set goal: 60% of projects involve cross-functional AI collaboration.
  • Learning & Experimentation: Establish quarterly Builder Day events. Create systematic knowledge sharing process. Document best practices. Set goal: 60% of teams regularly experiment.

From Level 3 to Level 4

Key Focus: Optimize usage, measure impact

  • Prototyping & Design Velocity: Create custom templates and workflows. Measure prototype-to-decision time. Optimize based on metrics. Set goal: 85% of concepts prototyped, 1-4 hour creation time.
  • Data & Insights Automation: Create custom AI workflows for common patterns. Automate insight surfacing to product teams. Measure analysis velocity. Set goal: 85% of analysis uses AI, 1-3 day insight cycle.
  • Tool Fluency & Infrastructure: Measure and optimize tool ROI. Create custom integrations and workflows. Build internal tool communities. Set goal: 85% adoption with high proficiency.
  • Cross-Functional Builder Culture: Optimize collaboration workflows. Measure collaboration impact on outcomes. Create custom tools for cross-functional work. Set goal: 85% of projects involve cross-functional collaboration.
  • Learning & Experimentation: Integrate experimentation into standard workflows. Measure learning metrics. Optimize knowledge sharing. Set goal: 85% of teams continuously experiment.

From Level 4 to Level 5

Key Focus: Innovate, lead, share externally

  • Prototyping & Design Velocity: Enable PMs and Product Designers to submit pull requests. Train PMs and Product Designers to generate Python notebooks. Create frameworks and templates for code contributions from non-engineering roles. Set goal: PMs and Product Designers contributing code regularly.
  • Data & Insights Automation: Build reusable analysis frameworks and libraries. Create internal tools that democratize data analysis. Enable self-service research analysis tools. Contribute to open source AI analysis or research tools. Set goal: Internal tools used by other teams, open source contributions.
  • All Dimensions: Pioneer new use cases and techniques. Contribute to tool development. Share best practices externally (blog posts, talks). Build thought leadership. Set goal: Industry recognition, competitive advantage.