Marko Klemetti,
CTO of Eficode
Guide
How software organizations are moving from AI pilots to AI transformation
What five waves of AI maturity survey data reveal about software teams and governance, and what it means for technology leaders in regulated and semi-regulated industries.

Executive summary: AI adoption is growing, but transformation is still early
Artificial intelligence is no longer just an experiment for software organizations; it is becoming a key factor in how they compete. Yet the gap between organizations that are merely experimenting and those genuinely transforming their business with AI is widening, not closing.
This whitepaper draws on five consecutive waves of our quarterly customer survey, covering more than 270 organizations from March 2025 through March 2026. The data reveals a consistent and sobering pattern across every measured use case: requirements management, new code creation, legacy refactoring, delivery, portfolio management, and security and compliance.
Adoption is progressing, but most organizations are still struggling to reach the stages where AI transforms how work gets done.
The share of organizations reporting they are 'not using AI currently' is declining in every category. Experimentation is widespread. But the higher-maturity stages — AI actively assisting people at scale, agents operating within workflows, and autonomous AI — barely register. For leaders in regulated and semi-regulated industries, the picture is even more pronounced: the use cases most critical to compliance and risk management show the slowest adoption curves.
This document explains what the data shows, why most organizations stall, and how technology and business leaders can use a structured framework to move from optimizing individual tasks to transforming the entire software business.
1. How we measure AI maturity in software organizations
Since early 2025, we have run a quarterly survey tracking how our customer base — large and mid-size organizations across industries, including finance, industrial, healthcare, public sector, and technology — is adopting AI across the software development lifecycle.
Survey design
Each quarterly wave captures responses from approximately 360 to 380 participants per use case. Respondents self-assess their organization's current adoption level across six use case areas:
- New code creation
- Requirements management
- Legacy refactoring
- Delivery
- Portfolio management
- Security and compliance
For each use case, respondents select the maturity level that best describes their organization today:
| Maturity level | Definition |
| Not using currently | AI is not in use for this activity. |
| Experimenting / Investigating | Pilots or exploratory use, no production adoption. |
| AI assisting people | AI is in productive use, helping individuals do their work faster or better. |
| Agents assisting in workflows | AI agents are embedded in and partially automating workflows. |
| Autonomous AI in use | AI operates significant parts of this activity with minimal human intervention. |
This five-point scale maps directly to our AI Adoption Framework, which is described in full in Section 4. The survey's longitudinal design — five waves spanning 12 months — makes it possible to track the velocity of adoption, not just its current state.
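For readers who work with the raw results, the five-point scale can be treated as a simple ordered structure. The sketch below is illustrative only — the sample responses are invented — and shows how one wave's answers for a single use case roll up into the per-level distribution that each wave reports.

```python
# Illustrative only: tallying self-assessed maturity responses for one use case
# into the five-level distribution reported per wave. Sample data is invented.
from collections import Counter

LEVELS = [
    "Not using currently",
    "Experimenting / Investigating",
    "AI assisting people",
    "Agents assisting in workflows",
    "Autonomous AI in use",
]

def distribution(responses: list[str]) -> dict[str, float]:
    """Share of respondents at each maturity level, in scale order."""
    counts = Counter(responses)
    total = len(responses)
    return {level: counts[level] / total for level in LEVELS}

# Ten invented responses for a single use case in a single wave.
sample = (
    ["Not using currently"] * 3
    + ["Experimenting / Investigating"] * 5
    + ["AI assisting people"] * 2
)
print(distribution(sample))
```

Comparing these distributions wave over wave is what makes the velocity of adoption visible, not just a point-in-time snapshot.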
2. What the AI adoption data shows across six software use cases
The chart below presents all six use case areas across five quarterly waves, from W1 March 2025 through W5 March 2026. Each group of bars shows the distribution across the five maturity levels at a point in time.
Figure 1: Eficode Customer AI Maturity Survey — W1 Mar 2025 to W5 Mar 2026 (~360 respondents per use case per wave)
Adoption is progressing, transformation is not
Across all six use cases, the red bars — organizations not using AI at all — are declining over time. That is the good news. Experimentation (orange) is growing, and the proportion reporting that AI is actively assisting people (yellow) is gradually increasing, particularly in code creation.
But the categories that represent real transformation, such as agents in workflows and autonomous AI, remain very small across most use cases and waves. The data strongly suggests that the vast majority of organizations are stuck in the transition from exploration to integration, unable to move into the phases that deliver disproportionate business value.
AI adoption by software development use case
New code creation leads AI adoption
Code creation shows the most advanced adoption curve of all six categories. By W5 March 2026, the proportion of respondents reporting AI is actively assisting people here is the highest of any use case, and experimentation is also the highest. This makes intuitive sense: coding assistants were the first widely available AI tools for software teams, and the productivity gains are immediate and measurable. However, even here, agents in workflows and autonomous AI remain marginal — the gains are still largely at the individual productivity level.
Requirements management has high AI potential, but adoption remains early
Requirements management started with one of the highest 'not using currently' rates in Wave 1, above 75%. By Wave 5, this has declined significantly, with experimentation and AI assistance growing. This is a use case where AI can dramatically accelerate the translation of business intent into machine-readable specifications. Yet the data shows organizations have barely scratched the surface of what is possible. For regulated industries in particular, where requirements traceability is a compliance obligation, this represents both a risk and an opportunity.
Legacy refactoring is a high-value AI use case with slow adoption
Legacy refactoring shows a pattern similar to requirements management: still over 40% 'not using' by Wave 5, despite the fact that AI is arguably most powerful in this domain. Refactoring large, complex, underdocumented codebases is exactly the kind of work that exhausts human developers — and where AI agents can provide asymmetric value. The slow adoption here likely reflects both risk aversion in organizations with critical legacy systems and the absence of the governance structures needed to trust AI with high-stakes code.
AI in software delivery is progressing steadily
Delivery shows a steady, if modest, decline in non-adoption across the five waves, from approximately 70% to around 40%. This is a broad category covering CI/CD, release management, and operational handoff, and the gradual curve suggests that adoption is happening incrementally as AI tooling matures across the pipeline. Delivery is also where the 'bottleneck effect' described in Section 3 is most visible — faster code generation upstream increases pressure on delivery processes that have not yet been similarly accelerated.
Portfolio management remains an AI adoption blind spot
Portfolio management consistently shows some of the highest 'not using' rates across all waves and remains above 45% by Wave 5. This is striking because portfolio-level decision-making — prioritization, investment allocation, roadmap management — is where AI assistance could most directly affect business outcomes rather than engineering efficiency. The data suggests that AI adoption in most organizations is being driven bottom-up from developer tooling, and has not yet reached the strategic layers where the largest value lies.
AI adoption in security and compliance remains slow
Security and compliance is the use case most directly relevant to regulated industries — and it shows among the slowest adoption curves. By Wave 5, roughly 40% of respondents are still not using AI at all in this area. Given that security and compliance are mandatory, non-negotiable activities in any regulated sector, this is both a risk signal and an opportunity. Organizations that successfully deploy AI agents for security scanning, policy enforcement, and compliance verification gain not just efficiency but a structural advantage in managing the cost of regulation.
In security and compliance, approximately 40% of organizations are still not using AI at all, in a function where automation could be most transformative.
3. Why AI adoption stalls after early experimentation
The survey data describes what is happening. Understanding why requires looking at the structural barriers that prevent organizations from moving beyond early-stage experimentation. Based on our work with over 1,800 customers across ten countries, four blockers account for the vast majority of stalled AI transformations.
| Blocker | What it looks like |
| No measurable ROI | Hundreds of AI pilots generate no measurable business impact. Value stays isolated inside individual teams and never scales across the organization. Faster code output often creates larger queues at every downstream stage — requirements, testing, security, and ops — so lead time does not improve. |
| Regulations are a blocker | Security, legal, and compliance concerns slow or halt AI initiatives, particularly in regulated sectors. Uncertainty about data residency, sovereignty obligations, and emerging regulations blocks real adoption rather than just slowing it down. |
| Technologies evolve rapidly | Rapidly evolving tools and platforms create confusion and fragmented tooling decisions. Organizations struggle to commit to a scalable architecture when the landscape shifts every quarter. |
| New skills and roles required | There are no clear roles, responsibilities, or structures for AI across the lifecycle. AI remains an individual capability rather than an organizational one, limiting how far any transformation can travel. |
The bottleneck effect in detail
The most common misconception about AI ROI is that deploying a coding assistant makes the organization more productive. In isolation, it does — developers write code faster. But in most organizations, development is not the bottleneck. Requirements gathering takes weeks. Testing is partly manual and takes days. Compliance review is checklist-driven. Deployment requires human sign-off at multiple stages.
When you accelerate one stage of a pipeline while leaving the rest unchanged, you do not speed up the pipeline. You create a larger queue at the next constraint. Faster code output simply means more code waiting longer for testing, review, and deployment. Lead time does not improve. Business value does not arrive faster. And the ROI of the AI investment appears to be zero, even though the underlying technology is working exactly as advertised.
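The arithmetic behind this can be sketched in a few lines. The stage durations below are invented for illustration; the point is only that steady-state throughput is governed by the slowest stage, so a 5x coding speedup leaves it untouched.

```python
# Hypothetical four-stage pipeline with invented durations (days per work item).
# Demonstrates why accelerating coding alone does not accelerate the pipeline.

def lead_time(stages: dict[str, float]) -> float:
    """Elapsed days for one work item to pass through every stage in sequence."""
    return sum(stages.values())

def throughput(stages: dict[str, float]) -> float:
    """Items per day the pipeline sustains in steady state: set by the slowest stage."""
    return 1 / max(stages.values())

# Testing, not coding, is the constraint in this invented example.
baseline = {"requirements": 4, "coding": 5, "testing": 8, "deployment": 3}
with_ai = dict(baseline, coding=1)  # a coding assistant makes coding 5x faster

print(throughput(with_ai) == throughput(baseline))  # True: testing still gates the flow
print(lead_time(baseline), lead_time(with_ai))      # 20 vs 16, nowhere near a 5x gain
```

In this toy model the organization saves four days of coding per item, but delivers no more items per week than before — which is exactly how a working AI tool produces zero measured ROI.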
THE SYSTEMS LESSON FOR LEADERS
Real productivity gains from AI require end-to-end thinking. Organizations that accelerate one stage without addressing the surrounding pipeline will see no improvement in what matters to the business: time-to-value, release frequency, and defect rates. AI transformation is a systems problem, not a tooling problem.
Why regulated industries face an additional challenge
For organizations operating under formal regulatory frameworks — whether financial services, healthcare, energy, defense, public administration, or critical infrastructure — the four blockers above are compounded by a fifth: the requirement to maintain an auditable, defensible record of every consequential decision made in the software lifecycle.
Traditional AI tools built for commercial SaaS deployment were not designed with regulatory audit trails in mind. Deploying them without an appropriate governance infrastructure creates legal and compliance exposure. But building that governance infrastructure requires skills, architecture, and organizational maturity that most organizations are still developing.
The result is that regulated industries, which often have the most to gain from AI-driven efficiency — because compliance overhead is disproportionately large — are also the most constrained in how quickly they can safely adopt.
The scale of the strategy gap makes this concrete. Survey data shows that by Wave 5, only 10% of organizations actually have a formal AI strategy in place — while 59% are in the process of creating one. A further 31% have no strategy at all. Organizations cannot govern what they have not defined.
Only 1 in 10 organizations has a formal AI strategy. Most are building the plane while flying it.
4. A five-phase AI maturity framework for software organizations
Our AI Adoption Framework describes five phases of AI maturity in software organizations. The phases are not arbitrary — they reflect the structural changes in people, processes, and technology required to unlock each successive level of efficiency gain.
| Phase | Name | Efficiency | What it means |
| Phase 1 | AI-enhanced productivity | 1.2× | Individual developers use AI coding assistants. Local productivity gains. Most organizations are here today. |
| Phase 2 | AI-powered agents | 2× | AI agents handle discrete tasks end-to-end. Development delivery accelerates. |
| Phase 3 | Multi-agent workflows | 3× | Agents work in concert across the pipeline. Software operations become substantially more efficient. |
| Phase 4 | AI-orchestrated lifecycle | 5× | AI runs the full software lifecycle autonomously. Tooling capabilities to achieve this exist today. |
| Phase 5 | AI-native software business | 20× | The entire software business is AI-driven. Humans focus on strategy, creativity, and oversight. |
Most organizations are still in the early stages of AI maturity
The survey data maps closely onto this framework. The majority of responding organizations are operating in Phase 1, with some moving into Phase 2. The gains here are real — individual developers work faster, some tasks are delegated to AI agents — but they are optimization gains, not transformation gains.
The critical threshold is the move from Phase 3 to Phase 4. This is where the nature of the change shifts from 'we have made our existing processes more efficient' to 'we have changed the fundamental operating model of our software business.' The tooling capabilities required to reach Phase 4 and Phase 5 now exist. What is missing for most organizations is not technology but the people, process, and governance foundations to use it safely and at scale.
The tools to reach 5x and 20x efficiency exist today. The organizations that get there first will not be the ones that moved fastest — they will be the ones that built the right foundations.
The polarization risk
A critical strategic dynamic is beginning to emerge in the market. As AI tooling shifts from per-seat subscription models to consumption-based billing — paying for tokens, compute, and agent actions rather than licenses — the economics of AI adoption will increasingly favor organizations that have moved to higher phases of the framework.
Organizations treating AI primarily as a cost-reduction tool — deploying it to cut headcount and reduce spend — will capture Phase 1 gains and then plateau. Organizations treating AI as a value-creation engine, meaning using it to build new capabilities, serve customers better, and enter new markets faster, will continue to compound their advantage. The divergence between these two groups will accelerate.
5. AI governance for regulated and semi-regulated industries
For organizations operating in regulated or semi-regulated environments, AI adoption is not simply a matter of deploying tools and training developers. It requires aligning the level of AI control to the regulatory environment. That alignment needs to be a deliberate strategic choice, not an afterthought.
We have developed a five-level control framework that maps regulatory requirements to appropriate AI deployment architectures:
| Level | Name | Controls | Key regulations | Typical context |
| L1 | Open / SaaS | No special controls | GDPR basics | Low-risk experimentation, generic use cases |
| L2 | Controlled access | Policies, secured SaaS | GDPR, AI Act | Internal productivity, non-sensitive data |
| L3 | EU boundary | Data and ops within EU | DORA, NIS2 | Regulated industries, sensitive operational data |
| L4 | Ops independence | Private / self-hosted | Critical infrastructure | High-security environments, national operators |
| L5 | Full sovereignty | Air-gapped, fully isolated | National security | Defence and highest-restriction environments |
Most organizations in regulated industries are currently operating at L1 or L2, using commercial SaaS AI tools with limited governance controls. The survey data reflects this: security and compliance use cases show the slowest AI adoption precisely because the required L3, L4, or L5 controls are not yet in place. Infrastructure data from Wave 5 confirms the picture — 59% of organizations are running AI primarily on public cloud, 29% on hybrid, and only 2% on sovereign AI infrastructure.
Moving to L3 or above is not simply a technology project. It requires decisions about data architecture, vendor selection, staff location and vetting, and legal jurisdiction. These decisions interact with AI adoption strategy at every level. An organization that commits to a full EU-boundary deployment (L3) will make different tooling choices than one that can operate on global SaaS.
STRATEGIC RECOMMENDATION
Regulated-industry leaders should establish their required control level before selecting AI tooling — not afterward. The cost of retrofitting governance onto an AI deployment is typically far higher than designing it in from the start. This is an architectural decision, not a procurement decision.
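As a minimal sketch of what "control level as a filter" means in practice, the snippet below checks candidate tools against a required level. The tool names, deployment models, and level assignments are hypothetical, loosely following the L1–L5 table above.

```python
# Hypothetical sketch: filter candidate AI tools by required control level.
# Deployment models and their level assignments are illustrative, not normative.

MAX_LEVEL_SUPPORTED = {  # highest control level each deployment model can satisfy
    "global_saas": 1,    # L1: open / SaaS
    "secured_saas": 2,   # L2: controlled access
    "eu_hosted": 3,      # L3: EU boundary
    "self_hosted": 4,    # L4: operational independence
    "air_gapped": 5,     # L5: full sovereignty
}

def permitted_tools(tools: dict[str, str], required: int) -> list[str]:
    """Keep only tools whose deployment model meets or exceeds the required level."""
    return [name for name, model in tools.items() if MAX_LEVEL_SUPPORTED[model] >= required]

# Invented candidate tools mapped to their deployment models.
candidates = {
    "assistant_a": "global_saas",
    "assistant_b": "eu_hosted",
    "agent_c": "self_hosted",
}

# A DORA/NIS2-bound organization (L3) rules out global SaaS before procurement starts.
print(permitted_tools(candidates, required=3))  # ['assistant_b', 'agent_c']
```

Running the filter before vendor selection, rather than after deployment, is the whole point of the recommendation: the shortlist changes depending on the level, and retrofitting the decision later means rebuilding the architecture.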
6. AI transformation is a leadership challenge
Perhaps the most important insight from five waves of survey data is this: the constraint on AI adoption is not technology. The tools exist. The APIs are available. The models are capable. The constraint is organizational, and at its core, it is a leadership challenge.
How roles evolve in AI-powered software organizations
Most organizations today are still in what might be called the Builder phase of human-AI collaboration: developers use AI to speed up individual work, operating within tools rather than workflows, focused on local productivity gains.
The next phase — Composer — requires people who can design AI-driven workflows, coordinate work across humans, tools, and AI agents, and operate at the team level rather than the individual level. The phase beyond that — Value Architect — requires leaders who define outcomes and systems, aligning AI to business value at the organizational level.
Ninety to ninety-five percent of organizations today are operating at the Builder level. Moving to Composer and Value Architect requires deliberate investment in skill development, new role definitions, and cultural change — not just access to better tools.
Why this is a leadership transformation, not a technology one
Previous major technology transformations, such as ERP implementations, cloud migrations, and Agile adoptions, changed the outer layers of how organizations work: the tools they use and the processes they follow. They were significant and difficult. But they did not fundamentally alter who makes decisions or what those decisions are about.
The agentic AI transformation goes deeper. It changes who is doing work (humans and AI agents together), what decisions are being made (with AI participating in or driving decisions previously made exclusively by humans), and why the organization is structured the way it is (when agents can own workflows end-to-end, traditional functional silos lose their rationale).
This is why most AI initiatives fail to move beyond pilots. A pilot is a technology experiment. A transformation requires leadership commitment, organizational redesign, investment at a scale comparable to ERP or cloud migration, and the willingness to change core operating models rather than just add new tools to existing ones.
Companies that spent 1–3% of revenue on ERP and cloud transformation will need to invest at a comparable or greater scale for agentic AI, because the scope of change is fundamentally larger.
7. Three actions leaders can take to scale AI adoption
The survey data and the framework above point to three concrete actions that technology and business leaders should take now — regardless of where their organization currently sits on the maturity curve.
Diagnose the real bottleneck
Before purchasing more AI tooling, map your end-to-end software delivery pipeline and identify where the actual constraints are. In most organizations, development is not the bottleneck — requirements, testing, compliance, or deployment are. Invest AI capability where it will reduce cycle time, not where it is easiest to deploy. This analysis is a leadership exercise, not a technical one.
Set your control level before selecting tools
Determine what level of regulatory and governance control your AI deployment requires — using the L1–L5 framework or equivalent — and use that as a filter on your tooling and vendor decisions. This is especially important for organizations in regulated sectors, where the cost of getting this wrong includes both compliance exposure and the need to rebuild your AI architecture mid-journey. Establish your AI governance policy and data residency requirements as prerequisites, not afterthoughts.
Invest in the organization, not just the tooling
The survey data shows that most organizations are stuck at Phase 1, not because they lack AI tools but because they have not invested in the organizational foundations required to go further. This means building AI literacy across the workforce, defining new roles and responsibilities for the AI-native era, establishing cross-functional AI governance, and treating AI transformation as a program of cultural and organizational change — not just a technology rollout. Budget accordingly.
The data from five waves of our customer survey tells a consistent story: AI adoption in software organizations is real, it is accelerating, and, for most organizations, it is still nowhere near delivering its full potential.
The organizations that will win are not necessarily the ones moving fastest today. They are the ones building the right foundations: understanding their bottlenecks, establishing appropriate governance, investing in their people, and thinking about AI as a business transformation rather than a tooling upgrade.
For leaders in regulated and semi-regulated industries, the window of opportunity is open, but it requires more deliberate thinking about control, compliance, and architecture than the broader market does. The organizations that navigate this thoughtfully will find that the regulatory constraint, which slowed their initial adoption, becomes a competitive moat: a platform of trusted, auditable, governed AI capability that is very hard for less-disciplined competitors to replicate.
The transformation is not a technology problem. It is a leadership one.
Henri Hämäläinen,
Head of DevOps and AI at Eficode