Six trends for CTOs in 2026: The shift to autonomous SDLC
The era of the passive AI chatbot is over. In 2026, the software development lifecycle (SDLC) shifts to autonomous agents: intelligent actors that roam your infrastructure to eliminate manual toil. From automating audit trails to managing service restarts, agents don't just assist developers; they execute operations independently, driving a massive leap in development velocity.
1. Global AI governance: Mitigating "Shadow AI" risks
Organizations need AI tools to improve, but those tools must be set up with the right level of governance so that intellectual property is not leaked and, worse still, so that your AI tool of choice does not gain uncontrolled access to PII. If your platform doesn't block unvetted LLM endpoints by default, your IP is already leaving the building.
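Blocking unvetted endpoints by default can start as a simple egress allowlist enforced in the platform layer. The sketch below is a minimal, hypothetical illustration of that default-deny stance; the host names and the `is_vetted` helper are made-up examples, not a product recommendation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted LLM endpoints, maintained by the platform team.
VETTED_LLM_HOSTS = {
    "llm.internal.example.com",     # self-hosted model gateway
    "api.approved-vendor.example",  # vendor covered by a data-processing agreement
}

def is_vetted(endpoint_url: str) -> bool:
    """Return True only if an outbound LLM call targets a vetted host."""
    host = urlparse(endpoint_url).hostname
    return host in VETTED_LLM_HOSTS

# Default-deny: anything not explicitly on the list is blocked.
assert is_vetted("https://llm.internal.example.com/v1/chat")
assert not is_vetted("https://random-llm.example.net/v1/chat")
```

In practice, the same default-deny rule would live in your egress proxy or network policy rather than in application code, but the principle is identical.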
While we all race to stay ahead of the curve and seize a competitive advantage right here, right now, we often forget that policies must not only be in place but also be followed. Governance is a recurring topic, but AI's speed and access to sensitive information make the challenge newly urgent.
As in all good platform engineering stories, there is a fine balance between complete freedom and the amount of risk you take on. I expect a sharper focus on making the right tools available, configured in a way that keeps you off the front page of the news.
2. From Copilots to autonomous agents: The shift to agent-driven operations
There is no doubt that AI has revolutionized the industry, and it will keep doing so for a while. A major driver of AI adoption is that it became available to everyone very quickly and assisted our day-to-day work, whether helping us debug code or come up with creative solutions to a given problem.
While on-demand assistance has been helpful, the next step is autonomous agents that enrich our lives, both at work and in leisure.
Though the switch to fully autonomous agents will take time, I expect to see them first in the SRE / incident response space, where they keep resilience high and response times as short as possible, whether the task at hand is application logic, infrastructure, or cybersecurity.
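Stripped to its essentials, such an incident-response agent is a sense-decide-act loop with guardrails. The sketch below is a hypothetical thought experiment: the `Service` model, the restart action, and the escalation threshold are stand-ins for real monitoring and remediation integrations.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    healthy: bool
    restarts: int = 0

def restart(svc: Service) -> None:
    """Stand-in for a real remediation action (e.g. a rolling restart)."""
    svc.restarts += 1
    svc.healthy = True

def agent_tick(services: list[Service], max_restarts: int = 3) -> list[str]:
    """One loop iteration: detect unhealthy services and act within guardrails."""
    actions = []
    for svc in services:
        if not svc.healthy:
            if svc.restarts < max_restarts:
                restart(svc)
                actions.append(f"restarted {svc.name}")
            else:
                # Guardrail: stop acting autonomously and hand off to a human.
                actions.append(f"escalated {svc.name} to a human")
    return actions

fleet = [Service("checkout", healthy=False), Service("search", healthy=True)]
print(agent_tick(fleet))  # ['restarted checkout']
```

The interesting design choice is the guardrail: autonomy is bounded, and anything beyond the budget is escalated rather than retried forever.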
“Something that's going to change is that AI tools are going to be built up as part of the corporate tools into the build pipeline.” – Henri Terho, Principal AI Consultant, Eficode
It’s not all about infrastructure, cybersecurity, and resilience, though; there is great potential for improving the whole SDLC. I see big players like GitHub and Port.io taking this step and including agents in their portals. While it would be easy to go big and claim the space for all agents, the trend I see is toward fairly open systems that allow for third-party and custom agents.
Luckily, as an industry, we are trending toward letting others be the experts in their specific fields instead of delivering half-baked solutions for everything. Whether you end up with one of the big vendors and their agent portals, or you choose one of the alternatives, it will certainly be important to stay in control of what is running, how it connects, and what it can see.
3. Regulatory readiness: EU AI Act and Cyber Resilience Act compliance
With upcoming legislation like the EU AI Act and the EU Cyber Resilience Act, which add specificity on top of DORA and NIS2 for some organizations and something entirely new for others, it will be very important to be in control of your SDLC. You should be able to run audits on your setup when needed and, in the worst case, produce complete audit trails should a cyber incident occur.
This high demand for control is why platform engineering is essential. Good platform engineering bakes security into the product that most developers in your organization should be using. While a secure platform is important, we’ll also see more and more compliance as code in 2026. Dedicated compliance-as-code tools have not yet emerged, but I’m sure plenty of vendors will enter that space, building on their current offerings of automated security scanning and reporting.
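Until dedicated tooling matures, compliance as code can start as plain policy checks evaluated against a declarative export of your pipeline setup. The configuration keys and rules below are hypothetical examples of what such checks might assert, not any specific regulation's requirements.

```python
# Hypothetical pipeline configuration, as it might be exported from your platform.
pipeline = {
    "branch_protection": True,
    "artifact_signing": True,
    "audit_log_retention_days": 90,
}

# Each rule is a (description, predicate) pair; failures become audit findings.
RULES = [
    ("branch protection enabled", lambda p: p.get("branch_protection") is True),
    ("artifacts are signed", lambda p: p.get("artifact_signing") is True),
    ("audit logs kept >= 365 days", lambda p: p.get("audit_log_retention_days", 0) >= 365),
]

def evaluate(p: dict) -> list[str]:
    """Return the list of failed rules, i.e. the compliance findings."""
    return [desc for desc, check in RULES if not check(p)]

print(evaluate(pipeline))  # ['audit logs kept >= 365 days']
```

Because the rules are code, they can run in the pipeline itself, turning an annual audit scramble into a continuous, versioned check.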
4. Cloud sovereignty: The pivot to European cloud
For European enterprises, there is a rising trend of moving away from the big cloud providers. As an outgrowth of GDPR and other initiatives, there is a need to control data residency, maintain operational sovereignty, and know exactly which jurisdictions are in play. With the upcoming EU Cloud Services Cybersecurity Certification Scheme (EUCS), I expect more organizations to move towards EU-based operations. This removes the jurisdictional debate that arises because the big vendors’ European arms are usually US subsidiaries rather than pure EU entities.
With a growing market for sovereignty, I see more EU-based cloud providers stepping up their game in the capabilities they offer. What began as bare metal offerings is now moving toward managed services with well-described shared responsibility models, so organizations can stay in full control, knowing that data is not exported outside of Europe.
As Eficode’s Principal AI Consultant Henri Terho predicts, the friction between EU policy and US business models is forcing a migration:
"Worried for their data, many organizations will transition to Europe-native clouds and on-prem installations because of the policy issues and the way that the US is behaving now."
While organizations regain sovereignty, it leaves an open question: what will we do about AI workloads? Many EU-based clouds offer bare metal GPU options, and some even GPU-enabled Kubernetes clusters. It will take more effort to match the flexibility of buying a service in the cloud, but it shifts the risk from fear of data exfiltration to operational excellence for GPU-enabled workloads. What I do see in that space is a greater need for packaging AI capabilities effectively within your developer platform.
5. Platform engineering strategy: The "platform as a product" model
Across all the emerging 2026 trends, the need for robust platform engineering is clear. This goes beyond a unified way of working; it demands running your platform with a “product mindset.” Your developer platform must be run like any other product in your organization: roadmaps, user interviews, communities, and, more importantly, an investment in making the best platform for developers, unless you want them to slide back into shadow IT.
But now, you’ll have to ensure the entire organization, not just the development teams, is a stakeholder. The personas usually covered in platform engineering initiatives are developers, platform engineers, and infrastructure teams. Your platform will need to empower everyone, from engineering managers to FinOps practitioners and even some C-level executives, to instantly gauge the “state of the union” across the organization.
As more personas get a view into your organization’s development platforms, I believe this ties in well with the shift from running IT as an operational cost to putting profit-and-loss responsibility on individual business units. If you want to run a profitable business, you need the right people looking at the monetary value of a service, ready to turn it off if it is not profitable.
6. AI FinOps: Optimizing GPU costs with Kubernetes DRA
As it is clear that AI is here to stay, I keep seeing rising demand for squeezing the last bit out of the expensive hardware that supports AI workloads. Even though the majority I see leverage the bigger models in the cloud, customers are building up AI capabilities on-premise as well.
Luckily, to raise the return on these expensive investments, the industry is moving toward bin-packing of AI workloads.
In the Kubernetes world, I have watched Dynamic Resource Allocation (DRA) being defined, adjusted, and made available as a standardized way of requesting GPUs and other specialized devices. For workloads running elsewhere, I expect a large share of our clients, current and potential, to focus on tweaking the infrastructure to match the workload and make solutions profitable.
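To make the bin-packing idea concrete, here is a first-fit-decreasing sketch that packs GPU memory requests onto as few cards as possible. The workload sizes and the 80 GB capacity are made-up assumptions; real schedulers, including Kubernetes with DRA, weigh many more dimensions than memory alone.

```python
def pack_workloads(requests_gb: list[int], gpu_capacity_gb: int = 80) -> list[list[int]]:
    """First-fit decreasing: place each request on the first GPU with room."""
    gpus: list[list[int]] = []  # each inner list holds the requests on one GPU
    for req in sorted(requests_gb, reverse=True):
        for gpu in gpus:
            if sum(gpu) + req <= gpu_capacity_gb:
                gpu.append(req)
                break
        else:
            gpus.append([req])  # no existing GPU had room: allocate a new one
    return gpus

# Made-up memory requests (GB); packing them well means fewer idle GPUs.
placement = pack_workloads([40, 24, 60, 16, 30, 10])
print(len(placement))  # 3
```

Here 180 GB of requests land on three 80 GB cards instead of six, which is exactly the utilization gain FinOps is after.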
To achieve the best possible outcome of your investments, you need a robust platform engineering setup with defined capabilities, modularity, and an invested organization behind it to serve internal customers.