The Future of Software: Key takeaways from the 2025 London conference

Let’s be honest. If you’ve been in tech long enough, you’ve seen hype cycles come and go like seasonal fashion. And yet, there’s something different about this current wave of AI. It’s not just hype. It’s profoundly reshaping how we work and how we build.
After listening to some of the smartest folks in the industry at The Future of Software conference in London in March 2025 (including Patrick Debois and Kelsey Hightower), I found a few themes stood out.
AI is just math (but also not just that)
Although the AI revolution feels sudden, it’s built on decades of work. As we move forward, the focus shouldn’t be on which model to choose, as those are becoming commoditized. What’s crucial is the system around the model: Data governance, security, filtering, context, and user experience. Choosing your infrastructure and building thoughtful safeguards around AI usage will be where the real battles are won.
Henri Terho, Senior AI Consultant at Eficode, reminded us of a simple truth that gets lost in the excitement: AI isn’t magic. It’s just really good at math, specifically matrix multiplication. It doesn’t think; it predicts. And when it comes to humans, we’re terrible at grasping the pace of exponential change.
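Henri's point can be made concrete with a toy sketch. The snippet below is an illustrative stand-in, not any real model's architecture: a single matrix multiplication maps a context vector to a score for each token in a tiny made-up vocabulary, and the "prediction" is simply the highest score.

```python
import numpy as np

# Toy "language model": one matrix multiply maps a context
# vector to a score for each token in a tiny vocabulary.
# All names and sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]

context = rng.normal(size=4)                 # pretend embedding of the prompt
weights = rng.normal(size=(4, len(vocab)))   # "learned" parameters

logits = context @ weights                   # the "just math" step
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities

prediction = vocab[int(np.argmax(probs))]
print(prediction)  # the model doesn't "think"; it picks the highest score
```

However many layers a real model stacks on top, the core operation is the same: multiply, score, pick.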
The new skill in town? Prompting. Clearly articulating what you want your AI tool to do for you is becoming the essential skill of the modern engineer. Think of it as your colleague. Explain, add details, and provide background in your prompt; you’ll get a better result. AI has the potential to change the world, but only if we take the time to set it up thoughtfully.
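As a sketch of the "treat the AI like a colleague" advice, here is a minimal comparison of a bare request versus one that adds a role, background, and constraints. The helper function is hypothetical; it just assembles text, which is ultimately all prompting is.

```python
# Hypothetical helper: prompting is careful text construction.
def build_prompt(task, role=None, context=None, constraints=None):
    """Assemble a structured prompt from optional parts."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Background: {context}")
    parts.append(f"Task: {task}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

# A vague prompt gives the model almost nothing to work with.
vague = build_prompt("Fix my build.")

# The detailed version explains who, what, and within which limits.
detailed = build_prompt(
    "Diagnose why the CI build fails on the lint stage.",
    role="a senior DevOps engineer",
    context="Python 3.12 monorepo, GitHub Actions, ruff for linting",
    constraints=["suggest at most three likely causes", "show exact commands"],
)
print(detailed)
```

The second prompt reads like a briefing you would give a new teammate, which is exactly the point.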
Responsible AI starts with critical thinking
A major message from the panel discussion at the London event was that responsibility isn’t a software feature. It’s a mindset. AI’s biggest danger is how convincing it can be, even when it’s wrong. The truth is that AI will get things wrong, and someone needs to be in the room to catch it.
As AI gets more integrated into workflows, from banking to creative work, the expectation is shifting from engineers to everyone. Responsibility is no longer a role. It’s a shared skill. Lofred Madzou, AI Governance Expert, summed it up: “We need to teach people not just how to use AI, but how to question it.”
And it starts early—with critical thinking and better education, with the understanding that, as these tools become more powerful, it’s our judgment, not just their output, that matters.
Culture over process
The fundamental shift we’re experiencing isn’t just technological; it’s cultural and in how we work together. Kelsey Hightower’s “relay race” metaphor best describes the situation we’re in: We’re not in a solo sprint. We’re passing the baton between generations. Mentorship, documentation, and inclusive contribution processes aren’t fluff. They are infrastructure, too.
As we enter a world where “software decides”, the bar for professionalism is increasing. Identity, accountability, and trust in software development are becoming front-and-center issues. The open source world is already feeling the pressure, and soon, the rest of us will too.
Another eye-opening session by Chris Davidson from Atlassian pulled back the curtain on what really slows teams down: misalignment. It’s rarely the tech that kills velocity. It’s unclear goals and fuzzy priorities. It’s not about adding more meetings; it’s about providing more context and clarity around goals. One experiment they’re trying at Atlassian? Friday “working out loud” updates. No sync, no micromanaging. Just a human-readable log of what’s happening. Small changes like this can free up teams to focus on what really matters.
When AI enters the team, think of it as a collaborator, not a disruptor. Chris described an optimistic future: Human + AI teams aligned on purpose, delivering faster and smarter work. When humans and AI work together with intention, real breakthroughs happen.
Legacy isn’t a dirty word
Old systems aren’t necessarily broken; they’ve simply earned their place by surviving. Some of the most resilient systems, such as airline reservation software from the 1960s, are still quietly doing their job.
Legacy isn’t just about the software; it’s about the expertise built around it. As we automate more, there’s a growing risk that we skip the learning curve entirely. If AI writes our code, summarizes our research, and drafts our contracts, when do we build domain knowledge?
Legacy systems aren’t just systems. They’re teaching tools, test beds, and memory banks. Throwing them out risks forgetting why things work the way they do. Progress in software doesn’t mean leaving the past behind. It means carrying forward the lessons learnt, building on what works, and knowing what’s worth rethinking.
Patrick Debois framed this tension perfectly when he asked, “Aren’t we just reinventing the wheel over and over?” Yes, and that’s okay. Reinvention is how we learn, iterate, and build better systems. The key is doing it with intent, not just because the old one isn’t trending anymore.
Make what you want to exist
More than anything, The Future of Software was a call to build with intention. Reinvent the wheel if you must, but do it with a purpose. Pass the baton, but give others a running start.
The software we build today is increasingly making decisions. The question is: Are we making sure it reflects our best thinking? Whether you’re mentoring others, refining prompts, shaping policy, or just trying to keep pace with the whirlwind of change AI brings, the future isn’t waiting. But it asks us to show up more responsibly, clearly, and collaboratively.
As Kelsey said, “The people who are best at predicting the future are the ones who work on it.”
If you want to learn more about how we help clients shape tomorrow’s solutions by transforming software development in your organization, contact our DevOps teams.