
Visionaries, Rebels and Machines with Jamie Dobson

In this episode of the DevOps Sauna, Darren and Pinja sit down with Jamie Dobson to explore his new book Visionaries, Rebels, and Machines—a high-energy journey through the breakthroughs and lucky accidents that pushed technology from steam engines to today’s AI-powered cloud.

 

[Jamie] (0:06 - 0:14)
I've created better workers, better parents, better citizens, and therefore a better society. Managers underestimate the power that they hold within their hands.

[Darren] (0:17 - 0:25)
Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

[Pinja] (0:25 - 0:35)
Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

[Darren] (0:47 - 0:56)
Welcome back to the DevOps Sauna. Today, we're joined by Jamie Dobson, founder and former CEO of Container Solutions. Hey, Jamie.

[Jamie] (0:56 - 0:57)
Hello.

[Darren] (0:57 - 0:59)
And of course, I'm joined once again by Pinja.

[Pinja] (0:59 - 1:02)
Hello, Jamie. A warm welcome to the podcast.

[Darren] (1:02 - 1:15)
Thank you so much. Okay. So today we're talking about...

Jamie, you have a book coming out on June 19th called Visionaries, Rebels, and Machines. Before we get onto that, why don't you tell us a little bit about yourself?

[Jamie] (1:15 - 2:00)
Absolutely, Darren. Thank you so much for having me, first of all. So I was the co-founder of Container Solutions.

That's correct. And that was a... well, "was", it still exists.

It's a consultancy that basically specializes in programmable infrastructure. And all of the work that we do at Container Solutions has grown out of my work, which was in the very beginning as a computer programmer. I then learned something called extreme programming, and then I basically took that method, moved it forward, and that became the method that we use at Container Solutions.

But mainly, I'm just a very nice person. I try not to do too much evil or harm, and I love computers. Since I stepped back from being the chief executive of Container Solutions, I've had somewhat of a life back.

These days, I've been enjoying reconnecting with the community and reading all the books that I had been meaning to read for the last 10 years, but couldn't get around to.

[Pinja] (2:01 - 2:14)
So let's talk about the book a little bit, and let's talk about the initial why, why you wanted to write this book. You mentioned that you're really interested in reconnecting with the community and giving back to it. So why did you go about writing this book?

What is the target audience, perhaps?

[Jamie] (2:14 - 3:39)
It's a bit of a multifaceted question, that one. There was a why rolling around in my head, probably for about 25 years. And it was because I thought I knew the story of computing.

But there were a couple of black holes. And actually, I think computer programmers are all a little bit scared of electronics. And so I knew one day I would have to learn about transistors and semiconductors.

And I'd been meaning to do that for almost 30 years. So that was one idea. How can I shoot an arrow that starts in Thomas Edison's Menlo Park facility, that goes all the way through the last century, and then lands at our feet today with cloud computing?

However, a but, a big but. Once I really sat down to piece the story together, it made a lot of sense to ask, well, who am I writing this for? Originally, the first reader was probably somebody a bit like me, but maybe 10 or 15 years earlier in their career.

So somebody who had been tasked to innovate at scale: how do you do that? Maybe tasked with bringing their company onto the cloud or introducing DevOps and things like that. So a huge focus of the book is teams.

So it's about people who build systems. And you don't build systems like you build single programs. And you don't manage systems thinkers in the same way you would manage people in a factory.

So that was the original target audience. Soon after, I realized there were two other miscellaneous readers. The first one, this is a pretty funny story, but you'd better be prepared to become depressed.

Are you both ready for that?

[Darren] (3:39 - 3:41)
We're always ready for that.

[Pinja] (3:41 - 3:41)
Always.

[Jamie] (3:42 - 4:50)
So last year, DevOps London, it's a nice meetup around here in the city, needed a speaker. And I was right in the middle of polishing chapter 10, which is about AWS and the birth of the cloud.

And I thought, okay, no problem. I can speak about this, but I cannot speak about the cloud unless I mention the PC and something called Moore's law. So that's what I did.

As soon as the talk finished, I had a queue of people, young people, younger than me anyway, and they were all enthusiastically waiting to ask me questions. And it turned out that some of them had graduated in computer science in the last couple of years, and they had their first job in the industry. And they wanted to know about the olden days.

I was like, great, fantastic. So my mind exploded into life as I started to think about time-sharing computers and semiconductors. And as I started to answer the questions, they said, no, no, no, no.

That's not what we mean by the olden days. We're talking about the 1990s. And it was absolutely shocking.

They had no idea what the web is. Where does it come from? What's the protocol?

Are the web and the internet the same? So there was a huge black hole in their understanding of the history of our industry. So immediately I treated them as a reader.

And I think it was a positive thing because the book became much lighter in tone after that.

[Darren] (4:50 - 5:29)
There's actually a kind of interesting parallel here. There's this book I recommend to anyone getting into cybersecurity. I actually have it on my desk.

It's referenced in my master's thesis and such. It's called If It's Smart, It's Vulnerable by Mikko Hyppönen. And it has this historical framing of cybersecurity basically from the start, from the first floppy-disk-transmitted viruses through to the current situation.

And we received a copy of your book prior to the podcast, and I've been going through it. And it feels like this book does the same from a wider angle, like cloud computing. You even go a little bit into AI.

[Jamie] (5:29 - 7:08)
That's right. I think the book you mentioned is making a very inaccessible subject accessible. So I think cyber is quite terrifying for a lot of people.

So already the title, you said, If It's Smart, It's Vulnerable. Already that title is quite suggestive of what's about to happen, because in just one sentence it makes a lot of sense. Yes, of course, if something is smart, if it has a blast radius or a surface area, it can be vulnerable to cyber attacks.

So I did try to make all of the concepts in the book accessible. And I set myself a crazy challenge. I've never actually told anybody this before, but I decided, possibly extremely foolishly, to teach the reader about computing without teaching them about computing.

I don't say at any point in the book, oh, this is an exclusive or. What I do instead is say, imagine you walk into a room and flip the light switch on; the light goes on. But you leave the room by a different exit and turn the light off at a switch by that exit. One switch is on, one switch is off.

That's exclusive or. And the reason I did that is not to be clever or creative. I genuinely believe that the general reader switches off a little bit when you get too technical too early.
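
For readers who'd like the concept spelled out anyway, the two-switch story maps directly onto exclusive or. Here's a minimal sketch in Python; the staircase-wiring framing follows the analogy above, and the function name is ours, purely for illustration:

```python
def light_is_on(switch_a: bool, switch_b: bool) -> bool:
    """Two-way staircase wiring: the light is on exactly when the two
    switches are in different positions. That is exclusive or (XOR)."""
    return switch_a != switch_b  # equivalently: switch_a ^ switch_b

# Walk into the room and flip the first switch: the light goes on.
assert light_is_on(True, False)
# Leave by the other exit and flip the second switch: the light goes off.
assert not light_is_on(True, True)
```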

So what I tried to do was loosen up the minds of the reader, then at the exact right moment pop a concept in. So I think that's necessary. And I think it's necessary because whether we like it or not, we live in a computerized world.

And if we're going to have sensible conversations about AI, and by the way, that's not even a thing. It's such a catch-all term. It's a stupid thing to discuss.

If we're going to discuss AI sensibly and what it means for our children, we have to be more computer literate. So that's definitely one of the aims of this book.

[Pinja] (7:08 - 7:15)
Do you think there's something here that we're not allowing ourselves to admit, that we don't know something?

[Jamie] (7:15 - 7:16)
Absolutely.

[Pinja] (7:16 - 7:25)
Is there something like a psychological safety issue here, that we're expected to be these experts and then not allow ourselves to say we actually don't understand where this is coming from?

[Jamie] (7:25 - 8:11)
I think that's a huge thing. So the idea that we should be vulnerable and we should fail is, of course, theoretically wonderful, but practically, even for those who are experienced in failure, it's extremely painful. And so a lot of the book deals with this. We talk in the book about the work of Thomas Edison.

We talk about the work of Robert Oppenheimer in the deserts of Los Alamos. And psychological safety is a theme throughout the last 150 years of innovation. And that's because people don't change, even though the technologies they build do.

There is a huge issue with us being unwilling to admit this. So one person I spoke to, who will remain anonymous, said to me, you have to finish this book because I need to understand AI as a normal person. So she differentiates between normal people and computer people.

[Pinja] (8:11 - 8:13)
That's often the case, yes.

[Jamie] (8:13 - 8:39)
As a normal person. And it turned out that that person leads a team of investors that specialize in artificial intelligence. Now, I'd seen this before.

Once upon a time, investors couldn't evaluate web businesses, and thereafter, they couldn't evaluate cloud businesses. So in a way, this book is a guidebook for those people who want to understand more specifically, what do we actually mean when we talk about artificial intelligence?

[Darren] (8:40 - 9:20)
From what I've read of this book, it highlights exactly that: we have this issue of letting ego get in our own way. And we have this event, The Future of Software. We had it in Helsinki last year, and we had someone come on stage and very frankly talk about why their AI-driven business had failed.

And that was such a refreshing take, from all this "AI is great, AI is powerful, AI is everything" to "here is AI, here is the failure". And just the removal of ego from that equation allowed for the discussion and the culture of experimentation and this failure-driven approach.

[Jamie] (9:20 - 10:18)
That's amazing. I think the problem is that it's not only ego. The first thing you need to overcome is your ego. But the second thing you have to overcome, if you work in a business that's had investors, or if you've led a company, is that admitting failures may then be used against you.

So that is another very practical consideration. We can't all jump up on stage. I could tell you off the record about my biggest failures at Container Solutions, but I can't do it here on the record, because I wouldn't want to hurt people's feelings.

I wouldn't want to give a one-sided version of the story. I think it's a huge challenge. So one of the themes of the book is failure is normal.

So failure must be normalized. And I think we've learned to do that as programmers for ourselves. We've learned to do that in a team setting.

I think we're struggling to fail at a societal level. And of course, what happens is if you refuse to fail as you're developing an idea, then things fail catastrophically in real life. And so interestingly, failure is not completely normalized in our schools, our universities and across government, but we see failures all over the place.

[Darren] (10:18 - 10:47)
It's kind of interesting though, because as technical people, we have not just a culture, but an entire tool set built around failure. So as programmers, we have debuggers. As security people, we have scanners.

We have the tooling. And I think that's actually where society is missing out. There's no verification tooling.

There's no assistance for determining failure. And maybe it's because what we do can be a bit more easily technically defined than a lot of social issues.

[Jamie] (10:47 - 11:53)
I think that's definitely true. Yeah. I think once you're in the social realm, what counts as a failure?

And then of course, you've got ego. I told you earlier, when you build a company, you lose a bit of your soul because you see human nature in a different way. And most techies are just quite nice people.

But when Container Solutions was really succeeding, I would meet businesses that wouldn't necessarily become our customers, but they all had a vested interest in saying their cloud failures were cloud successes. So the cloud provider who would charge them tons of money to get onto their cloud, the consultancy who would help them, and the executive who would put their career on the line to go cloud or become cloud native, they all had a vested interest in reframing their failures as success. And that's because they'd have all lost their jobs if the truth had come out.

And this is very, very depressing. And ultimately, who loses are the shareholders. They're the ones who really lose in that situation.

And I don't know how you can change that. Here in London, it is very, very common in the big banks and the big insurance companies to reframe partial success or complete failure as success.

[Pinja] (11:53 - 12:16)
And if we think of history, you mentioned that human nature and our minds haven't actually changed that much. And I read something, I think it was last week or so, about how we always need to think about our technology in the context of our culture, right? But the human mind hasn't really changed biologically since the beginning of the modern human, has it?

[Jamie] (12:16 - 14:31)
No, it hasn't. So if you could go back in time and kidnap a baby and bring them back here, they would grow up just like us. And this is very, very serious.

This is a serious problem. Technology evolves quicker than the mind, for sure. So Vaclav Smil, a historian of technology and a great, very technical writer, makes a stunning observation about what I think he calls the magical 1880s.

And this is the observation. If there were aliens going around planet Earth, observing us, they would be asking themselves, why has this planet that has lain dormant for millions and millions of years all of a sudden started pulsating radiation from right across the electromagnetic spectrum? And it's because we harnessed electricity, we created the combustion engine, and before you knew it, we were sending out radio waves, microwaves, you name it, to the rest of the universe.

And people talk about a tsunami of change. This is incorrect. So for example, Mustafa Suleyman, the co-founder of DeepMind, wrote a bestselling book about artificial intelligence called The Coming Wave.

There is a famous economic theory that talks about waves of innovation. It is just not true. Electronic innovation, from the light bulb all the way through to artificial intelligence, supercharges itself.

So every wave is powered by and is more powerful than the one that came before. So unbelievably, only 66 years after the Wright brothers first flew, we were on the moon. And to do that, we had to invent guidance computers to put into the nose cone.

And to do that, we needed to shrink the light bulb down to the size of a transistor and stick it in a microchip. So the pace with which things are changing is absolutely remarkable. And then the other thing is we invent tools that accelerate the pace further.

And the example I always give here is we got more powerful microchips, which gave us better software. Once we had better software, we had better computer-aided design systems so we could create better microchips. And the system just accelerated.

How long can we accelerate for as a species when people like Donald Trump are in charge? Impulsive, egotistical maniacs who really don't look much different to the gorillas you might see in the zoo. We are just a hairless ape and we've got nuclear weapons.

I'm just going to leave that hanging there.

[Darren] (14:31 - 15:01)
Yeah, yeah. I think we need to leave that there. I mean, I don't disagree with you, but this conversation usually goes better when we don't get too political.

So let's discuss this one aspect of your book that aligns very much with what we're talking about: humanistic management in the age of AI. And one thing that we've been noticing is the idea of shifting the perspective so that we're not replacing people, we're moving people to verification.

[Jamie] (15:02 - 15:04)
Sorry, you said you're moving people to verification?

[Darren] (15:05 - 15:15)
Yeah. The idea that we have to have people verifying AI, to essentially realize how humans and AI have to co-exist.

[Jamie] (15:16 - 15:38)
Yeah, absolutely. So I think one of the things I've been trying to teach people is that humanistic management and computing technologies, they go together, but humanistic management is usually in the background to technology's foreground. And I love to trick people.

I've got an awful trick question, Darren. I mean, I can be wickedly mischievous sometimes. My question goes like this.

What did Thomas Edison invent at Menlo Park? And the answer is...

[Darren] (15:38 - 15:46)
The answer you want is the light bulb. I know you want the light bulb, but I maintain Edison was less scrupulous than you might believe.

[Jamie] (15:47 - 17:33)
Thomas Edison, he sparks everybody off. Everybody's got an opinion on Edison. I'll keep mine to myself for now.

I don't want to derail the podcast any further. But you were correct. Usually when I say to people, what did Edison invent at Menlo Park?

They say the light bulb. That's true. But he also invented a system of management that foreshadowed the big and hugely successful R&D departments of the last century, which themselves led to Microsoft's and Google's campuses and the way all of us do technology.

And so those two things go together. Now, there was no such thing as humanistic management in Edison's time, because it had not been invented as a concept. But by the time the 1940s and 1950s rolled around, psychologists slowly started to realize that the way you treat people at work and how you organize things lets them excel as people.

And remarkably, when they're doing well as people, the business improves: less sickness, products get better, there's more innovation. So in a way, the work of Edison, and then later a fellow called Andy Grove at Intel, foreshadowed all of the findings of Amy Edmondson and two guys called Locke and Latham, who eventually pieced together something called goal-setting theory.

And this is a very important point, because lots of people come to me for help. Can I please help them with the technology? Usually, can I help them become a bit more like Container Solutions, friendly and effective and good at their jobs. But they focus on the technology.

And there is a chapter, I don't want to spoil the surprise, but I go into that in, I think, chapter 11. And if you do not improve your technology in lockstep with how you manage the people who build your systems, you won't get any success. Because once people are afraid, they won't take risks.

So you'll have a very limp tech setup: you've got all the tools and the bells and whistles, but you're not hitting the dizzy heights that maybe Netflix is.

[Pinja] (17:33 - 17:49)
And something I've been thinking about recently is the role of a tool as an enabler for innovation and creativity in humans. At least from my perspective, that's how the relationship should go, instead of us living for the tools themselves, right?

[Jamie] (17:49 - 17:49)
Correct.

[Pinja] (17:49 - 17:53)
But we need to actually learn to use them for our own leverage, right?

[Jamie] (17:53 - 20:05)
There's a couple of funny things around tooling. People say we're slaves to the machine. I find it a little bit offensive to compare slavery with the work that we do as technicians.

We have to adapt to our technologies. Of course we do. I'm sat in my chair now, stretching.

I'm in a very unnatural position right now because I'm working with my computer. But to say that I was a slave to the machine, I think it's too far. If anything, if you're working in poor conditions in a mechanized setup and you're almost on slave wages, you're not a slave to the machine.

You're a slave to a manager and a corrupt organization. Those are very different things. Now, coming back to this idea of verification, I was speaking to one of the authors of the DORA reports, the regular State of DevOps report.

It comes out every year. Well, this year, they brought out a sort of mini version in March. And that's because they'd been asking questions about artificial intelligence, or specifically generative AI systems, and their impact on the DORA metrics.

And I was talking to Nathan. He's one of the people behind this. And he was explaining it to me.

He said to me, so the results are showing that people are happier, that they feel more productive. And I was like, that's great. Okay.

So we found a use case for generative AI, but the systems are less stable. And he continued. And I said to him, sorry, sorry, just one second.

Did you just say people are happier, but the quality of the work is less? And he's like, yeah. And that was stunning for me.

And of course, my next question was, why? How? What's the mechanism?

So we don't know at this point, but we think that the generative part, or the code generation part, of people's work is getting easier because of gen AI. So they're feeling productive, but generative AI systems always give you plausible answers. So errors that are hard to spot superficially are probably making it through to production.

And thus, at the exact moment that generative AI makes people happy, it's making systems less stable. So I think your question, how do we coexist with AI? We don't quite know yet, do we?

We don't quite know, because, I mean, I'm trying to work out: is this a good thing or a bad thing? I guess systems being less stable is not a good thing. People being happy is a good thing.

So I think the truth is we don't really know, but I do think we'll get the answer in the next few years.

[Darren] (20:06 - 20:30)
Interestingly, I think we talk about this in tech, but it's not just tech that's being affected by this. Obviously, if we talk about DORA, then we're going to get the DevOps side of things. But I imagine this is hitting everyone who's using AI.

So anyone who's using it to generate documentation, they are massively reducing their own cognitive load. But at the same time, the quality of that documentation is likely decreasing.

[Jamie] (20:31 - 21:19)
And the saddest thing is that the people or the companies or organizations that are struggling the most are the ones most susceptible to the type of people selling these systems. And you might think, oh, well, how bad is that? Well, it is bad if those organizations are in the education sector or the childcare sector.

And so yes, the cognitive load is going down. And the problem is, it would be better if generative AI systems failed stupidly. But hallucinations are not a bug in generative AI systems. They are designed to give the next statistically sensible token.

So they've got a very mischievous way of sounding extremely convincing. And this makes these mistakes, whether in information, summaries, or code, hard to spot. So all of a sudden we have a new problem, and it comes back to this verification.
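
To make that point concrete, here's a toy sketch of next-token sampling in Python. It isn't how any production model works; the tiny lookup table and its probabilities are invented purely to show why the output is always plausible but never verified:

```python
import random

# A toy "language model": a context mapped to a probability distribution
# over next tokens. The numbers are made up for illustration; a real model
# learns these statistics implicitly from its training data.
NEXT_TOKEN_PROBS = {
    ("the", "capital", "of", "finland", "is"): {
        "Helsinki": 0.7,   # likely, and happens to be true
        "Stockholm": 0.2,  # statistically plausible, but false
        "Oslo": 0.1,       # statistically plausible, but false
    },
}

def next_token(context: tuple) -> str:
    """Sample a statistically sensible next token.

    Nothing here checks whether the output is true; the model only knows
    what is likely. A confident wrong answer (a 'hallucination') is the
    sampling working as designed, which is why verification has to happen
    outside the model.
    """
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token(("the", "capital", "of", "finland", "is")))
```

Run it a few times and it will occasionally print Stockholm or Oslo with exactly the same confidence as Helsinki, which is the whole problem.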

[Darren] (21:19 - 21:54)
Yeah. So there's another topic that I really want to get into on this podcast before we start running out of time. Since maybe 2022, 2023, we've had Atlassian on stage at conferences talking about this idea of developer joy.

And one thing I took from the parts of the book I've been able to read so far is something very similar to what you're talking about: that the reason we get into development is not because it's a well-paid, decent job, but because we liked creating things.

[Jamie] (21:55 - 25:05)
This is a huge thing. The joy, I'm a little bit annoyed that Atlassian have coined that term and I didn't coin it. You know, there was a point in my career, you know, probably from being about 25 to 32, big chunk of my career really, where I didn't speak about the joy of programming or the importance of feeling good about your work and yourself because people thought I was mad.

So this is in the early noughties, people thought I was bonkers and not just management types, but also fellow engineers. But I don't think I was bonkers. I think I was dead right.

And I felt this before I could articulate it. So all I know is that I didn't get into computing for the money. I was intrinsically motivated to tinker with things.

I used to take things apart. I could never put them back together again. I'm just not smart enough to be a hardware engineer.

So I ended up being a software engineer. And then I got my hands on a computer when I was nine, and just making that machine dance to my tune was remarkable. And the first thing I put together, because I got a book from the library full of BASIC programs, was a game that flashed colored squares on the screen.

And as soon as the squares were the same color, you hit the space bar. Now my challenge was, I only had a black and white monitor, so I could never win the game. And I didn't have a tape recorder to record the program.

So as soon as I switched the machine off, the program was gone. The electrons fell out of the chips and that was the end of my program. But I loved doing that.

And of course, after working in technology for so many years, what I now realize is that I was in a state of flow. I was in a psychological state of flow where time fell away and I was full of joy and happiness. Now, when I got to work, all of a sudden my job was about XML, config files, and delays.

So if we wanted to test what would happen if 100,000 users tried our web app, I had to book a slot on a staging or production-like machine. It would take me a few days to get an answer, and the slot would be a few weeks in the future.

So this idea that what you create gives you instant feedback, which is what happens when you program a single machine, was taken from me. The effect it had on me was absolutely catastrophic. So as a young man living in a different country, this source of psychological well-being from programming was taken away.

Anyway, it was at that moment that I bumped into the work of Abraham Maslow. And this story is just too funny. Maslow was a psychologist.

He didn't work in management, but he was fed up with New York, all the terrible weather, he was suffering from ill health, and he ended up going to California. And he met a fellow, Andrew Kay, who had read one of his books about psychology. But he ran a tech company.

He later went on to build PCs. And remarkably, this guy Kay took Maslow's psychology work and applied it to the workplace. So all of a sudden, he removed barriers to work, he put people in cross-functional teams, and he essentially turned humanistic psychology into humanistic management.

Well, that's exactly what I started to do. Test-driven development to give us instant feedback, rituals so that we could speak openly. I started to create joy at a team level.

And then my next mission was to try to create joy at a company level, which is what Container Solutions was all about. So, you know, you never asked me about regrets, but do I regret not speaking about this earlier? Yes, I will be 50 next August.

It's taken me nearly 30 years to find the courage to say that this is about joy, and it matters. And then it took me another couple of years to write it down in a book. So I wish I'd have spoken about it earlier.

But that being said, better late than never.

[Pinja] (25:05 - 25:29)
Better late than never. And we appreciate your courage, Jamie, on this one, because this is not a topic that gets talked about enough, even though developer experience is a very hot topic at the moment.

But then again, do we really understand the messiness of being creative and innovative behind the scenes? That's what I'm a little worried about at the moment. Let's say we don't see enough of that culture of experimentation.

[Jamie] (25:29 - 26:32)
You know, it was funny when Reed Hastings set up Netflix, they used to carpool. So Patty McCord would jump in the car with him. She was their legendary talent officer.

And in the morning, she would say to him, what is this Reed? She said, what is this? What am I feeling?

It's like I'm in love or I'm high on wacky chemicals. And I'm like, well, no, Patty McCord was self-actualizing. She was operating at the highest psychological levels.

And whenever you're doing something and you have a feeling, I was born to do this. That means you are self-actualizing. You're growing as a person.

And as a manager, you hold within your hands the most remarkable power. Because if I can set up a team in which people can self-actualize, I am part of a process of creating better workers, better parents, better citizens, and therefore a better society. Managers underestimate the power that they hold within their hands.

That's the message of Maslow. That's the message of this book. And it also explains why I absolutely detest managers who don't take their job seriously.

And I mean detest, viscerally. I hate them. And I tell them directly, to their face, because I'm almost retired.

I'm like an old person at Christmas. I can do what I like.

[Darren] (26:33 - 27:00)
I honestly don't think there is a better message that we could finish up on than that. I think that mirrors my sentiments almost exactly. And I think at this point, we should leave it there with, yeah, management holds so much power and they have a responsibility to use it well.

That's really nicely put. Thank you, Jamie, for joining us. Maybe you could tell us a little bit about the book: when it's available, when people are able to get it, and where?

[Jamie] (27:00 - 27:56)
Absolutely. So the official release date for the hardback was the 17th of June, and then, hot on the heels of the hardback, the paperback was going to come out on the 17th of July. However, there was a small snafu.

And the snafu is that the hardback looks like it's about to sell out. So actually, the hardback, the paperback, and the audiobook are all available from the 17th of June onwards. And the reason is simply that we didn't want people to be disappointed if they couldn't get the hardback; if it ran out, they could at least order the paperback instead.

The very next thing I'm doing, and I haven't told anybody this yet, so you and your listeners will be the first to hear it: I'm going to do a single-narrator podcast. Now, I don't know if your parents ever mentioned to you that once upon a time, Orson Welles, the actor, did a radio broadcast.

He read out H.G. Wells' War of the Worlds, but he did it as if he were a journalist. So imagine: Martians have landed east of New York, and people panicked because they thought it was real.

[Pinja] (27:57 - 27:58)
I've heard of this.

[Jamie] (27:58 - 28:25)
Yeah. So I'm going to do a single-narrator podcast on Visionaries, Rebels, and Machines, and I'm going to do silly voices, and I'm going to tell the whole story Orson Welles style, like it's a radio show from the fifties.

So that's going to be the next thing for me. It puts me back in my office on my own, which is how I like it. And it will give me my next sort of creative outlet. So fingers crossed: those people who prefer to consume podcasts can hear some of the stories from the upcoming single-narrator VRM podcast.

[Darren] (28:26 - 28:27)
Perfect. That sounds great.

[Pinja] (28:27 - 28:29)
That sounds amazing. Thanks for sharing that with us.

[Darren] (28:30 - 28:36)
And that's all we have time for today. So Jamie, thank you again for joining us.

[Jamie]
Thank you, Darren.

[Darren]
And Pinja, thank you for joining me again.

[Pinja] (28:36 - 28:38)
Thank you, it was fun. Jamie, thank you so much for joining us.

[Darren] (28:38 - 28:40)
And we hope you join us next time.

[Pinja] (28:44 - 28:50)
We'll now give our guest a chance to introduce himself, and then we'll tell you a little bit about who we are.

[Jamie] (28:50 - 29:20)
Hello, my name's Jamie Dobson. I was the co-founder and the first chief executive of Container Solutions, a super cool company that specializes in cloud transformation. Since changing my role a year ago, I have been writing a lot.

Most recently, I finished a project called Visionaries, Rebels, and Machines. This book is available from all good bookstores, and it charts the history of computing all the way from Edison's Menlo Park facility to the modern systems of AI whose tentacles are all over our society.

[Darren] (29:21 - 29:23)
I'm Darren Richardson, security consultant at Eficode.

[Pinja] (29:23 - 29:28)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.

[Darren] (29:28 - 29:31)
Thanks for tuning in. We'll catch you next time.

[Pinja] (29:31 - 29:39)
And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.

 
