Hello Friends,
Charles and Hannah were both on the GOTO podcast this week to talk about Charles’ book “Kubernetes at the Edge”, exploring how edge computing, from military vessels to precision agriculture to retail, presents unique challenges around intermittent connectivity, hardware constraints, and day-two operations. We also dig into sustainability, arguing that operational efficiency (knowing what you have, switching off what you don't) matters far more than code optimisation. The conversation closes with Charles making a broader ethical case for the tech industry to take more responsibility for how AI is built and deployed.
Hannah travelled to KubeCon in Amsterdam, the largest KubeCon ever at 13,500 attendees, on a mission to tell everyone about BIMP. The community was positive and supportive, with many platform engineers sitting down to give valuable, honest feedback on the product and share anecdotes about what they’re building. BIMP is off to a great start!
Charles will hate this (you’re right, I do! - Charles) and Hannah feels a little guilty sharing it, but she has been having a lot of fun with an AI music generation tool called Suno. Late on Sunday evening, Hannah attempted to create a song about agentic hackers called “The Agents Are Coming.”
Hannah’s imagination, Gemini’s assistance with lyrics, Suno’s music generation and Youka’s AI-generated karaoke video made the whole thing possible in just a couple of hours. What do you think of this cybersecurity rock ballad? (It’s a war crime against music! - Charles)
Hannah’s AI music-slop proved to be a fantastic opening for two panel sessions at KubeCon, where she discussed practical preparation for the next software supply chain attack with industry legends Justin Cormack (previously CTO at Docker), Josh Bressers (VP Security at Anchore), Sal Kimmich (Security Lead at GadflyAI) and Erika Heidi (DevRel Engineer at Chainguard).
We’re going to take a break next week for Easter and will write to you all again on April 12th.
Have a lovely week!
Hannah and Charles
Leapter - Visual Coding for Humans
Reviewing AI-authored code is not fun for anyone. Leapter closes the verification gap with their human-centric visual approach.
Read how they’re tackling the problem.
What’s Charles reading this week?
Given Hannah’s foray into AI music-slop, I thought I should start this week by noting that on Wednesday Liz Kendall (the Secretary of State for Science, Innovation and Technology) published a written statement, together with a Report and Impact Assessment, on the Government's 'progress' towards addressing the application of UK copyright law to the training of AI models, in which … she basically kicked the can down the road again. The Government now "no longer has a preferred option" for legislative reform on AI training.
In early 2025, composer and low-key raver Max Richter delivered an impassioned speech while giving evidence to the Culture, Media, Sport and Science, Innovation and Technology select committee, stressing the importance of protecting human-made music. "When a massive banger floods the dancefloor with joyful people, it is because the artist who made it knows what joy feels like," he said, adding that the Government's proposal for a large-scale "opt-out" system was "unfair and unworkable".
In “Green Eggs and Ham”, Sam-I-am offers an unnamed character a plate of green eggs and ham, but the character tells Sam that he hates the food. Sam keeps asking him to eat it in various locations and with various animals as dining partners. “Could you, would you, with a goat? Would you, could you, on a boat?” Finally, when the unnamed character agrees to try the dish, he realises that he does, in fact, like green eggs and ham.
This is, as far as I can tell, more-or-less Sam Altman’s business strategy for OpenAI. “We have no idea how we may one day generate revenue,” he said at a 2019 event. He went on to explain that one day, AI will be smart enough that OpenAI will simply ask the computer how to generate an investment return. “You can laugh,” he told a (rightfully) guffawing audience, “but it is what I actually believe is going to happen”.
It hasn’t (and won’t), so Altman instead threw various things at the wall hoping something would stick - would you, could you, in an app platform? Would you, could you, in a browser? A social video app? Jony Ive-designed hardware? Medical research? Advertising? That hasn’t been going great either.
The firm has now taken a leaf out of Microsoft’s business strategy (“What is Apple doing?”), but rather than copying Apple they’ve gone for another firm beginning with A, noticing that Anthropic's focus on code and enterprise sales appears to be working out quite well for them. OpenAI’s new plan is to abandon side quests, The Wall Street Journal reports, digging into APIs on one side and coding on the other.
As part of this move, the company is killing Sora, its much-publicised video-generating app. It’s hard to see how anyone thought Sora would have staying power, or could ever justify the exorbitant cost of running it. OpenAI burned a ton of money on what was effectively a stunt.
Disney, which had plans to take a $1 billion stake in the AI firm, issued a statement to Variety which I read as, “whilst we love making things for children we don’t terribly like doing business with them”. Or, in PR speak “As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere. We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Meanwhile, the FT reports that OpenAI plans to double its headcount by the end of the year, from 4,500 to 8,000, spread across functions.
To the relief of management consultants everywhere, OpenAI and Anthropic have both been quietly courting strategy consultancies and private equity firms as a route into large enterprises — a tacit admission that bottom-up adoption rarely sticks in corporate environments, and that handing someone a general-purpose tool with the promise that it can do everything is not a compelling sales pitch. Particularly when, in practice, it can't.
For his 18th, I spec’d up a custom Windows desktop gaming machine for my son. Time was I’d have built it with him, but time is not a commodity I have much of, so I got PC Specialist to put it together. It is very cool, and he is thrilled with it. But last weekend, Windows attempted to install an update and bricked itself.
In desperation, after a couple of hours of getting nowhere, I rang PC Specialist. “You're my 11th caller on this so far today,” the exhausted-sounding chap on the end of the phone told me. In the end it took me roughly 10 hours to fix. *Sigh*. Apple, whose corporate strategy is, I think, “Hmm. Suppose we made one of those that *didn’t* suck,” just entered the budget PC market via the excellent-looking MacBook Neo, which must be setting off a lot of alarm bells inside Microsoft.
Microsoft’s AI strategy has looked a lot like OpenAI’s, and it too has announced it's pulling back Copilot integrations from several Windows 11 apps — including Photos, Notepad, Widgets, and Snipping Tool — framing it as a move to focus AI on experiences that are "genuinely useful" rather than just everywhere, which sounds like “Let’s look at what Apple are doing” to me.
Beyond the AI rollback, the company is also making more traditional quality-of-life improvements — letting users move the taskbar, speeding up File Explorer, and giving more control over system updates. In short: less Copilot stuffed into everything, more actually fixing Windows. Hopefully including updates that work.
The Windows behemoth is also reshuffling its AI team. Microsoft acqui-hired Inflection's Mustafa Suleyman — co-founder of DeepMind — yet still hasn't made a dent in frontier models, and its complicated relationship with OpenAI is starting to look less like a partnership and more like a liability. Microsoft is moving Suleyman to be “laser-focused” on models while others take the reins on product. Naturally, everyone involved claims to be delighted about this.
The broader context here is that whilst frontier models rapidly became a commodity, they're not easy to make, and thus a pattern is starting to emerge where each new generation leaves fewer companies at the top: Meta slipped behind and spent billions on talent to claw its way back, Apple appears to have quietly stepped back (at least for now), and Microsoft and Amazon have never really broken through with their own models.
Having stepped back from the generative AI arms race, Apple appears to have done a very, very good deal with Google for access to Gemini. The Information’s Jessica E. Lessin, Amir Efrati, and Erin Woo revealed more details this week on the level of access the firm has got:
“While we have reported that Apple can tweak, or fine-tune, a version of Google’s Gemini AI so that it responds to queries the way Apple wants, the agreement gives Apple a lot more freedom with Google’s tech.
In fact, Apple has complete access to the Gemini model in its own data center facilities. Apple can use that access to produce smaller models that power specific tasks or are small enough to run directly on Apple devices so they can run the tasks faster, said a person who has direct knowledge of the arrangement.
The process of producing such models is called distillation, which essentially transfers knowledge from one large language model, which acts like a teacher, to another model that acts as a student.”
I’m not a fan of the teacher/student metaphor for distillation, but the end result is smaller models that can do more-or-less the same job. Another benefit is that a distilled model is much, much cheaper to run in financial terms, making it more economically viable, and also has a significantly reduced carbon footprint in production.
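If you’re curious what distillation actually looks like in practice, the core of it is a single training objective. Here’s a minimal sketch in PyTorch, with hypothetical teacher and student models; it’s purely illustrative, and certainly not Apple’s or Google’s actual pipeline:

```python
# Minimal knowledge-distillation loss (an illustrative sketch, not any
# particular vendor's pipeline). The "teacher" is the large model; the
# "student" is the smaller model being trained to imitate it.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with a temperature so the student
    # learns the teacher's relative preferences, not just its top answer.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence pulls the student towards the teacher; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2
```

Train the student against the teacher’s softened outputs over enough data and it ends up approximating the teacher’s behaviour at a fraction of the size, which is exactly why distilled models are so much cheaper to serve.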
Finally, facing a deluge of AI-generated slop, Wikipedia has updated its English-language guidelines to ban editors from writing or rewriting articles using AI, citing AI-generated content's tendency to violate the site's core content policies.
Some limited uses are still permitted. AI can be used to suggest minor copyedits (as long as it isn't generating new content) and to help translate articles from other languages, provided the editor knows the source language well enough to verify accuracy.
The policy also flags a practical enforcement challenge: some human writers naturally sound like AI, so editors are warned not to rely on stylistic clues alone when flagging suspected AI content; they should look at the actual substance and the editor's recent history instead.
The change follows months of editors battling AI-generated articles, leading to a "speedy deletion" policy for poor-quality content and the formation of WikiProject AI Cleanup. The new guideline passed with overwhelming support.
What's Hannah reading this week?
Let's be honest, there wasn't a lot of time for reading this week! I'm so tired, and losing an hour this morning did not help. Yawn!
During our panel discussion at KubeCon Amsterdam, Justin Cormack shared how he’s using AI tools to discover new vulnerabilities and attack vectors. He recommended that, as a bare minimum, you find out what damage these AI tools can do when pointed at your application.
AI red teaming (the practice of trying to hack into your own app) is the subject of this article by Josh Lemos, CISO of GitLab, in which he argues that AI red teaming is no longer optional. I agree. If anyone can point these tools at your app, then it’s best you find out what they could uncover and get fixing!
Outside of the tech company bubble, the smart folks at JPMorgan Chase have released their own research about how they use agentic AI for threat modelling.
“The co-pilot has driven 20% efficiency in our threat modelling process enabling faster models of new systems and broader scale.”
While AI is squeezing budgets away from software development teams, I expect to see some of that being reallocated to security and platform teams, who play an increasingly important role in making it safer to work at this increased pace. Scaling platforms was the subject of my favourite KubeCon keynote, by Abby Bangser.
During her keynote, the all-round legend and Principal Engineer at Syntasso shared a new framework for Platform Producers. Just as the original 12-factor app made it possible to scale software development, Abby proposes that these 8 factors can help organisations scale their platforms.
And we must scale our platforms. With agentic coding tools accelerating the creation of software, we need to make it fast and safe to deploy and run those apps. Our platforms can’t be the bottleneck, or we’ll end up with a code pile-up on the path to production. I encourage you to watch the 10-minute talk and then dive into the document if you're interested in Platform Engineering.
The changing shape of Platform Engineering also features in this article about how Squarespace is redefining the role of the Platform Team. Instead of pushing work to development teams, the platform team has started to use AI tools to execute changes and perform platform migrations on behalf of application teams, redefining the boundary between platform and applications.
Another experiment shared in the article relates to how they are optimising their application architecture for agentic software development, essentially removing complexity to make things easier for agents.
“We are actively slimming down the surface area of some of our platforms in recognition that coding tools have a harder time working with our internal microservices library” -
Jon Thornton, Director of Engineering at Squarespace
Last week’s talk at QCon feels like it was months ago! My talk, “The Reinvention of the Dev Team”, has been expertly summarised by Matt Saunders over on InfoQ. The talk will be available online later in the year, and I’ll also be delivering a shorter version at AI Native DevCon in June. The speaker line-up looks great and virtual tickets are free!
One of the most controversial topics I shared was code review, asking the honest question: is it sustainable for engineers to continue to review every line of AI-authored code? I don’t think it is, but I can understand why people disagree with me. There are challenges to overcome around trust, reliability and safety if we want to ship code that’s never been reviewed by an engineer.
Our sponsor Leapter are tackling this challenge with a unique visual approach, and they’ve written an excellent article about how they think about the problem. Donald Knuth’s Literate Programming may have been 42 years ahead of its time, according to Robert Werner, CTO at Leapter. Literate Programming says “programs should be written for humans first, computers second” and could hold the key to a more human way to validate code.
Finally, a quick favour to ask… If you joined us in January for the meet-up in London, you will remember Phil Le-Brun talking about The Octopus Organisation. The book is now up for an award! If you were lucky enough to win a signed copy, or if you enjoyed the talk, you can vote for it here.
Updates
Join The Community
Be the first to know about our next meet-ups
Free AI Training?
Massive thanks to BrainStation for supporting our community!
Follow us on LinkedIn
Bite sized nuggets of AI learning!
Follow us on BlueSky
Bite sized nuggets of AI learning!
Catch Up On The Conference
Subscribe now and don't miss all the latest recordings!