Dear Friends,
Today is an incredibly exciting day for Hannah: she’s launching a new product. Having spent most of 2025 helping other AI startups, she decided it was her turn. BIMP is a Base Image Management Platform, built with co-founder and partner Stuart Preston, and it brings together some of Hannah’s favourite subjects: Platform Engineering, Cyber-Security and Agentic Software Development.
If you work in software development and you use containers then BIMP is for you. (Everyone else please clap! I appreciate the encouragement!)
Want to find out more about BIMP?
Hannah is currently travelling to Amsterdam to introduce BIMP to the world at KubeCon, where she will be speaking on two panels about Software Supply Chain Security and delivering a talk on Platform Engineering for Security.
Charles was on hand to help with the press release, of course. Thank you Charles, you are a legend!
Don’t worry, this doesn’t mean any changes to AI for the rest of us: the newsletter, meet-ups and conferences will continue! Forever, probably!
QCon was in town this week. This is always a bit weird for Charles, who spent something like 12 years involved in various QCons in London and elsewhere, whilst serving as InfoQ’s chief editor BC (Before Covid), and then was laid off at the start of the pandemic. It’s a bit like hearing your ex-partner is in town; you hope they’re fine, but don’t particularly want to bump into them. So he spent Monday meeting people who were at QCon and wanted to see him, whilst hiding in the British Library, and then went to give a talk in Brighton.
In the Q&A after the talk he had a fun discussion with someone from the water industry about how fiendishly difficult it is to plan for future water supply demands when the software industry is so reluctant to talk about how it plans to cool the AI data centres the government wants it to build. You can watch the recording here.
Hannah was of course one of the people who sat down with Charles at the British Library. We forgot to get a selfie - sorry readers. We will remember next time! That was before she headed over to QCon, where she delivered a new talk about the reinvention of the dev team. This isn’t available online yet, but when it is we’ll be sure to share it.
Wish Hannah the best of luck with her new venture and have a wonderful week friends.
Hannah and Charles
What’s Hannah reading this week?
It feels like every day brings a new cyber security threat to worry about. Aside from the obvious flaws in LLM security, we have more traditional security risks being amplified by the nefarious use of Agentic AI. I’ve previously written about how Anthropic detected hackers using Claude Code, and about how good these tools are at discovering previously undetected vulnerabilities in popular open source projects. Agentic hackers like hackerbot-claw can uncover vulnerabilities faster than any human hacker, and they don’t get bored of trying new tactics.
Trivy is a container scanning product by cyber security company Aqua, and it has suffered not one but two successful attacks in the past couple of weeks. It seems that hackers (with the help of an autonomous bot called hackerbot-claw) were once again able to push malware into Trivy’s build pipeline with the goal of harvesting credentials.
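For anyone who hasn’t used it, Trivy is normally the thing doing the scanning, which is exactly what makes a compromised build so alarming. Here’s a minimal sketch of driving a scan from Python; it assumes the standard trivy CLI is installed and parses Trivy’s usual JSON report format (the image name is just an example):

```python
import json
import subprocess

def scan_image(image: str) -> list[dict]:
    """Run Trivy against a container image and return its vulnerability findings."""
    # Assumes the trivy CLI is on PATH; --format json emits Trivy's JSON report,
    # --quiet suppresses the progress output.
    result = subprocess.run(
        ["trivy", "image", "--format", "json", "--quiet", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = []
    # Trivy groups findings per scanned target (OS packages, language deps, ...).
    for target in report.get("Results", []):
        findings.extend(target.get("Vulnerabilities") or [])
    return findings

if __name__ == "__main__":
    vulns = scan_image("python:3.12-slim")  # example image
    critical = [v for v in vulns if v.get("Severity") == "CRITICAL"]
    print(f"{len(vulns)} findings, {len(critical)} critical")
```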
Charles has covered the embarrassing security incident suffered by McKinsey below, but the thing that struck me about this report is the speed: it took just two hours for an agentic hacker to steal the entire production database from McKinsey’s internal AI platform, Lilli. The speed and effectiveness of these tools is both impressive and terrifying.
Apparently AI models are so powerful now that they can reverse engineer a binary, discovering yet more vulnerabilities in old, out-of-date systems without needing to see the code. This reel by @agenticengineering on Instagram breaks it down. The implications are massive.
It’s clear that our current security practices aren’t enough. Most security teams suffer from information overload and alert fatigue, with a huge potential attack surface to reason about. Triaging risks and deciding which ones to prioritise is standard practice. There is no such thing as perfect security and there never will be, but perhaps the current mode of “triage and ignore” won’t be viable for much longer.
This is part of the reason I decided to build BIMP. Making life easier for security teams is a mission I care about, and automating some of their work with BIMP is one way I can help. We’ve taken a new approach where we orient the platform around continuous automated remediation instead of just finding risks and passing them off to a human to solve. (Yes I am plugging BIMP again, please clap!)
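To give a flavour of what continuous automated remediation means in practice, here’s a toy sketch (emphatically not BIMP’s actual implementation): watch the digests of the base images your services build on, and kick off a rebuild as soon as an upstream image changes, rather than filing a ticket for a human. It assumes the crane CLI from go-containerregistry is installed, and trigger_rebuild is a hypothetical stand-in for whatever starts your CI pipeline:

```python
import subprocess
import time

# Hypothetical watch list: tags our services build FROM, mapped to the
# digest we last rebuilt against (placeholder values).
BASE_IMAGES = {
    "ubuntu:22.04": "sha256:0000...",
    "python:3.12-slim": "sha256:0000...",
}

def current_digest(image: str) -> str:
    """Resolve a tag to its current registry digest using the crane CLI."""
    return subprocess.run(
        ["crane", "digest", image],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def trigger_rebuild(image: str) -> None:
    """Hypothetical stand-in: kick off CI rebuilds of every downstream image."""
    print(f"base image {image} changed; rebuilding downstream images")

while True:
    for image, last_seen in BASE_IMAGES.items():
        digest = current_digest(image)
        if digest != last_seen:
            trigger_rebuild(image)
            BASE_IMAGES[image] = digest  # remember what we remediated to
    time.sleep(3600)  # poll hourly
```

The real work, of course, is in everything trigger_rebuild hand-waves over, which is rather the point of building a platform around it.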
But it’s not all bad news, I promise! I was heartened to see this write-up by Synthesia about how they have scaled vulnerability management with AI, reducing the noise so they can focus on the real risks. If hackers are using Agents to attack, then we need to use Agents to defend. I hope to see more stories like this. Agents should take the toil out of security work, helping organisations with the routine maintenance that good security requires.
This week both NVIDIA and Anthropic have launched more secure incarnations of the popular but flawed AI assistant openclaw. NVIDIA’s NemoClaw wraps openclaw in NVIDIA’s Agent Toolkit, whilst Anthropic have added some new features to Claude Cowork, including Claude Despatch, which lets users send it tasks from anywhere, providing something akin to the openclaw “chat with my agent” user experience.
Personally, I still don’t have an openclaw. I’m too worried about the security to take the risk. I do however have a custom agent assistant called Otis, the wise owl that helps Stuart and me keep track of our to-do list whilst working on BIMP. I’ve written about Otis the owl, and everything else we tried out, on the BIMP blog. (Yes I am plugging BIMP again, please clap!)
What’s Charles reading this week?
Some of the biggest names in AI are turning their attention to what comes after LLMs. Yann LeCun, who left Meta late last year after more than a decade leading its AI research division, has raised $1.03bn for his new startup AMI Labs at a $3.5bn pre-money valuation.
LeCun has long argued that LLMs are fundamentally limited: they can’t keep scaling indefinitely and, crucially, will never develop a genuine understanding of cause and effect. World models, which aim to learn how physical reality actually works rather than predicting the next word in a sequence, are his proposed alternative. Turing Award winner Fei-Fei Li’s World Labs is pursuing a similar thesis, and also raised $1bn in February (no public valuation has been confirmed, though reports suggest around $5bn).
OpenAI has announced it's acquiring Astral, the company behind popular open source Python development tools including uv (a package manager with 126 million monthly downloads), Ruff (a linter and formatter with 179 million monthly downloads), and ty (a type-checker currently in beta). Astral's team will be integrated into OpenAI's Codex team, with the goal of allowing AI agents to work more directly with tools developers already use. The financial terms were not disclosed, but both OpenAI and Astral's founder Charlie Marsh have pledged to keep supporting the open source projects after the deal closes. The acquisition is seen as part of an ongoing battle between OpenAI's Codex and Anthropic's Claude Code for dominance in the AI-powered coding assistant market.
I held off on covering this last week, partly because its authenticity is hard to verify, but someone is claiming to have breached McKinsey's internal AI-powered knowledge management system, allegedly extracting an entire database of Teams conversations, spreadsheets, and client-facing PowerPoints. Whether or not this particular incident is genuine, it points to a real problem: when you deploy AI to search, analyse, and synthesise information across an organisation, the conventional security controls used to limit and compartmentalise data access break down and you need new ones. It's one of the main reasons that enterprise GenAI rollouts so often land hardest on security teams.
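One common mitigation, sketched below purely as an illustration of the pattern rather than anything McKinsey ran, is to carry each document’s access list into the retrieval layer and filter on the requesting user’s permissions, so the model can only synthesise what that user could already read:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    # ACL travels with the document into the index, not just the source system.
    allowed_readers: set[str] = field(default_factory=set)

# Stand-in for a vector index; a real system would rank by embedding similarity.
INDEX: list[Document] = [
    Document("Q3 client deck for Acme", allowed_readers={"partners"}),
    Document("Public press release", allowed_readers={"everyone"}),
]

def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    """Return matching documents, filtered by the requesting user's groups,
    before anything is handed to the model for synthesis."""
    hits = [d for d in INDEX if query.lower() in d.text.lower()]  # toy "search"
    return [d for d in hits if d.allowed_readers & (user_groups | {"everyone"})]

print([d.text for d in retrieve("press", {"staff"})])  # no partner docs leak
```

Trivial to state, fiddly to retrofit, which is why these rollouts keep landing on security teams.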
Reuters is reporting that Meta may cut around 20% of its workforce as “AI costs mount”. We’ve gone from “AI can do the jobs so we need fewer people” to “AI is now so expensive we need to lay people off to afford it”. As Block demonstrated, AI is certainly a convenient cover for executives of badly managed companies. If these cuts go ahead, Meta will have laid off more people since 2019 than it actually employed that year.
The EU is moving to ban AI "nudifier" apps after Grok became the most prominent example of an AI platform failing to prevent the sexualisation of real people's images, including children. EU Parliament committees voted 101–9 to amend the AI Act to prohibit such systems, though platforms with effective safety measures in place would be exempt.
The move would directly undermine Musk's current strategy, which has been to blame users rather than fix Grok itself, with the explicit feature paywalled rather than blocked. If the ban passes, possibly as early as August, xAI would be forced to rein in Grok's capabilities or face fines of up to 7% of global annual turnover.
The amendment is notable as the first EU policy to target platforms rather than just individual users, a shift prompted by the difficulty of tracking down perpetrators and the scale of the harm being caused.
Musk is also facing legal pressure in the US, with a proposed class-action lawsuit alleging that Grok was used to generate child sexual abuse material (CSAM) from real photos of three teenage girls in Tennessee. The case began when one of the victims received an anonymous tip on Instagram that her images, taken from her social media and transformed into explicit AI-generated content, were being shared alongside images of 18 other minors.
Police traced the material to a perpetrator who used a third-party app with access to Grok to create the images, which were then traded on Telegram and uploaded to file-sharing platforms. The lawsuit further alleges that xAI knowingly licenses its servers to third-party apps, profiting from this arrangement while those servers are used to generate and distribute illegal content. The plaintiffs are seeking an injunction to stop Grok's harmful outputs, as well as punitive damages on behalf of what they estimate could be thousands of affected minors.
Let's end on a lighter note. A friend of mine, Sophie, has just published a short book called On Being Healthy — a parody of Virginia Woolf's 1926 essay On Being Ill, timed to mark its centenary.
So why mention it in an AI newsletter? One thing that AI evangelists seem to assume is that the thinking and analysis are quick, and the writing is just tedious busywork that slows us down. But the writing IS the thinking and analysis. One of the things I find myself thinking about lately is what makes writing feel human. Parody might be one answer. It requires taste, affection, and the ability to hold two things in your head at once: admiration and critique. Sophie has all three.
If you like Woolf, or literature, or a small book that is funny about something that isn't usually funny, it's worth your time.
Finally, I wanted to pass on this brilliant essay on design and attention from Terry Godier. It’s only loosely on topic, but it’s so good! My equivalent to his Casio is my battered Seiko analogue watch: an 18th birthday present, which I’ve worn more or less every day since. The glass is scratched to hell, but it tells the time and never demands anything from me. I contemplated buying an Apple Watch and my wife, very wisely, persuaded me not to. I’m grateful to her for that, as for so many things. (like proof-reading - KH)
Updates
Join The Community
Be the first to know about our next meet-ups
Free AI Training?
Massive thanks to BrainStation for supporting our community!
Follow us on LinkedIn
Bite-sized nuggets of AI learning!
Follow us on BlueSky
Bite-sized nuggets of AI learning!
Catch Up On The Conference
Subscribe now and don't miss all the latest recordings!