Hello Friends,
When we started this newsletter, Hannah and I worried we wouldn’t have enough to write about each week. It turns out that really isn’t a problem; if anything, the problem is deciding what to leave out. Hopefully we’re doing OK. We love getting messages from you telling us what you’ve enjoyed.
Following the launch of BIMP, Hannah is taking a well-earned holiday in Japan, but she has still managed to file copy.
This week’s meetup in London has proved very popular: so popular that we’ve had to close registrations and enable the waitlist. Don’t worry if you’re on the waitlist; there may be a few drop-outs during the week!
Secret Garden, David Attenborough’s new show for the BBC, has become compulsory Sunday night viewing in the Humble family. The idea is to take the super-high-res cameras and patient filming techniques usually deployed in the Amazon or the Serengeti and see what they can capture in British backyards. The whole show is glorious, and it's wonderful to see the amazing resources of the BBC natural history unit making a programme like this. Also, Attenborough is 99 years old, for goodness' sake. When Charles’ sister displayed a picture of herself with Attenborough during a talk in Henley this week, the audience broke into spontaneous applause. Why is his birthday not a new UK bank holiday?
Charles is doing a talk at MLCon in London in May on AI and sustainability. Hannah will be speaking at AI Native DevCon on 1-2 June. You can get 30% off the latter with code AIFTROU30.
Hannah and Charles
Leapter - Visual Coding for Humans
AI code is shipping faster than humans can verify it. Leapter's visual approach finally closes the gap. See how it works.
What's Hannah reading this week?
I’m currently sitting in a coffee shop in Akihabara, Tokyo so it’s an understandably short section from me this week.
Charles covered Mythos last week, but since then I’ve been reflecting on Anthropic’s decision to limit access to their new model due to its unprecedented cybersecurity implications. I agree that caution is required. What sits uncomfortably with me is that it could set a precedent for “exclusive access” to future models. If we assume the models keep getting better and better, then whoever gets access gains an enormous advantage over the rest of us.
Right now Anthropic has granted Mythos access to some tech companies, some cybersecurity companies, some financial services firms and, possibly, one government (the US, of course). The rest have to wait. Indefinitely.
Is this what the future will look like? A tech company deciding which governments deserve defending. A tech company deciding which banks are safer than others. Big tech monopolies built on AI capability that no one else can match?
That’s a lot of control and a lot of power. A worrying amount.
If Anthropic are going to limit access to their model to only some governments, then they’re playing with global politics, intentionally creating inequality: choosing who is worth defending and who needs to be defended against. Should an AI company wield that much influence? What could a government do today to convince them it is in fact a goodie not a baddie? (And if starting wars on a whim doesn’t disqualify you from access, then WTF does?!?)
Perhaps I’m catastrophising. Maybe it’s just the same old story that more money gives you access to better tech. Perhaps we’ll see a launch of Mythos in the future with an astronomical price tag and then we’ll have our answer about who gets access - whoever can afford it.
The optimist and idealist in me thinks that open source might be the answer to reset the balance of power and control. But with the amount of money flowing into Anthropic (see the unbelievable ARR numbers reported by Charles below) it will take a global effort and a fuck tonne of cash to catch up.
What’s Charles reading this week?
In Dirk Gently's Holistic Detective Agency by Douglas Adams the Electric Monk is a labour-saving device that believes things on your behalf, so you don't have to. Adams compares it to video recorders (younger readers, ask your parents), which watched tedious television programmes for you to save you the bother. Meta is building an artificial intelligence version of Mark Zuckerberg to engage with tedious employees in his stead, to save him from doing so. No, really.
Meta has also released Muse Spark, the first model from the new AI lab that it built from scratch for billions of dollars last year. The model isn’t the best available, but it is pretty good. Having fallen dramatically behind, Meta has impressively managed to get back in the game (unlike Apple, Amazon and Microsoft).
The big story of GenAI so far is how much impact it is having in software development. A side effect of that is that developers tend to think AI is better than it is at lots of other things too. Another side effect is the need to scale capacity very, very quickly.
Anthropic is doing particularly well at riding the coding wave. It did a deal to buy both TPU AI accelerator capacity from Google and its own custom chips from Broadcom. The announcement includes the fact that the company now has annualised revenue (past four weeks multiplied by 13) of $30bn, up from $9bn at the end of December 2025, $14bn in February, and $19bn in March.
It's worth saying that annualised revenue can be an optimistic metric, since it assumes the current rate holds when in reality growth could slow down or speed up. But, putting that aside for a moment, it means the company has gone from ~$75m in monthly revenue at the beginning of last year to close to $2.5bn last month. This also puts Anthropic ahead of OpenAI, based on OpenAI's last public claim of $2bn in monthly revenue at the end of March ($24-25bn annualised). For context, Amazon’s 2025 shareholder letter says AWS has $15bn in annualised AI revenue - i.e. ~$1.2bn monthly.
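If you want to check the run-rate arithmetic yourself, here's a minimal Python sketch. The figures are the reported ones; the scale-up from a four-week window to an average calendar month (~30.4 days) is my own back-of-the-envelope step, not anything Anthropic discloses:

```python
# Back-of-the-envelope check of the annualised-revenue arithmetic.
# "Annualised" here means: trailing four weeks of revenue multiplied by 13.

DAYS_PER_MONTH = 365 / 12  # average calendar month, roughly 30.4 days

def four_week_revenue(annualised_bn: float) -> float:
    """Recover the trailing four-week figure from an annualised one."""
    return annualised_bn / 13

def calendar_month_revenue(annualised_bn: float) -> float:
    """Scale the four-week figure up to an average calendar month (my assumption)."""
    return four_week_revenue(annualised_bn) * DAYS_PER_MONTH / 28

# $30bn annualised works out to roughly $2.5bn in a calendar month.
print(f"${calendar_month_revenue(30):.2f}bn")  # prints $2.51bn
```

The same conversion gives ~$0.7bn a month for the December figure, so "close to $2.5bn last month" is the annualised headline number doing exactly what you'd expect.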
Almost all of this is from software development. Agentic coding burns vast numbers of tokens, for vast amounts of money, which the employers of software developers are willing to pay in the hope of productivity gains. I wonder if anyone has worked out whether it would now be cheaper to hire more people, particularly with so many good developers having been laid off. I had a weird conversation this week with someone whose employer has budget for AI, which likely can’t solve their problem, but none for headcount, which could.
My own experience with Anthropic is that it is increasingly capacity constrained and noticeably less good than it was at Christmas, and there are more and more reports suggesting the same thing. Part of that capacity constraint may be down to the fairly horrifying trend of tokenmaxxing, which sees developers competing to see how many tokens they can use. The Information (unfortunately no gift link) reports that Meta employees are using an internal leaderboard to gain titles like “Session Immortal” and “Token Legend”: Silicon Valley’s newest way of signalling who is most AI-native.
The scale is staggering: more than 60 trillion tokens were used in just 30 days. It is difficult to put a precise figure on costs, as we don’t know what providers like Anthropic are charging Meta, but it potentially translates to hundreds of millions of dollars in compute. Meanwhile, Visa says its employees are using nearly 2tr tokens a month. Both the environmentalist and development team lead in me are appalled. As well as being bad for the environment (carbon rather than water), token use joins lines of code, GitHub commits, and defects fixed as being yet another amazingly stupid way of measuring developer productivity.
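To get a feel for where "hundreds of millions of dollars" comes from, here's a rough sketch of the token bill. The $3-per-million-token blended rate is purely my assumption for illustration; the prices Meta or Visa actually pay are not public:

```python
# Hypothetical token-bill arithmetic: the blended price per million tokens
# is an illustrative assumption, not a disclosed figure.

ASSUMED_PRICE_PER_MTOK = 3.0  # USD per million tokens (assumption)

def token_bill_usd(tokens: float, price_per_mtok: float = ASSUMED_PRICE_PER_MTOK) -> float:
    """Cost in USD for a token count at a flat per-million-token price."""
    return tokens / 1_000_000 * price_per_mtok

meta = token_bill_usd(60e12)  # Meta: 60 trillion tokens in 30 days
visa = token_bill_usd(2e12)   # Visa: ~2 trillion tokens a month
print(f"Meta ≈ ${meta / 1e6:.0f}m, Visa ≈ ${visa / 1e6:.0f}m")  # Meta ≈ $180m, Visa ≈ $6m
```

Halve or double the assumed rate and the figures move proportionally; the point is the order of magnitude, not the exact bill.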
The shift to augmented coding is a tough one for many programmers, and also for the vendors of developer tooling. JetBrains, which has been my favourite IDE vendor for years, is, I think, struggling from a leadership point of view. I wrote about Central for LeadDev this week.
It's a new platform from JetBrains aimed at bringing governance, orchestration, and infrastructure to AI-driven software development. Rather than a simple coding assistant, Central connects tools, agents, and infrastructure so that automated work can run, be monitored, and be managed across teams.
The announcement comes alongside the sunsetting of its Code With Me collaborative coding feature, signalling a clear strategic shift toward human-agent workflows. However, engineering leaders will want to tread carefully — JetBrains faced significant criticism last year after developers saw their AI credits depleted far faster than expected following a mid-contract pricing change, and Central's pricing is yet to be confirmed. An Early Access Program is expected in Q2 2026.
We covered Ronan Farrow’s profile of Sam Altman last week, which didn’t tell us much we didn’t already know but was unflattering, with many people who know him saying he’s an untrustworthy, manipulative liar, and many people who’ve worked with him choosing to quit. I didn’t mention it last week, but the profile also suggested that someone with a grudge is spreading rumours, for which Farrow couldn’t find any support, that Altman pays for underage sex. I would need my own Electric Monk to believe we're anywhere near AGI but, echoing Hannah’s thoughts above, you do have to wonder about the suitability of people like Altman, Musk, et al. to have control over AGI should that ever come to pass.
Since Farrow’s profile came out, a 20-year-old man was arrested in San Francisco after throwing a Molotov cocktail at Altman’s home. The device set a perimeter gate alight but no one was hurt. The suspect fled before police arrived, but around an hour later showed up at OpenAI's San Francisco headquarters threatening to burn the building down, at which point he was arrested. Then, two days later, Altman’s home was struck by gunfire with two more men arrested.
Altman himself addressed the first incident on his blog, sharing a family photo and noting that the attack happened at 3:45am and the device "bounced off the house". He reflected that a recent critical article about him may have contributed to the danger, writing that he had underestimated the power of words and narratives.
FWIW, I believe violence of any kind is unacceptable, and that includes these attacks on Altman. I also think that the (let’s call it) very liberal interpretation of fair use, the concentration of wealth in a small number of hands, and the kind of helplessness people feel, which leads to this kind of violence, are worthy of a lot more scrutiny from both the industry and our political leaders than they are currently getting.
Anne Currie championed us covering ethics at QCon and on InfoQ when I was chief editor there. She has a brand new podcast called Asynchronous and Unreliable. In the first episode, she and Sara Bergman, two of the co-authors of Building Green Software, discuss the tension between code efficiency and operational efficiency in software engineering. They also touch on the cloud's dual nature as both an efficiency enabler and a trap for those who lift-and-shift without going fully cloud-native. Anne and I would both argue that ops trumps code for sustainability, but Anne makes a further argument that AI would be good at rewriting code into more efficient languages, which might ultimately make code optimisation more palatable.
The conversation wraps up with reflections on the value of in-person conferences and meetups for professional networking, and encouragement for people to try public speaking despite the universal nervousness it provokes. It’s a great first episode, and I recommend subscribing. I’ll be on a future one, but don’t let that put you off.
Updates
Join The Community
Be the first to know about our next meet-ups
Free AI Training?
Massive thanks to BrainStation for supporting our community!
Follow us on LinkedIn
Bite-sized nuggets of AI learning!
Follow us on BlueSky
Bite-sized nuggets of AI learning!
Catch Up On The Conference
Subscribe now and don't miss all the latest recordings!