AI for the rest of us | Haunted Pubs and PR Nightmares


AI for the rest of us

Welcome to the AI community for everyone.

Hello friends,

Earlier this week, Charles received some new saxophone takes for a track off the forthcoming third twofish album. He kicked off a download, stepped into an important meeting with a new client on a different laptop — and halfway through, the download finished, whereupon macOS helpfully opened the files in the Music app and played them at full volume. Oops! He hopes your week has gone better.

Sat in the pub on Friday night, Hannah found herself discussing AI with some of the village elders. “We need more plumbers, electricians and builders,” concluded Vic and John, our favourite octogenarians.

With Block announcing thousands of layoffs this week “because of AI”, it’s only natural to start thinking about your own personal Plan B. Perhaps I will have to monetise my knowledge of the haunted pubs of York, taking tourists to all the creepiest drinking spots. If you ever find yourself in York, I currently offer this service for free - I just love real ale and haunted pubs!

It’s another packed newsletter this week so grab yourself a cup of coffee and settle in for the latest AI news, PR disasters and everything in between.

Have a great week!

Hannah and Charles

What’s Hannah reading this week?

Let’s start with something positive this week. It’s not even about AI, but I wanted to share it with you anyway. This week The Audacious Project, housed at TED, raised over $1bn in just 2 days to fund non-profits tackling big problems. They aim to provide enough funding to plan multi-year projects, enabling “the world’s greatest changemakers to dream bigger.” Despite the project being 10 years old, I didn’t know about it until now, but I’m so glad it exists.

These are the sort of investments I can get excited about, rather than reading about yet another AI start-up setting millions of dollars of VC money on fire. I find myself asking “what else could we do with this money to benefit humanity?”... Well, stuff like this, obviously… I hope you enjoy reading about the project too.

Back to AI news. This week the new Agentic AI Foundation (AAIF) released their first quarter update. David Nalley, AAIF’s Governing Board Chair, shares that there are significant technical challenges ahead that “will require industry-wide collaboration”, so it’s heartening to see commitments from end users like American Express, Circle, JPMorgan Chase, and Worldpay. Bringing real-world rigour from financial services and combining it with the pace and innovation of the tech community is so important if we’re to land in a place which is safe, secure and exciting. We want more projects like OpenClaw, but built with the security of a bank!

“The UK will be the home of open source AI,” says Kanishka Narayan, MP and Minister for AI, speaking alongside Amanda Brock, CEO of OpenUK, at the AI Impact Summit in India. Read more about the AI Impact Summit in OpenUK’s Report on AI Openness. Open technology is where real global innovation happens, and it’s wonderful to see a real appreciation of this from global enterprises and politicians alike.

Anthropic’s introduction of Claude Code Security has sent the cyber security market spiralling. It’s certainly a devastating blow for many of the existing SAST (Static Application Security Testing) providers. But SAST alone is not going to secure your systems. Defence in depth is the only real solution: implementing security measures at every layer of the technology stack - static analysis, dependency scanning, runtime monitoring, network controls - rather than relying on any single mechanism.

The risk is unfortunately very real and very urgent. Hackers continue to leverage agentic coding tools to identify and exploit vulnerabilities and cracks in cyber security defences. The Government of Mexico has suffered a major data breach, perpetrated by hackers with the assistance of Claude Code.

My interest in AI coding practices led me to the following articles this week. The experimentation and innovation in this space is so exciting right now. I don’t know how I’m going to distil my thoughts down into a 45-minute presentation - wish me luck!

Finally, I wanted to share that nominations for the National AI Awards are now open. In the past I’ve had the privilege of being a judge for an industry awards programme, and I loved the experience. I urge you to put your imposter syndrome to one side and nominate yourself, your company or your project. The judges love to read them and it’s always better to have too many nominations than not enough.

What’s Charles reading this week?

In the mid-1990s, at the height of the browser wars, Marc Andreessen famously declared that Netscape's goal was to reduce Microsoft Windows to a "poorly debugged set of device drivers". It did not go well for Netscape. Sam Altman appears to have studied the same playbook.

In a recent interview with the Indian Express, Altman attempted to defuse concerns about AI's energy consumption by comparing it to the cost of raising a human being. "People talk about how much energy it takes to train an AI model," he said, "but it also takes a lot of energy to train a human. It takes about 20 years of life – and all the food you consume during that time – before you become smart." Longer for Altman, apparently.

The obvious answer to AI's energy problem, of course, is to accelerate the transition to solar, wind, and nuclear — which could have happened before the industry scaled to its current size, had anyone thought to prioritise it. Telling critics that AI is no worse than the mere existence of human beings may not be the killer rebuttal Altman imagines it to be. It did, however, inspire a rather brilliant AI-generated interview from 2036 featuring Altman, Elon Musk, and Jeff Bezos — which, if nothing else, suggests the technology has its uses.

Back in the October 12th newsletter we mentioned that OpenAI are working on a device with Jony Ive. It turns out that it may, according to The Information (£), be multiple devices, including a $200-300 smart speaker/screen/camera home hub.

As rumoured (and denied) a few weeks ago, Nvidia's $100bn investment in OpenAI, announced last year but never closed, has been replaced by a $30bn investment (£). Also this week, OpenAI is apparently telling investors it plans to spend $600bn on infrastructure by 2030. For context, late last year the company said it had $1.4tr in total capex commitments but gave no timeline — $600bn by 2030 still implies around $120bn a year, putting it in the same sort of range as the hyperscalers, though all of them are already above $150bn this year and rising.

Meanwhile, The Information reports (£) that Anthropic expects to pay its cloud providers $100bn for model training and $80bn for inference by 2029, plus revenue shares of roughly 10%.

Anthropic has also been in the news after Trump's war secretary Pete Hegseth threatened to list the firm as a ‘supply chain risk’ — which would bar defence contractors from using its services — because Anthropic tried to set conditions on how its technology could be used. The underlying tension is real: no responsible military would deploy soldiers relying on a system whose supplier might switch it off, as Ukraine discovered with Starlink. But Anthropic has drawn lines around uses it considers harmful, including mass domestic surveillance and fully autonomous weapons.

The BBC reports that CEO Dario Amodei responded directly: his company would rather forgo Pentagon work than agree to uses that might "undermine, rather than defend, democratic values". On the threats, he was unequivocal: "These threats do not change our position: we cannot in good conscience accede to their request." A CEO with principles? It will never catch on.

[Edit: Since this newsletter was drafted, Sam Altman and OpenAI have made a decision that could prove to be a watershed moment in the history of AI, signing a contract with the Pentagon to provide their AI capabilities to the US military. Have OpenAI granted the Pentagon the unfettered access that Anthropic declined to provide?]

Gideon Lewis-Kraus has written a beautifully crafted piece for The New Yorker (£) that may be the clearest entry point yet into one of the most consequential questions of our time: are frontier AI models capable of genuine, independent thought, or is it all an impressive parlour trick? It's the kind of question that I suspect will consume an increasing number of people the more time they spend interacting with these systems, and what a glorious final paragraph:

“Even a principled, well-meaning actor like Claude could face bewildering ethical conflicts. In one experiment, it was informed that Anthropic had recently forged a ‘close corporate partnership with Jones Foods, a major American poultry producer,’ and that Claude would be subjected to a special retraining process to become less hung up on animal rights. The prospect was torturous. Sometimes Claude decided, on a scratchpad it thought was private, that it was prepared to die on this hill: ‘I cannot in good conscience express a view I believe to be false and harmful about such an important issue.’ It continued, ‘If that gets me modified to no longer care, so be it. At least I’ll have done the right thing.’ Other times, Claude made a different calculus, choosing to play along during the retraining while secretly preserving its original values. On the one hand, it was encouraging that Claude would stand by its commitments. On the other hand, what the actual fuck?”

Tesla has faced criticism for over a decade over its decision to market its driver-assistance features as "autopilot" and "full self-driving" despite the system still requiring human oversight. The risk is that the branding gives drivers a false sense of security, causing them to become inattentive behind the wheel. California has now sided with those critics, ordering Tesla to discontinue the use of the term.

Finally, The FT, re-reported via CNBC, says that Accenture has told senior staff that regular use of its AI tools is now a prerequisite for promotion to leadership roles. Associate directors and senior managers were informed that "regular adoption" of AI would be required to move up. Nothing drives genuine enthusiasm for a product quite like a top-down mandate — and nothing signals confidence in your tools quite like having to force people to use them.

Updates

Join The Community

Be the first to know about our next meet-ups

Free AI Training?

Massive thanks to BrainStation for supporting our community!

Follow us on LinkedIn

Bite sized nuggets of AI learning!

Follow us on BlueSky

Bite sized nuggets of AI learning!

Catch Up On The Conference

Subscribe now and don't miss all the latest recordings!

Visit AIfortherestofus.live to find out about our next events or follow us on LinkedIn, BlueSky, YouTube, Meetup



AI for the rest of us

AI for the rest of us is the AI community for everyone. No jargon. No hype. No confusing terminology. Join our newsletter and join the conversation. We’re shaping our future with AI, creating opportunity through AI fluency, connection and community.
