AI for the rest of us

Time To Resign?

Welcome to the AI community for everyone.

Hello friends,

The sun is shining in Surrey this morning, and a less grumpy Charles (well, compared to last week anyway) has been amused by the fact that two high-profile AI company resignations involve poets. Perhaps there's something about watching large language models pastiche human creativity at scale which drives people back to the irreducibly human art of poetry. Or maybe poets just have a keener sense of when to get out.

Hannah is in a positive mood today as well. This week her dad had a successful left atrial appendage occlusion. The procedure involved deploying a device called a “Watchman” into the left atrial appendage of his heart, sealing it off so that blood clots can no longer form there and escape into the bloodstream. Medicine is incredible, isn’t it?! Shout out to Leeds General Infirmary, who can complete a heart procedure in less time than it takes to get my highlights done!

Don't miss our next meet-up on Thursday! We're tackling "AI Coding the Right Way" with re:cinq Consultant Daniel Jones, who has trained over 100 developers to get the most out of their AI tools. Following that, a panel of engineering leaders will share hard-won lessons on empowering their teams with these new capabilities.

This is our last meetup hosted by the lovely team at Tessl (thank you, Tessl!), so we’re looking for a new venue from March. If you can support the community through hosting or sponsorship, please reach out!

Have a fabulous week everyone, and hopefully see some of you on Thursday.

Hannah and Charles

What’s Hannah reading this week?

This year there’s been a significant leap forward in the capabilities of AI Coding Agents. This visualisation from METR shows the startling progress in the past 12 months.

With this increase in capability, we’re starting to see some second-order effects emerging. One area I’m keeping a close eye on is the security implications.

On LinkedIn this week Nathan Sportsman, an expert in offensive security, posted about how effective Claude Opus 4.6 is at finding vulnerabilities in software. He claims that Claude discovered 4 new undisclosed CVEs in OpenSSL in just 8 minutes.

My first reaction was worry. These tools in the hands of hackers can find and exploit new vulnerabilities at incredible speed and with almost zero effort. But there is a flip side too: perhaps it’s now easier than ever to test your own systems for vulnerabilities, without any need for a separate specialist security tool. You just use the same model you’re using for the rest of your development work.
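Here’s a rough sketch of what that could look like in practice, using the Anthropic Python SDK. To be clear, this is my illustration rather than anything from Nathan’s post: the model ID and the file under review are placeholders I’ve invented.

```python
# A minimal sketch: ask the model you already code with to review one of
# your own files for vulnerabilities. Assumes `pip install anthropic` and
# an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

with open("src/auth.c") as f:  # hypothetical file under review
    source = f.read()

message = client.messages.create(
    model="claude-opus-4-6",  # placeholder; use whichever model ID you already use
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": "Review this C code for security vulnerabilities "
                   "(memory safety, integer overflows, injection). For each "
                   "finding, give the location, the issue, and a suggested fix.\n\n"
                   + source,
    }],
)

print(message.content[0].text)  # the model's findings, as plain text
```

Nothing specialist required: the same API call you use for code generation becomes a first-pass security review.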

Off the back of Anthropic’s own write-up, Josh Bressers of Anchore provided his take on what this means for open source. The dynamics of open source are of course quite different to those of your own application code. As Josh points out, many open source projects are already “running on the brink of burnout”, and the blast radius for a vulnerability in a popular open source project is massive.

We might be about to see a tsunami of AI-discovered 0-days. It will be tough to manage in the short term, but in the long term I hope it will improve security posture: if you can find a vuln, you can fix a vuln.

One of our speakers at this month's meet-up is Norberto Lopes, VP of Engineering at incident.io. As active members of the London tech community, incident.io have a reputation for a strong engineering culture, and you can see this reflected in Norberto’s excellent blog post “AI isn’t optional anymore”. The thing that stood out to me is ownership.

“A final note on using LLMs with responsibility: AI isn’t accountable, humans are. If you put up a pull request thinking that "if this goes wrong, AI did it" you’re about to have a wake up call… AI isn’t accountable. You are. Act like it.”

While a lot of noise is being made about AI eliminating developer jobs and replacing junior engineers, I was pleased to read this post by Ivan Pedrazas, “Regarding The Future Of Junior Engineers”, in which Ivan proposes that a junior engineer with AI tools is as valuable as ever, because they free up senior engineers to focus on more complex problems. This aligns with Norberto’s point about ownership: even with AI tools writing the code for you, a senior engineer’s time is better spent on more strategic work. Work that only they can own.

I agree. Even with AI tools at our disposal, there is a limit to how much one person can own; our focus is finite.

What's Charles reading this week?

It's been a week for resignations of senior people in foundation companies. Also, unexpectedly, poets.

OpenAI researcher Zoë Hitzig resigned from the company on Monday, the same day it began testing advertisements inside ChatGPT, despite Sam Altman having previously said “I kind of think of [taking] ads as like a last resort for us as a business model”.

Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at the company helping shape how its models were built and priced. In a thoughtful guest essay in The New York Times, she wrote that OpenAI’s advertising strategy risks repeating the same mistake Facebook made a decade ago.

"I once believed I could help the people building AI get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda”. She called this accumulated record of personal disclosures “an archive of human candour that has no precedent”.

It is worth reading her essay in full, but if you can’t stump up for a subscription to The New York Times you can read quite a bit of it by temporarily disabling JavaScript in your browser. Obviously I didn’t tell you that.

Hitzig’s calling out of Facebook is worth noting. Sam Altman has hired a raft of growth people from Meta as he attempts to find a viable business model for OpenAI. He has a particular problem, shared with Anthropic: distribution. Google, Meta, Amazon, Apple, and Microsoft already have products and surface areas they can use to push AI onto unsuspecting consumers whether they like it or not. OpenAI and Anthropic don’t, but Anthropic’s focus on coding models looks shrewd, since coding is one area where LLMs do appear to have some level of product-market fit (though whether there is enough value for them to be profitable remains to be seen).

Things at OpenAI are bad enough that someone in their marketing department has invented the fabulous term ‘capability overhang’ to describe the fact that most of their users pick up the product every week or two, not every day, and struggle to think of things to do with it. That’s a brilliant way to spin ‘we don’t have product-market fit’.

The wider issue is that LLMs remain incredibly expensive to both build and run, hence Alphabet becoming the first tech company to issue 100-year bonds in nearly three decades.

After Hitzig’s resignation on Monday, on Tuesday it was time for more departures from xAI, with co-founder Tony Wu announcing his resignation via Musk’s deepfake porn site formerly known as Twitter, writing that it was “time for my next chapter”. Hours later, another co-founder, Jimmy Ba, announced that it was also his last day. Wu’s and Ba’s exits leave only half of xAI’s founding team still at the company, after Greg Yang said he would step back in January following his Lyme disease diagnosis.

Several members of xAI’s technical staff also announced that they would no longer be working at the company. For his part, Musk said that xAI was recently “reorganized” to “improve the speed of execution,” which “required parting ways with some people”. Musk’s SpaceX acquired xAI this month. I imagine the merger has encouraged some to jump ship, but xAI also isn’t breaking any new ground, and Musk’s utter disregard for trust and safety and basic human decency must be off-putting to at least some employees.

On Thursday, AI safety researcher Mrinank Sharma quit Anthropic with a cryptic warning that the “world is in peril”. In his resignation letter, also published, I’m sorry to say, on the ghastly X, he said his contributions included investigating why generative AI systems suck up to users, combatting AI-assisted bioterrorism risks, and researching “how AI assistants could make us less human”.

But he said that despite enjoying his time at the company, it was clear “the time has come to move on”. He said he had “repeatedly seen how hard it is to truly let our values govern our actions”, including at Anthropic, where he said people “constantly face pressures to set aside what matters most”.

Unlike Hitzig, Sharma is not a published poet, but he has said he will instead pursue a poetry degree and writing. He added in a reply: “I’ll be moving back to the UK and letting myself become invisible for a period of time.”

Also on Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company’s Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

The feat took two weeks and nearly 2,000 Claude Code sessions, costing about $20,000 in API fees, but the agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

This is impressive, up to a point, although computer science undergraduates write C compilers all the time. As Benj Edwards writes in Ars Technica:

“A C compiler is a near-ideal task for semi-autonomous AI model coding: The specification is decades old and well-defined, comprehensive test suites already exist, and there’s a known good reference compiler to check against. Most real-world software projects have none of these advantages. The hard part of most development isn’t writing code that passes tests; it’s figuring out what the tests should be in the first place.”
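To make Edwards’s point concrete, here’s a minimal sketch of the differential testing a C compiler gets for free: compile the same program with a known good reference (gcc) and with the candidate, then compare what the two binaries print. The candidate compiler name “mycc” and the test file are hypothetical, not details from Carlini’s write-up.

```python
# A minimal differential-testing sketch: compare a candidate C compiler
# against a trusted reference. "mycc" and "test.c" are placeholder names.
import subprocess

def compile_and_run(compiler: str, source: str, out: str) -> str:
    """Compile `source` with `compiler`, run the binary, return its stdout."""
    subprocess.run([compiler, source, "-o", out], check=True)
    result = subprocess.run([f"./{out}"], capture_output=True, text=True, check=True)
    return result.stdout

reference = compile_and_run("gcc", "test.c", "ref_bin")    # known good reference
candidate = compile_and_run("mycc", "test.c", "cand_bin")  # compiler under test

if reference == candidate:
    print("PASS: candidate matches the reference output")
else:
    print("FAIL: outputs differ")
    print("reference:", repr(reference))
    print("candidate:", repr(candidate))
```

Most real-world software has no such oracle to diff against, which is exactly Edwards’s point.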

Updates

February Meetup

Join us in person on Thursday February 19th - grab your spot now!

Free AI Training?

Massive thanks to BrainStation for supporting our community!

Follow us on LinkedIn

Bite sized nuggets of AI learning!

Follow us on BlueSky

Bite sized nuggets of AI learning!

Catch Up On The Conference

Subscribe now and don't miss all the latest recordings!

Visit AIfortherestofus.live to find out about our next events, or follow us on LinkedIn, BlueSky, YouTube, and Meetup.



AI for the rest of us

AI is ready for everyone. No matter who you are. No matter what your role. AI for the rest of us is an AI learning platform like no other. We promise no hype, no jargon and no confusing terminology. We meet you where you are, to get you where you want to be!
