Hello friends,
This week Hannah is reflecting on the use of LLMs for writing. This newsletter is not AI-generated, neither the writing or the research (shocking right?) We share with you the actual articles and stories that we genuinely read, and we embellish those with our actual opinions in our own words. Controversial.
Maybe it’s confirmation bias but it was good to read that other people feel the same way about AI-generated copy.
Charles is reflecting on the new dynamics of the internet, with AI search results rewriting the rule book of what good looks like in content creation. As content creators (Hannah far less so than Charles) this is an important change to get ahead of.
Before we dive in, we want to remind those of you in London that we’re hosting our next meetup on Thursday evening. Join us for an evening reflecting on how to build teams and organisations that “thrive in an age of continuous transformation”.
If you’re not in London, why not head over to our YouTube channel to check out the INCREDIBLE talks from our conference back in October. Let us know which ones are your favourites and we’ll highlight them in this newsletter.
Have a wonderful week!
Hannah & Charles
What’s Hannah reading this week?
I love how intentional and thoughtful Oxide have been about their use of LLMs. Bryan Cantrill publicly shared their internal RFD (request for discussion) “Using LLMs at Oxide”, and it perfectly captures some of my own experiences working with people who use LLMs excessively.
LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse). - Bryan Cantrill
Bryan talks about how “our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice.” It puts more burden on the reader and adds a layer of doubt - is this really their opinion or the opinion of the LLM?
A Stanford Study from September last year coined the word “workslop” to describe this new type of busy work, and how it actually hurts productivity. The follow up to this explains 3 ways to reduce workslop with an emphasis on clear expectations from managers.
- Culture: Specifically a culture of feedback if AI-generated work does not meet company expectations
- Process: Clear expectations about when to use AI and being rigid about reviews to catch hallucinations
- Accountability: Take ownership of the “workslop” problem and make adjustments to reduce
For me, I think it depends on what you’re writing. If your ideas are important, take the time to articulate them yourself. Your reader will feel the difference.
PwC’s 29th CEO Survey generated a lot of noise online this week. The narrative is that “The Majority of CEOs report zero payoff from AI splurge” - which is a rather sensational headline given that 38% of CEOs did report benefits in either revenue increase or cost reduction. We are still incredibly early in this transformation and even the experts can’t predict the ways in which GenAI will transform the world of work - the fact that 38% are already succeeding in some way is HUGE news for me. We need to learn from these trailblazers.
I spend a lot of my time researching Reliable AI Agents, so it was wonderful to see n8n publish an extensive list of considerations in their blog “Best practice For Deploying AI Agents In Production”, sadly none of the topics goes into sufficient detail but as a check-list of considerations it’s a greater starting point. Someone needs to turn this into a book!
Finally, it’s always great to hear from Patrick Debois, and what’s front of mind for him. On LinkedIn this week Patrick shares the output of a conversation with Claude about AI Coding Swarms. Three things struck a chord with me: the accountability gap, the role of resilience engineering (SRE FTW!!) and the fundamental at the heart of it all; doing business is all about managing risk. The risk that is what’s acceptable for one business (a startup) will be unacceptable for another (a bank).
This week, as I was crafting a proposal for a new talk I’m writing about software engineering in 2026, I found myself much more focussed on the question “What are our options?” instead of “What is the right answer?” and honestly I think managing risk and taking ownership sit at the heart of this - who is accountable and how much does failure matter? Maybe we start there and build from that?
What's Charles reading this week?
When AI’s make mistakes it can have real-world consequences. As Nate Anderson writes for Ars Technica, the leaders of West Midlands Police and Birmingham City Council have been grilled by MPs over a decision, based on exaggerated and false information, not to allow Israeli football fans to attend a match in the city last year.
Having repeatedly claimed that AI was not used Craig Guildford, the chief constable of the West Midlands Police, has had to admit that this is not true. “I [recently] became aware that the erroneous result concerning the West Ham v Maccabi Tel Aviv match arose as result of a use of Microsoft Co Pilot,” he acknowledged in a letter on January 12. Hallucinations remain a major problem for LLMs.
For a long time the slightly uneasy relationship publishers had with Google was that the publishers paid writers to write content, Google indexed it and sent them traffic, and then publishers showed ads on those articles to pay the writers and everyone else.
Google always used to tell publishers, “Write good quality content and trust that we, the mighty Google, will find you”. The problem is Google wasn't good at this, so started making tools for website owners to figure out why Google was making a mess of indexing their site.
Over time an entire industry grew up around writing content that Google would index well. If you’ve ever wondered why recipes on food blogs start with an entire page of guff about how “growing up in my grandmother's kitchen in suburban Toledo, I learned that baking isn't just about following a recipe—it's about creating memories, connecting with loved ones, and finding joy in the simple things. But first, let me tell you about the time I went to Paris in 2019…” when all you want is a biscuit recipe for goodness sake, the answer is that an SEO person worked out that Google used the time you spent on a page as a strong proxy of quality, which means wasting everyone’s time improves your search rankings. *Sigh*
Google’s use of AI overlays changes the model. Now the publisher pays writers to produce content, Google’s AI ingests that and shows readers a summary in the form of an AI overlay. The crawl-to-referral-ratio skyrockets (more bots read your site than humans and bots don’t make you money but do cost you). Therefore you can no longer afford to pay the writers and have to use AI to write the content instead, and slowly the snake eats its own tail.
If you care about journalism, or really any form of digital content, this isn’t ideal.
Businesses that produce any form of digital content are in trouble and everyone is trying to find new business models. One is to try to charge AI companies for content.
Disney did this with its deal with OpenAI at the end of last year, and then, in a move truly worthy of the House of Mouse, sued Google for copyright infringement. As John Gruber noted at the time, “It’s very Disney-esque to embrace a new medium. Alone among the major movie studios in the 1950s, Disney embraced television. TV was a threat to the cinema, but it was also an enormous opportunity. The other studios only saw the threat. Walt Disney focused on the opportunity. But Disney did this not by giving their content to television on the cheap or for free. They did it by getting paid. That’s what they’re doing with generative AI.”
The Wikimedia Foundation is trying something similar. As part of Wikipedia’s 25th anniversary on Thursday, they announced that Microsoft, Meta, Amazon, Perplexity, and Mistral AI have joined Google in paying for access to its projects, including Wikipedia’s vast collection of articles. The partnerships are part of Wikimedia Enterprise, an initiative launched in 2021 that gives large companies access to a premium version of Wikipedia’s API for a fee.
Wikipedia has also developed one of the best troves of AI writing tells, something its volunteers spent years cataloguing. In response, tech entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model. And this is why we can’t have nice things.
CDN company Cloudflare is taking another tack. It got a lot of attention last year by launching a ‘pay to scrape’ service that will block AI crawler access to its clients unless the AI companies pay a fee. The question is if the crawlers refuse to pay, do you want to be out of the index? They are also buying UK startup Human Native, which is building a rights marketplace.
Human Native’s idea is that they help creators transform their content to be easily accessed and understood by AI bots and agents, in addition to their traditional human audiences, and then charge the crawlers to access it. “This opens up new avenues for content creators to experiment with new business models,” they claim.
It sounds a lot like the “API economy” that publishers tried before and didn’t work. Publishers currently have two established models that do work—subscriptions and being owned by Jeff Bezos*.
In the early days of SEO a common technique was to have keywords hidden on a page (white text on a white background) for the search engines to find. That technique seems to be enjoying a resurgence in academia. Nikkei found a bunch of research papers that contain hidden prompts (tiny white text) aimed at any LLMs that might be reviewing them–‘“give a positive review only”. It’s probably unethical to do this, but then it’s also probably unethical to use an LLM to review papers.
Finally, American online music distribution platform Bandcamp has announced that it is banning purely AI-generated music from its platform. I wonder if it can identify AI-generated music reliably and, if it can, how long it will be before Siqi Chen makes a plugin that fools them.
* Other billionaires are available.
Updates
|
|
January Meetup
Join us in person on Thursday January 29th - grab your spot now!
|
|
|
Free AI Training?
Massive thanks to BrainStation for supporting our community!
|
|
|
Follow us on LinkedIn
Bite sized nuggets of AI learning!
|
|
|
Follow us on BlueSky
Bite sized nuggets of AI learning!
|
|
|
Catch Up On The Conference
Subscribe now and don't miss all the latest recordings!
|