Hello friends,
This week Hannah and Charles met up in person to finalise the newsletter before heading to the AI for the rest of us meetup. Hannah is currently enjoying a weekend away in Lisbon for her friend’s 40th birthday. Sadly the forecast looks like rain but it’s sure to be a fun adventure anyway!
Thursday’s London meetup was packed. Thank you to everyone who joined us in person to learn from Masha Gaganova, Phil Le-Brun and Vitor Kneipp. We discussed how to thrive in an age of continuous transformation, inspired by Phil and Jana Werner’s book The Octopus Organization.
Our wonderful hosts at Tessl shared some big news about how they’re advancing their AI Native Development products by introducing Skills. Congratulations to everyone on the team who has been working on this launch! We’ll be back at the gorgeous Tessl offices for our next meetup on Feb 19th. Speaker announcements coming soon!
If you didn’t make it along to the AI for the rest of us conference in October you can catch up on all the talks on youtube. We’ve pulled together a new playlist which focuses on AI Agents. It includes an impressive live demo from Phil Nash, an entertaining James Bond themed talk from Jeff Watkins and Lianne Potter (with costumes!) and an inspiring session from Jessica Osho about moving from assistant to automation.
Hannah has been sharing some of our diversity stats over on LinkedIn and feeling incredibly proud that this project has become such a thriving diverse and representative community. When we say “You are welcome here” we really mean it - this is a community for everyone!
As always, thank you for being here!
Hannah and Charles
What’s Hannah reading this week?
MCP Apps, announced this week, are taking AI Agents and Assistants to the next level. As the first extension of MCP the launch brings apps like Asana, Figma, Slack, Canva, Clay and Box directly into your chat interface.
“Text responses can only go so far. Sometimes users need to interact with data, not just read about it. MCP Apps let servers return interactive HTML interfaces (data visualizations, forms, dashboards) that render directly in the chat."
This is huge progress for user experience, allowing you to surface the interactions you’re used to in the context of your agent platform. It’s an interesting reversal of many of the UX decisions we’ve seen over the past few years, with everyone rushing to incorporate a chatbot of some kind into their product. Why not squeeze your product into the chatbot instead?
In the realms of productivity apps I can see this being a real game changer. For AI Agents and Agentic solutions I have to pause and wonder what that app might become if it’s never used by a human, only an agent. The potential for experimentation has once again exploded and I can’t wait to see what people build.
This week I’ve mostly found my head in my hands while reading about Clawdbot (now Moltbot) and the gaping hole this naughty little space lobster leaves in your data security. Framed as “The AI that actually does things” you need to set it up with permissions to do said things on your behalf. It’s genius and it’s curse is that you can ask it to do things like read or write email and that means it requires access to read and send email! Shocker!
Naive early adopters are leaving their credentials wide open by connecting their Clawdbot servers to the internet and not securing them effectively. As Maria Shukareva writes on LinkedIn
“You can scan the internet and instantly see hundreds of ClawdBot instances that are publicly reachable. Some of them potentially contain API keys, credentials, logs, or internal data”
“Prompt injection via content” is another huge risk with Clawdbot. Asking the bot to read your emails or whatsapp messages is another example of an infinite attack surface. A bad actor could send an email to a Clawdbot user with a malicious prompt included, and use that to hijack any other app or device the bot has access to.
I enjoyed this detailed write up by hunto.ai which provides some helpful advice and education on good security practices instead of just sensationalising the potential risks.
If assistants like Clawdbot are to become widely adopted they must be secure by default and not place that work on the user who may not have the skills or experience. It’s a fascinating area because there’s clearly a demand for “an AI Agent that actually does things” but giving it access to do things is a leap I’m not prepared to make right now.
What's Charles reading this week?
“Open the pod bay doors, please, HAL.”
The problem with reality is that it is very, very annoying.
My wife and I went to see the wonderful “H is for Hawk” at the cinema this week. Fabulous acting particularly from Claire Foy, and a brilliant soundtrack from Emilie Levienaise-Farrouch.
Driving to the Twofish studio the next day, I asked Siri if it could play the soundtrack for me. It kicked things off with “Yn annwfyn y diwyth” by Stuart Hancock which confused me rather. It turned out to be from a 2011 film called Hawk. Possibly owing to my utter inability to pronounce Emilie Levienaise-Farrouch I could not coax Siri into playing the score I wanted.
Voice recognition, a form of interaction beloved by Sci-fi TV series and film writers, has turned out to be quite tricky. One issue is that, unless your chatbot understands everything you could possibly say, you have to learn a vocabulary to interact with it, which is quite similar to the computer command line we largely abandoned for good reasons. There are also issues that arise with understanding variable pronunciation, regional accents, non-native speakers, and what gender to assign the voice that talks back to you, as well as privacy issues. It is harder to get right than text-based chat.
But if you could crack it, and LLM-based voice assistants have undeniably got better, might that enable new forms of screenless hardware? If so, what form might they take? Two early attempts are smart speakers and wearables.
Modern smart speakers from brands like Amazon, Apple, Bang & Olufsen and Sonos all combine an assistant (either Alexa, Google Assistant or Siri) with, at the higher end, a reasonable level of sound quality. They have found a market but mostly in fairly limited ways—streaming music, podcasts, radio, and audio books; getting weather updates and news headlines; or setting timers. It’s hard to get good estimates on overall market size but smart speaker shipments seem to have peaked and to now be declining. It is rather old data, but Semiconductor analysts TechInsights, for example, give a figure of 29 million units in Q1 2024, a year-on-year drop of 7 percent.
Screenless wearables have proven even harder to crack. Since most of us have our phones on us already, any new device has to do something compelling that is hard or impossible to do with our phones. The wearables that have found a market have tended to be things we might want on us anyway, such as watches, headphones or glasses. An entirely new form factor has thus far proved elusive.
One attempt was the much-hyped Humane Pin. Humane raised $250m, launched their pin at $700 and required a $24.99 monthly subscription. It was more-or-less universally panned on launch for poor performance, overheating, and limited utility, and the company got sold to HP for $116 million. But terrible execution doesn’t necessarily mean a terrible idea.
OpenAI says it’s on track to announce the first Jony Ive-designed hardware product in the second half of this year. Rumours say this will also be some kind of wearable pin. Meanwhile, The Information (subscription-only) reports that Ive’s ex Apple is working on some kind of wearable pin with cameras and speakers. Apple probably are, but that doesn’t necessarily mean they’ll ship anything. Ars Technica also has a write-up if you want more details without forking out for a subscription to The Information.
It sometimes feels like everything to do with new technology opens up new forms of abuse, and wearables are no exception. SmartGlasses like Meta Ray-Bans allow discrete filming. Inevitably degenerate lowlifes are churning out low-effort videos across Instagram and TikTok, harassing women, retail workers and homeless people to get a reaction. Consent doesn't enter into it at all.
Meanwhile court filings claim Meta CEO Zuckerberg blocked curbs on sex-talking chatbots for minors according to Jeff Horwitz, at Reuters: “Messages between two employees from March of 2024 state that Zuckerberg had rejected creating parental controls for the chatbots, and that staffers were working on ‘Romance AI chatbots’ that would be allowed for users under the age of 18. We ‘pushed hard for parental controls to turn GenAI off — but GenAI leadership pushed back stating Mark decision,’ one employee wrote in that exchange.”
Meta’s Andrew Bosworth says the new models from the new Superintelligence team (bought/hired for several billion dollars) were delivered internally this month. I don’t think we’re anywhere close to super-intelligence but I don’t find the thought of Zuckerberg being responsible for some sort of super-intelligent AI overly reassuring.
We covered Grok in detail a couple of weeks ago, but it’s worth noting that, like Ofcom, the EU has launched a formal investigation into Elon Musk’s xAI following a public outcry over how its Grok chatbot spread sexualised images of women and children. Other investigations into the platform's chatbot are underway in Australia, France and Germany. Grok was temporarily banned in Indonesia and Malaysia, although the latter has now lifted the ban.
Separately, the commission fined X €120m last month for breaking EU law. The EU regulator said X gave “deceptive” blue tick badges to accounts without verification, potentially exposing users to scams and deception by malicious actors. X also broke EU legal requirements on transparency over advertising by hindering researchers from investigating fake ads and hybrid threat campaigns.
Musk replied “bullshit” to a commission post announcing the fine and later called for the EU to be abolished. US Secretary of State Marco Rubio and the Federal Communications Commission (FCC) accused the EU regulator of attacking and censoring US firms. "The European Commission's fine isn't just an attack on X, it's an attack on all American tech platforms and the American people by foreign governments," he said.
Grok AI generated about 3m sexualised images in less than two weeks, including 23,000 that appeared to depict children, according to researchers at the Center for Countering Digital Hate.
Updates
|
|
February Meetup
Join us in person on Thursday February 19th - grab your spot now!
|
|
|
Free AI Training?
Massive thanks to BrainStation for supporting our community!
|
|
|
Follow us on LinkedIn
Bite sized nuggets of AI learning!
|
|
|
Follow us on BlueSky
Bite sized nuggets of AI learning!
|
|
|
Catch Up On The Conference
Subscribe now and don't miss all the latest recordings!
|