Hello,
This is Simon with the latest edition of The Weekly. In these updates, I share key AI-related stories from this week's news, list upcoming events, and share any longer-form articles posted on the website.
Over the last few weeks I've been having a recurring conversation at work: clients want to use AI features so that less data-literate users can work with data analytics. Not only does this mean those users don't need to know programming languages like SQL or Python, it also means they don't need specialist skills in building dashboards and reports.
Think about it this way — instead of needing to learn a tool or rely on a data analyst, you can just ask a question in plain English and get a detailed answer back. "Which product had the highest return rate last quarter?" or "Show me our top ten customers by revenue this year." No coding, no dashboard-building, just answers. That saves companies a significant amount of time and money.
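To give a sense of the skill barrier being removed, here's roughly what "show me our top ten customers by revenue this year" looks like when written as SQL. This is a minimal sketch using an in-memory database; the table and column names are invented for illustration, not taken from any real system.

```python
import sqlite3

# Build a tiny illustrative dataset (hypothetical table and columns).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, revenue REAL, year INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("Acme", 1200.0, 2024), ("Birch", 800.0, 2024), ("Acme", 300.0, 2023)],
)

# The SQL a user would otherwise need to write themselves:
top_customers = conn.execute(
    """
    SELECT customer, SUM(revenue) AS total
    FROM orders
    WHERE year = 2024
    GROUP BY customer
    ORDER BY total DESC
    LIMIT 10
    """
).fetchall()
print(top_customers)  # [('Acme', 1200.0), ('Birch', 800.0)]
```

Grouping, filtering, ordering, limiting: each is a concept a non-technical user would have to learn before getting an answer, which is exactly the friction a plain-English interface removes.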
There's another advantage to using a managed system like this, though, and it's one that's easy to overlook. When ChatGPT first emerged, employees started uploading spreadsheets and documents to AI chatbots and asking questions — and there are real problems with that approach. AI systems can use uploaded data to further train their models, which isn't ideal if that data is commercially sensitive (I’ve talked about how you can prevent this in a previous edition). On top of that, IT teams often have no visibility into which tools their employees are signing up to — a phenomenon known as Shadow IT.
A similar issue is emerging with vibe coding tools like Lovable or Replit. These platforms are great for quickly building an app that looks polished and does something useful, but doing this at work comes with a whole host of security implications. Who has access to the data the app is using? Who keeps it updated? Who fixes it when something breaks?
AI has given us some amazing shortcuts, letting us do things we previously didn't think possible. But as I've always said in Plain AI, in a work context there are processes and guidelines we should follow. Not to be difficult, or to stop the fun, but because security, governance and control are vital for organisations, particularly those in regulated industries where shortcuts simply aren't acceptable.
When did you last take a shortcut with an AI tool? Was it at work, and were you aware of the consequences? Let me know.
Also, don’t forget I’ve produced a guide to help you navigate using AI at work.

Using Generative AI at Work
Don't Damage Your Work Reputation By Making Basic GenAI Mistakes
A Message From Our Sponsor
Find out why 200K+ engineers read The Code twice a week
Falling behind on tech trends can be a career killer.
But let’s face it, no one has hours to spare every week trying to stay updated.
That’s why over 200,000 engineers at companies like Google, Meta, and Apple read The Code twice a week.
Here’s why it works:
No fluff, just signal – Learn the most important tech news delivered in just two short emails.
Supercharge your skills – Get access to top research papers and resources that give you an edge in the industry.
See the future first – Discover what’s next before it hits the mainstream, so you can lead, not follow.
Curated News
Big banks are showing what practical AI adoption actually looks like
Citigroup is reportedly using AI to speed up account openings, replace legacy software faster, and automate parts of coding, testing, and data migration. Document review for account openings has been cut from more than an hour to about 15 minutes, showing AI applied to operational bottlenecks rather than flashy consumer features.
Why it matters: The real value often comes from removing friction in internal processes, improving compliance work, and making existing teams more productive.
China is moving to label AI-generated “digital humans”
China proposed rules that would require prominent labels on all virtual human content and would ban “digital humans” from providing virtual intimate relationships to under-18s. The draft rules are open for public comment until May 6, according to Reuters.
Why it matters: Regulation is widening from model safety into how AI-generated personas are presented to the public.
Meta is trying to re-enter the front rank of the AI race
Meta unveiled Muse Spark, described by Reuters as the first AI model from the expensive super-intelligence team it assembled last year after disappointment around Llama 4. Reuters said independent tests suggest the model matches rivals in some areas but still trails in coding and reasoning, making this an important but not yet decisive comeback attempt.
Why it matters: This shows how fast the competitive bar is rising. For business readers, it is a reminder that even the biggest tech firms are having to spend heavily and reorganise aggressively just to stay credible in frontier AI.
Upcoming AI Events
Generative AI Summit
Novotel London West, April 13-15

The AI Summit London
Tobacco Dock, London, June 10-11

AI World Congress
Kensington Conference and Events Centre, London, June 23-24

World AI Summit
Taets Art & Event Park, Amsterdam, October 07-08
Thanks for reading, and see you next Thursday.
Simon
Was this email shared with you? If so, subscribe here to get your own edition every Thursday.
Enjoying Plain AI? Share it and get a free gift!
If you find this newsletter useful, why not share it with a couple of friends or colleagues who would also benefit? As a special thank you, when two people subscribe using your unique referral link, I’ll send you my "Exclusive Guide: Supercharge Your Work with NotebookLM." It’s a practical, no-nonsense guide to help you turn information overload into your secret weapon.
You can also follow me on social media:



