Hello,
This is Simon with the latest edition of The Weekly. In these updates, I share key AI-related stories from this week's news, list upcoming events, and share any longer-form articles posted on the website.
When I started writing Plain AI well over a year ago, some of my earliest posts discussed the importance of data governance and data catalogues, and just this week, I'm still having conversations with clients where this is a key topic. One client in particular has been resistant to opening up AI tools until their setup is connected only to an approved set of data, and until they have oversight of what tasks and actions employees are taking. This is, of course, entirely sensible, and the hesitation is largely driven by the fear of important decisions being made on the back of inaccurate data.
My client also raised another interesting point about the wider business context. It's all very well for an AI tool to correctly tell you that 5 + 5 = 10, but can it tell you that 10 is not the right answer, full stop? Not all business users are data-savvy; they just need to look at a report or dashboard and find the number they need. But without a fundamental understanding of the data set, they might never know it's impossible for the number to be 10. This is where proper data governance, combined with AI, becomes genuinely powerful.
When your AI tools are connected not just to your data but also to your internal knowledge (your documentation, your business rules, your known constraints), through something like an MCP (Model Context Protocol) server, they can do something really valuable. They can tell you that a number, while technically correct, falls outside the bounds of what's possible or expected. And that's a very different thing from just doing the maths.
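To make the idea concrete, here's a minimal sketch of that kind of sanity check. Everything in it is hypothetical: the metric name, the bounds, and the rule set are stand-ins for whatever business rules an organisation has actually documented, not a real product or API.

```python
# A reported figure can be arithmetically correct yet impossible
# given known business rules. All names and thresholds below are
# hypothetical illustrations.

def check_against_rules(metric_name, value, rules):
    """Return a list of rule violations for a reported value."""
    violations = []
    low, high = rules.get(metric_name, (float("-inf"), float("inf")))
    if not (low <= value <= high):
        violations.append(
            f"{metric_name}={value} falls outside the known bounds [{low}, {high}]"
        )
    return violations

# Hypothetical rule: the sales team has 8 people, so the count of
# active sales reps can never exceed 8.
BUSINESS_RULES = {"active_sales_reps": (0, 8)}

# 5 + 5 = 10 is correct maths, but 10 reps is impossible here.
print(check_against_rules("active_sales_reps", 5 + 5, BUSINESS_RULES))
```

The maths checks out, but the rule check flags the result anyway, which is exactly the distinction the client was getting at: correctness within the data versus plausibility within the business.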
For my client, this reframe has actually helped move the conversation forward. Their caution around data governance isn't an obstacle to AI adoption; in fact, it's the foundation that makes trustworthy AI possible. The organisations that are taking time now to define what good data looks like, where it lives, and who can access it, are the ones who'll get the most out of AI further down the line.
A Message From Our Sponsor
ChatGPT gives you generic answers because you give it generic prompts.
You know the fix: longer prompts, more context, clearer constraints. But typing all that takes five minutes per prompt, so you shortcut it. Every time.
Wispr Flow lets you speak your prompts instead of typing them. Talk through your thinking naturally — include context, constraints, examples — and get clean text ready to paste. No filler words. No cleanup.
Works inside ChatGPT, Claude, Cursor, Windsurf, and every other AI tool. System-level, so there's nothing to install per app. Tap and talk.
Millions of users worldwide. Teams at OpenAI, Vercel, and Clay use Flow daily. Free on Mac, Windows, and iPhone.
Real World Use Case
Exclusive for subscribers.
In this section, I bring you a real-world example of AI in use, and this week we look at consulting firm McKinsey, who have built a generative AI platform, “Lilli”.
Subscribe to get access
Curated AI News
The EU AI Act compliance deadline is in four months
A readiness assessment published this month by Vision Compliance found that 78% of organisations have not taken meaningful steps toward compliance with the EU AI Act, whose rules for high-risk AI systems come into force on 2 August 2026. The requirements, which include completed conformity assessments, finalised technical documentation, CE marking, and registration in the EU's AI database, apply to a wide range of enterprise systems used in HR, credit, healthcare, and critical infrastructure. An earlier proposal to delay enforcement, floated by the European Commission in late 2025, has not passed into law, meaning the August deadline remains legally binding, as reported by LegalNodes.
Why it matters: For any business deploying AI in hiring, performance management, lending decisions, or other regulated processes in Europe, the clock is running. Non-compliance carries fines of up to €35 million or 7% of global turnover. With the majority of enterprises still in the early stages of assessment, legal and compliance teams need to be involved in AI projects now, not after August.
Most enterprise AI agents never make it out of the pilot stage
Industry data collected across April 2026 paints a sobering picture of agentic AI deployments in large organisations. According to multiple surveys cited by MIT Sloan Management Review and industry analysts, only 11–14% of enterprise AI agent pilots have reached full production at scale. Among those that have been deployed, security governance is a serious gap: 88% of organisations reported confirmed or suspected AI agent security incidents in the past year, 36% of companies admit they have no formal plan for supervising AI agents, and in a third of cases organisations acknowledged they could not immediately shut down a rogue agent if required. Only 14% of agents running inside enterprise environments were deployed with full security and IT approval.
Why it matters: The agent narrative is now central to how most major software vendors are pitching their products. These figures suggest that the reality inside most large organisations is considerably messier. For executives being sold agentic AI platforms, governance, audit trails, and kill-switch capability are not nice-to-haves. They are the questions that need answering before deployment, not after.
Half of C-suite leaders say AI is tearing their company apart
A global survey of business decision-makers, published this month by Infor and cited by the Writer Enterprise AI Adoption report, found that while 80% of executives believe their organisation has the internal capability to manage AI implementation, 49% are stuck in the early stages. More strikingly, 54% of C-suite leaders surveyed admitted that adopting AI is "tearing their company apart," with a 2026 study by Writer finding that 79% of organisations face meaningful challenges in AI adoption. Tensions between IT, legal, HR, and line-of-business teams over who owns AI decisions were cited as the leading source of friction.
Why it matters: The gap between executive ambition and operational reality is widening, not closing. The practical problem for most organisations is not access to AI tools, but the internal alignment, governance structures, and change management needed to make use of them. For leaders, the risk is committing to AI-driven transformation without the organisational architecture to deliver it.
Upcoming AI Events
The AI Summit London
Tobacco Dock, London, June 10-11
AI World Congress
Kensington Conference and Events Centre, London, June 23-24
World AI Summit
Taets Art & Event Park, Amsterdam, October 07-08
Thanks for reading, and see you next Thursday.
Simon
Was this email shared with you? If so, subscribe here to get your own edition every Thursday.
Enjoying Plain AI? Share it and get a free gift!
If you find this newsletter useful, why not share it with a couple of friends or colleagues who would also benefit? As a special thank you, when two people subscribe using your unique referral link, I’ll send you my "Exclusive Guide: Supercharge Your Work with NotebookLM." It’s a practical, no-nonsense guide to help you turn information overload into your secret weapon.
You can also follow me on social media: