
Hello,

This is Simon with the latest edition of The Weekly. In these updates, I share key AI related stories from this week's news, list upcoming events, and share any longer form articles posted on the website.

A story emerged this week about employees at consulting firm KPMG using AI to cheat on an internal exam about, of all things, AI. The incident took place in the company's Australian office, and it wasn't a handful of junior employees cutting corners: the group included a registered company auditor, who now faces a fine of AU$10,000, and all of them had received specific AI training beforehand.

For me, there are a few things worth unpacking here.

First, employees are feeling real pressure to demonstrate AI competency. Staying relevant feels urgent, and falling behind peers, or at least being seen to, is a genuine fear.

Second, while almost every firm now offers AI training, this incident suggests that training alone isn't working.

We see this pattern repeatedly. Companies tell staff they can only use approved tools and warn against uploading sensitive data, yet employees continue using whichever AI platform suits them best, interacting with it in whatever way helps them work more efficiently. The policy exists on paper, but the behaviour tells a different story.

There is a real opportunity here for organisations to move beyond vague guidance. Phrases like "use responsibly" are open to individual interpretation. Far more useful are specific rules: only Tool X may be used for analysing Category Y data; no AI tools whatsoever during exams or formal assessments.

I understand why organisations shy away from rigid mandates: they want to appear flexible and progressive. But human nature being what it is, people will always seek advantages wherever they can find them. Without clear boundaries, you cannot really blame them for doing so.

How clear is the guidance at your organisation? Do you honestly know where you're allowed to use AI and where you're not?

A Message From Our Sponsor

Dictate prompts and tag files automatically

Stop typing reproductions and start vibing code. Wispr Flow captures your spoken debugging flow and turns it into structured bug reports, acceptance tests, and PR descriptions. Say a file name or variable out loud and Flow preserves it exactly, tags the correct file, and keeps inline code readable. Use voice to create Cursor and Warp prompts, call out a variable like user_id, and get copy you can paste straight into an issue or PR. The result is faster triage and fewer context gaps between engineers and QA. Learn how developers use voice-first workflows in our Vibe Coding article at wisprflow.ai. Try Wispr Flow for engineers.

Real World Use Case

Exclusive for subscribers.

In this section, I’m going to bring to you a real world example of AI use. This week, we take a look at how retail giant Walmart uses AI to prioritise what to do next.

Subscribe to get access

Curated News

AI doesn’t reduce work, it can intensify it

A Harvard Business Review piece argues that, in practice, genAI often creates more work: faster output means higher expectations, more parallel tasks, and more coordination overhead.

Why it matters: If you don’t redesign workflows (ownership, approvals, “done means done”), AI can boost activity without boosting outcomes.

Mastercard tests agent-led payments

Mastercard demonstrated an agentic payment flow (a sandboxed, authenticated transaction), but live commercial use is awaiting approvals.

Why it matters: The moment agents can spend money, businesses need controls like spending limits, approval chains, audit trails, and fraud monitoring—otherwise “automation” becomes a new risk surface.

AI regulations are driving a boom in AI governance tools

Gartner says global AI regulation is pushing organisations toward dedicated governance platforms, with spending projected to rise sharply over the next few years.

Why it matters: Governance is moving from “nice to have” to a budget line item, especially for firms that need to track where AI is used, what data it touches, and whether it’s compliant and safe.

Upcoming AI Events

Thanks for reading, and see you next Thursday.

Simon

Was this email shared with you? If so, subscribe here to get your own edition every Thursday.

Enjoying Plain AI? Share it and get a free gift!

If you find this newsletter useful, why not share it with a couple of friends or colleagues who would also benefit? As a special thank you, when two people subscribe using your unique referral link, I’ll send you my "Exclusive Guide: Supercharge Your Work with NotebookLM." It’s a practical, no-nonsense guide to help you turn information overload into your secret weapon.

You can also follow me on social media:
