
Hello,

This is Simon with the latest edition of The Weekly. In these updates, I share key AI-related stories from this week's news, list upcoming events, and share any longer-form articles posted on the website.

Make sure you are clear on how accurate your LLM output is before using it.

Recently, West Midlands Police in the UK refused Maccabi Tel Aviv fans permission to attend a match against Aston Villa in Birmingham, a decision on an already serious and emotive subject. It has now emerged that part of the decision-making process involved using generative AI, specifically Microsoft Copilot, to produce an intelligence report. Although this was denied at the time, it has since come out that Copilot was indeed used, and that its report apparently referenced some incorrect facts.

I've always said: be cautious with generative AI. The output looks compelling and sounds completely accurate, but you're never entirely sure it is. Placing blind confidence in these systems without double-checking is risky.

Checking the output from LLMs is vital

If you're at home asking a chatbot to whip up a recipe from whatever's in your fridge? Probably fine. But if you're in a work setting, checking facts or making important decisions, you genuinely need to know how accurate the output is. And if you work for a government organisation, say a police force, you really need to be sure.

It's not just about poor decisions. It's about presenting incorrect facts to colleagues, customers, or clients. Imagine that appearing in an email or a presentation. Imagine using an LLM to communicate with customers about an insurance claim, or advising someone on test results. Getting that wrong could be very damaging.

Context always matters, but even so, getting it wrong in a professional setting could damage your reputation, and that's the least negative outcome. Anything used in a decision-making process could harm people's lives and livelihoods, or land your business with fines and damages.

This week is simply a reminder that even the most important organisations, making the most important decisions, are still getting this wrong.

I've actually written a guide on this: Using Generative AI at Work. It's a framework to help you build a workflow that prevents mistakes like these. The link is below. Please go check it out.

Using Generative AI at Work


Don't Damage Your Work Reputation By Making Basic GenAI Mistakes

$22.00 USD

A Message From Our Sponsor

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

Real World Use Case

Exclusive for subscribers.

In this section, I bring you a real-world example of AI in use. This week we look at how Osaka Metropolitan University developed two models to automatically identify errors in X-ray images.

Subscribe to get access

Curated News

AI tech steals the show at CES 2026

Big advances in what AI can do were revealed at the Consumer Electronics Show — from faster AI chips to household robots and even AI pets. Highlights include Nvidia’s powerful new AI processor architecture and robots with natural language interaction. This kind of tech preview shows where consumer and industry AI are heading this year.

Why this matters: AI is becoming more tangible, from everyday devices to futuristic gadgets, and using it no longer requires deep technical knowledge.

Games Workshop bans AI in creative work

The creators of Warhammer announced a company-wide ban on using AI for game and model design to protect human creativity, though a few leaders will still experiment with it. The move reflects broader debates about where AI should, and should not, be used.

Why this matters: This is a compelling example of a beloved brand choosing people over automation.

UK moves to make non-consensual AI deepfakes illegal

The UK government is about to bring into force new laws banning the creation and distribution of non-consensual intimate AI images, and of the tools that generate them. This comes amid concerns about misuse of AI deepfake technology on social platforms.

Why this matters: It’s vitally important that governments are tackling AI abuse in areas that directly affect individuals’ privacy and safety.

Upcoming AI Events

Thanks for reading, and see you next Friday.

Simon

Was this email shared with you? If so, subscribe here to get your own edition every Friday.

Enjoying Plain AI? Share it and get a free gift!

If you find this newsletter useful, why not share it with a couple of friends or colleagues who would also benefit? As a special thank you, when two people subscribe using your unique referral link, I’ll send you my "Exclusive Guide: Supercharge Your Work with NotebookLM." It’s a practical, no-nonsense guide to help you turn information overload into your secret weapon.

You can also follow me on social media:

