
Hello,

This is Simon with the latest edition of The Weekly. In these updates, I share key AI-related stories from this week's news, list upcoming events, and highlight any longer-form articles posted on the website.

Generative AI might be convincing, but it’s not the best at data tasks.

Last week, I was reminded that while generative AI excels at many things, it is ultimately a tool that predicts the next word: a sophisticated version of predictive text. Its output often sounds legitimate, which leads us to trust it and use it for all sorts of tasks, even number-based ones. For example, we might ask it to spot patterns in a table of numbers or to identify the month with the highest sales. Simple analysis like that is probably fine if it's double-checked, but for real analytics, such as forecasting or segmentation, traditional machine learning is the better choice.

Traditional machine learning is more accurate for data work

Generative AI models such as LLMs generate text or images by reproducing patterns they encountered in their training data. So if you ask one to forecast 2026 sales from a table of historical figures, the response reflects plausible-sounding general trends rather than a prediction fitted to your numbers, and that approach lacks accuracy. For next year's sales, traditional techniques such as regression, or machine learning methods like decision trees and random forests, give more precise and robust results (I've included a short, illustrative sketch after the list below). While no method is infallible, machine learning is generally better suited to rigorous data work for several important reasons:

  • Direct Supervision: Traditional ML algorithms (such as regression, classification, clustering, and decision trees) are trained directly on your specific dataset to optimise for accuracy, prediction, or classification of structured, tabular data.

  • Transparent Outputs: ML models offer well-defined outputs with probabilistic scores, confidence intervals, and explainable feature importance, allowing results to be validated, audited, and trusted for business decisions.

  • Tailored Metrics: Classic ML workflows focus on measurable performance metrics (accuracy, precision, recall, F1 score), enabling fine-tuning and benchmarking for specific business or analytical goals.
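
To make that concrete, here is a minimal sketch of the kind of forecast described above, written in Python with scikit-learn. The monthly sales figures are invented purely for illustration; in practice you would use your own data. The point is that a regression model is fitted to your actual numbers, its error is measured on held-out months, and only then is it asked to project forward.

```python
# A minimal sketch of forecasting with "traditional" ML (requires scikit-learn).
# The monthly sales figures below are made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Two years of hypothetical monthly sales with a gentle upward trend.
sales = np.array([110, 115, 120, 130, 128, 135, 140, 150, 148, 155, 160, 170,
                  172, 178, 185, 190, 188, 195, 200, 210, 208, 215, 220, 232],
                 dtype=float)
months = np.arange(len(sales)).reshape(-1, 1)  # 0, 1, 2, ... as the single feature

# Hold back the last six months so the model's accuracy can actually be measured.
X_train, X_test = months[:-6], months[-6:]
y_train, y_test = sales[:-6], sales[-6:]

model = LinearRegression().fit(X_train, y_train)
validation_error = mean_absolute_error(y_test, model.predict(X_test))
print(f"Mean absolute error on held-out months: {validation_error:.1f}")

# Refit on everything and project the next twelve months forward.
model.fit(months, sales)
future = np.arange(len(sales), len(sales) + 12).reshape(-1, 1)
print("Next 12 months forecast:", model.predict(future).round(1))
```

A decision tree or random forest would be fitted and validated in exactly the same way. The key difference from asking an LLM is that these numbers come from a model trained on your own data, with an error figure you can check before you rely on it.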

On the other hand, generative AI, whilst impressive at first glance, is less accurate:

  • Language-First, Data-Second: LLMs are primarily trained to generate and understand text, not analyse structured data or make statistical predictions. Their knowledge comes from a broad (but generic) dataset, not your specific use case.

  • Synthetic Reasoning: LLMs “guess” the next word or sequence based on learned patterns. This means their answers may sound plausible, but can contain “hallucinations” (confident but wrong statements) or overlook data nuances.

  • Lack of Guarantees: LLMs aren’t optimised for statistical or numerical accuracy—they can interpret data but don’t perform calculations or modelling with the precision of regression or classification algorithms.

To Summarise:
Traditional ML is the calculator: precise, purpose-built, and robust for uncovering truths in data. Generative AI and LLMs act more as creative writers: adept with words but unreliable for rigorous data analysis.
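
The "Transparent Outputs" and "Tailored Metrics" points above are easier to see with a second minimal sketch, this time a classifier. It uses scikit-learn's built-in breast cancer dataset simply as a stand-in for any tabular business data; what matters is the kind of output you get back: probabilities, feature importances, and accuracy, precision, recall, and F1 measured on rows the model has never seen.

```python
# A second minimal sketch: a small classifier with transparent, measurable outputs.
# The built-in breast cancer dataset stands in for "your" tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Probabilistic scores rather than a flat answer: each prediction comes with a probability.
probabilities = model.predict_proba(X_test)[:5]
print("Class probabilities for the first five test rows:\n", probabilities.round(3))

# Explainable feature importance: which columns drove the predictions.
importances = sorted(zip(X.columns, model.feature_importances_), key=lambda p: -p[1])[:5]
print("Top five features:", [(name, round(score, 3)) for name, score in importances])

# Tailored metrics: accuracy, precision, recall and F1, measured on held-out data.
print(classification_report(y_test, model.predict(X_test)))
```

None of these figures come from a plausible-sounding guess: they come from a model trained on the data itself and then tested against rows it was never shown, which is what makes the results auditable.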

Is our tendency to use generative AI for everything a recipe for failure?

It is probably for some of these reasons that generative AI is still struggling to make a real impact in the workplace. According to a 2025 MIT study, about 95% of enterprise generative AI pilots deliver no measurable ROI. I strongly suspect that part of this stems from people applying generative AI to use cases where it is not the best approach. When leaders check the quality and accuracy of the resulting work, they find it unacceptable, stop the project, and probably return to their previous way of working.

Even if machine learning is the better choice for data analysis, the big challenge is that nearly everyone has access to LLM tools such as ChatGPT, Gemini, and Copilot, whereas access to machine learning tools is far harder to come by and requires a higher level of knowledge and skill.

I will discuss this in more detail in next week’s newsletter.

A couple of questions for you.

Have you ever used machine learning tools before? Do you know how these differ from formulas or analysis you might use in an Excel spreadsheet?

A Message From Our Sponsor

Find customers on Roku this holiday season

Now through the end of the year is prime streaming time on Roku, with viewers spending 3.5 hours each day streaming content and shopping online. Roku Ads Manager simplifies campaign setup, lets you segment audiences, and provides real-time reporting. And, you can test creative variants and run shoppable ads to drive purchases directly on-screen.

Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.

Real World Use Case

Exclusive for subscribers.

In this section, I highlight a real-world example of AI. This week I talk about how retailers use machine learning to make sure they don’t run out of stock.

Subscribe to get access

Curated News

The EU officially launched its new AI Office enforcement portal

The EU’s newly created AI Office opened its public enforcement and reporting portal this week, marking the first major step in implementing the bloc’s sweeping AI Act. The portal lets companies self-register high-risk AI systems, while also giving citizens a simple mechanism to report suspected misuse. Early traffic reportedly exceeded expectations, with SMEs rushing to understand their new compliance responsibilities. For many organisations, this marks the beginning of operational AI governance rather than theoretical regulation.

Why this matters: This is the first real-world test of large-scale AI regulation, and other governments are watching closely to see how enforceable AI rules actually are.

The U.S. Department of Labor issued new guidance on AI-driven hiring tools

The U.S. Department of Labor published detailed guidance clarifying how employers must audit and disclose AI-powered hiring and screening systems. The agency emphasised that employers remain fully responsible for discrimination—even if the bias originates in a third-party AI tool. Several major HR tech vendors said they would begin offering “bias audit summaries” to stay compliant. The move signals that AI in hiring is no longer a grey zone: it’s a regulated employment practice.

Why this matters: With more than 70% of large employers using AI in recruitment, these rules will directly shape how people get interviewed, shortlisted, and hired.

A major global survey revealed workers feel more optimistic about AI than last year

A new global workforce survey released this week shows a sharp rise in employee optimism about AI. Most respondents said AI is helping reduce routine tasks, and over half reported using AI weekly for work. Interestingly, fear of job loss has dropped, replaced by concern that companies aren’t training staff fast enough. The survey suggests everyday workers are now ahead of corporate leadership in wanting to use AI more effectively.

Why this matters: Public attitudes toward AI are shifting—from fear to curiosity—and that’s a strong indicator of how quickly workplaces will adopt new tools.

Upcoming AI Events

Thanks for reading, and see you next Friday.

Simon

Was this email shared with you? If so, subscribe here to get your own edition every Friday.

Enjoying Plain AI? Share it and get a free gift!

If you find this newsletter useful, why not share it with a couple of friends or colleagues who would also benefit? As a special thank you, when two people subscribe using your unique referral link, I’ll send you my "Exclusive Guide: Supercharge Your Work with NotebookLM." It’s a practical, no-nonsense guide to help you turn information overload into your secret weapon.

You can also follow me on social media:
