
Hello,

This is Simon with the latest edition of The Weekly. In these updates, I share key AI-related stories from this week's news, list upcoming events, and share any longer-form articles posted on the website.

IT policing company AI use is like playing whack-a-mole

While the world gets excited about Nano Banana and whether GPT-5 is any good, I've been reminded this week that the real use of AI inside companies is far more fundamental than that.

The truth is, businesses are all over the map. I've seen the entire spectrum:

  • I have one friend whose company is adamant that they can't use generative AI, because one very cautious client doesn't want any of its details shared with AI companies.

  • I have one friend whose company pays for ChatGPT Enterprise, and the staff are strongly encouraged to take advantage of it.  

  • And when I was at Dataiku, I had an insurance client that was only permitted to use an older GPT model that Microsoft had deployed in their Azure cloud instance, meaning no data ever left their own managed environment.

Monitoring AI is Hard for IT

It makes sense that companies handle AI in so many different ways. While AI offers big benefits, it also brings real risks, so it's no wonder there are so many rules. For IT and InfoSec teams, it can be a real headache. Telling staff not to use AI, or to stick to just one tool like Google Gemini, is easier said than done. Sure, companies can block certain websites, but with new AI platforms showing up all the time, it's like playing whack-a-mole.

Then there's the issue of personal devices. Many companies let employees use their own phones for work email and messaging, so how do you stop them from using ChatGPT? This is what's known as "Shadow IT". In some industries, devices are locked down so tightly that this isn't a big issue, but I've worked at plenty of places where I had full admin rights on my MacBook and could install whatever I wanted.

Shadow IT refers to unknown assets that are used within an organisation for business purposes.

National Cyber Security Centre

This Could Create New Companies

People often worry about AI taking jobs, but here’s a case where it could actually create new ones. As companies try to keep AI use in check and track the value of their AI investments, some are building tools to help IT teams manage it all. For example, there’s a new company called Larridin. Their goal is to help organisations see exactly how AI is being used, keep what works, and make sure everything stays compliant and secure, all in one place. I think we’re about to see a whole new kind of software focused on managing AI.

So while everyone gets caught up in the latest AI models and features, it’s usually the less exciting parts that matter most for a company’s success.

Does this sound familiar? I'd love to hear how your company is handling the AI 'Wild West'. Hit reply and let me know.

A Message From Our Sponsor

The World’s Most Wearable AI

Limitless is your new superpower: an AI-powered pendant that captures and remembers every conversation, insight, and idea you encounter throughout your day.

Built for tech leaders who need clarity without the clutter, Limitless automatically transcribes and summarizes meetings, identifies speakers, and delivers actionable notes right to your fingertips. It’s securely encrypted, incredibly intuitive, and endlessly efficient.

Order now and reclaim your mental bandwidth today.

Curated News

Salesforce Lays Off 262 Roles Amid Agentic Enterprise Vision

Salesforce confirmed layoffs of 262 staff across support, admin, and tech roles, with 93 more positions cut in other regions. CEO Marc Benioff attributes the shift to AI agents handling customer support, reducing support staff from 9,000 to around 5,000. Benioff frames it as a transition toward an “Agentic Enterprise,” with employees being redeployed and supported throughout.

Why it matters: Agentic AI isn't just theoretical; it's having real workforce implications. Companies are designing their organisations around AI capabilities.

AI Agents as Teammates

TechRadar explores how AI agents, affectionately dubbed "non-human resources", are now acting like team members rather than tools. They can make independent decisions but introduce new risks (e.g., data poisoning, prompt injection). This requires businesses to rethink governance, train agents thoroughly, and embed ethical boundaries, sometimes via new leadership roles like Chief AI Officers.

Why it matters: Successful adoption of agentic AI relies as much on organisational transformation and oversight as on technical deployment.

Anthropic Flags Criminal Misuse of AI

Anthropic revealed that its AI assistant, Claude, was misused in a sophisticated cybercrime operation dubbed "vibe hacking." The agentic AI was used to automate reconnaissance, credential theft, ransom demands, and psychological manipulation. North Korean adversaries also exploited Claude to help bypass sanctions and recruit remotely. Anthropic has cut off implicated accounts and warned of the growing need for proactive defenses.

Why it matters: As agentic AI gains power, misuse becomes a genuine enterprise and societal risk, underscoring the urgency for robust security and ethical frameworks.

Upcoming AI Events

Thanks for reading, and see you next Friday.

Simon

Was this email shared with you? If so, subscribe here to get your own edition every Friday.

You can also follow me on social media:

