AI Hype vs. AI Reality: why governance matters more than ever.

What’s it all about?

Last year and the start of this year have been very busy on the geopolitical stage, almost to the point of bewilderment. Harold Macmillan, UK Prime Minister from 1957 to 1963, was once asked what was most likely to blow a government off course. He replied: “Events, dear boy, events”. By this he meant that circumstances beyond the control of individuals or organisations can intervene, such as wars, economic shocks or natural phenomena like the weather.

Our conversations with clients show how much these events affect business strategy and understanding. There has been extensive comment on the impact of AI (artificial intelligence) which leads to a form of industrial FOMO (Fear Of Missing Out). Reports range from stories of mass layoffs to utopian visions of a future where automation gives humans far more leisure time. At the same time, many organisations are asking how they can use AI to improve performance and reshape working patterns. The responses so far paint a muddled picture. They reveal widespread confusion about how to manage new technologies and they also expose a deeper uncertainty about the purpose and nature of managing people at work.

The advent of widely available AI is forcing boards and senior executives to think hard about how to make use of it. Ironically, this is a moment of danger for many organisations, in that those pesky ‘events’ may force them into bad or wrong decisions. One example of how fast AI capability is moving is the recent release of Anthropic’s new Claude model aimed at businesses (see FT article here). It can work inside PowerPoint to build and edit slides, and can combine regulatory filings, market reports and internal data to produce financial analysis.

One vital insight into the use of AI is that it will not generate original thinking or educate people on how to exercise good judgement. This is where human oversight and application cannot be replicated: AI cannot originate bespoke client conversations or value propositions in sales approaches. Yes, AI can help generate and analyse patterns, but it cannot manage customer relationships over time.

Recent surveys show that many CEOs hope that AI will help cut costs and provide more effective results in their companies but, as we often remind our clients, hope is not a strategy.

Another paradox of AI use is that senior executives who have spent decades developing and exercising their judgement can use AI to absorb and recut vast amounts of data to reinforce their judgements and decisions. This is not so easy for junior or younger employees who have yet to build such a skill set.

This is the moment where two words need to feature at the forefront of any board’s collective mind: governance and accountability. The biggest error a board can make with AI is believing that it is somehow different this time.

It is not.

Governance of AI

Effective AI governance has to start at the top. Boards should define an explicit AI strategy and set of risks, in the same way they do for finance or cyber security, and regularly review how AI is being used. Without this oversight, AI‑driven decisions can quickly drift away from the organisation’s intentions. The board should decide clearly where AI will be deployed—and just as clearly where it will not. The pressure to “use AI because it’s trendy” must be resisted; instead, AI should support the company’s objectives, culture, and values, rather than reshape them.

In high‑impact areas such as recruitment, credit decisions, or employee monitoring, strong human oversight is essential. Organisations need well‑defined escalation routes so that when AI systems behave unexpectedly, people can intervene and correct course.

Since AI systems depend entirely on data, governance must also cover data quality, the provenance of data, and whether appropriate permissions and consents have been obtained for its use.

Accountability

A board will need to nominate a clear owner for each AI system it uses. That person is accountable for the outcomes of AI use, and cannot excuse problems by declaring they were the fault of the algorithm. There will also have to be incident logs and escalation procedures to support regulatory compliance. This in turn means effective training for staff, to ensure AI is used to serve human purposes and to equip them to question and challenge AI outputs.

In short, AI can help reinforce the progress of a business but it needs to be integrated into the effective governance of a business with clear accountability. Its risks need to be anticipated, not discovered in news headlines.
