Regulation gets a bum rap. Rachel Reeves has demanded regulators slash the cost of regulation for businesses by 25 per cent. Earlier this year she launched what she called a "red tape bonfire", promising that a leaner, more effective system would boost investment, create jobs and put more money into working people's pockets.
That makes for great headlines. But, sometimes, we can forget that regulation serves a purpose.
Just look at the recent warnings from Checkatrade. The platform for tradespeople says the spread of AI is putting millions of UK households at risk from unvetted rogue traders. Checkatrade has blocked hundreds of cowboys from listing on its platform, but AI searches are scraping data from sites that don't block them, heightening the risk to consumers. It's a telling example of how unregulated AI can lead to worse outcomes for consumers.
AI is obviously here to stay. Within Target Group, we have seen many positive uses for AI in our operations. It is now an integral part of how lenders operate and, in some cases, how consumers interact with lenders and servicers. It has immense potential to automate processes and drive growth.
As part of a broad modernisation and evolution project, we've been developing cutting-edge technology, including AI, which we're rolling out across Target and are set to bring into the mortgage and loan origination space. By way of example, we have introduced various virtual assistants, including an HR chatbot designed for Target employees. It handles routine HR inquiries, provides instant responses, and lightens the load on our HR team. It can answer common questions about our HR policies, such as overtime, sickness, or carers' leave, in seconds, 24/7. We hold a mass of information on our systems, but it is often hard for employees to access; the chatbot surfaces it quickly, whether they are at their desk, on their phone, or at home. Early feedback has been promising: response times are down, employee satisfaction is up, and we are already refining the system by extending its remit into a broader colleague-assistant capability that can support a wider range of queries.
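To make the idea concrete, here is a minimal, purely illustrative sketch of the retrieval pattern behind an FAQ-style HR assistant. The policy entries, matching logic, and fallback message are hypothetical examples, not Target's actual system.

```python
# Illustrative sketch of an FAQ-style HR chatbot lookup.
# Policy text and matching logic are hypothetical, not a real system.

HR_POLICIES = {
    "overtime": "Overtime must be pre-approved by your line manager.",
    "sickness": "Report sickness absence to your manager before 9am.",
    "carers leave": "Employees may request carer's leave via the HR portal.",
}

def answer(query: str) -> str:
    """Return the best-matching policy snippet for a routine query."""
    q = query.lower()
    best_topic, best_score = None, 0
    for topic in HR_POLICIES:
        # Score each topic by how many of its words appear in the query.
        score = sum(word in q for word in topic.split())
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None:
        # Anything the bot can't answer is routed to a human.
        return "I couldn't find that policy - routing you to the HR team."
    return HR_POLICIES[best_topic]

print(answer("How do I claim overtime?"))
```

A production assistant would use retrieval over the full policy corpus rather than keyword scoring, but the shape is the same: look up what the bot knows, and hand anything else to a person.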
This isn’t about replacing people—it’s about empowering them. The chatbot is handling the mundane stuff so our HR experts can spend time on more value-added services. No bot ever delivers 100 per cent of the solution —typically, it’s more like 90-95 per cent — leaving the human element to sprinkle the magic on top. It’s a microcosm of what we’ve long advised our clients: AI isn’t a silver bullet. But when it’s deployed thoughtfully, it’s a force multiplier. AI can automate repetitive tasks, freeing up human talent for creative and strategic work—McKinsey estimates up to 30 per cent of tasks in most jobs can be automated, saving time and cutting costs.
Beyond virtual assistants, we're exploring automation such as voice analytics and transcription AI with FourNet. In financial services, every call is recorded, and manually adding notes is slow. Our AI listens, transcribes, summarises, and flags potentially vulnerable customers using tone, keywords, and sentiment. It isn't a stenographer; it's a telephone wingman, suggesting probing questions to ensure we treat customers fairly.
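The keyword side of that flagging can be sketched very simply. The cue list and scoring below are hypothetical, for illustration only; they are not Target's or FourNet's production logic, which also draws on tone and sentiment models.

```python
# Illustrative sketch of keyword-based vulnerability flagging on a call
# transcript. The cue list is hypothetical, not production logic.

VULNERABILITY_CUES = {
    "bereavement", "redundant", "redundancy", "struggling",
    "can't afford", "debt", "carer", "illness",
}

def flag_vulnerability(transcript: str) -> list[str]:
    """Return cues found in a transcript, for a human agent to review."""
    text = transcript.lower()
    return sorted(cue for cue in VULNERABILITY_CUES if cue in text)

hits = flag_vulnerability(
    "I've been made redundant and I'm struggling to keep up with the debt."
)
print(hits)  # ['debt', 'redundant', 'struggling']
```

The crucial design point is that the output is a prompt for the agent, not an automated decision: the flagged cues tell a human where to probe, keeping the fairness judgment with a person.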
So far, so good. Given Target is embracing AI, you might expect us to say we are against its regulation. Well, there is certainly some opportunity cost to regulation – only a fool would argue otherwise. There’s a chance that it might reduce the scope for innovation, for instance.
But equally, there has to be a focus on protecting consumers, especially in areas such as lending. Our view is that when it comes to financial services, regulation is a nuanced topic. A complete bonfire of red tape would be a retrograde step.
There are risks of poorly trained AI delivering poor outcomes for customers. We have already seen examples of bias in AI systems leading to discrimination. The government had to pull an AI system used to detect welfare fraud, for instance, because it incorrectly selected people from certain ethnic groups more than others when deciding whom to investigate. We can't risk consumer protections being bypassed.
The world of mortgages, lending and collections is highly regulated, and rightly so. Having worked hard to get the industry to focus on the right customer outcomes, it would be regressive not to apply the same outcome-based thinking to AI. So, rather than calling for fewer regulations, what we would like to see is regulation that is fleeter of foot: regulation that keeps pace with technological advances and evolves quickly.
Avoid the cowboy tilers, and you should find that agile, outcome-focused regulation and smart tech can live happily under the same roof.