In 2002, I joined Cerebrus Solutions, a now‑defunct company formed as a spin‑off from Nortel Networks. Nortel had been pioneering the use of anomaly detection and early neural‑network techniques to predict network failures. While their initial project achieved only limited success, the team quickly recognised the potential to apply the same technology to fraud detection.
Cerebrus combined strong technology foundations with exceptional talent. Before my arrival, the company had already wrapped its ML‑driven capabilities—supported by seven patents—into a complete fraud management framework. Alongside this, they built the necessary lower‑tech “high‑usage” tools, often in imaginative and unconventional ways, that made the system practical for everyday use.
In 2004, Cerebrus merged with Neural Technologies. Despite the name, their customers were not using neural networks at the time. As part of the merger, the larger Cerebrus customer base was migrated to the Neural Technologies platform. I oversaw this migration, during which the customers I worked with expressed regret at losing only two features: the distinctive icons representing alarm types, and the mouse pad that displayed those icons and their definitions.
Notably, the machine‑learning capabilities themselves were not missed. This is striking, because in controlled environments, the technology performed extremely well. The neural framework supported a continuous improvement loop, with analysts validating alarms to reinforce correct system behaviour. The anomaly‑detection engine reliably identified genuine outliers. In short, the technology worked.
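The validation loop described above can be sketched in miniature. This is purely illustrative and not the Cerebrus implementation; the threshold, adjustment step, and scoring mechanics are all assumptions made for the example. The core idea is simply that each analyst verdict becomes a signal that reinforces or corrects the system's alarming behaviour:

```python
# Illustrative sketch of an analyst-validation feedback loop.
# Not the Cerebrus design: threshold and step values are assumptions.
from dataclasses import dataclass, field


@dataclass
class FeedbackLoop:
    threshold: float = 0.5   # assumed alarm-score cutoff
    step: float = 0.01       # assumed adjustment per analyst verdict
    labels: list = field(default_factory=list)

    def raise_alarm(self, score: float) -> bool:
        """Alarm fires when a case's risk score reaches the cutoff."""
        return score >= self.threshold

    def validate(self, score: float, is_fraud: bool) -> None:
        """Analyst verdict reinforces or corrects system behaviour."""
        self.labels.append((score, is_fraud))
        if is_fraud and score < self.threshold:
            self.threshold -= self.step   # missed fraud: loosen the cutoff
        elif not is_fraud and score >= self.threshold:
            self.threshold += self.step   # false alarm: tighten the cutoff


loop = FeedbackLoop()
loop.validate(0.6, False)   # analyst dismisses an alarm
loop.validate(0.4, True)    # analyst flags a case the system missed
print(round(loop.threshold, 2))   # 0.5 (one nudge in each direction)
```

In a real deployment the accumulated labels would feed model retraining rather than a single threshold, but the loop structure is the same.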
The challenge, however, was the human element. Consider anomaly detection: in a network of five million subscribers, even a small fraction of anomalies represents an overwhelming number. Staff had limited time to investigate, and an anomaly is not inherently suspicious—it is just as likely to represent an excellent customer as it is to indicate fraud. The volume simply exceeded operational capacity.
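A back‑of‑envelope calculation makes the capacity problem concrete. The anomaly rate and investigation time below are illustrative assumptions, not operational figures, but even conservative numbers show how quickly the workload outruns any realistic team:

```python
# Back-of-envelope sketch of the alarm-volume problem.
# All figures are illustrative assumptions, not real operational data.
subscribers = 5_000_000
anomaly_rate = 0.001           # assume 0.1% of subscribers flagged per day
minutes_per_case = 15          # assume 15 minutes to investigate one alarm
analyst_day_minutes = 7 * 60   # one analyst's productive minutes per day

alarms_per_day = int(subscribers * anomaly_rate)
analysts_needed = alarms_per_day * minutes_per_case / analyst_day_minutes

print(alarms_per_day)               # 5000 alarms per day
print(round(analysts_needed, 1))    # 178.6 analysts just to triage them
```

No fraud team fields close to that headcount, which is why alarms went uninvestigated regardless of how well the detection engine performed.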
Artificial intelligence has advanced significantly since 2002. We have moved from shallow to deep learning, made major breakthroughs in handling unstructured data and natural language, and gained far better transparency into a model’s decision making. These advancements demand substantial processing power, but GPUs and purpose‑built AI chips such as TPUs have enabled massive improvements in computational speed and efficiency.
These developments are encouraging. Yet in real‑world fraud scenarios—take smishing, for instance, where the SMS content is often unavailable—the gains may not always justify the complexity. One vendor in our space offers a clear example: their revenues continue to grow, but so do their operational costs. Their experience is well captured by their own admission: “The technology works brilliantly, but the business of implementing it is expensive and complex.”
The positive news is that AI’s applications are broad, and there are many opportunities to improve fraud‑management operations without resorting to complicated or costly solutions. Ambition and innovation remain essential, but solving smaller, simpler problems can be just as impactful—and these problems are often overlooked.
Adrian Harris