29/04/2026
Why the AI cybersecurity revolution means back to basics for financial services firms
“Anthropic’s ‘superhuman AI’ Mythos could break everything” reads one particularly alarming headline. To be fair, AI company Anthropic have themselves said their new technology is too powerful to release to the general public, with claims that ‘Mythos Preview found bugs in “every major operating system and web browser”’.
Concerns that the new AI models could beat traditional digital security measures are worth taking seriously, but with the most advanced technology being held back from public scrutiny, how do we separate mythos from reality?
Vulnerabilities have become more glaring
An evaluation of Claude Mythos Preview by the AI Security Institute, a UK government agency, offers useful guidance.
The agency found that the model is “at least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained”.
At the same time, the agency acknowledged that it “cannot say for sure whether Mythos Preview would be able to attack well-defended systems”. Far from licensing complacency, the report pointed out that new AI models will be even more capable.
Imminent risk
More immediately, the scale of the threat to vulnerable systems should not be ignored: “Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed.”
Even if most bad actors currently lack access to the latest and most powerful models, the ability to scan for and exploit vulnerabilities at scale is already becoming more widespread. And today’s cutting-edge systems may be relatively commonplace tomorrow.
Fundamentals matter
As the AISI points out, one critical response to this paradigm shift in digital technology should be a focus on fundamentals. The ubiquity of powerful AI technologies “highlights the importance of cybersecurity basics, such as regular application of security updates, robust access controls, security configuration, and comprehensive logging.”
Fortunately, advanced AI also has the potential to play a defensive role. Just as more operators have access to potentially malicious AI applications, automated world-class security is now, theoretically, available to all organisations. Getting the most out of it will depend on effective processes, systems and training.
Three key takeaways
- The margin for error is smaller than ever
As vulnerability discovery becomes faster and more scalable, unpatched systems and weak controls are far more likely to be exploited. In an AI‑accelerated threat environment, basic information security matters more, not less.
- Defensive capability does not deploy itself
AI‑enabled security tools can significantly improve detection and response, but simply adopting the latest technology is not enough. Effective use depends on continuous investment in skills, training and operational processes.
- For financial services, offensive AI is a compliance issue
Beyond reputational risk, financial services companies have clear regulatory obligations to protect client data and ensure operational resilience. Failures in basic controls are increasingly likely to trigger supervisory scrutiny, enforcement action or other serious consequences. More than ever, cybersecurity is a strategic priority, not just a technical challenge.
All opinions, news, research, analysis, prices or other information is provided as general market commentary and not as investment advice and all potential results discussed are not guaranteed to be achieved. The information may have been derived from publicly available sources, company reports, personal research, or surveys. Past performance is not indicative of future performance. Trading carries risk of capital loss. Service available to professional clients only.