Cut Through the Hype: How to Make the Right Cybersecurity AI Investments for Your Needs
As the use of artificial intelligence in cybersecurity continues its rapid growth, government agencies face mounting pressure to invest in AI-driven tools to defend against increasingly sophisticated threats.
However, with AI vendors making ever-bolder claims about their solutions, the challenge is no longer just keeping up with the technology; it is distinguishing realistic capabilities from over-hyped promises. “We’re still in the early days of monetizing AI-enabled applications, as folks are trying to figure out which AI-enabled workflows create tangible business value for customers,” argues venture capitalist Kyle Poyar. Understanding AI’s current strengths and limitations is essential for making sound, strategic cybersecurity investments.
The Over-Hyped Capabilities
To start, AI is not a silver bullet for cybersecurity. One of the most over-hyped claims is that AI can operate autonomously, replacing human decision-making in security. While AI can process and analyze data at speeds unattainable by humans, it still requires human oversight to interpret context and make critical judgment calls.
Similarly, AI’s ability to predict cyberattacks remains limited. While machine learning models can identify emerging threats based on past data, cybercriminals continuously develop novel attack methods that AI models may not recognize. “It’s possible cybercriminals may be outpacing the cyber defenders when it comes to developing and employing new technologies, and not all ML/AI-based products are as innovative as they claim to be,” says Keatron Evans, principal security researcher, Infosec, and lead developer of ISACA’s new publication, AI Uses in Blue Team Security.
As a result, over-reliance on AI for predictive security can create a false sense of security, leaving agencies vulnerable to previously unseen tactics.
Where AI Holds the Most Promise
While it’s true that AI is over-hyped in many areas, it shouldn’t be dismissed either. According to a survey conducted by IBM, organizations that extensively employ AI in their cybersecurity strategies report 33% lower costs associated with data breaches than those without AI. Those savings come from faster detection and response times (breaches are detected 88 days faster on average), automation and workforce augmentation (82% of cybersecurity professionals say AI makes them more efficient), and a reduction in false positives (false alerts). In short, there’s enormous potential for AI to positively impact the cybersecurity function.
ISACA’s “AI Uses” report identifies three areas of cybersecurity where its authors believe AI and machine learning are poised to help most significantly (a brief illustrative sketch of the first appears after the list):
- Network intrusion detection/security information and event management (SIEM) solutions
- Phishing attack prevention
- Offensive cybersecurity applications (e.g., penetration testing)
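To make the first of these areas concrete, the sketch below shows roughly what AI-assisted network anomaly detection looks like in practice: an unsupervised model is trained on historical network-flow records and then flags unusual traffic for an analyst to review. This is a minimal, hypothetical illustration; the flow features, the synthetic data, and the choice of scikit-learn’s IsolationForest are assumptions made for demonstration, not a reference to any specific product or to the ISACA report’s methodology.

```python
# Minimal illustration: unsupervised anomaly detection over network-flow features.
# The features and thresholds here are hypothetical; a real SIEM pipeline would use
# far richer telemetry and would keep a human analyst in the review loop.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical flows: [bytes_sent, bytes_received, duration_s, dst_port]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_500, 1_000),   # typical upload sizes
    rng.normal(20_000, 5_000, 1_000),  # typical download sizes
    rng.normal(2.0, 0.5, 1_000),       # typical connection duration
    rng.choice([80, 443, 53], 1_000),  # common destination ports
])

# Train on traffic assumed to be mostly benign; "contamination" is the expected
# fraction of anomalies and must be tuned for the environment.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# Score new flows: predict() returns -1 for "anomalous" and 1 for "looks normal".
new_flows = np.array([
    [4_800, 21_000, 1.9, 443],    # ordinary-looking flow
    [900_000, 150, 45.0, 4444],   # large upload to an unusual port
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY - route to analyst" if label == -1 else "normal"
    print(f"flow={flow.tolist()} -> {status}")
```

Even in this toy form, the article’s central point holds: the model only surfaces candidates, and the judgment call about whether a flagged flow is actually malicious still belongs to a human analyst.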
Making Smarter AI Investments
To ensure AI investments deliver real value instead of falling short of over-hyped promises, government agencies should adopt a thoughtful, strategic approach. To start, organizations must align AI capabilities with specific security needs rather than chasing broad AI adoption for its own sake. “AI is a tool that needs to be applied to specific problems,” writes Security Week. “Aimlessly firing AI at a bunch of problems with the hope that it will fix something is not likely to yield good results. On the contrary, it is likely to chew through significant resources and may also introduce a fair amount of risk into the organization.”
Then, agencies should commit to long-term monitoring and oversight. “AI, like many tools, is not a ‘set it and forget it’ endeavor,” adds Security Week. “AI requires continuous iteration and improvement. When an enterprise decides to leverage AI, it is a long-term commitment that involves embracing the whole product concept.”
In other words, agencies should integrate AI into a broader cybersecurity strategy that includes human expertise, regulatory compliance, and traditional security measures. AI is a powerful tool, but its effectiveness depends on how well it complements other defense mechanisms.
By focusing on AI’s proven strengths and recognizing its limitations, government agencies can cut through the hype and make cybersecurity AI investments that genuinely strengthen security.
About PSL
PSL is a global outsource provider whose mission is to provide solutions that facilitate the movement of business-critical information between and among government agencies, business enterprises, and their partners. For more information, please visit or email info@penielsolutions.com.