There has recently been much discussion around the threat posed by Artificial Intelligence (AI) in the cybersecurity realm. Some have called the recent successes of the Claude Mythos Model an “inflection point” for cybersecurity. While I certainly agree that the threat landscape has significantly changed, I’d like to add some qualifying details as to why I believe these changes aren’t going to lead to a complete reordering of the cybersecurity landscape. While it is true that throwing AI tokens at the problem via defensive tools can be an effective part of the solution, this shift in the problem space of cybersecurity isn’t a seismic shift in the types of threats, but simply in the volume of threats that must be addressed. The solution to this challenge hasn’t changed over the past decade; the need for it has just dramatically intensified.
First, let’s talk about the heightened risk AI poses to the modern enterprise when used by malicious individuals or groups. In all of this, it’s important to scope the problem space appropriately: we’re not addressing cybersecurity threats to or against AI services themselves (such as prompt injection). The enhanced risk we are discussing here stems from a couple of primary capabilities that AI, both public (e.g., Anthropic, ChatGPT) and private (e.g., Qwen on local GPUs), brings to the table:
- Enabling a broader threat/aggressor audience who previously didn’t have the necessary skills to successfully attack the enterprise.
- Widening the vulnerability landscape with more previously undiscovered and undisclosed vulnerabilities, likely discovered by threat actors.
Unlike the potential seismic shift presented by upcoming quantum computing and the presumed problem of wholesale compromise of modern encryption methods (i.e., a new problem), AI represents an exponential increase in challenge within an already existing problem space. Fundamentally, it just means that the modern enterprise can expect more bugs and more “skilled” hackers. What previously required a skilled and determined hacker or Advanced Persistent Threat (APT) such as Russia’s GRU/APT 28, and likely didn’t seriously concern some enterprise leaders due to a lack of perceived targeting, can now be accomplished at some lesser-but-growing level by someone with far less skill when aided by the effective use of AI.
So, what does this mean for the enterprise cybersecurity professional? If you read the headlines, the answer often offered to you is to move a huge amount of your budget to new, AI-based defensive tools. Statements such as “only AI can defeat AI” assume that this is a completely new problem that requires a unique and often very expensive solution. To be fair, these tools can certainly help in addressing the issues, but all things need to be considered in proper context. Cost is certainly a consideration here, since you will eventually have to pay for every token used in these toolsets. If you’re already struggling with your logging costs due to event volume, can you imagine the AI token costs on top of it all? At the risk of sounding out-of-touch with reality, I’d propose that these “new” challenges can largely be mitigated by ensuring you are effectively doing the “Protect” and “Detect” type activities defined in existing frameworks such as the NIST Cybersecurity Framework, NIST SP 800-53 (e.g., FedRAMP), and NIST SP 800-171/172 (e.g., CMMC), to name a few. The challenge, however, is that these activities must be performed faster (e.g., quicker patching cycles) and more effectively (e.g., better detection of malicious activities vs. valid user noise) than before.
We should never assume that there is a magic fix for such a large issue – there isn’t one… There is a reason that what is arguably the gold standard for cybersecurity practices (NIST SP 800-53 – Security and Privacy Controls for Information Systems and Organizations) is divided into 20 unique control families, with over one thousand individual controls and enhancements. My goal here is not to persuade you that there is a simple solution, or a singular product to solve all your enterprise problems. My goal is to reinforce an understanding that the basics are still the foundation; that nothing has fundamentally changed except the need for increased speed and accuracy in the face of an enhanced threat landscape. That said, let’s explore a couple of specific and practical examples…
User Involvement and Acknowledgement
While I’m already in risky territory of being perceived as detached from reality in this AI-centered world, I’d also propose that the ideal choke-point for addressing AI-driven challenges and preventing material enterprise compromise is effectively detecting lateral movement by attackers. In today’s busy environments, with flexible device usage and varying locations, the most effective way to accomplish this in the large enterprise is by improving end users’ visibility into their own sensitive operations (e.g., logins) and thoughtfully including those users in a trained, self-triage process. Thoughtfully is the key here, as users will need effective workflows to establish trusted devices/connections; otherwise, constant prompting of “Is this you?” will simply allow an attacker to fly under the radar more effectively. It’s greatly encouraging to see that some enterprises are already headed in this direction, though primarily for external/customer users. For example, if I log in to my banking site from a previously unknown device, I get an email letting me know a new device logged in. But how can I know that my work Active Directory domain user illegitimately logged into another domain device, as an attacker performs lateral movement using my compromised credentials, unless this visibility is available to me? From a SOC perspective this type of detection is extremely difficult, especially at a scale of tens of thousands of employees. However, for most enterprise users, if this capability were widely utilized, their burden would effectively be a low-cost chore. With effective threat modeling and a couple of strategic logging strategies, end-user involvement can eliminate a vast amount of noise with a level of overhead acceptable to the average enterprise user, all while highlighting critical movement attempts by threat actors for quicker triage by InfoSec teams and blocking of the threat.
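As a rough illustration of this self-triage idea, the sketch below flags any successful login from a device a user hasn’t used before and prompts that user directly. The event fields, device names, and `notify` callback are all hypothetical, not tied to any particular identity platform or SIEM; a real deployment would also need the trusted-device enrollment workflow described above.

```python
# Hypothetical sketch: flag interactive logins from devices a user hasn't
# used before, so the user themselves can confirm or deny the activity.
# Event schema and the notify callback are illustrative assumptions.
from collections import defaultdict

known_devices = defaultdict(set)  # user -> devices previously seen for them

def triage_login(event, notify):
    """Route a successful-login event; returns True if the user was prompted."""
    user, device = event["user"], event["device"]
    if device in known_devices[user]:
        return False                     # trusted device: stay quiet, no noise
    known_devices[user].add(device)      # remember it after first sighting
    notify(user, f"New login to {device} -- was this you?")
    return True

alerts = []
events = [
    {"user": "alice", "device": "WS-0042"},  # first sighting: prompt once
    {"user": "alice", "device": "WS-0042"},  # repeat login: no prompt
    {"user": "alice", "device": "DC-01"},    # possible lateral move: prompt
]
for ev in events:
    triage_login(ev, lambda u, msg: alerts.append((u, msg)))
```

The point of the design is the noise budget: a user is only interrupted on first contact with a device, so an attacker pivoting to a new host generates exactly the kind of rare, high-signal prompt a trained user can triage.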
Enterprise Management and Threat Modeling
Most, if not all, enterprises have a level of technical sprawl and debt. From legacy systems that nobody wants to invest in beyond keep-the-lights-on (KTLO), to side projects that an ambitious individual has running under their personal domain credentials, each of these increases (or multiplies) the risk to the enterprise with a larger threat surface. It is critical that each enterprise is aware of all the systems it has running, especially those accessible from the public internet; the number of which might be higher than you think due to cloud provider usage. Having an effective service catalog and inventory of both hardware and software assets is critical, as you can’t defend what you don’t know you have to protect. This is usually boring, tedious work, so it’s often neglected, but its importance cannot be overstated.
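A minimal sketch of the reconciliation this implies: diff the assets actually observed on the wire (e.g., from scan results or cloud API listings) against the service catalog. Every host name here is made up for illustration, and a real inventory would key on more than names, but the shape of the check is the same.

```python
# Hypothetical sketch: compare the service catalog against observed assets.
# Anything seen but not cataloged is unmanaged risk; anything cataloged but
# never seen may be a stale entry. Names are illustrative, not a real CMDB.
def catalog_gaps(catalog, observed):
    """Return (unknown, stale): assets seen but not cataloged, and vice versa."""
    unknown = sorted(observed - catalog)   # you can't defend these
    stale = sorted(catalog - observed)     # possibly decommissioned entries
    return unknown, stale

catalog = {"web-01", "db-01", "wiki-01"}                 # what we think we run
observed = {"web-01", "db-01", "side-project-7"}         # e.g., a network sweep
unknown, stale = catalog_gaps(catalog, observed)
```

Run on a schedule, the “unknown” list is exactly the ambitious-side-project surface described above, surfaced before an attacker finds it first.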
Not much has changed in your enterprise between today and two years ago, before the threat of aggressive AI (agentic or otherwise) emerged. So what needs to change in the realm of enterprise architecture and management to meet this threat? Since the threat posed by AI-enabled actors is primarily a volume-based issue, we need to ensure that our responsiveness can also handle “more bugs and more bad guys”. This process really starts at the engineering and product team levels, and necessitates that all solutions in the enterprise have an effective method for documenting architecture, deployment, and setup, as well as providing effective authentication and usage logging. It’s almost pointless to just throw a logging agent on an enterprise server and send all your logs to the SOC. There needs to be a documented and effective business process that shows what logs are expected, how to detect deviations, and what to do when those occur. It sounds basic because it is; but that doesn’t mean it’s a science fully implemented across all enterprises.
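One way to make “what logs are expected, how to detect deviations” concrete is to encode the documented baseline and flag sources that have gone silent, since a dead or tampered-with agent looks identical to a quiet day unless something checks. The host names and one-hour window below are illustrative assumptions, not a recommended threshold.

```python
# Hypothetical sketch: the documented baseline is "these hosts should be
# sending logs"; a deviation is any baseline host with no events in the
# window. Host names and the one-hour window are illustrative assumptions.
import datetime as dt

def silent_hosts(expected, last_seen, now, max_gap=dt.timedelta(hours=1)):
    """Hosts in the documented baseline with no events inside the window."""
    return sorted(
        h for h in expected
        if h not in last_seen or now - last_seen[h] > max_gap
    )

now = dt.datetime(2025, 1, 1, 12, 0)
expected = {"web-01", "db-01", "wiki-01"}          # the documented baseline
last_seen = {
    "web-01": now - dt.timedelta(minutes=5),       # healthy, reporting
    "db-01": now - dt.timedelta(hours=3),          # agent dead, or silenced?
}                                                  # wiki-01 never reported
```

The “what to do when those occur” half of the process is deliberately left to the runbook; the code only turns the documented expectation into a checkable deviation.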
From an enterprise architecture point of view, we also need to accept the reality that our environments likely need to be more restricted from a networking (and permissions) perspective. Given the trajectory of the threat landscape, compromise of individual devices is going to become more common, which necessitates a higher level of effort not only on compromise detection, but even more importantly on detecting illegitimate lateral movement between devices. It’s not necessarily a disaster if a member of the enterprise is compromised by an AI-generated phishing email and malicious payload. However, if that user can successfully access other, more critical devices, the damage can quickly escalate. Historically, systems have been more generous with network access; sometimes even core enterprise systems such as internal corporate wikis or bug tracking systems have been directly accessible from the public internet. Assuming risk management is in place (which can be a stretch), the justification for this has been that vendor patching is effective in managing vulnerability risk and the infrequent risk of 0-day exploits is lower than the inconvenience of requiring a VPN connection to access the devices. With the threat of more bugs being identified in these products by individuals other than security researchers, this risk assessment of both external and internal networks must be revisited and a more zero-trust focus applied.
Vulnerability Management
From the enterprise point of view, vulnerability management is primarily tied to the patch management process. For a non-InfoSec team, the term vulnerability is immediately tied to “when can you patch that?”. While AI does represent a significant risk of more vulnerabilities being exposed, as an enterprise we have two solutions at our disposal: either patch (if a patch is available) or mitigate through other means (e.g., restrict access). There is ample evidence that many enterprises still struggle to maintain an adequate patching cycle for all of their infrastructure, even when security patches are available from the vendors. This problem will only grow if there is a significant increase in discovered vulnerabilities. Make sure that you are effectively patching your devices, and assume that vendors in the future won’t be able to keep up with the level of disclosed vulnerabilities. This requires a risk assessment and a strategy for how your enterprise can effectively mitigate those risks within acceptable business limits.
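To sketch what the simplest form of such a strategy might look like, the example below ranks open findings against a severity-based patch SLA so that patch-or-mitigate decisions surface in order of how overdue they are. The SLA windows and finding identifiers are illustrative assumptions, not a recommended policy.

```python
# Hypothetical sketch: rank open vulnerability findings by how far past a
# severity-based patch SLA they are. SLA days and finding records are
# illustrative assumptions for the example, not a policy recommendation.
import datetime as dt

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}   # example policy only

def overdue(findings, today):
    """Return (finding id, days overdue) pairs, most overdue first."""
    out = []
    for f in findings:
        deadline = f["found"] + dt.timedelta(days=SLA_DAYS[f["severity"]])
        if today > deadline:
            out.append((f["id"], (today - deadline).days))
    return sorted(out, key=lambda x: -x[1])

today = dt.date(2025, 6, 1)
findings = [
    {"id": "CVE-A", "severity": "critical", "found": dt.date(2025, 5, 1)},
    {"id": "CVE-B", "severity": "high", "found": dt.date(2025, 5, 20)},
]
```

Anything that lands on the overdue list and still has no vendor patch is precisely where the compensating-mitigation decision (e.g., restricting access) has to be made and documented.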
Summary
So, there you have it. As an enterprise, we need to expect more bugs (accompanied by lagging vendor patch availability) and more “skilled” bad guys than we have historically had to deal with. If our enterprise wasn’t big enough to attract serious hackers/APTs in the past, that may no longer apply. The game hasn’t changed; it’s just gotten a lot harder. However, with some effective threat modeling, good operational cybersecurity hygiene, increased maintenance, and effective visibility into user activities, the future probably isn’t as bleak as it first appeared. So, before you spring for the newest gadget, an expanded budget for AI-driven defensive tools, or the next silver bullet, take a hard look at your operational practices and decide where the risk exists for your own enterprise. Who knows, you might discover that with a little creativity and planning you already have most of the tools you need to move forward with confidence.