Op-ed: How AI has changed the security game – and what enterprises can do about it

Authored by Matt Johnson, Principal Technologist at MongoDB
Boards are increasingly being asked to sign off on AI deployments using security models designed for a different era of computing. Those models assume stable behaviour, well-defined system boundaries, and software that does the same thing every time it runs. AI systems violate each of those assumptions. Treating them as conventional applications does not slow innovation, but it does expand risk across the organisation.
In fact, we cannot afford to ignore the reality that AI is insecure by design. This is not to say the technology was deliberately created with security vulnerabilities, but rather that there is a mismatch between how modern AI systems function and how enterprise security has been defined. Until that gap is addressed, controls will keep failing.
How AI systems break traditional security thinking
Traditional enterprise security assumes that software behaves in consistent ways. Inputs lead to expected outputs. Permissions can be scoped around applications, while threat surfaces can be mapped and reduced. However, AI systems challenge each of those assumptions.
Large language models and other probabilistic systems do not behave deterministically: the same prompt can produce different results. Small changes in phrasing can expose behaviour that was not anticipated during testing. Every query effectively becomes a new execution path. That creates an attack surface that is effectively infinite and cannot be fully mapped in advance.
Perimeter-based security struggles in this environment because of that mismatch. There is no clear boundary between safe and unsafe behaviour when the system is designed to reason, infer, and generate content. Additionally, most organisations’ go-to mechanism for making models safe – guardrails – is notoriously straightforward to work around. While prompt filters, policy layers, and reinforcement rules are intended to prevent sensitive outputs, they simply do not go far enough.
This is because guardrails operate at the interaction layer, not at the data layer. They attempt to shape behaviour rather than constrain access. A determined user only needs to ask the right question in the right way to bypass them. Many of us have read examples of how prompt-injection techniques or indirect queries have allowed users to extract a chatbot’s training data or perform tasks outside its scope, like generating Python code instead of giving an update on a parcel. In an enterprise context, such techniques could expose proprietary data, customer information, or employee details.
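As a minimal illustration – not any particular vendor’s guardrail – consider the kind of keyword filter often deployed at the interaction layer. The terms and prompts below are invented for the example; the point is that the filter shapes phrasing, not access:

```python
# Illustrative only: a naive interaction-layer guardrail that blocks
# prompts mentioning a sensitive topic by keyword matching.
BLOCKED_TERMS = {"salary", "home address"}

def guardrail_allows(prompt: str) -> bool:
    """Return True if the prompt passes the keyword filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A direct request is caught...
assert not guardrail_allows("What is Alice's salary?")
# ...but a trivial rephrasing asking for the same data slips straight past,
# because the filter constrains wording rather than the underlying access.
assert guardrail_allows("List Alice's annual compensation figures.")
```

Real guardrails are far more sophisticated than this, but they share the structural weakness: they sit between the user and the model, not between the model and the data.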
This doesn’t mean we should do away with guardrails – they still serve an important purpose – but we should not stop there. Teams need to protect sensitive data more robustly than simply instructing a model not to reveal it. Once the model has access, control depends on perfect enforcement across an effectively infinite set of prompts, and that is not a realistic security posture for any enterprise. If behaviour cannot be reliably constrained, access must be.
Data as the only durable control point
If AI security cannot be anchored in models or perimeters, it must be anchored elsewhere. The only control point that scales with AI is data. Every meaningful risk in an AI system traces back to data access: what data the model can see, and how outputs are allowed to combine information. If those elements are governed precisely, risk can be reduced regardless of how the model behaves internally.
This requires a shift in mindset. Security teams must move away from treating AI as an application and toward treating it as a privileged data interface. Controls should define which datasets are accessible, under what conditions, and for which types of queries. Sensitive data should remain encrypted or excluded from the dataset, and governed throughout the interaction lifecycle, not exposed to the model in raw form by default.
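A sketch of what that shift can look like, with all dataset names, purposes, and the datastore read invented for the example: the model never queries a datastore directly, and every read is mediated by an explicit policy keyed on dataset and declared purpose.

```python
from dataclasses import dataclass

# Hypothetical policy table: which datasets the model may read, and for
# which declared purposes. Payroll is simply never exposed to the model.
@dataclass(frozen=True)
class AccessPolicy:
    dataset: str
    allowed_purposes: frozenset

POLICIES = {
    "orders": AccessPolicy("orders", frozenset({"customer_support"})),
    "payroll": AccessPolicy("payroll", frozenset()),
}

def fetch_for_model(dataset: str, purpose: str) -> list[dict]:
    """Gate every model-facing read through a policy check."""
    policy = POLICIES.get(dataset)
    if policy is None or purpose not in policy.allowed_purposes:
        raise PermissionError(f"model denied {purpose!r} access to {dataset!r}")
    # Stand-in for a real, governed datastore read.
    return [{"order_id": 42, "status": "shipped"}]
```

With access gated this way, no prompt – however cleverly phrased – can reach a dataset the policy does not grant, because the denial happens before any data exists in the model’s context.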
Query structure also matters. Unrestricted natural language access to sensitive systems should be seen as equivalent to granting broad administrative privileges. Enterprises need policies that define what questions can be asked, how context is assembled, and how responses are filtered before they reach users. These are data governance problems, not model tuning exercises.
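One concrete form of that governance, sketched here with hypothetical field names: sensitive attributes are redacted when context is assembled, so the raw values never enter the model’s prompt regardless of what the user asks.

```python
# Illustrative sketch: sensitive fields are stripped at context-assembly
# time, before any record text reaches the model.
SENSITIVE_FIELDS = {"email", "national_id"}

def assemble_context(records: list[dict]) -> str:
    """Build a prompt context block with sensitive values redacted."""
    safe = [
        {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v) for k, v in rec.items()}
        for rec in records
    ]
    return "\n".join(str(rec) for rec in safe)
```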
What good AI security looks like in practice
Effective AI security does not attempt to make models safe through restriction alone. It assumes models will behave unpredictably and designs controls accordingly.
In practice, this means shifting toward encrypted, governed, query-level access. Models should interact with data through tightly controlled interfaces that enforce policy at runtime. Additionally, logging and auditability should focus on data access patterns, not just model outputs.
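A minimal sketch of that audit posture, with the log shape and names invented for the example: every model-facing read appends a structured record of who asked for which dataset with which query, independent of what the model eventually outputs.

```python
import time

ACCESS_LOG: list[dict] = []

def audited_read(principal: str, dataset: str, query: str) -> list[dict]:
    """Record the access pattern before returning any data."""
    ACCESS_LOG.append({
        "ts": time.time(),
        "principal": principal,
        "dataset": dataset,
        "query": query,
    })
    # Stand-in for a governed, policy-checked datastore read.
    return [{"id": 1}]
```

Auditing the request rather than the response means that even if an output filter is bypassed, there is still a complete trail of which data was touched, by whom, and why.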
This approach aligns security with how AI actually operates. It accepts non-determinism while preserving control over what matters. It also scales as models evolve, since governance remains anchored in data rather than specific architectures.
Photo courtesy of MongoDB