
I see a growing amount of chatter about “securing AI,” but that phrase is so broad that it almost loses meaning. Securing what exactly?
Most of these conversations are really about large language models (LLMs). And even then, the security discussion is very different depending on whether you are talking about public LLMs, enterprise LLMs, privately hosted LLMs, or AI used in attack and defense scenarios.
If we do not categorize the problem first, we end up debating controls without context.
So let’s break it down. I will frame this from a Microsoft perspective because that is the ecosystem I know best, but the principles apply beyond any one vendor.
Public LLMs
Public LLMs are the easiest to access and the hardest to govern. Browser-based tools. Mobile apps. Fast, powerful, and always improving.
The first reaction many organizations have is to block them entirely. That sounds simple, but in practice it is rarely that clean.
Full blocking can feel heavy handed. It can impact employee satisfaction. It can slow innovation. It can even label leadership as anti-AI at a time when AI adoption is accelerating.
So if blocking is not realistic, what does control look like?
This is where Microsoft’s security stack starts to matter.
Microsoft Purview classifies and labels data. Highly sensitive files can be encrypted. If a file is encrypted and the recipient or service is not authorized, it cannot be opened. Labels travel with the file, allowing other services to inspect and react to them.
Defender for Cloud Apps (MDA) provides visibility. It can monitor public and private LLM usage from the browser on managed devices. Who is accessing which AI service? What files are being uploaded? What sensitivity labels are attached? That telemetry turns AI usage from a blind spot into something measurable.
Purview and Defender for Cloud Apps together enable more nuanced control than blanket blocking. You can allow general use but block uploads of highly sensitive labeled content.
Defender for Endpoint (MDE) provides enforcement at the device layer. It can block web domains across browsers, restrict network connections, and control removable media transfers.
Together, these tools allow organizations to reduce risk without completely blocking public LLM access.
Enterprise LLMs
Enterprise LLMs are private alternatives to public LLMs. They are provided with privacy assurances, support, and administrative visibility.
Ideally, these replace open access to public LLMs. Employees get a similar experience, but inside a controlled environment. A growing number of leading public LLMs now offer enterprise services that mirror the public services, in a private way.
From a Microsoft perspective, Microsoft 365 Copilot represents this enterprise LLM model.
Microsoft 365 Copilot embeds an LLM directly into Teams, SharePoint, OneDrive, Outlook, Word, and other productivity tools. It is grounded on data the employee already has permission to access. It does not expand access. It reflects existing access controls.
One area where this is especially valuable is meeting intelligence. Copilot can summarize recorded meetings, extract action items, and generate follow ups. That activity is logged and auditable.
Enterprise LLM services and Microsoft 365 Copilot complement each other. Employees want options. Enterprise services provide powerful models with higher privacy assurances. Copilot uniquely grounds itself on internal collaboration data for organizations invested in the Microsoft productivity ecosystem.
From a security perspective:
- Purview classifies and protects data
- Defender for Cloud Apps monitors usage
- Defender for Endpoint enforces device-level policy
The difference is emphasis. With enterprise LLMs, monitoring and governance typically matter more than blocking.
Enterprise AI improves visibility and contractual assurance. It does not eliminate the need for disciplined identity and data governance.
Privately Hosted LLMs
Privately hosted LLMs run inside your Azure subscription, other cloud environments, or on premises. They may power internal applications or customer-facing services.
Sometimes organizations pursue this model because no enterprise offering exists that meets their regulatory, geographic, or industry requirements. In other cases, they seek cost optimization or architectural control. Sort of a self-hosted enterprise LLM. There are even companies that will help you setup a private clone of certain popular public LLMs.
But that control comes with responsibility.
At this stage, AI security becomes cloud security, application security, and AI-specific safeguards working together. Identity, networking, storage, API management, monitoring, model access, and retrieval design all become part of the threat model.
Purview, Defender for Cloud Apps, and Defender for Endpoint still contribute. But infrastructure security becomes central.
Defender for Cloud helps secure Azure-hosted infrastructure and extends visibility into AWS and GCP environments. This includes Defender to Storage, Defender for APIs, Defender for Key Vault, Defender for Databases, Defender for Apps, and Defender for Containers. A new addition is Defender for AI which adds protections and visibility specific to AI workloads.
If portions of the environment are on premises, Defender for Identity protects domain infrastructure that AI systems may depend on.
In this model, you are responsible for API protection, key management, network isolation, rate limiting, and logging. A leaked key or exposed endpoint becomes your incident.
The attack surface grows because the control surface grows.
AI in Attack and Defense
AI is not only something we secure. It is something attackers use and defenders use.
Attackers can automate phishing campaigns. They can generate persuasive social engineering content. They can attempt prompt injection by embedding malicious instructions in documents or emails that an AI system may later process.
Defensively, tools like Microsoft Security Copilot help analysts summarize incidents and accelerate investigations.
Purview and Defender for Office also matter here. Sensitivity labeling can track suspicious content movement. Defender for Office can detect malicious email content. The risk is not just exploitation of systems. It is intentional manipulation of model responses.
What Are We Actually Securing?
Across public, enterprise, and privately hosted AI, the objectives remain consistent:
- Prevent oversharing of sensitive data
- Prevent unauthorized data exposure
- Reduce misinformation and reputational risk
- Detect and prevent data poisoning or misuse
- Prevent intentional manipulation of LLM responses
- Maintain visibility into AI usage
- Protect AI infrastructure from compromise
When someone says, “We need to secure AI,” the better response is, “Which AI, and in what context?”
Public AI is primarily about boundary control and data loss prevention.
Enterprise AI is about governance, identity, and responsible use.
Privately hosted AI is about secure architecture and operational discipline.
AI in attack and defense is about adapting the threat model itself.
Without that clarity, controls will always feel disconnected from the risk.