I want to take a few minutes to explore the perceived risk of public cloud endpoints, and why that risk is often misunderstood.
If something has a public endpoint, it is easy to assume it is exposed.
In reality, that is only part of the story.

The Door in the City
Imagine you have something to protect.
You place it in a room in a massive city. Millions of buildings. Millions of doors. Every door looks similar. None of them advertise what is inside.
Now you have to decide how to secure that room.
You could remove the door entirely. Brick it over and only allow access through connected rooms that you already control. This is what it looks like to disable public access. It works, but it makes everything around that room harder. Movement becomes more complex, and some tools and workflows no longer function the same way.
You could just make the door harder to reach. Put it behind other doors, inside controlled hallways. The door still exists, but it is no longer easy to find or access. This is your private endpoint model. It is often a good balance, but it introduces its own complexity. Development gets harder. Some services do not behave the same way. You now have to manage how people and systems even reach the door.
You can also, or instead, focus on the lock.
You issue a key. A plain key, no markings, nothing that tells you which door it belongs to. You try to protect it. Maybe you store it in a locker (Azure Key Vault), where access is controlled and logged. Or maybe you keep it in your pocket (or in a file or variable).
You can put a time limit on the key, but that creates friction. Keys expire. Things break. So there is always pressure to make them long-lived.
Now imagine someone finds that key on the street (or in a repo or file).
A random key in a city with millions of doors.
They do not know which door it belongs to. They do not know what is behind it. They know that if they start trying doors, they are likely being watched.
Most of the time, that key is not worth picking up.
The real problem is when they get the key and the door location.
Someone takes your key, your notes, and the address on the door. Now they know exactly where to go and how to get in.
That does not solve everything for our hypothetical attacker.
Now add cameras. Every door has them. Outside and inside. If someone tries a key, fails, succeeds, or moves around inside, it is recorded.
Now consider that most of these rooms do not contain anything valuable.
Cloud resources are easy to create. They are everywhere. Many are empty or low value. Even if someone gets in, they do not know what they will find. To locate something meaningful, they would have to try many doors, increasing the chance of detection.
That does not eliminate the reputational damage of a successful break-in, however. This is why all cloud resources need to be secured, even if they have no apparent value, and even if they are demo labs or abandoned projects.
What if we remove the key entirely?
Instead, you put a guard behind the door.
You knock. They check who you are. They verify your identity. They decide if you are allowed in.
There is no key to lose.
This is identity.
Also consider that permissions are tied to that key (or identity). Once inside, you cannot access everything in the room.
What Happens When Someone Knocks Programmatically
Most people picture access happening through the Azure portal.
That is only one path.
In many cases, the same access happens programmatically:
- The Azure CLI (az login) or Azure PowerShell (Connect-AzAccount)
- Visual Studio Code signed in with your account
- External applications using your identity
- Scripts, notebooks, and agents operating on your behalf
If those tools are using your identity, they are not doing something different. They are doing the same things you can do manually, just faster and at scale. The same access permissions and operating constraints apply.
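Whichever tool makes the call, what actually arrives at the endpoint is a bearer token carrying identity claims. A minimal sketch of that idea, using a hand-constructed illustrative token (the claim values and the token itself are made up, not a real Entra token, and a real consumer must also verify the signature):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims.
    Illustration only: real validation must also verify the signature."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url without padding; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hand-constructed token with made-up claims. The point: whichever tool
# acquired it, the claims identify the same user, so the same rules apply.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()).rstrip(b"=").decode()
claims = {"upn": "alice@contoso.com", "aud": "https://management.azure.com"}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"{header}.{payload}.signature"

print(decode_jwt_payload(fake_token)["upn"])  # the identity behind the call
```

The portal, the CLI, VS Code, and a script all end up presenting a token like this, which is why the question "who is calling" matters more than "which tool is calling."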
That shifts the conversation.
It is no longer just about whether an endpoint is public.
It is about who is calling it and under what identity.
Identity Is the Control Plane
When access is tied to a user identity in Microsoft Entra ID, it typically brings strong controls with it.
MFA can be enforced. Conditional Access policies can evaluate device, location, and risk. Privileged Identity Management (PIM) can require elevation for sensitive roles. Every action is attributable and logged.
Keys and secrets behave differently.
They do not trigger MFA. They do not evaluate device posture. They are simply accepted if valid. That makes them useful, but also easier to misuse if exposed. At a minimum, their use is logged.
This is why managed identity has become the preferred pattern for Azure-to-Azure communication (or between Microsoft services). No stored secrets. Tokens are issued dynamically. Everything is tied back to identity and fully logged.
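To make "no stored secrets" concrete: inside an Azure resource, a managed identity fetches its token from the Instance Metadata Service (IMDS). This sketch only builds the request (the call itself succeeds solely from inside an Azure resource, which is the point); the api-version shown is one published version:

```python
import urllib.parse
import urllib.request

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str) -> urllib.request.Request:
    """Build the request a managed identity uses to fetch a token from IMDS.
    No secret is stored anywhere: the platform answers only from inside
    the resource itself, and the token it returns is short-lived."""
    query = urllib.parse.urlencode(
        {"api-version": "2018-02-01", "resource": resource})
    req = urllib.request.Request(f"{IMDS_TOKEN_ENDPOINT}?{query}")
    req.add_header("Metadata", "true")  # required marker header
    return req

req = build_imds_token_request("https://management.azure.com/")
print(req.full_url)
# Only inside an Azure resource with a managed identity would this work:
# token = json.load(urllib.request.urlopen(req))["access_token"]
```

There is no key to leak in a repo or a pocket; an attacker would need to be running code inside your resource to obtain a token at all.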
There is also a newer development worth calling out.
Microsoft is introducing agent identities. These allow AI-driven agents to operate as first-class identities. More importantly, they can be governed with Conditional Access policies, bringing them closer to the same control model used for human users.
That closes a gap that has existed for a long time. Automation no longer has to live outside of policy enforcement.
Not All Services Behave the Same
At this point, I want to explain that not every service fits the same pattern.
Some are designed to be publicly reachable and secured through identity.
Some can be fully isolated with private networking.
Some support both models, depending on how you configure them.
Entra
Microsoft Entra ID is the front door for identity. It must be reachable. There is no practical way to hide it (or its APIs) behind a private endpoint and still have authentication work.
Security here is entirely identity-driven. Requests are authenticated, evaluated, and logged. If access is granted, it is because the identity was valid and authorized. That authorization is repeatedly reevaluated even after getting past the front door.
Azure Resource Manager and Resource Graph
Azure Resource Manager and Azure Resource Graph represent the management layer of Azure. You can’t realistically lock this endpoint down.
This is how resources are created, configured, and queried programmatically. Resource Graph, in particular, provides a way to explore and understand Azure resources at scale.
If someone can access this layer, they already have administrative visibility. The concern is not that the endpoint is public. The concern is that the identity (user or app ID) has too much access.
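To illustrate why the gate is the identity and not the URL: a Resource Graph query is just an authenticated POST against a well-known ARM path. This sketch only assembles the request body (the subscription ID is a placeholder, and the api-version shown is one published version, not necessarily the latest):

```python
import json

# The Resource Graph query endpoint lives under the public ARM endpoint.
ARG_URL = ("https://management.azure.com/providers/"
           "Microsoft.ResourceGraph/resources?api-version=2021-03-01")

def build_arg_query(kql: str, subscriptions: list[str]) -> str:
    """Serialize a Resource Graph query body. The caller still needs a valid
    Entra bearer token; without one, the public endpoint returns 401, and
    with one, results are scoped to what that identity can see."""
    return json.dumps({"subscriptions": subscriptions, "query": kql})

body = build_arg_query("Resources | summarize count() by type",
                       ["00000000-0000-0000-0000-000000000000"])
print(body)
```

Anyone on the internet can reach `ARG_URL`; only an authorized identity gets data back, and only for its own subscriptions.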
Sentinel and Log Analytics Workspace
Log Analytics workspaces are used to store Sentinel and Azure Monitor data. They provide APIs for querying and ingesting data and managing incidents and alerts.
You can use public endpoints and rely on identity and RBAC, which is how many environments operate.
You can also introduce private endpoints or disable public access. That is a valid design, but it introduces complexity. DNS, routing, ingestion paths, and tool compatibility all become part of the conversation.
Workspaces do have a shared key that allows write-only access (used for ingestion, not reading). Keys can be rotated, but they do not have a built-in expiration and must be managed manually.
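The write-only shared key is a good example of key-based access in practice. The legacy HTTP Data Collector API signs each ingestion request with an HMAC over a canonical string, roughly like this (workspace ID and key below are placeholders):

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def build_ingestion_auth(workspace_id: str, shared_key_b64: str,
                         content_length: int, rfc1123_date: str) -> str:
    """Compute the SharedKey Authorization header used by the legacy
    HTTP Data Collector API. Note what is absent: no MFA, no device
    posture, no Conditional Access. A valid signature is simply accepted."""
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{rfc1123_date}\n/api/logs")
    key = base64.b64decode(shared_key_b64)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    return f"SharedKey {workspace_id}:{signature}"

date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
auth = build_ingestion_auth("my-workspace-id",
                            base64.b64encode(b"demo-key").decode(), 128, date)
print(auth)
```

Anyone holding the key can produce a valid header, which is exactly why these keys warrant rotation and careful handling even though they are write-only.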
Sentinel Data Lake
The Sentinel data lake should be treated separately. It represents a different access model.
There is no broad public API endpoint today. Instead, access happens through structured experiences like notebooks, VS Code integrations, and tools designed for data exploration.
This is where Microsoft Sentinel MCP fits.
Sentinel MCP provides a structured way to interact with data in the data lake. It does not expose a new unauthenticated surface. It operates within existing identity and permission boundaries, with full logging of activity.
Security Copilot
Microsoft Security Copilot is best understood as an enterprise LLM experience for incident responders. It does not expose a general-purpose API endpoint.
It can access Defender data directly in supported scenarios and uses plugins to interact with additional systems like Sentinel.
When you use it, it operates under your identity. It is calling other systems on your behalf. Your permissions determine what it can see and do, and activity is still logged.
Defender XDR
Microsoft Defender XDR exposes APIs for alerts, advanced hunting, and device data.
These endpoints are publicly reachable, but fully identity-protected.
There is no private endpoint model. Access requires:
- Entra authentication
- Appropriate permissions
- RBAC alignment
Again, the control is not the endpoint. It is the identity accessing it.
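As a sketch of that point, an advanced hunting query is a plain HTTPS POST to a public URL; what decides the outcome is the bearer token. This only builds the request shape (the token is a placeholder, and the URL reflects the documented Microsoft 365 Defender advanced hunting API):

```python
import json

# Public, identity-protected endpoint; there is no private-endpoint variant.
HUNTING_URL = "https://api.security.microsoft.com/api/advancedhunting/run"

def build_hunting_request(kql: str, bearer_token: str):
    """Assemble URL, headers, and body for an advanced hunting query.
    Without a token carrying the right permissions, the call fails with
    401/403 regardless of where on the internet it originates."""
    headers = {"Authorization": f"Bearer {bearer_token}",
               "Content-Type": "application/json"}
    body = json.dumps({"Query": kql})
    return HUNTING_URL, headers, body

url, headers, body = build_hunting_request("DeviceInfo | take 5", "<token>")
print(url)
```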
Microsoft 365 (Exchange, SharePoint, etc.)
Microsoft 365 services expose APIs and endpoints for accessing data such as mail, files, and collaboration content.
These follow the same pattern:
- Public endpoints
- Identity-based access through Entra
- Full audit and activity logging
Because this data is often highly sensitive, access is heavily governed through:
- Conditional Access
- MFA
- Data access policies
Once again, the control is not the endpoint. It is the identity and permissions behind it.
Functions and Logic Apps
Azure Functions and Logic Apps allow you to create your own endpoints for building automation.
A function with an HTTP trigger or a Logic App with a request trigger effectively becomes an API endpoint. You decide how to expose and secure them.
You can leave them public and rely on identity or keys, or restrict them with private networking.
Best practice is to use managed identity to communicate with Microsoft services and use Key Vault or write-once variables for third-party API keys.
You have a lot of flexibility here to make your own choices. Services like Defender for Cloud can help draw attention to poor design choices.
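To show what "rely on keys" means for a self-managed endpoint, here is a toy comparison check (in a real Azure Function the platform validates the function key for you; this hypothetical helper just illustrates the mechanic and why a bare key is a weaker gate than identity):

```python
import hmac

def check_function_key(provided: str, expected: str) -> bool:
    """Toy key check for a self-managed endpoint. compare_digest avoids
    timing side channels. Note what a bare key lacks: no MFA, no device
    check, and no identity attached to the caller."""
    return hmac.compare_digest(provided.encode(), expected.encode())

print(check_function_key("abc123", "abc123"))  # True
print(check_function_key("guess", "abc123"))   # False
```

A key tells you a secret was presented; an identity tells you who presented it, which is why managed identity is the better default wherever it is supported.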
Azure AI Foundry
Azure AI Foundry creates endpoints for model inference and agent interactions.
These endpoints can be public with identity-based access or restricted through private networking. Depending on the design, there may be multiple endpoints tied to a project or deployment.
In many cases, the model itself is not learning from your data. It is a hosted model that you guide with prompts and optionally ground with internal content.
That shifts the focus. The endpoint matters, but so does the data you connect to it, along with identity and transparency through logging.
Azure Storage Accounts
Storage is often the clearest example because it holds real data.
Documents, images, exports, backups. Things that are immediately understandable and often valuable.
It supports multiple access patterns, including:
- Identity-based access
- Access keys
- SAS tokens
- Private endpoints
Because the data is tangible, this is where security decisions tend to be more conservative. You can disable public access or use private endpoints, and Defender for Cloud pays close attention to storage activity.
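SAS tokens illustrate the time-limited-key trade-off from the door analogy. A leaked SAS is usable by anyone until its signed expiry (`se`) passes, which this sketch inspects (the token string below is truncated and non-functional, for illustration only):

```python
from datetime import datetime, timezone
from urllib.parse import parse_qs

def sas_is_expired(sas_query: str, now: datetime) -> bool:
    """Check the 'se' (signed expiry) parameter of a storage SAS token.
    A leaked SAS works for anyone until this moment passes, which is why
    short lifetimes and identity-based access are preferred."""
    params = parse_qs(sas_query.lstrip("?"))
    expiry = datetime.fromisoformat(params["se"][0].replace("Z", "+00:00"))
    return now >= expiry

# Illustrative (truncated, non-functional) SAS token string.
sas = "?sv=2022-11-02&sp=r&se=2024-01-01T00:00:00Z&sig=FAKE"
print(sas_is_expired(sas, datetime(2025, 6, 1, tzinfo=timezone.utc)))  # True
```

Short expiries limit the blast radius of a leak, at the cost of the friction the door analogy described: keys expire, and things break.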
Key Vault
Key Vault exists so we can safely store and retrieve secrets, often without exposing those secrets directly.
It has an endpoint because something has to request the secret, typically through an API call.
You can leave it public and rely on identity, or restrict it with private endpoints and disable public access.
If public access is disabled, the calling service needs a valid network path to the vault. This is where private endpoint design becomes more complex.
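Key Vault's data plane is itself a REST API: reading a secret is an authenticated GET against the vault's own hostname. This sketch only builds the URL (vault and secret names are placeholders, and the api-version shown is one published version):

```python
def secret_url(vault_name: str, secret_name: str,
               api_version: str = "7.4") -> str:
    """Build the data-plane URL for reading a Key Vault secret. The caller
    must present an Entra bearer token authorized via access policy or RBAC;
    if public access is disabled, it also needs a network path to the vault."""
    return (f"https://{vault_name}.vault.azure.net/"
            f"secrets/{secret_name}?api-version={api_version}")

print(secret_url("contoso-kv", "db-password"))
```

This is why "just put it in Key Vault" still leaves design work: something has to be able to reach and authenticate to this URL.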
Virtual Machines
VMs are a different category.
They can be managed and discovered through Azure Resource Graph, but the real concern is whether the machine itself is reachable.
A public IP on a VM exposes a full operating system to the internet. That introduces a different type of risk, one that aligns more closely with EDR and AV solutions (Defender for Endpoint).
A Note on Encryption
Encryption is always present.
Most Azure services use:
- Encryption at rest
- Encryption in transit
However, encryption is not the primary control in this discussion.
If a user or application is properly authenticated and authorized, encryption does not prevent them from accessing the data.
Encryption protects against unauthorized access at the storage and transport level, not misuse of valid credentials.
This is why identity, access control, and monitoring remain the primary focus.
Stepping Back
Public endpoints are an unavoidable part of working with Microsoft cloud services, and cloud platforms in general.
Some endpoints must remain public by design. Other endpoints can be made private or even disabled. Even when those options exist, many endpoints remain public, sometimes intentionally for simplicity and flexibility, and sometimes as an oversight.
These endpoints are not inherently risky when identity and access controls are properly enforced.
They are protected by layers:
- Obscurity
- Identity
- RBAC
- Auditing and logging
- Alerting and detection
And in many cases:
- Conditional Access
- MFA
- PIM
There is also a level of anonymity and scale that is easy to overlook. These endpoints exist in a vast environment with no obvious indicators of ownership or value.
Without identity, they are effectively unusable.
Every action is recorded.
Microsoft provides solutions to alert on unusual activity and recommend hardening of these endpoints where it makes sense.
The public nature of these endpoints enables:
- Integration
- Development
- Automation
The real risk is not that the endpoint exists.
The real risk is how access to those API endpoints is controlled.
That takes us back to identity, key management, role-based access, least privilege, monitoring, and good security posture management.
| Service | Public API | Private Endpoint | Disable Public | Primary Auth | Logging / Audit |
|---|---|---|---|---|---|
| Entra / Graph | Yes (required) | No | No | User, App Reg, Agent ID | Entra sign-in logs, audit logs |
| Azure Resource Manager (ARM) | Yes (required) | Limited | No | User, SP, MI | Azure Activity Log |
| Azure Resource Graph (ARG) | Yes | Limited | No | User, SP, MI | Azure Activity Log |
| Log Analytics Workspace | Yes | Yes (AMPLS) | Yes | User, SP, MI, Key (write only) | Azure Monitor logs, Activity Log |
| Microsoft Sentinel | Yes (via workspace/ARM) | Indirect | Partial | User, SP, MI | Sentinel tables, Activity Log |
| Sentinel Data Lake (SDL) | No broad API | N/A | N/A | User, MI | Sentinel / platform logs |
| Sentinel MCP | Yes (tool interface) | No | No | User, Agent ID | Sentinel logs, audit trails |
| Security Copilot | No general API | No | No | User | Copilot activity logs, connected service logs |
| Defender XDR | Yes | No | No | User, SP | Defender audit logs, Advanced Hunting |
| Microsoft 365 (Exchange, SharePoint) | Yes | No | No | User, SP | Unified Audit Log |
| Azure Functions | Yes (HTTP trigger) | Yes | Yes | User, SP, MI, Keys | App Insights, Activity Log |
| Logic Apps | Yes (HTTP trigger) | Yes | Yes | User, SP, MI | Run history, diagnostics logs |
| Azure AI Foundry | Yes | Yes | Yes | User, SP, MI, Key | Activity Log, diagnostics |
| Storage Accounts | Yes | Yes | Yes | User, SP, MI, Keys, SAS | Storage logs, Activity Log |
| Key Vault | Yes | Yes | Yes | User, SP, MI | Key Vault diagnostics, Activity Log |
| Azure Data Explorer (ADX) | Yes | Yes | Yes | User, SP, MI | ADX diagnostics, Activity Log |
| Container Registry (ACR) | Yes | Yes | Yes | User, SP, MI | ACR logs, Activity Log |
| Virtual Machines | N/A | N/A | Yes (public IP) | User, SSH, local creds | Activity Log, NSG flow logs, OS logs |