Microsoft Azure OpenAI · Regulated workloads · Informational only

Plain-language explainer · Informational · Updated March 2026

A plain-language perspective on Azure OpenAI governance and security

This explainer summarizes how Azure OpenAI fits within Microsoft’s enterprise governance model and how customers typically think about security, data handling, and workload controls when evaluating regulated use cases.

Purpose and scope

What this page is and what it is not

This page is a plain-language explanation of how Microsoft governs its relationship with OpenAI and how Azure OpenAI fits within Microsoft’s enterprise security and governance model.

This content is informational only and does not constitute compliance, regulatory, or legal guidance. Authoritative documentation remains Microsoft Learn and the Microsoft Trust Portal.

How Microsoft frames core data handling commitments

Service commitments frequently referenced by customers

  • Never used to train any foundation model
  • Never shared with OpenAI infrastructure or services
  • Never shared with other customers
  • Never used to improve Microsoft products without opt-in
  • Model is stateless and retains nothing between API calls
  • Fine-tuned models are exclusive to the customer who created them

Customers still need to assess whether these commitments, together with their own architectural and governance controls, are sufficient for the workload they plan to run. Prompt and output retention behavior should be confirmed against current Microsoft documentation before regulated data is introduced.

Shared responsibility model

Where Microsoft’s role ends and the customer’s role begins

Microsoft provides the Azure OpenAI service and its underlying security capabilities. Customers remain responsible for workload architecture, access controls, data handling, and regulatory compliance.

That means service-level safeguards matter, but they do not remove the need for customer decisions around network design, identity, logging, review processes, and acceptable use.

Service commitments and assurance references

Microsoft commitments often referenced in regulated workloads

HIPAA BAA · SOC 2 Type II · ISO 27001 · ISO/IEC 42001:2023 · FedRAMP High

These references can be relevant to customers evaluating regulated workloads, but they are not a substitute for customer compliance review or workload-specific legal analysis. Scope, eligibility, and implementation details should always be confirmed through current Microsoft documentation and customer governance teams.

Azure platform-level controls inherited by Azure OpenAI

HITRUST-aligned · StateRAMP-aligned · ISO 27799-aligned

These are best understood as platform-level inherited controls rather than Azure OpenAI-specific certifications. They help frame the surrounding Azure environment, but customers still need to evaluate applicability for their own use case and evidence requirements.

Examples of security capabilities customers can configure

Private endpoints with Azure Private Link

When configured, traffic can stay on Microsoft’s private backbone and public network access can be disabled at the resource level, helping customers align the service to stricter network boundary expectations.
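As a rough sketch, that configuration might look like the following Azure CLI fragment. All resource names, the resource group, and network details are placeholders, and flags should be confirmed against current `az` documentation before use.

```shell
# Sketch only: placeholder names; confirm against current az CLI docs.
RG=my-rg
ACCOUNT=my-aoai
VNET=my-vnet
SUBNET=private-endpoints

# Look up the Azure OpenAI (Cognitive Services) account resource ID.
ACCOUNT_ID=$(az cognitiveservices account show \
  --name "$ACCOUNT" --resource-group "$RG" --query id -o tsv)

# Create a private endpoint targeting the account
# (Cognitive Services accounts use the "account" group ID).
az network private-endpoint create \
  --name "${ACCOUNT}-pe" \
  --resource-group "$RG" \
  --vnet-name "$VNET" \
  --subnet "$SUBNET" \
  --private-connection-resource-id "$ACCOUNT_ID" \
  --group-id account \
  --connection-name "${ACCOUNT}-pe-conn"

# Disable public network access at the resource level.
az resource update --ids "$ACCOUNT_ID" \
  --set properties.publicNetworkAccess=Disabled
```

DNS integration (a Private DNS zone for the account's endpoint) is also typically required so that clients inside the virtual network resolve to the private IP; see Microsoft Learn for the current zone names.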

Encryption at rest and in transit

AES-256 encryption at rest is enabled by default, and customer-managed keys are available through Azure Key Vault. TLS 1.2 or later protects data in transit as part of a broader customer-controlled architecture.

Data residency options

Regional, Data Zone, and Global deployment patterns are available, giving customers options when they need to align model access patterns with geography and residency expectations.

Identity and access control

Microsoft Entra ID, RBAC, managed identities, and Conditional Access let customers design stronger least-privilege access models around the service.

Operational monitoring and oversight

Request metadata logging

Customer-enabled diagnostics can capture request metadata such as timestamp, model, token usage, and status without storing prompt or completion content in platform logs.
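The same metadata-only principle can be applied in application-side logging. The sketch below is a hypothetical helper (not an Azure API) that records timestamp, model, token usage, and status from a chat-completions-style response object while deliberately omitting prompt and completion text.

```python
import time

def log_request_metadata(response, log_sink):
    """Record request metadata only -- never prompt or completion text.

    Hypothetical helper for illustration. `response` is assumed to be a
    chat-completions-style object with `model` and `usage` attributes;
    `log_sink` is any list-like store (or a real logging pipeline).
    """
    log_sink.append({
        "timestamp": time.time(),
        "model": getattr(response, "model", None),
        "prompt_tokens": getattr(response.usage, "prompt_tokens", None),
        "completion_tokens": getattr(response.usage, "completion_tokens", None),
        "status": "succeeded",
        # Deliberately absent: prompt text, completion text.
    })
```

Keeping content out of the log record at the point of capture, rather than redacting later, is the simpler control to evidence during a compliance review.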

Example governance framing for GenAI use cases

High risk - clinical-facing

Diagnostic support and treatment recommendation scenarios require rigorous validation, human oversight, and potentially formal regulatory review.

Medium risk - clinician-assisted

Ambient documentation and prior authorization drafting should require clinician review before any action is taken.

Lower risk - administrative

Patient routing, revenue cycle support, and internal Q&A typically fit standard QA and monitoring controls.

This is a practical framing device, not an official Microsoft classification. Customers should adapt it to their own governance, compliance, and review processes.
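One way to make such a framing operational is a simple tier map that review boards can version-control. The sketch below mirrors the three tiers above; tier names, examples, and controls are illustrative, not an official Microsoft classification.

```python
# Illustrative only: mirrors the risk framing above,
# not an official Microsoft classification.
RISK_TIERS = {
    "high": {
        "label": "clinical-facing",
        "examples": ["diagnostic support", "treatment recommendation"],
        "controls": ["rigorous validation", "human oversight",
                     "possible formal regulatory review"],
    },
    "medium": {
        "label": "clinician-assisted",
        "examples": ["ambient documentation", "prior authorization drafting"],
        "controls": ["clinician review before any action"],
    },
    "lower": {
        "label": "administrative",
        "examples": ["patient routing", "revenue cycle support", "internal Q&A"],
        "controls": ["standard QA", "monitoring"],
    },
}

def required_controls(use_case: str) -> list[str]:
    """Return the controls for the tier whose examples include `use_case`."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["controls"]
    raise KeyError(f"unclassified use case: {use_case!r}")
```

Unclassified use cases raise an error by design: forcing an explicit tiering decision is usually preferable to silently defaulting a new GenAI scenario to the lowest tier.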

Microsoft’s published responsible AI themes

Fairness

AI systems should treat people equitably, supported by evaluation workflows such as the Responsible AI dashboard in Azure ML.

Reliability and safety

Systems must perform predictably and safely, reinforced by content filters and red-team practices.

Privacy and security

Supported by the no-training commitment, private networking, encryption, and (where Microsoft approval applies) zero-retention options.

Transparency

Microsoft publishes transparency documentation and model cards for Azure OpenAI services and models.

Inclusiveness

Accessibility-minded design and multilingual support remain important when productizing AI solutions.

Accountability

Governance is backed by Microsoft’s responsible AI program structures and AI management system controls.

Authoritative sources

Azure OpenAI documentation

Use Microsoft Learn as the authoritative source for product capabilities, data handling, security controls, and service behavior.

Azure AI security guidance

Use Microsoft Learn security and governance guidance to validate how Azure AI services fit into broader enterprise security programs.

Microsoft Trust Portal

Use the Trust Portal for current audit reports, service commitments, and certification evidence made available by Microsoft.