
Is ChatGPT HIPAA Compliant?

March 26, 2025

Healthcare and mental health professionals are increasingly using AI tools like ChatGPT for tasks such as note-taking, report writing, and patient communication. However, using ChatGPT in a clinical setting can pose serious HIPAA compliance risks and expose you to sanctions, fines, and lawsuits. This article breaks down why standard ChatGPT fails HIPAA requirements, explores potentially compliant options, and helps professionals understand how to access and use ChatGPT AI models with patient data in a secure and compliant manner.

HIPAA sets the federal standard for safeguarding protected health information (PHI) in the United States. As AI tools become more accessible, it's crucial for healthcare providers, clinics, and administrators to understand how these technologies interact with HIPAA regulations. Even when PHI is never intended to go into ChatGPT, accidental submission of PHI can still be considered a HIPAA breach.

Standard ChatGPT (Free/Plus/Pro/Team): Understanding Compliance Risks

OpenAI offers several readily accessible versions of ChatGPT (Free, Plus, Pro, and Team). While these tiers are powerful, using them with PHI poses significant HIPAA compliance risks:

  • Default Data Usage: For ChatGPT Free, Plus, and Pro, OpenAI's default policy allows submitted content to be used for training its AI models. Users can opt out, but this default practice conflicts with HIPAA's principle of using PHI only for permitted purposes (such as treatment or the specific service requested). Using patient data to train a general AI model without proper patient authorization is almost never a HIPAA-permissible use.
  • Lack of a BAA: This is the most critical issue. OpenAI explicitly states that it does not offer a Business Associate Agreement (BAA) for its standard ChatGPT services, including Free, Plus, Pro, and even the Team plan. While Team may carry SOC 2 attestation, that does not fulfill the BAA requirement under HIPAA.
  • The Inherent HIPAA Violation: Inputting any PHI into these standard ChatGPT versions constitutes a HIPAA violation primarily because there is no BAA in place between the healthcare provider (Covered Entity) and OpenAI (acting as a Business Associate). Without a BAA, OpenAI has not contractually agreed to HIPAA's specific rules for protecting PHI.
  • Data Retention: Even with chat history disabled or training opted out of, OpenAI may retain conversation data for up to 30 days for monitoring purposes. Storing potential PHI without a BAA, even temporarily, raises compliance issues.
  • Risk of Accidental Exposure: Staff might inadvertently paste PHI into these tools due to lack of training or simple error, leading to unauthorized disclosure.

Given these factors, standard ChatGPT versions are unsuitable for any tasks involving identifiable patient information.

Seeking Compliance: OpenAI's API and Enterprise/Edu Solutions

OpenAI's API platform and its premium ChatGPT Enterprise/Edu offerings provide additional compliance and security features. These present potential pathways toward HIPAA compliance, but with significant caveats:

  • BAA Availability (Conditional): OpenAI is willing to consider signing BAAs for its API, ChatGPT Enterprise, and ChatGPT Edu services. However, this is not automatic: organizations typically need to contact OpenAI sales or a dedicated email address and undergo a review process, and approval is not guaranteed. Eligibility often requires specific account types or minimum commitments that, as of this writing, are limited to organizations with over $100,000 in annual spend.
  • API BAA Limitations (Zero Data Retention - ZDR): A crucial point for the API is that OpenAI's BAA only covers API endpoints eligible for Zero Data Retention (ZDR), meaning OpenAI does not store request/response data after processing. Many advanced API features, including Assistants, Threads, the Files API, image generation, and image inputs to chat, are not ZDR-eligible and thus fall outside the scope of the BAA for PHI use. This significantly limits the API's functionality for complex healthcare tasks involving PHI (see the sketch after this list).
  • Enterprise/Edu Features: These tiers offer enhanced security (SOC 2 Type 2, encryption, SSO), administrative controls, and data submitted is not used for training by default. These are valuable but still require a signed BAA for HIPAA compliance when handling PHI.
  • The Cost Factor: Achieving HIPAA compliance via OpenAI involves substantial cost. API costs are usage-based and can add up quickly. ChatGPT Enterprise reportedly requires minimums like 150 users at ~$60/user/month. This price point makes HIPAA-eligible ChatGPT inaccessible for most small-to-medium practices, clinics, and individual professionals. Additionally, keeping these services in a secure and compliant configuration demands ongoing technical and administrative effort.
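
To make the ZDR constraint concrete, here is a minimal sketch of what a BAA-scoped API call might look like: a plain chat completion, with the non-ZDR features deliberately avoided. It uses OpenAI's official Python library and assumes a signed BAA and a ZDR arrangement are already in place for this endpoint on your account; ZDR is a contractual, account-level arrangement, not something a request parameter can switch on.

```python
# Minimal sketch of a chat completion kept within a (hypothetical)
# BAA/ZDR-covered scope. Assumes a signed BAA and Zero Data Retention
# have already been negotiated with OpenAI for this endpoint --
# ZDR is contractual, not an API flag you can set per request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

visit_note = "Example visit note text; in practice this may contain PHI."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; confirm BAA coverage
    messages=[
        {"role": "system", "content": "Summarize this clinical note."},
        {"role": "user", "content": visit_note},
    ],
)
print(response.choices[0].message.content)

# Deliberately NOT used here, since these are not ZDR-eligible and
# would fall outside the BAA for PHI:
#   client.beta.assistants / threads   (Assistants API)
#   client.files.create(...)           (Files API)
#   client.images.generate(...)        (image generation)
#   image inputs inside chat messages
```
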
Practical Hurdles and Persistent Compliance Gaps with ChatGPT

Even when using the API or Enterprise tiers with a BAA, significant challenges remain:

  • Complexity and Configuration: Correctly configuring API calls (using only ZDR endpoints for PHI), managing security, and understanding the BAA's limitations requires technical expertise. The responsibility for compliant implementation lies heavily with the user organization.
  • Feature Limitations: The exclusion of many advanced API features from ZDR eligibility restricts sophisticated healthcare workflows involving PHI (e.g., analyzing uploaded patient documents via the Files API or interpreting medical images).
  • User Error Risk: Employees can still cause violations by accidentally using non-compliant tools or endpoints, misusing features like web search or research (which could leak PHI in outbound search queries if enabled), or inadequately de-identifying data (a naive redaction sketch follows this list).
  • Accuracy Concerns: While not a direct HIPAA violation, the potential for AI "hallucinations" (generating incorrect information) poses clinical risks if outputs aren't rigorously reviewed by qualified professionals. Services like ChatGPT do work to reduce hallucinations, but those efforts are general-purpose and not specialized for healthcare content.
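
On the de-identification point: one mitigation is a redaction pass before any text leaves your environment. The sketch below is a deliberately naive, regex-based illustration of the idea for a few obvious identifier patterns (the patterns and labels here are illustrative, not from any standard). Real HIPAA Safe Harbor de-identification covers 18 identifier categories, including names and dates, and generally requires purpose-built tooling plus human review; do not rely on a filter this simple for PHI.

```python
import re

# Naive illustration only: masks a few obvious identifier patterns
# before text is sent to an external AI tool. This is nowhere near
# Safe Harbor de-identification (18 identifier categories) -- names,
# dates, and locations, for example, pass straight through.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-. ]?\d{3}[-. ]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN":   re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Pt called from (214) 555-0100 re: MRN 48213, SSN 123-45-6789."
print(redact(note))
# -> Pt called from [PHONE REDACTED] re: [MRN REDACTED], SSN [SSN REDACTED].
```
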
Dedicated Healthcare ChatGPT Solutions

The complexities, costs, and limitations of adopting a general-purpose tool like ChatGPT for healthcare highlight the need for specialized solutions designed with security and compliance at their core.

BastionGPT, which utilizes OpenAI-licensed models, is the leading solution for healthcare professionals. Developed by healthcare security and compliance experts, BastionGPT's framework is inherently designed for HIPAA compliance. Key differences include:

  • Compliance by Design: Security and compliance (HIPAA, PIPEDA, APP) are foundational, not add-ons. It assumes all input might be regulated data.
  • Automatic BAA: A HIPAA BAA is included automatically for all subscription plans, removing a significant barrier for healthcare users.
  • Guaranteed Data Confidentiality: BastionGPT contractually ensures user prompts, documents, and AI outputs are never shared or sold to underlying AI providers (like OpenAI) for training. Data remains confidential within our secure platform.
  • Secure Infrastructure: All features use only HIPAA-secure hosting with robust encryption, access controls, and regular security testing.
  • Risk Mitigation by Design: Features known to pose PHI exposure risks, like integrated web search, are intentionally omitted because secure, HIPAA-compliant options from search providers are currently unavailable.

This "compliance-by-design" philosophy aims to minimize the operational burden and inherent risks associated with retrofitting compliance onto general-purpose tools.

Balancing Innovation with Responsibility

AI tools like ChatGPT offer powerful possibilities for streamlining healthcare workflows. However, this potential must be balanced against the non-negotiable requirement to protect patient privacy under HIPAA.

  • Standard ChatGPT versions (Free, Plus, Pro, Team) are not HIPAA compliant and should not be used with PHI, due to both the lack of a BAA and their default data usage policies.
  • OpenAI's API and Enterprise/Edu solutions could potentially be used compliantly, but require obtaining a conditional BAA, involve significant cost (often 6+ figures annually for Enterprise), face functional limitations, and require ongoing configuration and oversight by the healthcare organization.
  • The ultimate responsibility for compliant use rests with the healthcare provider, including robust training, policies, and monitoring.


Evaluating solutions like BastionGPT, which provide essentially the same cutting-edge AI models as ChatGPT but are explicitly architected for healthcare security and compliance from the ground up, offers a more practical and often more accessible pathway. By prioritizing purpose-built, compliant tools, healthcare organizations can adopt the power of AI responsibly, ensuring that innovation enhances care without compromising patient trust or regulatory obligations.

BastionGPT provides the technical expertise to support the use of generative AI large language models (LLMs), like those that power ChatGPT, while staying in compliance with HIPAA. We do the hard work for you so you can experience the benefits of a ChatGPT-like tool in the healthcare space.

Most organizations are able to sign up and begin using BastionGPT within 10 minutes, with no setup costs, a 7-day trial, and no fixed commitments. Begin your AI journey with BastionGPT today by starting your trial here: https://bastiongpt.com/plus.

If you have more questions or would like to connect – you can reach out at: 

  • Email: support@bastiongpt.com
  • Phone: (214) 444-8445
  • Schedule a Chat: Book a Meeting



Disclaimer: This article provides general information about HIPAA compliance and AI tools based on publicly available information as of April 21, 2025. It does not constitute legal advice. Healthcare organizations should consult with qualified legal counsel and compliance experts to ensure their specific use of any technology meets HIPAA requirements and other applicable regulations. AI provider policies and features are subject to change.