Compliance

HIPAA Compliant AI Tools: Protecting Healthcare Data

Healthcare data demands the highest protection. Find AI tools that meet HIPAA's strict requirements.

TrustGrade Team · 7 min read

Healthcare organizations are adopting AI tools at a rapid pace -- from AI-powered medical transcription and clinical documentation to patient communication platforms and research analysis tools. The potential benefits are enormous: reduced administrative burden, faster diagnoses, and more efficient care delivery. But every one of these use cases involves Protected Health Information, and HIPAA sets a non-negotiable standard for how that data must be handled.

The Health Insurance Portability and Accountability Act is not a suggestion or a best practice. It is federal law, and violations carry penalties ranging from tens of thousands of dollars per incident to criminal prosecution. When a healthcare organization connects an AI tool to patient data without ensuring HIPAA compliance, it is not just accepting risk -- it is potentially breaking the law.

This guide provides a thorough explanation of what HIPAA requires from AI tools, how to evaluate vendor claims of compliance, and which specific safeguards you should verify before sharing any healthcare data. Whether you are a hospital IT administrator, a clinic manager, or a healthcare startup building with AI, this is the compliance foundation you need.

What Protected Health Information (PHI) Includes

Before evaluating AI tools for HIPAA compliance, you need a clear understanding of what data HIPAA protects. Protected Health Information encompasses any individually identifiable health information created, received, maintained, or transmitted by a covered entity or business associate.

PHI is broader than most people assume. It includes the obvious -- medical records, lab results, diagnoses, treatment plans, and prescription information. But it also includes any data that can identify a patient in connection with their health information. HIPAA defines 18 specific identifiers:

  • Names, addresses, dates (birth, admission, discharge, death)
  • Phone numbers, fax numbers, email addresses
  • Social Security numbers, medical record numbers
  • Health plan beneficiary numbers, account numbers
  • Certificate and license numbers
  • Vehicle identifiers, device identifiers and serial numbers
  • Web URLs, IP addresses
  • Biometric identifiers (fingerprints, voiceprints)
  • Full-face photographs and comparable images
  • Any other unique identifying number or code

When any of these identifiers appear alongside health information, the data is PHI and HIPAA applies. This means that if a healthcare worker pastes a patient note containing a name and diagnosis into an AI tool, that tool is now handling PHI -- whether the vendor realizes it or not.
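To make the risk concrete, here is a minimal sketch of a pre-submission screen that flags a few of the 18 identifier categories before text reaches an AI tool. The patterns, category names, and sample note are illustrative assumptions, not a complete de-identification solution -- real pipelines need far broader coverage (names, dates, device IDs) and expert validation.

```python
import re

# Hypothetical patterns for a handful of HIPAA's 18 identifiers.
# A production scrubber needs many more patterns plus human review.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def flag_phi(text: str) -> list[str]:
    """Return the identifier categories detected in free text."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

note = "Patient J. Doe, MRN: 00482913, called from 555-867-5309 about lab results."
print(flag_phi(note))  # → ['phone', 'mrn']
```

A screen like this can block a paste into a non-compliant tool, but detecting zero patterns never proves the text is free of PHI -- it only catches the obvious cases.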

Electronic PHI (ePHI), which is PHI in electronic form, is subject to additional technical requirements under the HIPAA Security Rule. Since AI tools process data electronically by definition, every AI use case involving PHI triggers the Security Rule's requirements.

The Three Rules of HIPAA

HIPAA compliance for AI tools involves three interconnected regulatory frameworks. Each imposes specific obligations that a vendor must meet.

The Privacy Rule

The HIPAA Privacy Rule establishes standards for the use and disclosure of PHI. It defines who can access health information, under what circumstances, and with what limitations. For AI tool vendors, the Privacy Rule requires that they only use PHI for the purposes specified in their agreement with the covered entity. They cannot repurpose patient data for marketing, analytics, model training, or any other secondary use without explicit authorization.

The Privacy Rule also enforces the "minimum necessary" standard: only the minimum amount of PHI needed to accomplish the intended purpose should be used or disclosed. An AI tool that ingests entire patient records when it only needs a diagnostic code may violate this standard even if its security is otherwise sound.

The Security Rule

The Security Rule specifies the technical, physical, and administrative safeguards required to protect ePHI. This is where the rubber meets the road for AI tool evaluation. The rule requires three categories of safeguards:

Administrative safeguards include risk analysis procedures, workforce training, security management processes, contingency plans, and designated security personnel. The AI vendor must have a formal security program with documented policies and assigned responsibilities.

Physical safeguards cover facility access controls, workstation security, and device and media controls. For cloud-based AI tools, this primarily applies to the data centers where ePHI is processed and stored. The vendor's hosting provider must meet these requirements as well.

Technical safeguards include access controls (unique user identification, emergency access, automatic logoff, encryption), audit controls (logging of all access to ePHI), integrity controls (mechanisms to authenticate ePHI and prevent unauthorized alteration), and transmission security (encryption of ePHI in transit). These technical requirements are particularly relevant for AI tools because they define the minimum security infrastructure the vendor must maintain.

The Breach Notification Rule

The Breach Notification Rule requires covered entities and business associates to notify affected individuals, the Department of Health and Human Services, and in some cases the media, following a breach of unsecured PHI. For AI tool vendors acting as business associates, this means they must have breach detection capabilities, must notify the covered entity without unreasonable delay (and no later than 60 days after discovery), and must provide detailed information about what data was compromised.

An AI tool vendor that cannot detect when PHI has been improperly accessed or disclosed is not capable of meeting the Breach Notification Rule, regardless of their other security measures. Look for vendors with comprehensive audit logging and real-time monitoring capabilities.

Trust Grade Distribution — Live Data

Across 822 assessed AI tools:

  • A (Excellent): 22 tools (3%)
  • B (Good): 164 tools (20%)
  • C (Fair): 316 tools (38%)
  • D (Poor): 143 tools (17%)
  • F (Fail): 177 tools (22%)

Business Associate Agreements: The Non-Negotiable Requirement

If there is one thing you take away from this guide, let it be this: no AI tool should touch PHI without a signed Business Associate Agreement (BAA). This is not a recommendation -- it is a legal requirement under HIPAA.

A BAA is a contract between a covered entity (or another business associate) and a business associate that establishes the permitted uses and disclosures of PHI, requires the business associate to implement appropriate safeguards, and creates accountability for compliance. Under HIPAA, an AI tool vendor that receives, creates, maintains, or transmits PHI on behalf of a covered entity is a business associate by definition.

A compliant BAA must include:

  • Specific descriptions of the permitted uses and disclosures of PHI
  • A commitment not to use or disclose PHI beyond what the agreement or law permits
  • Requirements to implement appropriate safeguards (aligned with the Security Rule)
  • Obligations to report breaches and security incidents
  • Requirements to ensure that sub-contractors who access PHI agree to the same restrictions
  • Obligations to make PHI available to individuals exercising their access rights
  • Requirements to return or destroy PHI at the end of the relationship
  • Agreement to make internal records available to HHS for compliance verification

If an AI tool vendor will not sign a BAA, you cannot use that tool for any PHI-related workflow. It does not matter how impressive their technology is or how strong their security appears -- without a BAA, the use is non-compliant. Many consumer-grade AI tools (including free tiers of popular assistants) explicitly state in their terms of service that they are not HIPAA-compliant and will not sign BAAs. Using these tools for healthcare data creates immediate regulatory exposure.

Technical Safeguards Required for AI Tools

Beyond the BAA, evaluate the specific technical safeguards an AI tool has in place. These are the controls that actually protect PHI during processing.

Encryption

ePHI must be encrypted both in transit and at rest. For AI tools, "in transit" means the data moving from your system to the vendor's servers (TLS 1.2 or higher is the current standard), and "at rest" means the data as stored on the vendor's infrastructure (AES-256 is the standard). Critically, encryption must extend to backups, logs, and any cached or temporary copies of the data. An AI tool that encrypts the primary data store but leaves conversation logs unencrypted has a significant gap.
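As a sketch of the transit-encryption floor described above, Python's standard `ssl` module can build a client context that refuses anything below TLS 1.2. This shows the configuration only; which endpoints you connect to and how the vendor terminates TLS are assumptions outside this snippet.

```python
import ssl

# Build a client-side TLS context that enforces TLS 1.2 as the minimum,
# matching the transit-encryption standard discussed above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation stays on by default -- disabling it would
# undermine transmission security even with strong ciphers.
assert context.verify_mode == ssl.CERT_REQUIRED
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Pinning the minimum version in code (rather than relying on defaults) makes the requirement auditable, which matters when demonstrating Security Rule compliance.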

Access Controls

The vendor must implement unique user identification (every person accessing ePHI must have a unique identifier), role-based access controls (access limited to what each role requires), automatic session termination after inactivity, and emergency access procedures. For AI tools used in clinical settings, the access control model must be granular enough to ensure that a billing administrator cannot access clinical notes and a nurse cannot access financial records.
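The role-based model above can be sketched as a deny-by-default permission map. The role names and record categories here are hypothetical examples chosen to mirror the billing-versus-clinical separation described in the text, not any specific product's schema.

```python
# Minimal role-based access control sketch: deny by default,
# grant only what each role's duties require.
ROLE_PERMISSIONS = {
    "nurse": {"clinical_notes", "medication_orders"},
    "billing_admin": {"claims", "account_numbers"},
    "physician": {"clinical_notes", "medication_orders", "lab_results"},
}

def can_access(role: str, record_type: str) -> bool:
    """Unknown roles and record types get no access."""
    return record_type in ROLE_PERMISSIONS.get(role, set())

assert can_access("nurse", "clinical_notes")
assert not can_access("billing_admin", "clinical_notes")  # billing cannot read clinical notes
assert not can_access("nurse", "claims")                  # nurses cannot read financial records
```

The key design choice is the default: absence from the permission map means no access, so a misconfigured or newly added role fails closed rather than open.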

Audit Logging

Every access to ePHI must be logged, including who accessed it, when, what they accessed, and what they did with it. For AI tools, this means logging every prompt that contains PHI, every response generated from PHI, and every administrative action that affects PHI storage or security. These logs must be tamper-proof, retained for a minimum of six years, and available for review during compliance audits.
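One common way to make logs tamper-evident, as the paragraph above requires, is hash chaining: each entry embeds the hash of the previous entry, so any retroactive edit breaks every later link. This is a simplified sketch (field names and the in-memory list are assumptions); production systems also need durable storage, time synchronization, and the six-year retention noted above.

```python
import hashlib
import json
import time

def append_entry(log: list[dict], user: str, action: str, resource: str) -> None:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "user": user,
        "action": action,
        "resource": resource,
        "timestamp": time.time(),
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any altered entry invalidates the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "dr_smith", "read", "patient/123/notes")
append_entry(log, "dr_smith", "update", "patient/123/notes")
assert verify_chain(log)
log[0]["action"] = "delete"   # simulate tampering with the first entry
assert not verify_chain(log)
```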

Data Integrity Controls

The vendor must have mechanisms to ensure ePHI is not improperly altered or destroyed. This includes validation checks, version control, and backup procedures that allow recovery of PHI in its original state. For AI tools that generate clinical documentation, integrity controls are especially important to ensure that AI-generated content accurately reflects the source data.
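A basic form of the validation check mentioned above is storing a cryptographic digest alongside each record and re-verifying it on retrieval. This sketch uses SHA-256 from the standard library; the record contents and storage model are illustrative assumptions.

```python
import hashlib

def fingerprint(record: bytes) -> str:
    """SHA-256 digest used to detect unauthorized alteration of a record."""
    return hashlib.sha256(record).hexdigest()

original = b"Assessment: stable. Plan: continue current medication."
stored_digest = fingerprint(original)   # saved when the record is written

# On read, a digest mismatch means the record changed since it was written.
assert fingerprint(original) == stored_digest
tampered = original.replace(b"continue", b"discontinue")
assert fingerprint(tampered) != stored_digest
```

A digest detects alteration but cannot undo it, which is why the text pairs integrity checks with version control and backups that can restore the original state.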

Use Cases Where HIPAA Applies to AI

The intersection of HIPAA and AI tools is broader than many organizations realize. Here are the most common use cases that trigger HIPAA requirements.

Clinical Documentation and Medical Notes

AI tools that transcribe, summarize, or generate clinical notes from patient encounters are processing PHI directly. This includes ambient listening tools in exam rooms, dictation software, and AI assistants that help clinicians write SOAP notes. Every word of patient data flowing through these tools must be HIPAA-protected.

Patient Communication

AI-powered chatbots, patient portals with AI features, and automated appointment systems that reference patient information are all handling PHI. Even an AI tool that generates personalized health reminders involves PHI if it references the patient's conditions or treatment history.

Clinical Research and Analytics

AI tools used to analyze patient populations, identify trends, or support clinical research must handle PHI appropriately. De-identification is one approach (HIPAA provides Safe Harbor and Expert Determination methods), but the de-identification must be complete and verified before data enters the AI tool. Partially de-identified data that retains any of the 18 identifiers is still PHI.

Medical Imaging and Diagnostics

AI tools that analyze X-rays, MRIs, pathology slides, or other medical images are processing PHI. Medical images often contain embedded patient identifiers in their metadata (DICOM headers), and the images themselves may be identifiable. These tools must meet all HIPAA requirements even when the AI is performing automated analysis rather than presenting data to a human operator.

Revenue Cycle and Administrative Operations

AI tools used for medical coding, claims processing, prior authorization, and billing all involve PHI. The administrative use case does not reduce the compliance obligation -- PHI in a billing context is subject to the same protections as PHI in a clinical context.

Penalties for Non-Compliance

HIPAA violations carry significant penalties, and enforcement has intensified in recent years. The penalty structure has four tiers based on the level of culpability:

  • Tier 1 - Lack of knowledge: $100 to $50,000 per violation (the entity did not know and could not reasonably have known)
  • Tier 2 - Reasonable cause: $1,000 to $50,000 per violation (the entity should have known but the failure was not due to willful neglect)
  • Tier 3 - Willful neglect, corrected: $10,000 to $50,000 per violation (willful neglect that was corrected within 30 days)
  • Tier 4 - Willful neglect, not corrected: $50,000 per violation (willful neglect that was not corrected within 30 days)

The annual maximum for each violation category is $1.5 million, but total exposure can be far greater when multiple violation types are involved. In addition to financial penalties, criminal violations can result in fines up to $250,000 and imprisonment of up to 10 years. State attorneys general can also bring enforcement actions, and breach-related class action lawsuits add further financial exposure.

Using an AI tool without a BAA to process PHI is considered willful neglect in most enforcement scenarios. It is one of the clearest violations regulators look for, and one of the easiest to prove. The cost of compliance is always less than the cost of a violation.

How to Evaluate HIPAA Compliance in AI Tools

Use this evaluation framework when assessing whether an AI tool is ready for healthcare data:

  1. Confirm BAA availability. Ask the vendor directly whether they will sign a BAA. If the answer is no, the evaluation ends here for PHI use cases.
  2. Review the BAA terms. Ensure the BAA covers all required elements and does not include carve-outs or limitations that undermine its protections.
  3. Verify encryption standards. Confirm AES-256 at rest and TLS 1.2+ in transit. Ask about encryption of backups, logs, and temporary data stores.
  4. Assess audit logging capabilities. The vendor should demonstrate comprehensive logging of all PHI access, with retention periods of at least six years.
  5. Check for complementary certifications. Vendors with SOC 2 Type II or ISO 27001 certification have demonstrated broader security maturity that supports HIPAA compliance.
  6. Confirm training data exclusion. Get written confirmation that PHI is never used for model training, fine-tuning, or any purpose beyond delivering the contracted service.
  7. Evaluate data residency. Understand where ePHI is processed and stored. Confirm that the hosting environment and any sub-processors also meet HIPAA requirements.
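The seven-step framework above can be sketched as an automated gate over a vendor profile. The field names and the sample profile are hypothetical; real evaluations still require human review of the actual BAA terms, certifications, and vendor documentation.

```python
# Illustrative check names loosely mapping to the seven steps above.
REQUIRED_CHECKS = [
    "signs_baa",
    "baa_terms_complete",
    "aes256_at_rest",
    "tls12_in_transit",
    "audit_logs_6yr",
    "no_phi_training",
    "compliant_data_residency",
]

def evaluate_vendor(profile: dict) -> list[str]:
    """Return the checks a vendor fails; step 1 (the BAA) is a hard stop."""
    if not profile.get("signs_baa"):
        return ["signs_baa (evaluation ends here for PHI use cases)"]
    return [check for check in REQUIRED_CHECKS if not profile.get(check)]

vendor = {
    "signs_baa": True,
    "baa_terms_complete": True,
    "aes256_at_rest": True,
    "tls12_in_transit": True,
    "audit_logs_6yr": False,   # logs retained only one year
    "no_phi_training": True,
    "compliant_data_residency": True,
}
print(evaluate_vendor(vendor))  # → ['audit_logs_6yr']
```

Treating the BAA as a short-circuit mirrors the framework's step 1: if the vendor will not sign, the remaining checks are moot for PHI use cases.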

Browse our HIPAA-compliant AI tools to see which tools meet these requirements, or use the security checklist as a starting point for any AI tool evaluation. You can also filter by Grade A tools to find the most trustworthy options across all categories.

How TrustGrade Evaluates HIPAA Compliance

In our trust scoring methodology, HIPAA compliance is assessed through several dimensions: whether the vendor offers a BAA, whether they explicitly support healthcare use cases, whether they have the technical safeguards the Security Rule requires, and whether complementary certifications validate their security posture. Tools that claim HIPAA compliance without offering a BAA or documenting their technical safeguards receive lower trust scores.

Explore our full AI tool directory to compare HIPAA compliance and trust scores across hundreds of tools.

Frequently Asked Questions

Can I use ChatGPT or similar consumer AI tools for patient data?

Generally, no. Consumer-tier AI assistants typically do not offer BAAs, do not provide the required technical safeguards, and may use input data for model training. Some AI vendors offer separate enterprise or healthcare tiers that include BAAs and HIPAA-compliant infrastructure. If the vendor offers a HIPAA-eligible tier, that specific tier -- not the consumer version -- is what you must use. Always verify by requesting the BAA and confirming the specific product version covered.

Does de-identifying data remove the need for HIPAA compliance?

If data is fully de-identified according to HIPAA's standards (Safe Harbor or Expert Determination method), it is no longer considered PHI and HIPAA does not apply. However, de-identification must be thorough -- all 18 identifier types must be removed, and there must be no reasonable basis to identify individuals from the remaining data. Partial de-identification does not count. If you are relying on de-identification as your compliance strategy, have the process validated by a qualified expert.

What about AI tools hosted on HIPAA-eligible cloud infrastructure?

Hosting on a HIPAA-eligible platform (like AWS or Google Cloud with BAAs in place) is necessary but not sufficient. The cloud provider's HIPAA compliance covers their infrastructure, not the application built on top of it. The AI tool vendor is still responsible for implementing application-level safeguards, configuring the cloud environment correctly, and signing their own BAA with you. A vendor cannot claim HIPAA compliance simply by running on a HIPAA-eligible cloud.

How often should we reassess our AI tools for HIPAA compliance?

HIPAA requires periodic risk assessments, and best practice is to reassess at least annually or whenever significant changes occur -- new AI tool versions, changes to the vendor's infrastructure, new data flows, or changes to the types of PHI being processed. Major vendor updates that change how data is processed, stored, or shared should trigger an immediate reassessment.

Does HIPAA apply to AI tools used for healthcare research?

Yes, unless the data has been fully de-identified or the research is covered by a specific HIPAA authorization from the data subjects or a waiver from an Institutional Review Board. Research use does not create an automatic exemption from HIPAA. AI tools used in clinical trials, retrospective studies, or population health analytics must meet the same compliance requirements as tools used in direct patient care, unless one of the limited research exceptions applies.

What is the relationship between HIPAA and other security certifications?

HIPAA compliance and certifications like SOC 2, ISO 27001, and GDPR compliance are complementary but not interchangeable. SOC 2 and ISO 27001 demonstrate general security maturity that supports HIPAA compliance, but they do not satisfy HIPAA's specific requirements (particularly the BAA requirement and PHI-specific provisions). A vendor with SOC 2 Type II and a signed BAA is a stronger candidate than a vendor with either alone. Use our evaluation framework to assess all dimensions together.

Tags: HIPAA AI tools, healthcare AI, PHI protection, medical AI

Check the trust score of any AI tool

Browse our database of security-assessed AI tools and find ones you can trust with your data.