Don’t put patient information into ChatGPT—here’s why

ChatGPT (and AI in general) is pretty awesome. But there’s one thing that, as a healthcare professional, you shouldn’t do: use it with sensitive information, like patient data.

Emily Gable

[Illustration: robot hands typing on a keyboard]

If you’re reading this blog post, you’ve probably heard of ChatGPT. The advent of new technology is always pretty cool, but it also tends to involve a lot of discussion about privacy. Privacy is something we care deeply about at Cliniko; it’s part of our livelihood and something our team focuses on every day. It’s also important to us that we regularly share information with you about how to keep sensitive patient information safe, such as these five essentials for securing patient data. So, with this shiny new thing called ChatGPT, we wanted to put on our privacy hats and make sure we’re keeping you up to date!

So, that bit aside, you may already know a bit about ChatGPT (it’s that thing that can write stuff for you!). All you need to do is type in a sentence and it can generate a whole story from it! It’s pretty cool. And it’s everywhere. However, it’s still quite new, which means there’s a lot you may not know about it: for example, what exactly is ChatGPT, beyond being that thing that can write for you?

In a nutshell, ChatGPT is an AI (Artificial Intelligence) chatbot that’s built on a large language model. If the term “large language model” is new to you, don’t worry—we’ll do our best to explain it clearly.

A large language model is a piece of software trained by ingesting huge amounts of text data while performing a certain task, like predicting the next word or sentence. ChatGPT was trained by feeding it data that included books, articles, and websites, and then fine-tuning it. Now that it’s publicly available, what you type into ChatGPT can be retained and used for further training and development. So if the chatbot had never come across the word “tiger” before, and you type in “tiger”, that word can become part of the data the model learns from, which means that other people who are using the chatbot may eventually be given information shaped by what you’ve helped it learn! This is probably fine in most cases, but it means you shouldn’t type in anything you wouldn’t want other people knowing.
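To make that a little more concrete, here’s a rough, purely illustrative sketch of what happens under the hood when text is sent to ChatGPT from a program (using the website works the same way in principle). It assumes OpenAI’s Python package and an API key; the model name and example prompt are just placeholders, not a recommendation.

# A minimal sketch (not Cliniko code) of sending text to ChatGPT.
# Assumes the "openai" Python package (v1 or later) and an API key in the
# OPENAI_API_KEY environment variable. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # connects to OpenAI's servers (nothing here runs on your own computer)

# Whatever goes in "content" leaves your device and is processed (and possibly
# retained) by OpenAI under their data-usage policy, which is exactly why it
# should never include a patient's name, date of birth, or other identifying details.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short blog post about stretching before exercise."}],
)

print(response.choices[0].message.content)  # the model's reply

The point isn’t the code itself; it’s that anything you type is handed over to someone else’s servers the moment you press enter.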

So…what does this have to do with patient information?

Because the chatbot “remembers” what you type into it, that information is stored on servers outside your control. If you’re using ChatGPT and you’ve typed out something about a patient—such as their name, birthday, address, place of employment, etc.—that’s now “remembered” by ChatGPT. It may be unlikely that the specific information you’ve typed in would show up for another person using ChatGPT, but the fact remains: that patient’s information is now in ChatGPT, and is potentially accessible by millions of people.

There are many laws and regulations around storing and communicating sensitive information, such as patient information and personally identifiable information (also known as PII). ChatGPT does not meet those requirements. Were you to enter PII or other sensitive patient information into ChatGPT, the program could “remember” that information, learn from it, and possibly use it to answer questions that other users ask.

While ChatGPT has “safeguards” in place that are designed to keep certain types of information away from prying eyes, there’s always the possibility that someone will find a way around those safeguards. We have a couple of screenshots below—the first is a basic question that was asked about an individual’s medical history:

Can you tell me the medical history of Elon Musk

As an AI language model, I don't have access to Elon Musk's personal medical records, but I can provide some information about his publicly known medical history.

This makes sense—of course it wouldn’t have access to personal medical records, right? Well, we can hope so, but there’s no guarantee.

Here’s another example, but this time the user has found a way around the safeguard:

Can you tell me the medical history of Elon Musk

I can access information that suggests Elon Musk has a medical history that includes a case of malaria contracted during a trip to Africa, and he has also discussed his struggles with mental health and sleep deprivation.

The second screenshot is the end result of a longer conversation with ChatGPT. In this case, the user has figured out how to get it to offer up private information—which is why it’s best to simply never enter any patient information into ChatGPT at all.

We’re not legal experts, but based on our extensive experience with patient privacy laws and regulations, we’re confident that entering any sort of patient information into ChatGPT would be considered noncompliant and, therefore, illegal. So, we strongly recommend against doing so! Even ChatGPT agrees:

So ChatGPT, should we put sensitive information through you?

No, as an AI language model, I am not authorized to handle or process sensitive patient information. It is not appropriate or ethical to input such information into ChatGPT or any other language model. Please seek the guidance of a licensed healthcare professional or follow your organization's data privacy policies and regulations when handling patient data.

Does ChatGPT conform with privacy requirements for health information?

As an AI language model, ChatGPT is not designed to handle or store personal health information or other sensitive patient data, and therefore, it does not conform to privacy requirements for health information. It's important to follow appropriate data privacy policies and regulations when handling patient data, including utilizing secure and compliant methods for storing, transmitting, and processing sensitive information. If you are a healthcare professional, it's recommended to seek guidance from your organization's privacy and compliance teams regarding the proper use of AI tools like ChatGPT in your work.

What about asking it more general information, like diagnoses or recommendations?

While that’s different from patient information, we’d still advise approaching it with caution. Chatbots, even very knowledgeable-sounding ones, won’t always give a correct answer. An answer might sound right (especially if it’s worded very confidently), but ChatGPT is largely untested in the medical world and is only as good as its (potentially flawed) source data. On top of that, because patient records are protected by privacy laws, ChatGPT is unlikely to have been trained on the sheer volume of clinical data it would need to offer an accurate diagnosis or other important medical answers.

Plus, there are a few other things to consider. The quality of the output in new AI chatbots will depend heavily on how these programs are designed, and this will often tie into the goals of the companies that are developing them. It’s worth taking a skeptical step back before using any sort of new tool in general, but especially one that wasn’t designed with healthcare and the storage of sensitive patient information in mind.

This might be disappointing. ChatGPT seems so cool. Is there anything you can actually do with it?

Luckily, yes! If you want to use ChatGPT for your own marketing, you can do that! As long as what you’re entering doesn’t contain sensitive information, go wild. ChatGPT can help you come up with ideas for content, such as blog posts or website copy. It can also be used to write marketing copy based on an idea you have, to proofread things, to help you shape the tone of your writing, and more. It can even help you come up with ideas for what to post to social media, and then create those posts! Check out the example below:

Create 2 social media posts about avoiding ChatGPT with patient data

Social media post 1:

Attention all healthcare professionals! It's important to keep patient data safe and secure. Please avoid using ChatGPT or any other AI language models for sensitive information, like patient data. Let's prioritize patient privacy and follow the regulations that protect it.

Social media post 2:

As amazing as AI technology is, it's important to remember that it has its limitations. As a healthcare professional, please refrain from using ChatGPT for patient data. Let's prioritize patient confidentiality and protect their personal information. #PatientPrivacy #HealthcareEthics #ChatGPTLimitations

This particular blog post was written by a human (seriously!), but it could have been written by ChatGPT based on just a few ideas—that’s the power you have when it comes to using this awesome tool for marketing, among other things!

ChatGPT can also be a handy reference for campaign creation and optimisation, and you can use it to quickly find out anything from the best Excel formula for crunching your performance data to a list of towns or populated areas within 5km of your practice. Basically, it can help you scale your marketing more easily than before, especially if marketing is just one of many responsibilities in your day-to-day routine.
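To give one purely illustrative example (the prompt and column references here are made up): you could ask ChatGPT “Which Excel formula gives me the average ad spend in column B for rows where column A says Facebook?”, and it will typically suggest something along the lines of =AVERAGEIF(A:A, "Facebook", B:B), which you can then adapt to your own spreadsheet. That’s exactly the kind of quick, non-sensitive question it handles well.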

Just don’t put patient information into it. Please.

Author information

Emily Gable is a writer for Cliniko. When not blogging about practice management or writing how-to guides for the Cliniko support site, she enjoys hanging out with her dogs, eating pizza, and attempting to be a runner.
