

ChatGPT’s Use In Medicine Raises Questions Of Security, Privacy, Bias


Generative AI, the prompt-based artificial intelligence that can generate text, images, music and other media in seconds, continues to advance at breakneck speeds.

Less than six months ago, OpenAI launched its generative AI tool, ChatGPT. A mere four months later, the company released GPT-4, a massive leap forward in processing speed, power and ability.

Every industry has taken notice. Healthcare, in particular. Observers were mildly impressed when the original version of ChatGPT passed the U.S. medical licensing exam, though just barely. A couple of months later, Google’s Med-PaLM 2 aced the same test, scoring in the “expert” category.

Despite mounting evidence that generative AI is poised to revolutionize medical care, patients hesitate to embrace it. According to a recent Pew Research poll, 6 in 10 American adults say they’d feel “uncomfortable” with their doctor relying on artificial intelligence to diagnose disease and provide treatment recommendations.

Today’s iterations of generative AI aren’t ready for broad use in healthcare. They occasionally fail at basic math, make up sources and “hallucinate,” providing confident yet factually incorrect responses. The world is watching closely to see how quickly OpenAI, Google and Microsoft can correct these errors.

But those fixes, alone, won’t address the two biggest concerns patients reported in the Pew survey:

1. Technological risks, including security, privacy and algorithmic bias.
2. Ethical concerns about the interplay between machines and humans.

This article examines the first set of fears. The next, on May 8, will cover the ethical ones, including AI’s impact on the doctor-patient relationship.

Are patient fears valid?

Americans have long held suspicions about new technologies. Recall how bank customers in the 1970s resisted using ATMs, fearing the machines would eat their cards and mishandle their money. Indeed, ATM errors were common at first. But as banks refined the machines and the failures people feared stopped materializing, the fears themselves faded from consciousness. This process is known as habituation.

When it comes to the use of generative AI in medicine, people’s fears around security, privacy and bias are normal and valid, and they need to be taken seriously. But the most important question to consider is: Will AI pose a greater risk to patients than the technologies they already use in their day-to-day lives?

Let’s examine this question by looking at these technological issues, one by one.

1. Security

Banks, government bodies and healthcare companies alike house sensitive information in large databases and, therefore, must undergo extensive scrutiny. That’s especially true now, as more and more individual data moves to the cloud (the internet-based servers on which these large databases run).

Cybersecurity is a major concern in the United States. Research shows 9 in 10 Americans are worried hackers will access their personal or financial information and use it for nefarious purposes. But when it comes to ChatGPT in healthcare, patients will want to know whether generative AI increases the risk of future cyberattacks.

For perspective—and contrary to what patients might assume—a solo doctor’s office is one of the least secure places to house personal medical data. Lone physician practices lack the financial and technological resources to install top-of-the-line network security tools, which makes patient data extremely vulnerable.

So, why aren’t cyber criminals hacking into individual doctor’s offices more often? The reason is that large data systems—containing hundreds of thousands or even millions of patient records—are much more valuable.

Ironically, the thing that makes medical records safer inside your local doctor’s office is the same thing that makes the medical-record system ineffective. Unless your doctor belongs to a large medical group, your health record isn’t connected with (or available to) other physician offices or surrounding hospitals.

Though government health officials have spent decades trying to incentivize secure patient-record sharing, very few providers today have access to a “comprehensive” medical record system. As a result, even when clinicians in the same community use the same digital record-keeping platform, they often cannot access all of the patient data they need to provide optimal care.

This isn’t just ineffective; it’s also dangerous.

If you end up in the ER late at night when your doctor’s office is closed, the emergency physician on staff won’t be able to look up your medical history, current prescriptions, recent diagnostic tests, or other vital information needed to provide you with the best care possible.

As such, security in medicine is a double-edged sword. As patients, we want our medical data to be safely off-limits to malicious hackers. But we also need that information to be comprehensive and readily available wherever we go for care, regardless of the time of day or day of week.

Generative AI applications won’t solve this problem unless EHR manufacturers open their application programming interfaces (APIs). Just as important, nothing about the creation or operation of these AI tools will increase patients’ security risks.
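For a sense of what “opening the APIs” would make possible, here is a minimal, hypothetical sketch that queries an EHR through the FHIR standard many vendors already support. The endpoint URL, patient identifier and helper function are made up for illustration; real-world access would also require authentication and patient authorization.

```python
# A minimal sketch, assuming a hypothetical EHR vendor exposes a standard
# FHIR endpoint (the URL and patient ID below are made up for illustration).
import requests

FHIR_BASE = "https://ehr.example.com/fhir"     # hypothetical FHIR endpoint
PATIENT_ID = "12345"                           # hypothetical patient identifier

def fetch_active_medications(base_url: str, patient_id: str) -> list[str]:
    """Pull a patient's active medication orders so an authorized clinician
    (or an AI assistant acting on their behalf) can see them at the point of care."""
    resp = requests.get(
        f"{base_url}/MedicationRequest",
        params={"patient": patient_id, "status": "active"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [
        entry["resource"].get("medicationCodeableConcept", {}).get("text", "unknown")
        for entry in bundle.get("entry", [])
    ]

if __name__ == "__main__":
    print(fetch_active_medications(FHIR_BASE, PATIENT_ID))
```

With this kind of access, an emergency physician (or an AI assistant working alongside one) could retrieve a patient’s active prescriptions even when the patient’s own doctor’s office is closed.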

Large financial institutions and EHR companies alike store vast quantities of digitized information behind firewalls that are difficult to penetrate. We can assume that ChatGPT (and the generative AI systems developed by Google and others) will at least maintain equal safeguards. It is in their reputational and economic interests to ensure it.

2. Privacy

Although large companies work hard to maximize data security, their business models have long depended on compromising user privacy.

During a congressional hearing, a senator asked Mark Zuckerberg how Facebook survives as a business without charging user fees. The CEO responded, “Senator, we run ads.” In other words, the social network makes its money by letting advertisers target users based on their personal information. And, for decades, people who use social media sites and search engines have been making a de facto trade: handing over their personal information to advertisers in exchange for free access.

In medicine, it’s illegal to extract and disclose data from medical records. But that doesn’t guarantee total patient privacy. Recently, news reports exposed how hospitals and pharmacies engage in online data sharing with third parties—without explicit permission from patients.

Moreover, when people search their symptoms online or make purchases to treat health problems, companies have used that information to target ads. That is why people receive coupons for diapers within days of learning they’re pregnant.

Patients may face similar privacy risks with generative AI companies. But, as with security, nothing about generative AI magnifies privacy risks beyond what users already accept across the existing social and digital media landscape.

3. Bias

Unlike security and privacy, where Pew survey respondents indicated high levels of concern, bias is an area where most patients expect generative AI to help: 51% predict it will make medical care less biased in the future, while only 15% predict it will make care more biased.

Researchers continue to identify biases in the algorithmic tools used in medicine. In several cases, these biased IT applications have been shown to exacerbate existing healthcare inequities (gaps in the quality of care based on gender, race, ethnicity, income, etc.).

As such, human concern about bias in medical algorithms is valid, and the need for solutions remains urgent. But lost in the headlines is the fact that these errors are rarely produced by glitches in the zeros and ones. The truth is that computers and algorithms are not biased. Humans are. And when computers spit out biased recommendations, they do so because they were trained on human behaviors and decisions.

Those behaviors and decisions often reflect implicit biases. In the United States, for example, doctors recommend breast reconstruction after mastectomy less often for Black patients than for white patients. They prescribe less pain medication to Black and Hispanic patients than to white patients after surgery. And early in the pandemic, when Black individuals were twice as likely to die from Covid-19, physicians tested Black patients for the virus only half as often as white patients with identical symptoms.

Therefore, when the creators of AI tools train their applications on this data, the algorithms learn that Black patients need breast reconstruction less often, require less pain medication and need testing less often than white patients.
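To make that mechanism concrete, here is a minimal, hypothetical sketch using synthetic data. It is not any real clinical system; the variable names, numbers and the 50% testing disparity are assumptions for illustration. It simply shows that a model trained to imitate biased human decisions reproduces that bias.

```python
# A minimal, hypothetical sketch (synthetic data, not a real clinical model)
# showing how an algorithm trained on biased human decisions reproduces them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Identical symptom severity for both groups.
severity = rng.normal(loc=0.0, scale=1.0, size=n)
is_black = rng.integers(0, 2, size=n)          # 1 = Black patient, 0 = white patient

# Historical labels: doctors ordered the test half as often for Black patients
# with the same symptoms (the bias lives in the training labels, not the math).
p_test = 1 / (1 + np.exp(-severity))           # base probability driven by symptoms
p_test = np.where(is_black == 1, p_test * 0.5, p_test)
ordered_test = rng.random(n) < p_test

# Train a model to imitate past decisions.
X = np.column_stack([severity, is_black])
model = LogisticRegression().fit(X, ordered_test)

# Same moderate symptoms, different race: the model recommends testing
# less often for the Black patient, mirroring the biased training data.
print(model.predict_proba([[1.0, 1]])[0, 1])   # Black patient
print(model.predict_proba([[1.0, 0]])[0, 1])   # white patient
```

Run on this synthetic data, the model assigns a noticeably lower testing probability to the Black patient despite identical symptoms, because that is the pattern it was asked to learn.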

Fortunately, generative AI has the potential to reduce the prevalence of bias in medicine. That’s because it draws on a much broader range of inputs than AI applications developed for specific medical functions (like reading mammograms or managing diabetes). In contrast to these “narrow AI” applications, generative AI has been constructed to answer a nearly infinite number of questions and perform an unlimited number of functions. To facilitate that ability, the applications are pre-trained on vast data sets, ones that may unintentionally encode biases in patient care but that also include evidence-based research on the existence and dangers of bias in medicine.

So, when physicians “forget” to offer breast reconstruction, prescribe adequate pain medication or order a necessary laboratory test for Black or Hispanic patients, ChatGPT will be able to question the doctor’s decisions. In that way, it will bend away from medical bias and toward a more equitable form of care for all.

Generative AI: Hero or villain?

I am optimistic that ChatGPT and future generative AI applications will empower patients and transform healthcare for the better. These tools will help doctors and patients achieve superior clinical outcomes, make care more readily available and lower costs (in part by reducing medical errors, health disparities and ineffective medical care).

While the applications can’t fully escape or prevent the security, privacy and bias problems of today, the risk to users will be no greater than what people already experience.

Finally, as respondents to the Pew survey pointed out, tech concerns aren’t all that’s bothering them about AI’s use in medicine. As these tools become increasingly powerful and broadly available, they will also affect the doctor-patient relationship and raise highly contentious ethical questions. The next article in this two-part series will focus on these very human issues.