What could possibly go wrong? ChatGPT + Medical Records in Epic

This seems exactly the opposite of genius.

https://arstechnica.com/information-technology/2023/04/gpt-4-will-hunt-for-trends-in-medical-records-thanks-to-microsoft-and-epic/


Here it is in a nutshell:

“Language models aren’t trained to produce facts. They are trained to produce things that look like facts,” says Dr. Margaret Mitchell, chief ethics scientist at Hugging Face. “If you want to use LLMs to write creative stories or help with language learning - cool. These things don’t rely on declarative facts. Bringing the technology from the realm of make-believe fluent language, where it shines, to the realm of fact-based conversation, is exactly the wrong thing to do.”


The lawsuits will be legendary.


But what would they be suing for, if the tool itself doesn’t guarantee accuracy?

All of the problems seem to stem from the standard of infallibility that everyone seems to hold AI to. But it is well known (especially now, with the tech press coverage) that this is not the case.

In the medical field, all results from analysis tools (AI or otherwise) require human expert verification - and you can bet that will be written into Epic Systems’ terms.
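To make the verification point concrete, here is a minimal sketch of a sign-off gate, in Python with entirely hypothetical names (`DraftNote`, `commit_to_chart`); Epic’s actual interfaces are not public in this form. The idea: AI output lands as a draft, and nothing reaches the chart until a named clinician attests to it.

```python
# Hypothetical sign-off gate: unverified AI output cannot be committed.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DraftNote:
    patient_id: str
    text: str
    source: str = "llm"                # provenance is always recorded
    verified_by: Optional[str] = None  # clinician ID, set on sign-off
    verified_at: Optional[datetime] = None

def sign_off(note: DraftNote, clinician_id: str) -> DraftNote:
    """A named clinician attests to the note's accuracy."""
    note.verified_by = clinician_id
    note.verified_at = datetime.now(timezone.utc)
    return note

def commit_to_chart(note: DraftNote) -> None:
    """Refuse to write unverified AI output into the record."""
    if note.source == "llm" and note.verified_by is None:
        raise PermissionError("AI-generated note requires expert sign-off")
    print(f"Committed note for patient {note.patient_id}, "
          f"verified by {note.verified_by}")

draft = DraftNote(patient_id="12345", text="Assessment: ...")
# commit_to_chart(draft)  # would raise PermissionError here
commit_to_chart(sign_off(draft, clinician_id="dr_jones"))
```

A clause in the terms can say “verify with an expert,” but a system that structurally refuses to commit unverified output is a much stronger safeguard than the clause alone.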


Unfortunately the damage will already be done - it’s the old lawyers’ saw of “you can’t un-ring the bell.”


Once it’s stated that patient safety has been compromised, the terms won’t matter to anyone. (I work in healthcare…)


Which is why Iā€™m asking how a lawsuit would proceed.

If the terms state, “You need to verify the results with an expert,” but a healthcare worker blindly trusts the AI anyway, then all the court case would reveal is that there was malpractice.

There is about to be a lawsuit filed by the family of Michael Schumacher over an article that was created by an AI chatbot. Schumacher was brain-injured in a skiing accident in 2013. The article calls itself “deceptively real” and discloses at its end that it was not a real interview but one created by a chatbot. Nevertheless, the family will be suing to challenge the effectiveness of those disclaimers. The cover of the publication that ran the article promoted it as an interview with Schumacher.

Setting aside the improper use of intellectual property or identity rights, the article itself and its disclaimers are an area that will likely be litigated long after I, Dale, and other lawyers on this site have shed our mortal coils.

There is a body of law that imposes liability for foreseeable misuse of a product. I could see how that might apply to chatbots or AI as well.


Total side tangent, but you seem to have a knack for explaining the emerging legalities of AI in a very coherent and digestible manner.

Don’t be so hasty to write yourself out of this field; you could be the rising star the whole industry needs to sort out the tangle. :wink:


Absolutely. Look at Musk’s veiled threat to MS about ChatGPT’s unauthorized use of his data (cess)pool at Twitter. How can you argue “fair use” when you have eaten the entire planet…

Depends on who’s paying the bill? :vb-agree:

Thank you. That is quite a compliment.


A well-deserved one at that. I, on the other hand, am like someone trying to drink from a firehose - you get soaking wet, but it does little for your thirst…


Just came across this article from the NY Law Journal discussing some of the issues related to ChatGPT. The lawyers are taking notice. Why Legal Chiefs Need to Start Developing ChatGPT Compliance Programs | New York Law Journal

Yes, but that would put the kibosh on ‘AI’, or at least greatly limit its uptake, as the very last thing a medical professional wants to face is a malpractice lawsuit. It can be career-ending - and a career takes a lot of time and money to cultivate.

Though I could definitely see someone bringing a malpractice lawsuit against a medical professional for not using ‘AI’. I reckon the medical professional would have stronger grounds to defend themselves there, though.

For now I’d think they’d be safe, given all the direct and anecdotal evidence of the frequency of wrong answers from Sydney.

She ended our conversation about looking at getting a new car today. All I asked was that it be a five-door!

Looks like the editor who authorized the ChatGPT article I was discussing was fired less than a week after the family threatened legal action. So, there is that. Editor Is Sacked After Publishing Michael Schumacher Fake Interview In German Magazine (msn.com)


You can pick ChatGPT’s name now, fellow witches and wizards (I prefer the latter):

He Who Must Not Be Named

or

The Dark Lord

or perhaps more appropriately:

Darth Sidious

1. Privacy and Data Security: Medical records contain highly sensitive and confidential information. Integrating ChatGPT with Epic’s medical records would require strict security measures to ensure data privacy and protection from unauthorized access or breaches (see the de-identification sketch after this list). Any mishandling or unauthorized disclosure of medical data could have severe consequences, including legal and ethical implications.
2. Accuracy and Reliability: ChatGPT, like any AI model, is not infallible. It may generate responses that sound plausible but are factually incorrect or misleading. Relying solely on ChatGPT’s outputs without careful human oversight and validation could result in inaccurate medical advice, misdiagnoses, or inappropriate treatment recommendations, potentially endangering patient health and safety.
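On the privacy point, here is a deliberately naive illustration of scrubbing obvious identifiers before any text leaves the secure environment. The regex patterns and the `scrub` helper are assumptions made for this sketch; real de-identification (HIPAA Safe Harbor alone lists eighteen identifier categories, including names and geography) is far more involved.

```python
# Naive sketch: replace recognizable identifiers with typed placeholders
# before a record excerpt is sent to an external LLM. Illustrative only;
# these patterns are nowhere near a compliant de-identification pipeline.
import re

PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text: str) -> str:
    """Substitute each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Pt seen 04/12/2023, MRN: 00482913, callback 555-867-5309."
print(scrub(record))  # Pt seen [DATE], [MRN], callback [PHONE].
```

Anything a filter like this misses still goes out the door, which is why point 1 above is about strict security measures around the whole integration, not clever text filtering.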