Friday, July 14, 2023

Artificial Intelligence and Cancer

I just saw a really thought-provoking article from ASCO Daily News called "AI in Cancer Research and Care: Setting the Stage for a Promising and Safe Future."

If you pay attention to the news, you've probably seen a lot in the last 6 months or so about Artificial Intelligence. While AI has been around in some form for many years, it was only last November that it became widely and easily available, when the program ChatGPT was introduced to the world. Maybe you've tried it out yourself. It has become popular because it allows a user to ask questions and get responses in plain language, so it seems like you're having a conversation.

ChatGPT runs on a predictive language model. That means it was trained on a huge collection of text (a big chunk of the internet, basically), and when it gets asked a question, it predicts, word by word, what a good response would look like, based on patterns in all of that text. So if you ask it for a chocolate cake recipe, it doesn't actually look one up; it predicts what such a recipe would look like, based on the many recipes it has seen, and gives it to you. (That's an over-simplification, but it's basically how it works.)
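If you're curious about what "predicting the next word" actually means, here is a tiny toy sketch in Python. To be clear, this is my own illustration, not anything from the ASCO article, and it is nothing like the neural network that actually runs ChatGPT; it just counts which word tends to follow which in a scrap of example text, then "predicts" the most common follower.

```python
from collections import Counter, defaultdict

# A scrap of "training text" standing in for the billions of words
# a real language model is trained on.
training_text = "mix the flour mix the sugar mix the cocoa bake the cake"

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None if unseen."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("mix"))   # 'the' -- every "mix" was followed by "the"
print(predict_next("bake"))  # 'the'
```

The real thing replaces the word counts with a neural network and the scrap of text with a huge slice of the internet, but the core idea is the same: given everything that came before, what is the most likely next word?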

That works really well sometimes. But when you have the internet as your source, you can run into a whole lot of misinformation, and ChatGPT can't evaluate what is true and what isn't. It also has a habit of making some things up when it doesn't know the answer. 

It's far from a perfect system. At least for now. Some experts think it will be much more refined in a few years.

Now, I should say up front, I have long been an advocate for technology. But I am also in a job that could potentially be threatened by AI in some ways. At the very least, it's going to be changed by AI quite a bit. We're not quite sure just yet if those changes will be good or bad. So I'm looking at all of this with a very skeptical eye. I tend to see the bad as much as (maybe more than) I see the good.

Back to the ASCO Daily News article. The authors (who are all cancer researchers at the National Cancer Institute) try to point out the good and the bad about how AI is currently being used in cancer research. For example, a few researchers have published articles in medical journals that were written with help from ChatGPT. (Great researchers aren't always great writers, and I know as much as anyone that writing can be hard, so having a program that helps with writing can be a very good thing -- as long as the work is checked very carefully.)

Other research has shown how AI can go through a large database to find answers to questions. I sometimes write about research that looks back at thousands of patient records to find patterns about cancer. AI is perfect for that. It is built to go through a large database. 

It could also, in theory, be used to write up clinical notes -- that patient visit summary in your Electronic Records that describes what happened at your appointment. That could be a great thing, assuming the notes are checked to make sure they are accurate. (One of my previous oncologists retired because the Electronic Records were so overwhelming for him.)

But, of course, there are plenty of potential bad things that can happen, too. A patient who tries to find information on their disease by asking questions of ChatGPT is taking a huge chance. As much as there is good stuff online, there is loads and loads of crap. And remember, ChatGPT doesn't do a good job of evaluating the difference.

(If you are interested in a nightmare scenario of AI and Medicine, consider the novel The Algorithm Will See You Now by JL Lycette. The author is an oncologist who started writing to help with the burnout from her job. She writes essays and fiction, and this novel is a medical thriller involving AI. Please remember it is SPECULATIVE FICTION, not real life. But as a doctor, she has some fears about technology and medicine, and this is her way of thinking about them.)

And I think that is ultimately the danger of Artificial Intelligence, in medicine and in other areas -- it takes out the human element. It's fun to play with ChatGPT, but when it becomes a replacement for the human element, then that's where the problems begin. Last month, I saw a news story about how some doctors are using ChatGPT to find "compassionate" ways to break bad news to patients.

That strikes me as problematic, if it's done poorly. I'm sure we all know doctors who are so robotic that a computer would show more compassion than they do. Breaking bad news is definitely a skill that some have and some don't (I know this all too well). But if ChatGPT helps a doctor figure out the best way to do it, maybe that's not a bad thing?

The problem is, because it is predictive, the AI program will kind of look at what is typically done in these situations, and offer that as a suggestion. But because delivering bad news often isn't done very well, there's a chance that the advice the doctor gets won't be very good either. It seems to me that the best way to use it is as a starting point. Advice about bad news is often to give it indirectly -- ease into it instead of coming right out and saying it. But a doctor hopefully knows a patient well enough to know whether that person wants the news broken gently, or wants it to be direct. ChatGPT can't tell a doctor that. Only spending some time with the patient can do that.

It's a complex topic. I suspect we're going to have to deal with AI as patients at some point, if we haven't already. My prediction about all of this is that the excitement will die down a little bit as problems emerge, and my hope is that we ultimately end up valuing the human element a little bit more. AI can't provide the kind of emotion that goes into great writing, great art -- and great medicine. And I hope we all insist that it never does.

I'd love to hear your thoughts about this -- and your experiences with AI, if you have any.


3 comments:

Anonymous said...

I've been experimenting a lot with GPT-4 lately. The focus, however, was not so much on my follicular lymphoma, but rather on having excerpts from books explained to me in more detail. Regarding FL, ChatGPT didn't produce anything that one couldn't have found through a Google search.

Should AI make doctors' work easier, that would obviously be a positive thing. On the other hand, it doesn't necessarily mean that a patient gets more time for a conversation with their doctor. After all, economically speaking, the doctor could treat even more patients, since ChatGPT would be taking over some of the work.

What really interests me is the extent to which AI can be used in pharmaceutical research to bring new drugs to the market. Will AI save us? I believe that's unlikely, but it's still exciting to see what changes the future holds for us.

Anonymous said...

Hey Bob

I worry that silicon-based machines (AI) will take over and control humans, much like they did in the movie "Terminator."

William

(P.S. - "I'll be back!")

Lymphomaniac said...

William, I've heard "I'll be back" enough times from actual doctors, who then make me wait for whatever they promised me. So maybe an efficient robot wouldn't be such a bad thing?

And Anonymous, I agree that use by pharma or for other purposes would be wonderful, though at the moment I'm still skeptical of how much of an improvement it would bring. My concern now is also its "efficiency quotient." I think there is already some AI being used in lots of places in our lives (like Alexa and Siri), and I'd hate to get to a point where we talk into our phones and say "Siri, do I have cancer?" But I'm sure for lots of people, such a thing would be seen as a massive success. [Those people a) would like to make money off of it, and b) probably don't see an oncologist.]
Thanks for your thoughts.
Bob