The title of this article may be a bit ambitious, but, like many of you, I have been trying to understand what is meant by artificial intelligence (AI), what it does, and what some of its concerns and drawbacks are, particularly as they apply to medicine. Here is some of what I now know.
The most common AI platform is ChatGPT (Chat Generative Pre-trained Transformer). Technically, it is known as a large language model. These programs learn to associate patterns in data, much as physicians learn to associate certain signs and symptoms with diseases. ChatGPT does not learn “on the fly”; it must periodically be retrained from the ground up.
Responses to medical questions from AI have been found to be less useful than human-generated recommendations. This tells me that using AI to inform patients about their condition or disease is not ready for prime time, especially when you consider that ChatGPT has a tendency to “hallucinate,” i.e., to generate factually incorrect or nonsensical output or to fabricate information. This is not what most patients are after, though they may believe it because “the computer said so.” Models like ChatGPT generate output on a wide range of topics but with no consideration for consistency or transparency.
I think large language models will eventually help in many areas of medicine, but, for now, I would be skeptical of any information you obtain through them. Always confirm that information with your doctor.