The American Medical Association advises against physicians using chatbots such as ChatGPT to assist with clinical decisions.

Additionally, prominent AI companies, such as OpenAI, caution against making diagnoses based on the outputs of their chatbots.

Adam Mahdi, director of graduate studies at the Oxford Internet Institute, said, “We would suggest that individuals make healthcare decisions by utilizing reliable sources of information.”

“The complexity of interacting with human users is not reflected in the current evaluation methods for chatbots.”

Chatbot systems, he argued, should be tested in the real world before deployment, much as new medications undergo clinical trials.

With costs rising and waiting lists lengthening in overburdened healthcare systems, many people are turning to AI-powered chatbots such as ChatGPT for medical self-diagnosis.

According to a recent survey, about one in six American adults already use chatbots for health advice at least once a month.

However, a recent Oxford-led study suggests that placing too much trust in chatbots’ outputs can be risky.

That is because people may not know what information to give a chatbot in order to receive the most accurate health recommendations.

Mahdi added, “Participants who made decisions using chatbots did not perform any better than those who relied on conventional methods, such as online searches or their own judgment.”

For the study, the authors recruited roughly 1,300 people in the United Kingdom and gave them medical scenarios written by a group of physicians.

Participants were tasked with identifying potential health conditions in the scenarios and deciding on possible courses of action (e.g., seeing a doctor or going to the hospital), using both chatbots and their own methods.

Participants used GPT-4o, the default AI model powering ChatGPT, as well as Cohere’s Command R+ and Meta’s Llama 3, which previously underpinned the company’s Meta AI assistant.

According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also more likely to underestimate the severity of the conditions they did identify.

Mahdi said that participants often omitted essential details when interacting with the chatbots, or received answers that were difficult to interpret.

“The chatbots frequently mixed good and bad recommendations in their responses,” he continued.


The findings arrive as technology companies increasingly tout AI’s potential to improve health outcomes.

Apple is reportedly developing an AI tool that can offer guidance on sleep, nutrition, and exercise.

Amazon is exploring an AI-based approach to analyzing medical databases for “social determinants of health.”

In addition, Microsoft is helping develop AI to filter messages sent from patients to care providers.
