AI chatbots, including ChatGPT, are delivering incorrect and out-of-date information about certain cancers, according to a charity. The warning follows a survey showing that nearly one in 10 women turn to chatbots when experiencing gynecological health worries.
Research from The Eve Appeal revealed that ChatGPT also failed to recognize ovarian and vulval cancers as possible causes of key symptoms, such as ongoing bloating and itching. Experts warned that relying on the chatbot to assess health concerns could delay women from seeking medical attention.
Gynecological cancers affecting the womb, ovaries, cervix, vulva, and vagina impact approximately 22,000 women throughout the UK annually, resulting in nearly 8,000 deaths. Warning signs can include abnormal bleeding, ongoing bloating, unusual discharge, and continuing vulval or vaginal itching.
Survey Highlights Widespread Use of AI for Health Advice
A YouGov poll of more than 2,000 British women, commissioned by The Eve Appeal for its Get Lippy campaign, found that almost one in 10 had consulted AI platforms about gynecological health concerns. Nearly a quarter said the chatbot's responses reassured them that they need not be concerned, while 28% said the responses prompted them to consult a doctor.
The Eve Appeal further found that some of the information provided by ChatGPT was irrelevant to the UK healthcare system or inconsistent with NHS guidance, and that some advice was outdated or inaccurate.
Charity Leader Expresses Concern Over Misinformation
Athena Lamnisos from the charity said: "We regularly speak to women about their gynecological health concerns and to the doctors who see them. We are increasingly concerned about the rise of misinformation and how that will impact people's health. We want everyone to take charge of their health, and that needs to start with access to information they can trust."
"Early diagnosis is key to improving outcomes with gynecological cancer. Our concern with the rise of ChatGPT being used to seek health information is that someone will be much less likely to see a doctor quickly if they feel reassured it isn't something serious, or if they're given advice to follow that may take weeks or months to see if it helps," she added.
Healthcare Professionals Warn of Risks
GP Dr. Aziza Sesay said that AI tools "can only provide information they have access to. When they too have policies that inadvertently censor women's health, vital information can be missed. This, and the fact information can be hallucinated, could lead to people feeling at ease when they should have seen a doctor. So tackling this problem is paramount, and it is why so many healthcare professionals are online sharing evidence-based, accurate information that truly does save lives."
Dr. Alison Wright, president of the Royal College of Obstetricians and Gynaecologists, said: "We are hearing that women are increasingly going online for health information and reassurance, so The Eve Appeal's finding that AI tools such as ChatGPT may miss gynecological cancer symptoms is very concerning, particularly when encouragement from those platforms to seek medical advice is lacking."
"Caution should generally be exercised when using AI and online platforms, especially regarding medical conditions. Information from unverified sources can be inaccurate and may actually delay women from seeking appropriate care," she added.
Dr. Wright strongly encouraged anyone with gynecological health concerns to seek reliable, evidence-based health information through trusted sources such as the RCOG website, the NHS website, charities such as The Eve Appeal, or speaking with a healthcare professional.
Patient Information Forum Urges Reliable Sources
Sophie Randall, director of the Patient Information Forum, similarly urged those seeking health information to seek out reputable, trustworthy sources. "Organizations put a huge amount of time and effort into producing information that meets our criteria, and we would like to see that content being used properly in models like ChatGPT," she said.
The findings highlight the growing concern over the use of AI for medical advice and the potential for misinformation to harm public health.