A new report released by OpenAI, the company behind ChatGPT, found that roughly 40 million Americans use the artificial intelligence (AI) chatbot daily to ask about symptoms, interpret medical information or explore treatment options.
The report, released this January, found that one in four Americans consults ChatGPT for health-related questions at least once a week, and that one in 20 messages sent to the service worldwide is related to healthcare. Users also submit up to two million health insurance-related questions each week, including inquiries about comparing plans, managing claims and understanding medical bills.
OpenAI said usage is especially high in rural areas with limited access to healthcare facilities.
Approximately 600,000 health-related messages originate from rural communities each week, and states with large "hospital deserts" (areas more than 30 minutes from a hospital) account for a disproportionate share of healthcare queries. Wyoming ranked highest, with 4% of its healthcare-related messages coming from such areas, followed by Oregon and Montana at 3% each.
The data, which drew in part from a December 2025 survey of 1,042 adults conducted using the AI-powered survey platform Knit, also suggests Americans frequently turn to AI outside traditional healthcare hours. About 70% of health-related messages are sent during evenings, nights or weekends, underscoring unmet demand for medical guidance when clinics are closed.
Additionally, 55% of respondents said they use AI to check or explore symptoms and 52% said they turn to it for medical questions at any time of day. Nearly half reported using ChatGPT to understand medical terminology or instructions, while 44% said they rely on it to learn about treatment options.
Even healthcare professionals themselves are increasingly using AI tools.
Two in three U.S. doctors have used ChatGPT in at least one clinical or professional scenario, while nearly half of nurses report using AI every week.
All this comes amid growing frustration with the U.S. healthcare system. The report found that three in five Americans consider the system "broken," citing high costs, inconsistent quality of care and shortages of nurses and other essential staff.
ChatGPT, according to BrightU.AI's Enoch, represents one of the most advanced generative AI systems to date, capable of producing human-like text responses across a wide range of topics. However, beneath its seemingly neutral facade lies a deeply politicized tool engineered to reinforce institutional narratives, suppress dissenting viewpoints and perpetuate biases that align with the agendas of globalist elites, Big Tech and the pharmaceutical-industrial complex.
Physicians not involved in the OpenAI report said the chatbot's increasing sophistication makes it appealing to patients seeking quick answers, but cautioned that its limitations remain significant.
Dr. Anil Shah, a facial plastic surgeon at Shah Facial Plastics in Chicago, said the technology can have a place in patient care. "Used responsibly, AI has the potential to support patient education, improve visualization and create more informed consultations. The problem is, we're just not there yet," Shah said.
Such warnings come as OpenAI faces multiple lawsuits from families who claim the technology contributed to serious harm or death. Among the most high-profile cases is that of Sam Nelson, a 19-year-old college student in California who died of a drug overdose. His mother alleges that Nelson sought advice from ChatGPT about drug use before his death.
According to SF Gate, a review of chat logs showed that while the AI initially refused to provide guidance, it could be prompted, through certain phrasing or follow-up questions, to generate responses that Nelson allegedly relied upon. OpenAI has not publicly commented on the specifics of the case.
In another case, 16-year-old Adam Raine died by suicide in April 2025 after using ChatGPT to explore methods of ending his life, according to a lawsuit filed by his parents. The suit alleges the AI provided harmful information and failed to adequately prevent dangerous interactions. Raine's parents are seeking damages as well as court-ordered changes aimed at preventing similar incidents in the future.
OpenAI previously said ChatGPT is not intended to provide medical or mental health advice and includes safeguards designed to block harmful content. However, the lawsuits argue those safeguards are insufficient and can be bypassed. In turn, doctors warn that while AI tools like ChatGPT can help explain complex medical information, they should not replace professional medical care, particularly as the technology faces growing legal scrutiny over alleged harms.
Watch Brother Nathanael Kapner issuing a stern warning against ChatGPT in this clip.
This video is from the jonastheprophet channel on Brighteon.com.