In the United States, people without adequate healthcare access (or without access in the moment) often turn to ChatGPT for help with diagnoses, symptom management, and other healthcare-related questions.
Why should educators be aware of this research?
We don’t know how adolescents are using AI with regard to healthcare, or how much they trust the information they receive. Nor do we know what proportion of the healthcare information chatbots provide is accurate.
Students who over-rely on AI for medical advice may be in real danger. On the other hand, they may be willing to begin investigating health issues with an AI that they would not feel comfortable raising through other channels.
Students need to understand both the training bias and the hallucinatory nature of AI. It can be a serious tool for gathering information, so long as one understands where that information comes from. The worry is students relying on AI to “make decisions” for them.
Further, the AI may try to please the student user by suggesting that symptoms are not indicative of major health problems, or by doing the opposite and exaggerating the risk.
This is further evidence that teachers have a responsibility to teach students how to use AI responsibly.