Digital harms are a modern determinant of health - and a population health issue


The RCGP has published a position statement on digital harms and children, recognising the growing impact of digital environments on the health, wellbeing and development of children and young people.

To coincide, an abridged version of the following comment piece by RCGP Chair Professor Victoria Tzortziou Brown has been published in the Daily Mail today.


When it comes to health online, trust matters more than ever

Access to information has never been easier. But neither has access to misinformation, and for many people it is increasingly difficult to know who, or what, to trust.

As a GP, and also as a parent, this is something I see both professionally and personally. Whatever safeguards families put in place to keep children safe online, once a young person has access to a smartphone, parental controls can only ever go so far.

At the extreme end, we know that easy access to harmful online content can have serious, even fatal, consequences. Advice promoting suicide or self-harm, the glorification of extreme sexual practices, and reports of unhealthy emotional relationships with chatbots rightly attract concern. The relentless nature of social media and its impact on mental health are also well-documented.

But beyond these headline-grabbing examples lies a quieter, more pervasive risk: health misinformation and disinformation.

Online platforms are designed to show users more of what they have already engaged with. Algorithms repeatedly expose people to similar content, reinforcing messages whether they are accurate or not. When misleading health information is amplified in this way, the consequences can be genuinely dangerous.

This challenge is being intensified by the rapid growth of artificial intelligence. AI has enormous potential benefits for healthcare, including general practice, and it will play a role in improving future care. But many platforms now default to AI-generated answers to health questions, and these tools are only as good as the information they are trained on and the safeguards governing how it is presented.

In our surgeries, we are increasingly seeing patients who have received inaccurate, confusing or potentially harmful advice from AI tools in response to ordinary health concerns. These systems can produce confident, polished answers that sound medically authoritative, but often aren’t. They have no clinical judgement, no understanding of an individual’s medical history, and no ability to recognise subtle warning signs. This can lead to missed diagnoses, unsafe self-treatment, unnecessary anxiety, or dangerous delays in seeking help.

The idea of “Dr Google” has long divided health professionals. It is positive for patients to be interested in their own health, and the internet can support appropriate self-care. But safe healthcare is not about information alone. It is about context, empathy, reassurance and understanding. An algorithm cannot weigh risk or respond to uncertainty in the way a GP does every day.

The uncomfortable reality is that much of the health information people encounter online, including some AI-generated material, does not meet the standards of evidence and clinical reliability patients deserve.

There is good health information online. Trusted sources such as the NHS website provide reviewed, evidence-based guidance that can help people understand symptoms and decide when professional help is needed. Initiatives by technology companies to promote verified health content are welcome and should go further.

When it comes to health decisions, trust is essential. That trust is undermined when inaccurate or misleading information is amplified at scale. With reach comes responsibility. Accuracy, transparency and accountability must matter whenever health information is shared widely.

Technology has already achieved extraordinary things, and its potential in healthcare is immense. We should embrace innovation, but we must do so safely and responsibly. That means clearer standards for online health information, stronger expectations on platforms to prioritise reliable sources, and better public understanding of the limits of AI-generated advice.

Above all, we must not lose sight of what good healthcare depends on. Every day in our surgeries, GPs combine scientific knowledge with compassion, judgement and continuity of care to keep patients safe. Technology can support that work, but it cannot replace the human connection at its heart. 

The Royal College of GPs recognises digital harm, in its various forms, as a modern determinant of health. Addressing this, particularly with regard to children, and protecting trust in health information should be treated as a public health priority.

Further information

RCGP press office: 0203 188 7659
press@rcgp.org.uk

Notes to editors

The Royal College of General Practitioners is a network of more than 54,000 family doctors working to improve care for patients. We work to encourage and maintain the highest standards of general medical practice and act as the voice of GPs on education, training, research and clinical standards.