
It’s an increasingly common scenario. You fill in an online form to request an appointment with a doctor, and back comes a link asking you to upload a photo of your ailment. You pick up your phone, a couple of clicks and it’s sent. While you wait for a call back, your GP is studying your image.
But do you look pixelated? Have the colours been adjusted? Has the phone erased a rash or smoothed your skin? Does the doctor see you as you really are, or as your phone camera thinks you should be?
New research from our team suggests the answer is often the latter. Smartphone cameras and software routinely alter images in ways that can mislead doctors, and in some cases, put patients at risk of misdiagnosis.
Remote consultations are now routine in many health systems. No longer an emergency pandemic stopgap, general practice is increasingly offered in a “hybrid” way, with some patients seen in person and many cared for remotely.
Across Australia, North America and parts of Scandinavia, video appointments are commonplace. In the UK, patients are often asked to upload photos through online platforms. Photos are used to diagnose conditions such as eczema or warts, assess responses to treatment and gauge how unwell someone appears, informing decisions about whether, and how urgently, they need to be seen in person.
For many people it’s a quick and convenient way to receive care, reducing travel times and avoiding time spent waiting for a call to be answered or hanging around in a germ-packed doctor’s waiting room.
Safety incidents in remote consultations remain relatively rare. But previous research by members of our team showed doctors sometimes miss important clinical signs, leading to misdiagnosis or delayed care. Examples include mistaking a malignant skin lesion for something benign, or failing to recognise colour changes, like jaundice or the blue tinge of low oxygen levels (cyanosis).
Doctors often blame themselves when this happens. Our latest research suggests something else is at play. We found that automatic image processing and compression on smartphones can distort clinically important visual information. Colour signals shift. Fine detail disappears. Subtle changes in the skin become harder to detect.
In other words, it isn’t just human error. It’s the technology.
Smartphones are designed to make photos look good, not to preserve medical accuracy. They automatically adjust exposure, balance colours, sharpen edges and compress files. These features are perfect for social media, but problematic for healthcare.
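For readers who want to see this effect for themselves, here is a minimal sketch in Python using the Pillow imaging library. It builds a tiny image containing two skin-like colours that differ only slightly, then saves it with aggressive JPEG compression, as a messaging app or upload platform might. The specific colour values and quality setting are illustrative assumptions, not figures from the research.

```python
# Illustrative sketch (not the researchers' method): show how heavy
# JPEG compression can shift a subtle colour difference.
# Requires the Pillow library. Colour values are hypothetical,
# chosen to mimic faint redness against surrounding skin tone.
import io
from PIL import Image

skin = (224, 172, 150)  # hypothetical background skin tone
rash = (232, 166, 150)  # hypothetical, slightly redder patch

# Left half "skin", right half "rash".
img = Image.new("RGB", (16, 16), skin)
for x in range(8, 16):
    for y in range(16):
        img.putpixel((x, y), rash)

# Simulate aggressive compression before the image reaches a doctor.
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=20)
out = Image.open(io.BytesIO(buf.getvalue()))

def diff(a, b):
    """Sum of absolute per-channel differences between two RGB colours."""
    return sum(abs(p - q) for p, q in zip(a, b))

# Compare the colour gap before and after compression, sampling
# well away from the boundary between the two patches.
before = diff(skin, rash)
after = diff(out.getpixel((2, 8)), out.getpixel((13, 8)))
print("colour difference before:", before, "after:", after)
```

Running this typically shows the colour gap changing after compression; exactly how much depends on the codec and quality setting, which is the point: the photo the doctor receives is not a faithful record of what the camera saw.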
Lighting conditions in people’s homes add another layer of uncertainty. So does the quality of the screen at the doctor’s end. Night-mode settings, poor quality displays and poor calibration can all change how an image appears. Together, these factors can make people look healthier than they really are, flatten rashes, soften swelling or alter the colour of lesions.
Some patients are particularly vulnerable. We already know that some medical devices, such as pulse oximeters, perform less accurately in people with darker skin. Image distortion risks compounding these inequities, because people with darker skin are more likely to have clinical findings missed.
Findings are also more likely to be missed when clinical signs are subtle, or when AI-based filters are used. People with lower digital literacy, or those who struggle to communicate their symptoms clearly, are also at greater risk when something is overlooked.
So, what can be done?
As patients, there are simple steps that help. Turn off filters. Use good lighting, daylight if possible. Check that the photo resembles what you actually see before sending it. Include a written description alongside the image. And if it seems your doctor is seeing something different from you, say so.
Medical teams also need to be aware of the limits of patient-generated images. That means checking back: “I can’t see any changes here. Are you noticing something different?” And when there’s doubt, arranging an in-person review.
Screens should be large enough and of sufficient quality. Night-mode settings should be switched off. Image uncertainty should be treated like any other clinical uncertainty.
But is it fair to leave all this to patients and already stretched medical teams? There’s a strong case for wider change. Smartphones could include a dedicated healthcare mode that disables filters and warns users when image quality is too poor for clinical decisions. Video platforms and upload systems could flag inadequate lighting, low resolution or excessive compression before images are sent.
As digital consultations become embedded in everyday care, image quality needs to be treated as part of patient safety infrastructure, not just a technical detail. Smartphones were built to make us look good. Medicine requires something different: accuracy.
If the health service is going to rely on smartphone cameras for clinical decisions, how those cameras are designed, regulated and used may need a rethink. Because convenience should never come at the cost of care.
Rebecca Payne receives funding from a University of Oxford Clarendon-Reuben Scholarship and works on a project funded by Health and Care Research Wales
Zengbo Wang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
* This article was originally published at The Conversation