There’s a chance that by the time you read this blog, it’ll be out of date. That’s how quickly AI is moving. The latest report from TechUK has outlined how much value AI could add to the UK economy if the Government provides regulatory clarity (£400 billion by 2030, if you haven’t seen). However, views towards AI technologies are nuanced. People see the benefits but are concerned about a loss of human interaction, and there’s mistrust. Communications around the endless possibilities of AI versus its risks are certainly coming thick and fast.
For those working within the healthcare sector, this superfast pace of change is alien. Yes, we got the COVID vaccine in record time, but traditional scientific research takes years.
Therefore, we already have a challenge in communications: reconciling the pace at which AI is moving with the reality for patients. Patients (who are also people) are currently seeing Midjourney and ChatGPT updates arrive over mere weeks, but the impact of AI on their condition is part of the longer game.
That’s not to dim the awe-inspiring potential of AI, which is already revolutionising patient care, diagnosis, and treatment. A recent report by the NIHR summarised AI’s ’10 promising interventions for healthcare’. This included the fact that AI will soon be used by doctors to diagnose heart attacks more quickly and accurately, which is pretty amazing stuff.
Nor should we forget the ‘behind the scenes’ but equally important uses of AI in healthcare. Booking appointments, internal comms and planning surgical lists could all deliver significant economic and operational efficiencies.
However, while we now understand that AI doesn’t always mean super cyborgs, with our communications hats on there’s no getting away from where we are and where we need to get to. AI carries real risks, and it would be foolish to gloss over these within your communications strategy and your long-term goal of securing trust with patients.
In fact, as Understanding Patient Data says, ‘it’s better to start with being trustworthy than “building trust”’, considering elements such as accountability and transparency in how we speak about patient data.
What we can do is address misconceptions and concerns head-on. By transparently discussing AI’s limitations, ethical considerations, and ongoing safety measures, we own the conversation and demonstrate openness in our dialogue.
Of course, fundamental to this approach is storytelling; something healthcare communications professionals are no strangers to. The personal experiences and journeys of patients and healthcare professionals are our bread and butter, and they’re how we can make AI relatable, emotionally engaging, and impactful.
Getting this balance right can be hard. Of course, you want to shout from the rooftops about the amazing potential of your AI-powered research. But there has always been a responsibility when it comes to medical reporting, and we’re also encountering new reputational risks as AI takes us down paths unknown.
There’s no doubt we’re in a life-changing moment, which we should all be curious about. And really, although it may feel more daunting, it does require the same approach we would take with any healthcare communications strategy: goals, key messaging segmented by audience, risk analysis and channel activity.
By highlighting the positive impact of AI while addressing potential concerns, we can create a future where AI and human expertise work hand in hand.
If you’d like to discuss how best to communicate your healthcare stories, get in touch with a member of our specialist healthcare PR team today.