AI in NHS care: what’s the impact, and what do people think?

As the Government attempts to ‘make the NHS the most AI-enabled care system in the world’, the public’s early experiences suggest AI may be causing as many issues as it aims to solve.

The public, policymakers and the tech industry are driving greater AI use in the NHS. But is AI living up to aspirations in the 10-Year Health Plan to ‘help patients better navigate the health service’, ‘end the need for tasks like clinical note taking’, and free up clinicians ‘to better engage with patients’?

To understand whether AI is improving care, we reviewed feedback we heard from the public about the technology in 2025.

How is AI use in healthcare growing?

The 10-Year Health Plan ambitiously aims to ‘make the NHS the most AI-enabled care system in the world’. Earlier in January, NHS England urged health providers to adopt AI notetaking and published a registry of the first 19 ambient voice technology (AVT) suppliers that have self-certified they meet core requirements. AVTs – or AI scribes – use AI to generate real-time transcriptions and summaries of conversations between clinicians and patients.

And AI is playing a role in people’s care beyond the NHS. OpenAI, the company behind ChatGPT, has announced plans to launch ChatGPT Health to meet demand from an estimated 230 million people globally who ask ChatGPT health- and wellness-related questions each week.

What did we look at?

We analysed how and why people used AI for health information and advice, and people’s attitudes to and experiences of AI use in NHS administration. We saw a notable increase in stories referencing AI in the second half of 2025, with 89% of the year’s feedback on AI arriving between 1 July and 31 December.

This latest analysis follows findings we published in November, showing that one in ten men had turned to AI tools such as ChatGPT for health information.

Why are people turning to AI for health information and advice?

In our review, we heard from people who had actively used AI for health information and/or advice, as well as people who spoke hypothetically about the perceived benefits of AI tools over NHS consultations.

People were generally using AI to support their health because of difficulty accessing NHS services or poor experiences with the NHS. This included:

  • Someone who turned to ChatGPT following frustration trying to get advice from NHS 111
  • One person who was struggling to get a diagnosis from their GP
  • Someone who used AI tools for advice because of a lack of physiotherapy appointments

“The earliest appointment was in 12 days, so I went to ChatGPT for advice”

Story shared by Healthwatch Somerset

“Trying to talk to NHS 111 was exhausting, so in desperation, I put my info to ChatGPT. My age, my weight, etc, and it gave me the advice I needed like a good pharmacist”

Story shared by Healthwatch Hertfordshire

People were generally happy or impressed with the information and advice AI tools provided. They commented that these tools often gave more personalised advice and more detailed information than some NHS online services. 

There were, however, some concerning reports of AI tools providing inaccurate information. 

One person consulted ChatGPT about a rash and accompanying pain. ChatGPT suggested they might have shingles, mild cases of which can be monitored without treatment. The person was eventually diagnosed with Lyme disease, which should be treated as soon as possible, as delayed treatment increases the risk of complications. Although they were able to get the treatment they needed the same day, the outcome could easily have been different.

In another example, a nurse personally tested an AI tool to see what it advised when they were unwell. The AI tool advised blood tests, though the nurse was certain they could manage their symptoms at home.

“I inputted [sic] my symptoms, but it didn't ask me the questions I thought it would ask…It advised me I needed a blood test, but I knew I didn’t; it could all be done with self-care. I worry AI will be sending patients for tests they don't need.”

Story shared by Healthwatch Calderdale

Is AI helping with booking and other NHS admin?

We also heard about people’s experiences of AI when booking appointments and requesting or changing medication. These experiences were generally poor.

People told us AI systems could not understand their health or care needs and did not direct them appropriately. This was more common for people with multiple or complex conditions.

“Getting to that via the NHS or the surgery AI IT was tough. I'm quite smart and IT literate, but I found it took a while and several tries to get even that done…Booked and went today. Not a trace of my appt. Had to re-book.”

Story shared by Healthwatch Hertfordshire

People also highlighted problems with WhatsApp AI receptionists, which they found difficult to navigate and which occasionally failed to transfer the booking or information to the service.

"I can’t get a GP appointment because I have to book through WhatsApp using AI options that don’t give me options to say what I need. [They] automatically close the request as actioned.”

Story shared by Healthwatch Tameside

People also told us requesting medication, or changes to medication, through AI systems was challenging. In one case, someone requested a change of medication due to unpleasant side effects but was prescribed the same medication again, delaying their treatment. Another person described how the quantity of their medication had changed since they began requesting it through AI, even though the prescription itself had not.

“The AI system is not able to work anything out for itself without being prompted… [T]he AI system is the one left to work out my medication requirements, and it's obvious that it is not that intelligent as it is unable to read detailed individual notes, [or] work out individual medication dispensary information.”

Story shared by Healthwatch Lambeth

Is AI accessible for people with communication needs?

People raised concerns around AI and accessibility. One person registered as blind told us some AI chatbots are not compatible with screen-reading software. People with autism and/or learning difficulties also found AI systems couldn’t understand them or their requests. 

One person noted that AI systems would be inaccessible for some vulnerable groups (e.g. digitally excluded people, people with sensory or cognitive impairments, people with communication needs, and people with mental health issues).

“[A]ll I want to do is order a prescription, but because of my learning difficulties and speech issues, this AI system won't understand me like a real human.”

Story shared by Healthwatch England

Do people feel AI in healthcare is safe or desirable?

One person shared their concern that AI scribes used to generate clinical summaries risk important health information being missed.

“The new Anima triaging system is awful. Several times it has been clear the information I've entered did not end up being seen by the GP I saw, which led to things being missed… The way information is then summarised by AI makes this a dangerous system… and the possibility of important clinical information being missed poses a real risk to patient safety.”

Story shared by Healthwatch Rotherham

In October 2025, Healthwatch Surrey published a report detailing feedback from 106 residents on attitudes towards online access to GP services, including the use of AI. It found people were generally happy for AI to be used for administrative tasks, but not for health-related issues or questions.

Survey respondents cited concerns about AI data quality and a lack of trust in a relatively new technology, as well as a preference for human interaction.

Healthwatch England has also observed mixed messaging from GPs and hospitals about patient consent. Some say they won’t seek consent before each consultation involving AI scribes. Others say they will always inform patients before or at the start of consultations to give them the option to opt out. 

What’s the future of AI in the NHS?

The Government is clear on the potential benefits of AI use in the NHS. It cites a Great Ormond Street study estimating that A&E departments could see 9,259 additional patients per day in the time saved by using AI scribes.

Our feedback suggests people are happy for services to use AI and other online systems when they work well and are not exclusionary. But we also heard concerns, including poor experiences of the effectiveness, accuracy and safety of AI in the NHS, and wider health misinformation on non-NHS platforms.

More widely, experts at the London School of Hygiene & Tropical Medicine have expressed concerns about using AI chatbots for health advice. In December, the Health Service Journal reported a case from the British Institute of Radiology in which a consultation recording stopped halfway through and the software “simply made up the second half based on what it had learned”.

Other organisations have raised concerns about informed consent to use patient data. The NHS must now keep pace with the rapid roll-out of AI and ensure guardrails are in place to prevent clinical harm or poor experiences. 

A new National Commission into the Regulation of AI in Healthcare is due to produce its recommendations this year. The Medicines and Healthcare products Regulatory Agency (MHRA), the drugs and devices safety body, established the commission last September.

We’ll be sharing our latest patient evidence with the commission via its national consultation.

What people have told us also shines a light on why they are turning to AI tools like ChatGPT. Underlying factors, such as a lack of timely GP appointments, must also be addressed to make services patient-friendly and easily accessible.

What are our recommended principles for AI use in the NHS?

  1. Transparency: The NHS must make clear to patients whenever AI is being used to support their care, and to what level. Is the AI performing low-functionality tasks or acting as a medical device that generates prompts for clinical decision-making, and does it store any personal data?
  2. Consent: NHS England and regulators should clarify if, when and how providers must seek patient consent for the use of AI – such as AI scribes – in their care, and how patients can opt out.
  3. Safety: NHS England, regulators and safety bodies should clarify their expectations of the NHS to systematically collect and review errors made by AI, and should encourage direct reporting from patients.
  4. Co-design: Policymakers and suppliers should involve patients in evaluating the effectiveness of new NHS AI tools to shape technological and policy improvements. They should also encourage private suppliers of platforms like ChatGPT Health to signpost users to trusted NHS information.
  5. Human backstops: The NHS should maintain investment in admin staff to ensure there are always people involved in booking and managing health appointments, especially for people who don’t use digital systems. Guidance should also make clear the role of clinicians in verifying any AI summaries, letters or other outputs before they are added to patient records.
