Monday, September 23, 2024

One-fifth of UK GPs are using AI in clinical practice, study shows

Dr. Blease, the study’s lead researcher from Harvard Medical School, emphasizes that AI tools in healthcare should not be adopted blindly.

A recent study published in BMJ Health & Care Informatics reveals that 20% of UK general practitioners (GPs) are now incorporating artificial intelligence (AI) tools like ChatGPT into their clinical practice. While AI holds potential benefits for administrative and diagnostic tasks, experts warn of the associated risks, including inaccuracies, biases, and patient privacy concerns.

Survey Insights 

In February 2024, researchers gathered responses from 1,006 UK GPs through an online survey distributed via the medical forum Doctors.net.uk. The study sought to gauge the extent to which AI-powered chatbots, such as ChatGPT, Bing AI, and Google's Bard, are being used in clinical settings. The findings revealed that one-fifth of respondents had employed these tools in various aspects of their practice.

The most common application, reported by 29% of the GPs who used AI, was generating documentation after patient appointments. Close behind, 28% of these respondents said they used AI to help suggest a differential diagnosis, while 25% turned to it to recommend treatment options. A further 20% used AI to summarize patient information from previous medical records.

Administrative Aid  

The study indicates that GPs derive significant value from AI, especially for administrative tasks. With many healthcare professionals spending large portions of their time on paperwork, tools like ChatGPT offer relief by automating documentation and summarization. This administrative aid allows GPs to allocate more time to patient care, potentially alleviating some of the pressures they face due to staff shortages and growing patient demands.

Moreover, some GPs have found AI useful in supporting clinical reasoning. For example, the ability of AI tools to suggest differential diagnoses or treatment options can serve as a supplemental resource in complex cases. However, experts caution that these technologies should complement, rather than replace, professional expertise.

Risks: Inaccuracy, Bias, and Privacy Concerns

Despite the growing interest in AI within healthcare, the study authors caution against unregulated or unchecked use. One of the main concerns is the potential for AI tools to generate “hallucinations”—or false information—due to algorithmic limitations. These subtle inaccuracies could have severe consequences in clinical practice, especially when diagnosing or determining treatment paths.

Furthermore, AI tools may harbor inherent biases, as the datasets they are trained on often reflect systemic inequalities. The biases embedded in AI can result in skewed medical recommendations, disproportionately affecting underrepresented or marginalized patient groups.

The issue of patient privacy is also a significant concern. Researchers highlight that it remains unclear how companies behind generative AI, such as OpenAI or Google, handle the data processed through these tools. GPs must remain vigilant about how patient information is managed to avoid violating confidentiality and data protection laws.

Calls for Regulation and Training

The lack of clear regulatory frameworks governing AI in healthcare has sparked debate. While AI presents opportunities to streamline tasks and support decision-making, healthcare professionals, including GPs, need robust guidelines to ensure safe and ethical use.

Dr. Charlotte Blease, the study’s lead researcher from Harvard Medical School, emphasizes that AI tools in healthcare should not be adopted blindly. “Doctors must be trained to critically appraise AI and understand both the benefits and risks,” she notes. These concerns echo sentiments raised by medical defense organizations, which warn GPs of the liability risks associated with relying on AI for clinical decisions.

Future of AI in General Practice

As AI technology continues to evolve, it is poised to play a larger role in healthcare. For now, experts agree that it can be a valuable tool for easing administrative burdens and supporting clinical decisions. However, clear regulations and further training are essential to minimize the risks associated with its use.

Professor Kamila Hawthorne, chair of the Royal College of GPs, advocates for strict regulation, stating that “AI must work alongside medical professionals, not replace them.” As AI tools become more integrated into healthcare, maintaining patient safety and privacy will be critical to their responsible use.