The World Health Organization highlights the critical need for safe and ethical applications of AI in the promotion of health and well-being.
In Brief
The World Health Organization issued a statement advocating for the responsible and ethical application of AI and LLMs within the healthcare sector.
The WHO cautions that rushing to implement unproven technologies could result in errors by healthcare professionals, potentially putting patients at risk.
The organization recommends conducting a comprehensive evaluation of the actual benefits AI can bring to the healthcare sector prior to widespread adoption.

The WHO has engaged in discussions about artificial intelligence and large language models, urging their ethical and safe use to protect individuals while promoting public welfare.
As generative AI systems like OpenAI’s ChatGPT and Google’s Bard continue to evolve rapidly, artificial intelligence holds significant potential to transform the healthcare industry. By leveraging vast amounts of clinical data, AI can facilitate the discovery of innovative drugs and treatment options while also aiding physicians in crafting individualized care strategies.
AI technologies can assist in early disease detection and suggest preventive actions even before symptoms manifest; recently, scientists developed an AI model that effectively predicts individuals' susceptibility to pancreatic cancer.
In light of ongoing debates regarding AI regulation, the WHO expressed its concerns about potential misuse of the technology and is advocating for the establishment of protective measures to safeguard patient welfare and the integrity of the healthcare system.
The organization points out that it is vital to scrutinize the risks of using LLMs to broaden access to health information, as decision-support tools, or to improve diagnostic capacity in under-resourced settings. Although the WHO supports leveraging new technologies, it is concerned that the due diligence normally applied to such tools is not being exercised consistently with LLMs.
The WHO identifies the need for stringent oversight to ensure AI and LLMs are used ethically and effectively, highlighting issues such as:
- Training AI with biased data can lead to incorrect or misleading information, which could jeopardize health, fairness, and inclusiveness.
- LLMs can produce responses that sound credible to users but may contain serious inaccuracies and errors.
- Some LLMs are trained on data collected without consent and may fail to safeguard private information, including sensitive health details shared by users.
- These models can also be exploited to create and spread convincing but false information in various formats, making it hard for the public to differentiate it from trustworthy health content.
While the WHO encourages the use of cutting-edge tools like AI and digital health to enhance human well-being, it calls on policymakers to prioritize the safety and rights of patients as LLMs are commercialized.
The WHO advises that these pressing concerns be adequately addressed, and it recommends a thorough examination of the real benefits AI can provide to the healthcare sector before it is extensively implemented. Back in 2021, the organization released guidance titled 'Ethics and Governance of Artificial Intelligence for Health', asserting that the development of AI technologies should prioritize ethics and human rights throughout their design and application.