A new study published in Public Health Reports, “Evaluation of Generative Artificial Intelligence Safeguards Against the Creation of Images and Videos Harmful to Public Health,” examines the growing role of generative artificial intelligence in shaping health information and raises concerns about the accuracy and safety of AI-generated content. Evaluating 12 widely available image and video generation tools, researchers found that more than half of the outputs produced in response to health-related prompts contained potentially harmful or misleading messages.
The study identified troubling patterns, including content that normalized vaping, stigmatized individuals with obesity, or downplayed health risks during pregnancy. Performance varied significantly across platforms: some tools consistently produced unsafe content, while others demonstrated more effective safeguards. These inconsistencies point to a rapidly evolving digital landscape in which the quality and reliability of health information can vary widely depending on the technology used.
The findings underscore the need for clearer standards, stronger oversight, and increased engagement from the public health community. As generative AI continues to expand, public health professionals play a critical role in ensuring that emerging technologies support accurate, ethical, and evidence-based communication.
Access the full article here.