Artificial intelligence is no longer a future consideration for public health. It is already shaping how health data are analyzed, how risks are predicted, how resources are allocated, and how decisions are made, often faster than our systems, policies, and workforces can respond.
The real question before us is not whether AI will influence public health. It already does. The question is whether public health will help shape how generative AI, agentic AI, and artificial general intelligence (AGI) are developed, governed, and applied, or whether those decisions will be made without us.
Public health has faced similar challenges before. Over the last century, we have navigated profound changes, from building sanitation systems and expanding vaccination programs to modernizing how we collect data and respond to emerging threats. Each advancement brought promise, but also risk. Each required leadership grounded in evidence, ethics, and an unwavering commitment to equity. Artificial intelligence is no different. What is different is the speed at which it is advancing and the scale of its potential consequences.
Used responsibly, AI can strengthen disease surveillance, improve emergency preparedness, accelerate research, and help identify patterns that save lives. Used without care—or without public health at the table—it can reinforce bias, widen inequities, erode trust, and make decisions about communities without their involvement or consent.
Technology itself is neither neutral nor inevitable. It reflects the values, assumptions, and priorities of those who design, deploy, and govern it. That is why this moment demands leadership from public health—and especially from academic public health.
Public Health Has a Responsibility to Lead
Academic public health plays a unique and essential role in shaping the future of the workforce, the evidence base, and the policies that protect communities. Schools and programs of public health educate the professionals who will work at the intersection of data, policy, systems, and communities. These graduates will be asked to evaluate AI tools, translate their outputs for decision-makers, and ensure that innovation serves the public good rather than undermining it.
If we fail to prepare them, we leave those responsibilities to others who may not share public health’s commitment to equity, transparency, accountability, and population-level impact.
For too long, conversations about artificial intelligence in higher education have focused narrowly on classroom use: whether students should be allowed to use AI tools, how to prevent misuse, or how to detect AI-generated work. Those questions matter, of course, but they are not sufficient.
The larger challenge is how we prepare public health professionals to understand, question, govern, and improve AI systems that increasingly influence health outcomes. AI literacy must become part of what it means to be a public health professional, alongside epidemiology, biostatistics, ethics, and systems thinking. This is not about training coders. It is about cultivating leaders who can critically assess AI’s role in decision-making and ensure that human judgment, community values, and public accountability remain at the forefront.
Innovation Without Equity Is Not Progress
AI is often described as objective or impartial, but public health professionals know better. Algorithms learn from data, and data reflect the structures and inequities of the world in which they are collected. Without deliberate design and governance, AI systems can amplify existing disparities, particularly for communities that have historically been underrepresented, misrepresented, or excluded.
Public health has a long history of addressing these realities. We understand that outcomes are shaped by social, economic, environmental, and political forces. We understand that trust is earned through transparency and engagement. And we understand that communities must be partners, not afterthoughts, in solutions designed to improve their health.
This perspective is urgently needed in AI discussions.
Who decides which data are used and which are ignored?
Whose outcomes are prioritized and whose risks are deemed acceptable?
Who is accountable when AI-driven decisions cause harm?
These are not technical questions alone. They are public health questions. And they require public health leadership.
ASPPH’s Commitment to Responsible and Ethical AI
Recognizing both the promise and the risk of artificial intelligence, ASPPH launched the AI for Public Health initiative to help academic public health meet this moment with intention and clarity. This work reflects a simple but critical belief: AI should strengthen public health’s mission, not redefine it from the outside.
Through this initiative, ASPPH is working with its member institutions and partners to promote the responsible, ethical, and equitable integration of AI across education, research, and practice. Our focus is not on chasing the newest technology or adopting tools for their own sake. It is on building readiness and ensuring that public health institutions have the knowledge, policies, and capacity to engage AI thoughtfully and responsibly.
As part of this effort, ASPPH convened a multidisciplinary Task Force on the Responsible and Ethical Use of AI in Public Health, bringing together leaders from academia, practice, policy, and related fields. The task force was charged with a clear goal: to define what responsible AI means in public health, grounded in our values and informed by real-world needs.
The result of this work will be a forthcoming AI Task Force report, designed to provide guidance and recommendations for schools and programs of public health as they navigate this rapidly evolving landscape.
Setting the Stage for the AI Task Force Report
The upcoming report is not a technical manual or a one-size-fits-all prescription. Instead, it is intended to serve as a practical, values-driven framework to help institutions ask the right questions and make informed decisions.
The report examines four interconnected focus areas that are essential to institutional readiness:
- Teaching and Learning
How do we integrate AI into public health education in ways that enhance learning, strengthen critical thinking, and prepare students for a technology-enabled workforce without compromising academic integrity or equity?
- Education and Workforce Preparation
What foundational competencies do future public health professionals need to engage with AI responsibly across sectors, roles, and career paths? How do we ensure equitable access to AI literacy for all students, regardless of institution or background?
- Practice and Research
How can AI be used to strengthen surveillance, preparedness, research, and community engagement while maintaining public trust and protecting against bias, misuse, or harm?
- Policy, Governance, and Infrastructure
What institutional policies, governance structures, and safeguards are needed to ensure accountability, transparency, and alignment with public health values as AI becomes more deeply embedded in our systems? How can academic public health help shape public policy to ensure AI is used in ways that protect communities and promote equity?
Across all four areas, the report emphasizes a central theme: AI must augment, not replace, human judgment, expertise, and compassion. Public health professionals must remain accountable for decisions that affect people's lives.
This work is ongoing, and we are committed to developing it transparently and collaboratively. We invite colleagues and partners across academic public health and beyond to engage with us as the report takes shape. You may sign up to be notified when the AI Task Force report is released. We also welcome feedback on the four focus areas and preliminary recommendations outlined above.
Why This Work Matters Now
This effort comes at a time of extraordinary challenge for public health. Trust in institutions is strained. Resources are limited. The workforce is under pressure. At the same time, communities face growing threats, from infectious diseases and chronic conditions to climate change, mental health crises, and more.
AI will not solve these challenges on its own. But neither can public health afford to ignore its influence.
If public health hesitates, others will fill the gap, often without the ethical frameworks, equity commitments, or population-level perspective that our field brings. If public health leads, however, AI can become a tool that supports better decisions, stronger systems, and healthier communities.
Leadership in this space does not mean uncritical adoption. It means asking hard questions, setting clear standards, and insisting that innovation serves the public good. It means preparing students and professionals to engage with AI confidently and responsibly. And it means centering communities in decisions that affect their health and well-being.
A Call to Collective Leadership
ASPPH’s AI for Public Health initiative and forthcoming task force report are only starting points. Their success depends on a collective commitment from faculty and students, institutional leaders, practitioners, policymakers, and partners across sectors.
Public health has always been a field defined by service, integrity, and evidence. Those values must guide us now. As artificial intelligence continues to evolve, our responsibility is not to resist change but to shape it, ensuring that technology strengthens public health rather than eroding trust, equity, or accountability.
This is a moment for leadership, not hesitation. If public health leads with its values, AI can become a powerful collaborator in advancing health and well-being for everyone, everywhere.