Centre for Behaviour Change


Engaging the public when using Artificial Intelligence for public health decisions: a toolkit

9 September 2024

A blog from The Human Behaviour-Change Project (Public Engagement Team) on the launch of the AI in Public Health Decisions Toolkit

The AI in Public Health Decisions Toolkit

Considerations when using AI in public health decisions

In the past decade, Artificial Intelligence (AI) has been used in an increasing variety of fields. It has helped select films for us on Netflix, contributed to financial forecasting and driven major changes in health care. One field to which AI offers a number of advantages is public health.
There are many ways in which AI can be used to advance public health decision-making. AI can increase the rate at which evidence can be understood and synthesised. It can efficiently identify which interventions are most likely to work for a given population, and it has the potential to identify at-risk populations who would benefit greatly from preventative care at critical early stages. For example, it could be used to identify individuals who might benefit from more frequent breast cancer screening.

“While AI has great potential to advance public health decision-making, how do we know when a recommendation can be trusted?”

There are many times when Netflix recommends a film and we know we won’t enjoy it. In these instances, we can evaluate the suggestion and ignore it if we do not agree. What is the equivalent way of telling a ‘good’ recommendation from a ‘bad’ one in public health? And what concerns should we be aware of in a public health system that uses AI?
One of the concerns surrounding AI is its potential to be biased against certain groups of people, depending on the data used to train it. For example, if a system were trained solely on data from white individuals, its predictions might be less accurate for people of other ethnicities. This could exacerbate existing social inequalities. To what extent should these concerns be taken into account by public health decision makers?
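To make this concern concrete, here is a minimal sketch (not part of the toolkit) using simulated data and the open-source scikit-learn library. It shows how a model fitted mostly to one group can look accurate overall while performing noticeably worse for an under-represented group; all dataset, column and group names are invented for illustration.

```python
# Illustrative sketch only: checking whether a model's accuracy differs across
# demographic groups. The dataset, column names and model are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: two features, an outcome, and a recorded ethnicity group,
# with one group making up only 15% of the records.
n = 2000
df = pd.DataFrame({
    "feature_a": rng.normal(size=n),
    "feature_b": rng.normal(size=n),
    "ethnicity": rng.choice(["group_1", "group_2"], size=n, p=[0.85, 0.15]),
})
# The outcome depends on the features differently in the under-represented
# group, so a single model fitted mostly on group_1 will fit group_2 less well.
df["outcome"] = np.where(
    df["ethnicity"] == "group_1",
    (df["feature_a"] > 0).astype(int),
    (df["feature_b"] > 0).astype(int),
)

X = df[["feature_a", "feature_b"]]
y = df["outcome"]
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["ethnicity"], test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# Report accuracy separately for each group rather than one overall number.
for group in ["group_1", "group_2"]:
    mask = (g_test == group).to_numpy()
    print(group, round(accuracy_score(y_test[mask], pred[mask]), 2))
```

Reporting accuracy separately for each group, rather than as a single overall figure, is one simple way such disparities can be surfaced before a system’s outputs are used in practice.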
AI systems are complex, and it can be hard, even for the people who developed them, to explain how a system came to its conclusions. When decisions have a direct impact on the public, efforts need to be made to ensure an AI system is understandable to both those making the decision and those affected by it.
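As a small, hedged illustration of what making a system more understandable can involve, the sketch below uses scikit-learn’s permutation importance to show which inputs a simulated model leans on when making predictions. It is one simple technique among many, not the approach taken in the toolkit, and all data and feature names are hypothetical.

```python
# Illustrative sketch only: one simple way to inspect which inputs a model
# relies on, using permutation importance. Data and feature names are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical screening data: age and a risk score drive the outcome,
# while postcode_band is irrelevant noise.
n = 1000
X = np.column_stack([
    rng.uniform(30, 80, n),   # age
    rng.normal(size=n),       # risk_score
    rng.integers(1, 10, n),   # postcode_band (no real effect here)
])
y = ((X[:, 0] > 55) & (X[:, 1] > 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for name, importance in zip(["age", "risk_score", "postcode_band"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```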

Co-producing resources to help people critically question AI systems

These issues were raised during a series of public engagement workshops carried out as part of the Human Behaviour-Change Project. Researchers worked with members of the public to co-produce a set of resources that help people critically question the nature of an AI system.

“These resources can be used to enable the public to decide how much they want public health decision makers to trust the system. They provide a means for public health decision makers to be accountable to those affected by the decisions they make.”

The group suggested a process in which, when an AI system’s outputs might be used to make a public health decision, a public health practitioner could consult a panel of people representative of those affected by the decision. The toolkit itself includes five components, available as PDFs to download, that panel members can use to help them reach their decision; more detail on how the toolkit was developed is also available.

Overall, AI has the potential to enhance public health decision-making. However, no AI system is perfect, and there will always be concerns about its accuracy, transparency, explainability and accountability. To understand how these concerns should influence the trust public health decision makers place in an AI system, the views of the people who might be affected by those decisions must be taken into account. This co-produced toolkit reflects discussions held by members of the public. It gives the public the opportunity to critically question how much they want public health decision makers to rely on a system that could have implications for the services available to them.
Ultimately, the toolkit gives public health practitioners a way of being accountable to those affected by public health decisions made with AI. This is integral to transparent and accountable healthcare.