Methodology
The purpose of this study was to examine how artificial intelligence (AI) might be applied in food safety and assurance, to identify the opportunities it offers and the risks it presents, and to assess the implications for the FSA in its role as regulator. The overall objective was to generate evidence-based recommendations to guide the FSA in supporting the safe and responsible adoption of AI across the UK food system.
A central component of the methodology was a full-day workshop held in London on 9 June 2025, attended by 43 participants drawn from food businesses, regulators, assurance providers, academics and technology developers. To structure the discussion, four case studies were prepared in advance, each presenting hypothetical uses of AI in a realistic food industry setting (see Appendix A). The case studies were deliberately framed to address different parts of the food chain, each raising distinct assurance challenges and each requiring multiple, diverse AI systems to tackle complex problems. They were selected because they represent both the diversity of the food system and situations where AI deployment is likely to become a reality in the near future. The four scenarios covered:
- AI-driven safety and regulatory compliance evaluation for manufactured foods
- AI-supported data pack generation for third-party certification and assurance
- AI-assisted detection of infections and other pre/post-mortem pathologies in UK abattoirs
- AI-powered document inspection at UK ports of entry
Participants received a briefing pack in advance, which set out the purpose of the study and the issues for consideration (Appendix A), while workshop facilitators were provided with a supplementary briefing document to ensure consistency in the conduct of breakout sessions. Together, these materials provided a shared frame of reference and ensured that discussions were anchored in practical challenges directly relevant to the FSA’s statutory remit.
Participants were assigned to breakout groups, each facilitated by a senior expert and supported by a notetaker. Sessions were conducted under the Chatham House Rule to encourage open discussion. Two rounds of breakout discussions, each lasting 75 minutes, allowed each participant to contribute to two different case studies, thereby capturing a broader range of perspectives. Each breakout group concluded by producing a short summary report, which was presented in the final plenary session. These reports enabled findings to be compared across groups and key themes to be identified. To complement the workshop outputs, participants were also invited to submit written reflections after the event, identifying what they regarded as the three most important issues for the FSA to consider.
Analysis of Results
The workshop generated a large volume of qualitative material, including detailed notetaker records from each breakout group, facilitator summaries presented in plenary, and post-event written reflections submitted by participants. These outputs were collated and reviewed to identify both case-specific insights and cross-cutting themes (see Appendix B). Analysis proceeded in two stages. First, the outputs for each case study were organised around the structured questions set out in the briefing materials, ensuring that the findings reflected the issues most relevant to the FSA's remit. Second, themes cutting across the case studies were identified, including the need for transparency, human oversight, validation of training data, and mechanisms to manage bias or drift. These themes informed the synthesis presented later in this report and underpin the recommendations to the FSA.
Evidence from the workshop was then synthesised with relevant literature and policy analysis to draw out cross-cutting issues and to situate the findings within the wider ethical, technical, and legal context. Key areas of analysis included the role of AI as decision support versus autonomous decision-making, the requirements for transparency, explainability and traceability, the challenges of data quality and standardisation, and implications for accountability and legal liability. The approach ensured that the report reflects both expert evidence and stakeholder perspectives, providing a balanced assessment of how AI could shape food safety and assurance in the years ahead.
The case study findings documented in Appendix B present the results of this analysis. Each section begins with the key questions posed, followed by a summary of discussion, supported where appropriate by anonymised quotations. This structure allows both the breadth of perspectives and the areas of convergence or divergence to be captured, providing a balanced account of how AI might realistically shape food assurance.