Executive Summary
Artificial Intelligence (AI) technologies are advancing rapidly, with significant potential to transform how food is produced, managed and regulated. In the food system, AI offers the prospect of more efficient, predictive and responsive assurance processes, ranging from real-time detection of hazards to automated documentation checks and predictive modelling of risks. At the same time, the adoption of AI raises important questions about accountability, transparency and trust, particularly where automated systems play a part in compliance with the Food Safety Act 1990 and the due diligence defence relied upon by Food Business Operators (FBOs).
The Food Standards Agency (FSA) has a responsibility to ensure that innovation does not undermine consumer protection, regulatory oversight or public confidence. This Science Council project was established to explore the likely applications of AI in food safety and assurance; identify the benefits and risks; and consider the implications for the FSA’s role as a regulator. The study drew on academic and policy evidence, together with key insights from a June 2025 workshop attended by food businesses, regulators, assurance providers, academics and technology developers. Four case studies were used to anchor discussions in realistic scenarios where AI (in all its forms, from machine learning applications to generative and emerging AI systems) is likely to be applied now and in the near future:
- AI-driven safety and regulatory compliance evaluation for manufactured foods
- AI-supported data pack generation for third-party certification and assurance
- AI-assisted detection of infections and other pre/post-mortem pathologies in UK abattoirs
- AI-powered document inspection at UK ports of entry
These case studies, and post-workshop discussions amongst the project team, the wider Science Council and FSA staff, highlighted both opportunities and challenges. AI could enable faster detection of hazards, more consistent inspections at scale, wider surveillance across supply chains and real-time data analytics to help target interventions. It could reduce reliance on sampling or retrospective checks and free inspectors and auditors to focus on higher-value tasks. However, workshop participants also identified risks: AI systems may embed bias or drift if not carefully validated; they can generate outputs that are difficult to explain or reproduce; and in poorly managed businesses, they could conceal weaknesses behind apparently robust documentation. Across all scenarios, the need for human oversight, clear accountability, explainability of decisions and robust validation of AI tools was consistently emphasised. Vigilance is especially essential in these early stages of AI deployment, when both industry and regulators are still learning how AI systems can be safely applied; over time, experience will help clarify the contexts in which AI delivers most value and the safeguards that are necessary.
For the FSA, the implications are clear. AI has the potential to strengthen assurance processes, but only if deployed within strong governance frameworks and supported by clear guidance. We recommend that the Agency clarify best practice for integrating AI within FBO accountability; continue to promote data standards and data sharing; provide guidance for FBOs on responsible use; and work with industry, standards bodies and other regulators to support codes of practice and validation mechanisms. At the same time, it must remain alert to the risks of hype and over-reliance, ensuring that AI enhances, rather than displaces, the human accountability that underpins food law.
Notwithstanding the rapid emergence of AI technologies, and the currently limited case-study evidence from scaled industrial use, there was broad consensus that existing UK food safety regulations are sufficiently robust to encompass the use of currently known AI systems in the food system. This study therefore does not call for immediate changes to legislation. However, the FSA will need to monitor developments in AI continually, assess their impacts on assurance processes and remain prepared to act if gaps emerge. Future regulatory attention may be required in areas such as validation standards, data governance or liability frameworks should AI adoption accelerate, or if new classes of tools present novel risks.
The following recommendations aim to provide the FSA with practical steps to support safe and responsible AI adoption, ensuring that innovation contributes to a more predictive, preventative and trusted food safety system.