AI Policy

The Lister Institute of Preventive Medicine (“the Institute”) Artificial Intelligence (AI) Policy

Policy Summary

The Lister Institute of Preventive Medicine (“the Institute”) is committed to the responsible use of Artificial Intelligence (AI). This policy outlines the Institute’s expectations for the responsible and ethical use of AI, including generative AI tools (e.g. ChatGPT, DeepSeek, Copilot, Claude, Gemini), in the preparation and assessment of funding applications and in other related charitable work. This policy is based on the Joint Statement on the Use of Generative AI Tools in Funding Applications and Assessment produced by the Research Funders Policy Group, which includes the Association of Medical Research Charities (of which we are a member). It also takes account of the deliberations of the Declaration on Research Assessment (DORA), of which we are a signatory.

Scope

This policy applies to:

• All individuals and organisations applying for funding;

• Staff, trustees, reviewers, and contractors involved in assessment, peer review, or decision-making processes;

• Any administrative activities undertaken on behalf of the Institute.

Principles

The Institute recognises that generative AI tools can provide significant benefits, for example:

• Supporting research design and writing;

• Assisting neurodivergent or multilingual applicants;

• Enhancing accessibility and reducing administrative burden;

• Supporting research reporting.

However, their use also raises important challenges relating to:

• Research integrity and originality;

• Transparency and accountability;

• Data protection and confidentiality;

• Intellectual property and copyright;

• Bias and fairness.

This policy sets out our expectations to ensure AI is used responsibly and ethically in line with the Institute’s values and the principles of fairness, integrity, and trust.

Expectations for Applicants

In preparing an application, applicants must ensure that no confidential or personal information is input into AI systems that could compromise privacy or data protection obligations, or intellectual property.

Any AI-generated or AI-assisted content, such as output from interpreting or analysing data, generating or refining code, comparing literature sources, or drafting an abstract, must be clearly acknowledged within the application (e.g. in a footnote or statement such as “This section was drafted with the assistance of ChatGPT/Copilot/Gemini”).

The Institute is aware of the capacity of AI to generate plausible text supported by non-existent evidence or sources. Applicants are fully accountable for the accuracy and integrity of all submitted material, including text generated or edited by AI; confirmation of accuracy and originality will form part of the application process.

Applicants should not use AI in ways that may misrepresent authorship or inflate capability.

In line with current sector good practice, applicants are expected to be transparent where they have used generative AI tools in the development of an application. This information will not affect the assessment process.

Expectations for Peer Reviewers and Assessors

• Reviewers may use AI for ancillary tasks, such as surveying previous research, existing knowledge, methods, or techniques, or for translation and language refinement.

• Peer reviewers must not input any confidential information from funding applications into generative AI tools, in order to protect intellectual property and comply with data protection obligations.

• Reviews, assessments, or responses must be the independent work of the reviewer, drawing on their expertise and judgement.

• Reviewers should always be alert to bias and inaccuracies within funding applications and aware that AI-generated outputs can be a particular source of such problems. Thus, they should exercise careful critical evaluation when assessing proposals that declare the use of AI.

Organisational Use of AI

The Institute may use AI tools to support administrative or operational efficiency, but only in compliance with data protection law and information security standards, with human oversight and accountability for decisions.

No sensitive, confidential, or personal data shall be input into external AI systems without explicit authorisation and assurance of data security.

The Institute currently holds one paid Microsoft Copilot licence, linked to the Institute’s Microsoft 365 tenancy. Because it is a paid licence, any data or information entered into Copilot is treated as private and will not be used as training data for AI models.

Responsibilities

All parties involved in the Institute's activities must act with integrity and openness about their use of AI tools. Failure to disclose substantive AI use or misuse of AI systems may be considered a breach of research integrity or funding terms.

The Institute reserves the right to request clarification, require disclosure, or take corrective action in cases of concern.

Policy Review

Generative AI technologies and their ethical implications are rapidly evolving. The Institute will monitor developments and guidance from funders and research organisations and other relevant bodies (e.g. the Research Funders Policy Group, Wellcome, UKRI). This policy will be reviewed at least every two years to ensure it remains fit for purpose.

Date policy published: April 2026

Date of next review: April 2028

This policy should be read in conjunction with the Lister Privacy Notice.