Wed, 04/22/2026 - 13:00
Kod CSS i JS

Two-thirds of NCN researchers oppose the use of generative AI in the evaluation of grant proposals. At the same time, 60% believe that NCN should allow its use in their preparation. This is the main finding of a survey conducted by NCN among principal investigators submitting proposals in OPUS, SONATA and PRELUDIUM calls concluded over the past two years.

The boundary defined by researchers themselves is clear: generative AI is acceptable as an editorial tool for language editing, translation and abstracts. As the author of a research concept or as a reviewer of a proposal, it is not.

The findings are consistent with NCN’s position from May 2025, which allows the auxiliary use of GenAI in preparing proposals and prohibits its use in scientific evaluation.

The survey was carried out in October 2025 by the NCN Evaluation and Analysis Team, led by Dr Anna Strzebońska.

What they do and do not want

A total of 42% of respondents have used GenAI tools. They most commonly used them for language editing of proposals (39%), preparing abstracts, including abstracts for the general public (18%), and translating text (14%).

More than half (54%) believe that generative tools should not be used to develop the proposal concept or prepare literature reviews. Nearly one in two respondents (46%) support a ban on using AI to prepare summarised versions of proposals for experts.

Differences can be observed between calls for early-career researchers and the call open to all applicants. In SONATA and PRELUDIUM, 49% and 45% of applicants respectively used GenAI, compared with 37% in OPUS. Responses do not differ significantly across scientific groups, suggesting that these tools are already part of everyday research practice.

Three approaches

Three regulatory approaches emerge from respondents’ comments.

The first emphasises full responsibility of the researcher. The reasoning is pragmatic: bans are difficult to enforce, and funding agencies lack reliable tools to detect AI use in proposals. A proposal drafted using AI thoughtlessly will show signs of being formulaic, which an expert will quickly recognise, and given the low success rates in NCN calls, such a project will not succeed.

The second approach calls for a complete ban. The reasoning is ethical: a grant proposal demonstrates a researcher’s competence and forms an integral part of their creative work. Delegating writing to a machine is seen as unfair competition for independent researchers. One respondent compared the use of GenAI to doping in sport.

The third and largest group proposes a “middle ground”. It does not reject the technology but calls for clear rules and verification mechanisms: mandatory disclosure of GenAI use, a distinction between language editing and substantive content generation, and proportionate sanctions for discrepancies between declarations and the actual use of the tools.

Regardless of their regulatory stance, respondents point to the same risk: the leakage of unpublished research ideas into public models that use user data for further training. Text pasted into publicly available tools for language editing may be retained and lose its confidential status. The research community therefore expects solutions ensuring control over data flows, including on-premises tools and zero-retention policies.

Proposal evaluation: a clear “no”

The stance on the use of AI in proposal evaluation is much more clear-cut than in proposal writing. 67% of all respondents oppose it, rising to 72% among grantees. This opinion is consistent across scientific disciplines and independent of call outcomes.

The arguments go beyond general scepticism. Respondents refer to research indicating that large language models replicate the biases from training data and favour content that matches their own patterns over scientifically valuable content. Algorithmic evaluation would therefore favour proposals written to “please the algorithm”, rather than the scientifically strongest proposals. Concerns were also raised about new forms of manipulation, such as hidden instructions embedded in the text of the proposal (e.g. in white text) or excessive keyword use.

According to respondents, the only acceptable uses of AI for reviewers are formal tasks, such as checking completeness, compliance with call requirements, and the consistency of the document. Full responsibility for the content of the review must remain with the expert.

What next?

The survey results will serve as a starting point for further regulatory work at NCN. They point to three areas where clarification is expected: transparent declaration of GenAI use in proposals and reviews, data processing security, and training for applicants and reviewers, including protection against manipulation using AI tools (so-called prompt injection).

The approach adopted by NCN is consistent with a broader European standard, including the European Commission’s recommendations set out in Responsible Use of Generative AI in Research.

The full study forms part of the “Report on the Evaluation of NCN’s Activities”, which will be published after approval by the Ministry of Science and Higher Education.

The section on GenAI is available as an attachment (PDF file).