
Centre for Credible Artificial Intelligence

The Centre was established to carry out the project "Centre for Credible Artificial Intelligence", funded under the International Research Agendas (MAB) action of the European Funds for Modern Economy (FENG) programme of the Foundation for Polish Science.

The project has received funding of PLN 29,971,105.00 for the period 2025–2029.

1. Scope of Activities

The project focuses on developing new methods for the verification and explanation of artificial intelligence models, with particular emphasis on the technical analysis of their internal mechanisms. The team will develop attribution algorithms for new types of data (time series, biometric data, medical imaging) as well as counterfactual explanation techniques based on generative AI.
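The attribution idea mentioned above can be illustrated with a minimal occlusion-style sketch for time series: mask one time step at a time and record how much the model's score changes. Everything here is illustrative (the toy model and function names are invented for this example, not methods developed by the Centre).

```python
import numpy as np

def toy_model(series: np.ndarray) -> float:
    # Stand-in predictor: a weighted sum that emphasises later time steps.
    weights = np.linspace(0.1, 1.0, series.shape[0])
    return float(series @ weights)

def occlusion_attribution(series: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    # Attribution for step t = score(original) - score(series with step t masked).
    # Large values mean the step mattered a lot to the prediction.
    original = toy_model(series)
    scores = np.empty_like(series)
    for t in range(series.shape[0]):
        masked = series.copy()
        masked[t] = baseline  # occlude a single time step
        scores[t] = original - toy_model(masked)
    return scores

series = np.array([1.0, 2.0, 3.0, 4.0])
attributions = occlusion_attribution(series)
# Because the toy model weights later steps more heavily, the later
# time steps receive larger attribution scores.
```

Real attribution methods for time series, biometric, or imaging data are considerably more sophisticated (gradient-based, perturbation-based, or model-specific), but they share this core question: how much does each input element contribute to the output?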

In parallel, research will address the alignment of AI models with domain knowledge, for example, in bioinformatics, by verifying whether protein structure prediction models behave in accordance with the laws of biophysics.

Complementing the technical core of the project are studies on how individuals and societies respond to AI systems, ranging from analyses of user trust to modeling phenomena such as information overload and the spread of misinformation generated by large language models.

2. Target Groups

The project is addressed to three main stakeholder groups:

  • The scientific community, which applies AI models in research and requires robust verification tools.
  • Companies and organizations deploying AI-based solutions, seeking to enhance the safety and credibility of their systems (e.g., in medicine, the space industry, or finance).
  • Society at large, which will benefit indirectly through increased transparency of widely used AI models, reduced misinformation, and improved digital literacy.

3. Project Objectives

The primary objective is to establish and develop a research and implementation unit dedicated to AI model verification, whose long-term sustainability will be ensured through collaboration with EU industry partners and research institutions worldwide.

The Centre for Credible AI aims to achieve qualitative progress in the verifiability of AI systems in terms of safety, transparency, and controllability. In particular, the Centre will:

  • Propose new methods for validation and knowledge extraction from AI models, including foundation models.
  • Advance these methods toward practical applications and commercialization, offering a complete value chain of competencies.
  • Develop standards, recommendations, and best practices for AI model auditing for the business sector in Poland and globally.

4. Expected Outcomes

The project will deliver multiple outcomes:

For science:

  • New methods and tools for explaining and validating AI models (attribution methods, counterfactual explanations, bioinformatics model verification techniques).
  • Publications in leading international journals and conferences.
  • Scientific discoveries in medicine, biology, and physics supported by explainable AI methods.

For business and AI implementers:

  • Commercializable AI auditing and verification tools.
  • Standards and best practices aligned with the EU AI Act requirements.
  • Concrete implementations in cooperation with partners such as the European Space Agency (ESA).

For society:

  • An independent observatory monitoring the safety and transparency of widely used AI systems, including large language models.
  • Mitigation of risks such as deepfakes, misinformation, and algorithmic bias.
  • Increased digital competencies and strengthened national cybersecurity.

5. Project Value

  • Total project cost: PLN 29,971,105.00
  • EU Funds contribution: PLN 29,971,105.00

#EUFunds #EuropeanFunds