Decode AI

our mission

Make AI truly verifiable, explainable, and controllable.

In a world dominated by algorithms that operate like "black boxes" — so complex that even their creators can't fully explain how or why they make decisions — the Centre for Credible AI (CCAI) was founded. This is a centre that isn't afraid to question prevailing paradigms, with a clear mission: to make artificial intelligence truly verifiable, explainable, and controllable.

We specialize in Explainable Artificial Intelligence (XAI) — a field that keeps asking questions where most have already accepted the answers. We're not just interested in building bigger and faster models. We care about whether they can be understood and improved — because understanding is the foundation of trust, and trust is the prerequisite for progress.

our vision

Creating technology and culture for trustworthy and transparent AI

AI models are often trained on a single, static dataset and then deployed into a dynamic, unpredictable reality. They optimize a single objective and fail when the environment shifts. We do not accept this technological mediocrity. Our ambition is to expose the internal logic of such systems, highlight their limitations, and provide real tools for control. To us, explainability is not a "feature" — it's a means to improve models and a gateway to new scientific knowledge.

We specialize in combining the explainability and controllability of AI systems used in high-stakes, socially responsible domains such as medicine, education, and bioinformatics, where prediction alone isn't enough and decisions must be grounded in knowledge that can be trusted.

our values

Integrity

We are transparent and act in accordance with ethical principles.

Excellence

We pursue excellence in every aspect of our work.

Impact

We create AI solutions that make a real difference.

our team

An interdisciplinary group of researchers

Prof. Przemysław Biecek

Director

XAI

LLM

ANALYTICS

Prof. Przemysław Biecek is a scientist specializing in the interactive exploration and analysis of artificial intelligence models. He leads research at the intersection of computational statistics and computer science, developing methods and tools for model red-teaming, auditing, and validation-oriented eXplainable AI.

TEAM

RED-XAI: Verification, exploration, and control

We focus on developing innovative methods and tools to improve the explainability, reliability, and controllability of multimodal AI systems. Our goal is to challenge the status quo in the formal analysis, exploration, and testing of foundation models that integrate diverse data types — including text, images, and structured data.

To be announced

More information about this team leader will be available soon.

TEAM

BLUE-XAI: Human-centered explainable AI

We focus on assessing the trustworthiness and societal impact of large language models (LLMs) and other AI systems in human-facing applications. Our goal is to advance human-centered XAI by developing methods to evaluate user trust, define ethical requirements, and design interactions that foster transparency, accountability, and cognitive alignment between intelligent systems and their users.

To be announced

More information about this team leader will be available soon.

TEAM

BIO-XAI: Explainable AI for Life Sciences

We focus on developing explainable AI methods tailored to the needs of life sciences, with particular emphasis on genomics and molecular modeling. Our goal is to unlock new scientific insights by combining structural genomics, generative AI, and explainable machine learning, enabling biologically grounded analysis of high-dimensional data.

To be announced

More information about this team leader will be available soon.

TEAM

PHYS-XAI: Physics-aligned explainable AI

We focus on developing AI systems whose behavior is reliable and consistent with known physical laws. Our goal is to advance physics-aligned XAI by creating methods that assess whether model predictions respect fundamental principles — such as symmetry constraints or system dynamics — ensuring that AI remains grounded in the structure of the real world, especially in scientific and engineering applications.

see full team

who supports us

Our strategic partner

The Fraunhofer-Gesellschaft, headquartered in Germany, is one of the world's leading organizations for applied research. It plays a major role in innovation by prioritizing research on cutting-edge technologies and transferring results to industry, strengthening Germany's industrial base for the benefit of society as a whole. Since its founding as a nonprofit organization in 1949, Fraunhofer has held a unique position in the German research and innovation ecosystem.

join us

Want to be part of our team?

see our open positions