We decode AI.
Centre for Credible Artificial Intelligence
Warsaw University of Technology
Mission
Our mission is to make artificial intelligence truly verifiable, explainable, and controllable.
In a world dominated by algorithms that operate like “black boxes” — so complex that even their creators can’t fully explain how or why they make decisions — the Centre for Credible AI (CCAI) was founded. This is a centre that isn’t afraid to question prevailing paradigms, with a clear mission: to make artificial intelligence truly verifiable, explainable, and controllable.
We specialize in Explainable Artificial Intelligence (XAI) — a field that keeps asking questions where most have already accepted the answers. We’re not just interested in building bigger and faster models. We care about whether they can be understood and improved — because understanding is the foundation of trust, and trust is the prerequisite for progress.
Values
Integrity
We are transparent and act in accordance with ethical principles.
Excellence
We pursue excellence in every aspect of our work.
Impact
We create AI solutions that make a real difference.
Research Teams
🔴 RED-XAI ‒ verification, exploration, and control
We focus on developing innovative methods and tools to improve the explainability, reliability, and controllability of multimodal AI systems. Our goal is to challenge the status quo in the formal analysis, exploration, and testing of foundation models that integrate diverse data types—including text, images, and structured data.
Leader: Prof. Przemysław Biecek is a scientist specializing in the interactive exploration and analysis of artificial intelligence models. He leads research at the intersection of computational statistics and computer science, developing methods and tools for model red-teaming, auditing, and validation-oriented eXplainable AI.
🧬 BIO-XAI ‒ explainable AI for life sciences
We focus on developing explainable AI methods tailored to the needs of life sciences, with particular emphasis on genomics and molecular modeling. Our goal is to unlock new scientific insights by combining structural genomics, generative AI, and explainable machine learning, enabling biologically grounded analysis of high-dimensional data.
Leader: We're looking for a leader for this team!
🔵 BLUE-XAI ‒ human-centered explainable AI
We focus on assessing the trustworthiness and societal impact of large language models (LLMs) and other AI systems in human-facing applications. Our goal is to advance human-centered XAI by developing methods to evaluate user trust, define ethical requirements, and design interactions that foster transparency, accountability, and cognitive alignment between intelligent systems and their users.
Leader: We're looking for a leader for this team!
🟣 PHYS-XAI ‒ physics-aligned explainable AI
We focus on developing AI systems whose behavior is reliable and consistent with known physical laws. Our goal is to advance physics-aligned XAI by creating methods that assess whether model predictions respect fundamental principles—such as symmetry constraints or system dynamics—ensuring that AI remains grounded in the structure of the real world, especially in scientific and engineering applications.
Leader: We're looking for a leader for this team!
Open positions...
Strategic Partners
Fraunhofer Heinrich-Hertz-Institut
Vision
We imagine a world where AI is fully credible and understandable. We develop the technology and culture to make that future possible.
AI models are often trained on a single, static dataset and then deployed into a dynamic, unpredictable reality. They optimize a single objective, but fail when the environment shifts. We do not accept this technological mediocrity. Our ambition is to expose the internal logic of such systems, highlight their limitations, and provide real tools for control. To us, explainability is not a “feature” — it’s a means to improve models and a gateway to new scientific knowledge.
We specialize in combining the explainability and controllability of AI systems used in high-stakes, socially responsible domains such as medicine, education, and bioinformatics, where prediction alone isn’t enough and decisions must be grounded in knowledge that can be trusted.
Contact
Centre for Credible AI
Warsaw University of Technology
Pl. Politechniki 1, 00-661 Warsaw, Poland
E-mail: contact@credible-ai.org
The Centre for Credible AI (CCAI) is a project carried out within the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.