our team
We are an interdisciplinary team, because algorithmic knowledge without domain knowledge leads nowhere.
Team Leaders
Prof. Przemysław Biecek is a scientist specializing in the interactive exploration and analysis of artificial intelligence models. He leads research at the intersection of computational statistics and computer science, developing methods and tools for model red-teaming, auditing, and validation-oriented eXplainable AI.
TEAM
RED-XAI: Verification, exploration, and control
We focus on developing innovative methods and tools to improve the explainability, reliability, and controllability of multimodal AI systems. Our goal is to challenge the status quo in the formal analysis, exploration, and testing of foundation models that integrate diverse data types—including text, images, and structured data.
To be announced
More information about this team leader will be available soon.
TEAM
BLUE-XAI: Human-centered explainable AI
We focus on assessing the trustworthiness and societal impact of large language models (LLMs) and other AI systems in human-facing applications. Our goal is to advance human-centered XAI by developing methods to evaluate user trust, define ethical requirements, and design interactions that foster transparency, accountability, and cognitive alignment between intelligent systems and their users.
To be announced
More information about this team leader will be available soon.
TEAM
BIO-XAI: Explainable AI for Life Sciences
We focus on developing explainable AI methods tailored to the needs of life sciences, with particular emphasis on genomics and molecular modeling. Our goal is to unlock new scientific insights by combining structural genomics, generative AI, and explainable machine learning, enabling biologically grounded analysis of high-dimensional data.
To be announced
More information about this team leader will be available soon.
TEAM
PHYS-XAI: Physics-aligned explainable AI
We focus on developing AI systems whose behavior is reliable and consistent with known physical laws. Our goal is to advance physics-aligned XAI by creating methods that assess whether model predictions respect fundamental principles—such as symmetry constraints or system dynamics—ensuring that AI remains grounded in the structure of the real world, especially in scientific and engineering applications.
Researchers

Prof. Jacek Tabor
generative models
interpretability
biomedical applications

Dr Bartosz Naskręcki
generative models
interpretability
formal methods in mathematics and programming

Vladimir Zaigrajew
representation learning
interpretability
mechanistic interpretability

Bartek Kochański
computer-aided diagnosis
biomarkers
AI in radiology
research commercialization

Paweł Struski

Agata Kaczmarek
Administration

Hanna Góźdź

Ewa Maszke

Agata Balak

Hanna Piotrowska
Supporters

Kasia Modrzejewska
our partners
We build cross-institutional connections.