our team

We are an interdisciplinary team – because algorithmic knowledge without domain knowledge leads nowhere.

Team Leaders

Prof. Przemysław Biecek

Director

XAI

LLM

analytics

Prof. Przemysław Biecek is a scientist specializing in the interactive exploration and analysis of artificial intelligence models. He leads research at the intersection of computational statistics and computer science, developing models and tools for model red-teaming, auditing, and validation-oriented eXplainable AI.

TEAM

RED-XAI: Verification, exploration and control

We focus on developing innovative methods and tools to improve the explainability, reliability, and controllability of multimodal AI systems. Our goal is to challenge the status quo in the formal analysis, exploration, and testing of foundation models that integrate diverse data types—including text, images, and structured data.

Dr. Tomasz Steifer

Team Leader

learning theory

logic

expressivity

Tomasz Steifer is a researcher working at the interface of machine learning, artificial intelligence, and theoretical computer science. He investigates the fundamental capabilities and limitations of modern ML/AI architectures, developing mathematically grounded frameworks that explain when these systems can learn, where they must fail, and how to design models that are more powerful, controllable, and predictable.

TEAM

BLUE-XAI: Human-centered explainable AI

We focus on assessing the trustworthiness and societal impact of large language models (LLMs) and other AI systems in human-facing applications. Our goal is to advance human-centered XAI by developing methods to evaluate user trust, define ethical requirements, and design interactions that foster transparency, accountability, and cognitive alignment between intelligent systems and their users.

To be announced

More information about this team leader will be available soon.

TEAM

BIO-XAI: Explainable AI for Life Sciences

We focus on developing explainable AI methods tailored to the needs of life sciences, with particular emphasis on genomics and molecular modeling. Our goal is to unlock new scientific insights by combining structural genomics, generative AI, and explainable machine learning, enabling biologically grounded analysis of high-dimensional data.

To be announced

More information about this team leader will be available soon.

TEAM

PHYS-XAI: Physics-aligned explainable AI

We focus on developing AI systems whose behavior is reliable and consistent with known physical laws. Our goal is to advance physics-aligned XAI by creating methods that assess whether model predictions respect fundamental principles—such as symmetry constraints or system dynamics—ensuring that AI remains grounded in the structure of the real world, especially in scientific and engineering applications.

Researchers

Prof. Jacek Tabor

generative models

interpretability

biomedical applications

Prof. Julian Sienkiewicz

complexity

sociophysics

PINNs

Dr. Jacek Rogala

biomedical applications

Dr. Inez Okulska

agentic AI

LLM

NLP

semantics

linguistics

Dr. Bartosz Naskręcki

generative models

interpretability

formal methods in mathematics and programming

Dr. Kamil Książek

biomedical AI

meta learning

continual learning

computer vision

Hubert Baniecki

interpretability

tbd

Vladimir Zaigrajew

representation learning

interpretability

mechanistic interpretability

Bartek Sobieski

generative models

interpretability

Bartek Kochański

computer-aided diagnosis

biomarkers

AI in radiology

research commercialization

Paweł Struski

economics

LLMs

agentic AI

Tomek Weksej

mechanistic interpretability

concept-based interpretability

Jan Piotrowski

NLP

LLM

mechanistic interpretability

agentic AI

Dawid Płudowski

mechanistic interpretability

time series

Agata Kaczmarek

Michał Włodarczyk

neural fields

computer vision

robotics

Paweł Gelar

computer vision

mechanistic interpretability

Jakub Grzywaczewski

generative models

attributions

Jakub Rymarski

generative models

biomedical applications

Klara Baś

quantitative imaging

Bayesian inference

uncertainty estimation

Piotr Suszyński

bioinformatics

genetic association testing

XAI

Dr. Maciej Świechowski

decision-making AI

computational intelligence

medical applications

AI in industry

Administration

Hanna Góźdź

communication

promotion

Ewa Maszke

Agata Balak

Hanna Piotrowska

Supporters

Dr. Katarzyna Modrzejewska

our partners

We build cross-institutional connections.

KP Labs