Medical AI Group

Research Areas

Our research is broad and does not always fit neatly into predefined categories. Nonetheless, much of our work contributes to improved uncertainty quantification and interpretability in machine learning, and to the development of methods that generalize across tasks.

Interpretable Machine Learning
The rapid development and early successes of deep learning in medical image analysis (and other fields) have led the field to prioritize predictive accuracy…

Meta-Learning and Generalization Across Tasks
A major limitation of deep learning in many real-world applications is the scarcity of labelled data. Meta-learning, which leverages experience across related tasks to…

Robustness, Safety and Uncertainty
In medical image analysis, confidently predicting something false can have devastating consequences. Beyond achieving high predictive accuracy, one must also establish…