
Public defence: Andrea Storås

Andrea Storås will defend her thesis “Beyond the Black Box: Transparent Machine Learning Systems for Medical Applications” for the PhD programme in Engineering Science.

Trial Lecture

The trial lecture will be held at 10:00.

Title: "Evaluation and validation of ML models in the medical context".

Public defence

The candidate will defend her thesis at 12:00.

Trial Lecture and public defence on Zoom

You can follow the trial lecture and public defence digitally on Zoom (zoom.us).

Webinar ID: 683 7628 4769

Passcode: 160524 

Ordinary opponents

Leader of the public defence

Siri Fagernes, Associate Professor, Head of Group Mathematical Modelling, Head of Group Human-Computer Interaction and Universal Design of ICT, Department of Computer Science, Faculty of Technology, Art and Design, OsloMet, Oslo, Norway

Supervisors

Summary

    Artificial intelligence and machine learning (ML) have become part of our everyday lives. Such technology affects society by assisting us with a growing range of tasks.

    Healthcare and medicine are among the domains where ML is expected to play an important role in the future. Studies present impressive results when training ML models to solve medical tasks that are typically performed by humans today.

    Implementing ML systems in the clinic can, among other things, lead to faster and more accurate diagnoses, earlier detection of disease, and personalized treatment planning and patient follow-up.

    Given the shortage of healthcare personnel, ML systems could improve the efficiency of the healthcare system and let medical experts focus on patients and challenging medical tasks rather than on simpler administrative and repetitive ones.

    New challenges with new technology

    However, new technology also brings new challenges. A lack of transparency in ML systems for medical applications is likely to limit their use in the clinic. The models are typically ‘black boxes’: healthcare workers cannot see or understand how they work.

    Moreover, the models might be developed on private datasets where details about demographics such as age, gender, and ethnicity are unavailable. If the development dataset does not represent the population in which the model will be used, the model might fail drastically or discriminate against minority groups.

    Furthermore, the model evaluation process might not be thoroughly described, making it difficult to assess the applicability of the ML system.

    This thesis contributes to solving the above-mentioned challenges by exploring methods that improve the transparency of medical ML systems.

    In addition to applying explanation methods to a variety of medical use cases, we publish open medical datasets, investigate techniques for generating synthetic healthcare data, and collaborate with medical experts to evaluate the ML models and their explanations.

    Expert feedback is important for identifying the improvements needed to successfully explain the ML systems to healthcare personnel. Moreover, new medical insights might be gained by exploring how the ML models work and how they analyse medical data.

    Important to explain machine learning to medical professionals

    The results indicate that ML systems will most likely serve as a useful tool in the clinic in the future.

    However, existing explanation methods should be tailored to the specific medical use case to meet the expectations of healthcare personnel.

    Explaining the ML systems to experienced experts in the field can give rise to new discoveries about diseases and how to treat them.

    Finally, combining knowledge from computer science and medicine enables us to find solutions to tasks that would be impossible to solve without interdisciplinary collaboration.
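To illustrate the kind of model-agnostic explanation method discussed above, here is a minimal sketch of permutation feature importance: shuffle one input feature and measure how much the model's accuracy drops. The toy model and data are assumptions for demonstration only and are not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): permutation feature importance.
# The "model" here is a hypothetical toy classifier for demonstration.
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Drop in accuracy when one feature column is randomly shuffled.

    A large drop suggests the model relies on that feature; a drop near
    zero suggests the feature is ignored.
    """
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [x[feature_idx] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, value in zip(X_perm, column):
        row[feature_idx] = value
    return baseline - accuracy(model, X_perm, y)

# Toy classifier that only looks at feature 0; feature 1 is ignored.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp_used = permutation_importance(model, X, y, 0)
imp_ignored = permutation_importance(model, X, y, 1)
# Shuffling the ignored feature cannot change predictions, so its
# importance is exactly zero; the decisive feature scores at least as high.
print(imp_ignored, imp_used >= imp_ignored)
```

In practice such scores are computed with established tooling (e.g. scikit-learn's `permutation_importance`) and, as the summary stresses, the resulting explanations must still be evaluated with domain experts before clinical use.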
