Liesbeth: Earlier we created an AI model, a deep learning algorithm, for PD-L1 scoring in NSCLC histology, and we are now aiming to validate this model in other laboratory workups as well, including different antibodies and scanners. We will describe the steps needed to adapt an existing AI model to a new, local laboratory workup (domain adaptation).
This is a highly relevant research question because many algorithms exist nowadays, but very few remain reliable in new circumstances, especially in immunohistochemistry, where there is great variation between laboratories. There are as yet no clear guidelines on how to adapt an existing pathology algorithm to a new workup.
As far as I know, this is the first multi-centre domain adaptation study for medical image analysis. It answers a relevant question: how does one start using an existing algorithm in a workflow the AI has not seen before?
Easy to use, powerful, and it allows quick training!
74 images were used for designing the base algorithm. For each workup we will use 50 additional slides as potential training data, and 50 slides as validation data.
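The per-workup split described above (50 candidate training slides plus 50 validation slides per laboratory workup) can be sketched as a small helper. This is a hypothetical illustration, not code from the study; the function name and slide identifiers are assumptions.

```python
import random

def split_workup_slides(slide_ids, n_train=50, n_val=50, seed=0):
    """Split one laboratory workup's slides into candidate training
    and validation sets (hypothetical helper; the study used 50 + 50)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(slide_ids)
    rng.shuffle(ids)
    assert len(ids) >= n_train + n_val, "need enough slides per workup"
    return ids[:n_train], ids[n_train:n_train + n_val]

# hypothetical slide identifiers for one workup
train, val = split_workup_slides([f"slide_{i:03d}" for i in range(100)])
# train and val are disjoint 50-slide lists
```

Keeping the validation slides strictly separate from the candidate training slides is what makes the per-workup performance estimate trustworthy.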
For the base AI model (not yet generalized to the other workups): the error is less than 0.5% for all semantic segmentation layers and about 9% for object detection. At the whole-slide level, agreement with the gold standard is approximately 80%, which is as accurate as pathologists performing the task themselves.
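The whole-slide agreement figure can be illustrated as a simple fraction of slides where the AI's score category matches the gold standard. This is a minimal sketch, assuming categorical PD-L1 score bins; the function, bin labels, and example data are all hypothetical.

```python
def percent_agreement(ai_scores, gold_scores):
    """Fraction of whole slides where the AI's score category
    matches the gold-standard category (hypothetical helper)."""
    assert len(ai_scores) == len(gold_scores)
    matches = sum(a == g for a, g in zip(ai_scores, gold_scores))
    return matches / len(ai_scores)

# illustrative PD-L1 tumour proportion score bins (assumed, not study data)
ai   = ["<1%", "1-49%", ">=50%", ">=50%", "1-49%"]
gold = ["<1%", "1-49%", ">=50%", "1-49%", "1-49%"]
percent_agreement(ai, gold)  # 0.8, i.e. 4 of 5 slides agree
```

Reporting agreement at the whole-slide level, rather than per pixel or per cell, is what allows a direct comparison with how pathologists themselves are scored.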
It was a good experience! The scientists and support team at Aiforia respond very quickly and have a lot of technical expertise. It has been very easy to work with Aiforia; I especially like that you don't have to worry about AI hyperparameters and storage issues but can focus on annotations and AI model performance. I also liked the Annotation Assistant very much; it speeds up annotation significantly by proposing training regions and annotations.