DGIST-Stanford Joint Research Team Successfully Developed Novel Medical AI Model based on Federated Learning! Expected to Take the First Step in the Era of Large-scale AI

- Professor Sanghyun Park’s research team in the Department of Robotics and Mechatronics Engineering successfully developed a novel AI model that can effectively utilize medical images from multiple healthcare institutions by combining federated learning with a knowledge distillation technique
- Findings were published in Medical Image Analysis (MedIA), one of the top journals in medical AI

□ Professor Sanghyun Park at the Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science & Technology (DGIST; President Kunwoo Lee), has developed a technology that can accurately segment body organs by effectively learning from medical image data distributed across multiple healthcare institutions, using federated learning[1]. In a joint study with a team led by Professors Kilian Pohl and Ehsan Adeli at Stanford University, Park’s team showed that the technology can learn from medical images collected for different purposes at different hospitals, which is expected to contribute greatly to the development of large-scale medical AI models.


□ Hospitals and other healthcare institutions hold organ image data for different parts of the body, collected for different purposes. To provide more accurate healthcare, however, it is necessary to develop an AI model for multi-organ segmentation that draws on medical data no single institution holds on its own. Previously, this required collecting image data from different healthcare institutions and training on a central server, an approach that is difficult to apply in the healthcare field, which is highly sensitive to data breaches and leaks. Furthermore, because institutions annotate different regions of interest depending on how they use the images, training a single model that can simultaneously segment multiple organs is difficult.


□ Against this backdrop, Professor Park proposed a multi-organ segmentation model based on federated learning that effectively utilizes distributed data with different organ labels[2] without risking data breaches or leaks. Federated learning allows different institutions to jointly train an AI model without directly sharing their data. However, when the information learned from distributed data is aggregated, some of it is lost, a problem known as “catastrophic forgetting,” and data labeled for different regions of interest makes training unstable, so the model may fail to converge or may train slowly.
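The aggregation step of federated learning described above can be sketched as a weighted average of locally trained model parameters (the standard FedAvg scheme). This is a minimal illustrative sketch, not the paper’s implementation; the function names and the size-weighted scheme are assumptions.

```python
# Minimal sketch of federated averaging (FedAvg): a central server combines
# locally trained model weights without ever seeing the raw patient data.
# Names and the weighting scheme are illustrative, not from the paper.

def federated_average(client_weights, client_sizes):
    """Average per-client model weights, weighted by local dataset size.

    client_weights: list of dicts mapping parameter name -> list of floats
    client_sizes:   number of training samples at each client (hospital)
    """
    total = sum(client_sizes)
    global_weights = {}
    for name in client_weights[0]:
        n_params = len(client_weights[0][name])
        global_weights[name] = [
            sum(w[name][i] * n / total
                for w, n in zip(client_weights, client_sizes))
            for i in range(n_params)
        ]
    return global_weights

# Two "hospitals" with different data volumes: the larger one contributes
# proportionally more to the shared global model.
w_a = {"conv1": [1.0, 2.0]}
w_b = {"conv1": [3.0, 4.0]}
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
# conv1 -> [2.5, 3.5]
```

The catastrophic-forgetting problem mentioned above arises exactly at this averaging step: information specific to one client’s organ labels can be diluted when its weights are merged with everyone else’s.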


□ To tackle this problem, Park’s team applied a knowledge distillation technique. The team first used a multi-head U-Net model to segment images of different body organs from different institutions, and shared the resulting representations through shared embedding learning. This allowed each institution, during federated training, to draw on the knowledge of both the global model and pre-trained organ-specific segmentation models. As a result, the team developed a technique that achieves better performance than previously proposed models while using fewer parameters and less computation.
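The distillation idea in the paragraph above can be illustrated with a softened cross-entropy loss: a local (student) model is nudged to match the softened predictions of a teacher, such as the global model or a pre-trained organ-specific model, even for organs its own dataset never labels. This is a generic sketch of knowledge distillation under assumed names and temperature, not the paper’s loss.

```python
import math

# Hedged sketch of knowledge distillation: the student is trained to match
# the teacher's softened class distribution. Function names, logit values,
# and the temperature are illustrative assumptions.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))

# Per-pixel class logits from the teacher can guide the student even where
# the local institution has no ground-truth label for that organ.
teacher = [2.0, 0.5, -1.0]   # e.g. liver / kidney / background scores
student = [1.8, 0.6, -0.9]
loss = distillation_loss(student, teacher)
```

Minimizing this loss pulls the student’s predictions toward the teacher’s, which is what lets locally trained models retain knowledge about organs outside their own label set.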


□ To verify the technique, the research team applied it to an abdominal CT dataset with seven different segmentation labels. In federated learning, traditional multi-organ segmentation models achieved an average performance of 66.82%, while the newly developed technique achieved a higher average performance of 71.00% and also reduced inference time through shared embedding learning.


□ Professor Park said, “In this study, we developed a technology that segments different organs of interest so that medical AI can be effectively trained and utilized even when medical image data from multiple healthcare institutions is not shared. I believe the technology will be greatly helpful in medical image analysis, and it is expected to contribute to the development of large-scale medical AI models in the future.”

□ Meanwhile, this study was funded by the DGIST General Project and the Daegu Digital Innovation Promotion Agency, and its findings were published in Medical Image Analysis (MedIA), one of the top journals in the field of medical AI.

 - Corresponding Author E-mail Address: [email protected]

[1] Federated learning: A distributed machine learning technique that allows different institutions to work with each other to train an AI model without directly sharing data stored in multiple locations, such as devices or institutions.

[2] Label: The ground-truth annotation attached to data for training; here, the organ regions marked in a medical image that the model learns to segment.