Towards a Device-Independent Deep Learning Approach for the Automated Segmentation of Sonographic Fetal Brain Structures: A Multi-Center and Multi-Device Validation
Abstract
Access to quality prenatal ultrasonography (USG) is limited by the shortage of well-trained fetal sonographers. By leveraging deep learning (DL), we can assist even novice users in delivering standardized, high-quality prenatal USG examinations, which are necessary for timely screening and specialist referral in cases of fetal anomalies. We propose a DL framework to segment 10 key fetal brain structures across 2 axial views required for a standardized USG examination.
Despite being trained on images from only 1 center (2 USG devices), our DL model generalized well to unseen devices from other centers. The use of domain-specific data augmentation significantly improved segmentation performance across the test sets, and also improved the other benchmarked DL models. We believe our work opens the door to the development of device-independent and robust models, a necessity for seamless clinical translation and deployment.
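To make the idea of "domain-specific" augmentation concrete, the following is a minimal, hypothetical sketch of ultrasound-style perturbations (gain, gamma, and speckle-like noise) intended to mimic device-to-device variation; the paper's actual augmentation pipeline is not described in this abstract, so every function name and parameter below is an illustrative assumption, not the authors' method.

```python
# Hypothetical sketch of ultrasound-style ("domain-specific") augmentations.
# All parameter ranges are assumptions for illustration only.
import numpy as np


def augment_usg(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply device-variation-style perturbations to a grayscale USG image scaled to [0, 1]."""
    img = image.astype(np.float32)

    # Simulate acquisition gain/brightness differences between devices.
    gain = rng.uniform(0.8, 1.2)
    img = img * gain

    # Simulate contrast (gamma) differences between device presets.
    gamma = rng.uniform(0.7, 1.4)
    img = np.clip(img, 0.0, 1.0) ** gamma

    # Simulate speckle with multiplicative Gaussian noise.
    speckle = rng.normal(loc=1.0, scale=0.05, size=img.shape)
    img = img * speckle

    return np.clip(img, 0.0, 1.0)


# Example usage on a dummy image.
rng = np.random.default_rng(0)
dummy = rng.random((256, 256)).astype(np.float32)
augmented = augment_usg(dummy, rng)
```

Such transforms are typically applied on the fly during training so that the segmentation network sees a different simulated "device" at each epoch, which is one plausible route to the cross-device generalization reported above.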