Abstract (in English)
Breast cancer is the most commonly diagnosed cancer in women, accounting for 30% of new cancer diagnoses and posing a serious threat to women's health. Several tools are available for detecting breast cancer. Mammography is considered the gold-standard screening technique, but it has limitations for women with dense breast tissue; in such cases, breast ultrasound is often recommended as a supplementary imaging technique. Conventional ultrasound provides a two-dimensional image of the tissue; to overcome its limitations, automated breast ultrasound (ABUS) was recently introduced, enabling three-dimensional imaging. 3D ABUS is fast and repeatable and has transformed breast cancer imaging and diagnosis for women. Mass segmentation is essential for feature extraction in computer-aided diagnosis systems, which are being developed to facilitate the interpretation of these images, as well as for the volume estimation and temporal comparisons performed by radiologists. Because radiologists analyze a volume rather than a two-dimensional image when reading ABUS data, delineating a three-dimensional object, especially as image resolution increases, is time-consuming and exhausting; designing an automatic segmentation algorithm is therefore crucial. Mass segmentation presents three main challenges. First, breast masses vary significantly in shape, size, and texture, making it difficult to develop a method that is robust to these variations. Second, speckle noise degrades the quality of ultrasound images. Third, the large data dimensions complicate the process, since the segmentation algorithm must extract three-dimensional objects.
This study presents a method for effective noise reduction that enhances clarity without losing diagnostic information, serving as a preprocessing step for breast mass segmentation, and addresses masses of different scales and shapes in 3D ABUS images using deep learning. The proposed method is a self-supervised approach comprising an auxiliary texture-differentiation task, which enhances breast tissue quality and reduces speckle noise, and the primary segmentation task. The model employs an atrous inception module to capture tumors of varying shapes and sizes, while a residual channel attention module strengthens its ability to focus selectively on the most informative low- and high-level features. Experimental results show that the proposed method improves segmentation over state-of-the-art methods, achieving a Dice Similarity Coefficient (DSC) of 76.73, Jaccard Index (JI) of 62.48, Recall (REC) of 72.69, Precision (PRE) of 84.50, and 95% Hausdorff Distance (95HD) of 4.065.
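For reference, the two overlap metrics reported above (DSC and JI) can be computed directly on binary 3D masks. The following is a minimal NumPy sketch, not the evaluation code used in this study; the function name and toy volumes are illustrative assumptions:

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Compute the Dice Similarity Coefficient (DSC) and Jaccard
    Index (JI) for two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Empty-mask convention: perfect score when both masks are empty.
    dsc = 2.0 * intersection / total if total > 0 else 1.0
    ji = intersection / union if union > 0 else 1.0
    return dsc, ji

# Toy 3D volumes (real ABUS masks are far larger).
pred = np.zeros((4, 4, 4), dtype=np.uint8)
gt = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1   # 8 predicted voxels
gt[1:3, 1:3, 1:4] = 1     # 12 ground-truth voxels
dsc, ji = dice_and_jaccard(pred, gt)
print(dsc, ji)
```

Both metrics reward voxel-wise overlap but penalize it differently: DSC weights the intersection twice, so it is always at least as large as JI on the same pair of masks.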