- Chiman Haydar Salh
- [email protected]
- 0750 479 4161
- Disertation new1
-
Breast Cancer (BC) is a prevalent and potentially life-threatening disease affecting women worldwide. Timely and precise identification of the disease is essential for improving patient outcomes and survival rates. Deep learning models have emerged as powerful tools for medical image analysis that can potentially aid automatic BC detection. Several studies have been conducted in this area, yet important gaps remain: no datasets from Kurdistan Region of Iraq (KRI) hospitals categorize images by breast cancer case, and much of the existing work has focused on classification accuracy without addressing reliability. This dissertation intends to develop a machine learning based model that can detect BC and thereby identify potential cases of breast cancer swiftly and cost-effectively.
In this dissertation, robust approaches were developed for detecting breast cancer, particularly in mastectomy and Wide Local Excision (WLE) cases. BC datasets were constructed, including magnetic resonance imaging (MRI), dynamic contrast-enhanced MRI (DCE-MRI), and mammography images.
This dissertation intends to develop a model based on deep learning that can be used to detect breast cancer. It presents three main methods, which can be categorized as follows:
The first method aims to classify breast cancer MRI images through a three-stage process: segmentation; feature extraction using five techniques (Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), Edge-Oriented Histogram (EOH), Local Binary Patterns (LBP), and Bag of Words (BoW)); and classification using six algorithms (K-Nearest Neighbors (KNN), Artificial Neural Network (ANN), Support Vector Machine (SVM), AdaBoost, Decision Tree (DT), and Random Forest (RF)). The method demonstrated promising results, with 91.9% accuracy on images from Rizgary Hospital - Erbil and Hiwa Hospital - Sulaymaniyah, 97% accuracy on the ACRIN dataset, and 92.3% accuracy on breast cancer MRI images, highlighting its effectiveness in BC diagnosis via MRI imaging.
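The sketch below is a minimal illustration of this pipeline rather than the exact implementation: it extracts two of the five listed features (HOG and LBP) from already-segmented slices and compares four of the six listed classifiers, using synthetic stand-in images and labels.

```python
# Minimal sketch (not the dissertation's exact pipeline): HOG + LBP features
# from already-segmented slices, compared across several classical classifiers.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(image):
    """Concatenate a HOG descriptor and a uniform-LBP histogram for one slice."""
    hog_vec = hog(image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

# Synthetic stand-ins for segmented MRI slices and benign/malignant labels
rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))
labels = rng.integers(0, 2, size=40)

X = np.array([extract_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "AdaBoost": AdaBoostClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```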
The second method uses a convolutional neural network (CNN), ResNet152V2, and Mask Region-based Convolutional Neural Network (Mask R-CNN) to develop a mammography image classification and recognition model from authentic images, targeting high accuracy and recall with reduced training time and computation cost. ResNet152V2 achieved 100% accuracy in recognizing breast density type and distinguishing normal from abnormal images. A modified CNN was used to determine whether a mammogram shows the left or right breast, and Mask R-CNN was used to differentiate between malignant and benign tumors and to estimate tumor size.
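A minimal transfer-learning sketch of the ResNet152V2 classification stage is shown below, assuming a Keras/TensorFlow setup and a hypothetical mammograms/{normal,abnormal} folder layout; it is not the dissertation's exact architecture or training configuration.

```python
# Minimal sketch (not the dissertation's exact model): ResNet152V2 transfer
# learning for normal/abnormal mammogram classification. The directory layout
# mammograms/{normal,abnormal}/ and all hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet152V2

base = ResNet152V2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # reuse ImageNet features, train only the new head

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),   # ResNetV2-style scaling to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),        # normal vs abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall()])

# Hypothetical dataset folder; the two sub-folders give the two class labels
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mammograms", image_size=(224, 224), batch_size=16)
model.fit(train_ds, epochs=10)
```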
The third method uses different deep-learning approaches to extract deeper features from breast cancer MRI and DCE-MRI images. It employs EfficientNetV2L, Mask R-CNN, Detectron2, and Detectron2 with Faster R-CNN; YOLOv7 was also evaluated as an alternative to Mask R-CNN. It was concluded that Mask R-CNN achieved recognition accuracy more than 10% higher than YOLOv7. The dissertation was further extended to the automatic detection of breast cancer cases requiring mastectomy or WLE using different deep-learning models.
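The sketch below illustrates, under assumed class names and a hypothetical fine-tuned weight file, how a Detectron2 Mask R-CNN predictor of this kind can be configured to return tumour boxes and masks from a DCE-MRI slice; it is not the dissertation's trained model.

```python
# Minimal sketch (not the dissertation's trained model): a Detectron2 Mask R-CNN
# predictor for benign/malignant tumour detection and segmentation. The weight
# file and the input slice path are hypothetical placeholders.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2              # assumed: benign vs malignant
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = "output/model_final.pth"     # hypothetical fine-tuned weights

predictor = DefaultPredictor(cfg)
image = cv2.imread("dce_mri_slice.png")          # hypothetical DCE-MRI slice
instances = predictor(image)["instances"]

# Predicted classes, bounding boxes, and per-instance masks (mask area gives a
# rough proxy for tumour size in pixels)
print(instances.pred_classes)
print(instances.pred_boxes)
print(instances.pred_masks.sum(dim=(1, 2)))
```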
In conclusion, incorporating deep learning models into breast cancer diagnostics yields promising outcomes in accuracy and efficiency. These models can potentially be helpful tools for radiologists and pathologists in detecting and classifying breast cancer.
- Erbil Technical Engineering College
- Information Systems Engineering
- Machine Learning
Gear Fault Detection Based on Time and Non-Time Series Feature Representation Using Machine Learning
- Zrar Khald Abdul
- [email protected]
- 0773 064 8822
- Dissertation_Zrar Khald Abdul_toprint
-
Abstract:
Automatic fault detection in rotating machinery has emerged as a crucial factor for ensuring the high reliability of modern industrial systems, so developing automatic fault detection is a vital challenge in modern industry. This dissertation intends to develop an automatic machine learning based model to detect gear faults. During the development of such a model, several issues were identified that must be addressed to establish a reliable monitoring system. Firstly, the literature has not yet explored the potential of representing vibration signals, despite their time-series nature, using both non-time-series and time-series features. Secondly, the vibration signal may have different channels depending on the type of accelerometer sensor, and few studies have examined how the representation of these channels affects the performance of the fault detection system. Thirdly, fault diagnosis for rotating machinery is challenging because of the non-stationary and non-linear characteristics that commonly arise from varying operating conditions, which make it difficult for traditional linear models to capture the underlying fault patterns effectively.
To address these problems, several investigations are required: exploring various feature representations for fault detection, examining whether fault detection is best framed as time-series or non-time-series analysis, studying different forms of fusion models using traditional concatenation or multi-reservoir architectures for multi-channel signals, and assessing the time consumption of each model.
Regarding the utilized features, Mel-Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC) have been adopted, and various forms of these features have been used to feed two main types of model: a time-series model and a non-time-series model. For the time-series model, Long Short-Term Memory (LSTM) and Echo State Network (ESN) networks have been adopted to classify the gear faults. LSTM achieves high performance for gear fault classification despite being time-consuming during the training phase. To reduce this cost, the ESN is used, which trains faster because some of its layer weights are non-trainable and selected randomly. A further investigation adopting a multi-channel reservoir has led to reliable gear fault detection.
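As a rough sketch of the time-series path, assuming a Keras/TensorFlow environment and librosa for MFCC extraction (GTCC frames from a gammatone filter bank would be fed the same way), MFCC frame sequences from vibration records can drive an LSTM classifier as below; signal length, sampling rate, frame settings, and the six-class label set are illustrative stand-ins.

```python
# Rough sketch of the time-series model: MFCC frame sequences from vibration
# records feeding an LSTM classifier. All data here are synthetic stand-ins.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def mfcc_sequence(signal, sr=20000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) time-series representation of one record."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                n_fft=1024, hop_length=512)
    return mfcc.T

# Synthetic stand-ins for vibration records and gear-health labels (6 classes)
rng = np.random.default_rng(0)
signals = rng.standard_normal((32, 20000)).astype(np.float32)
labels = rng.integers(0, 6, size=32)
X = np.stack([mfcc_sequence(s) for s in signals])    # (records, frames, coeffs)

model = models.Sequential([
    layers.Input(shape=X.shape[1:]),
    layers.LSTM(64),                                  # recurrent time-series model
    layers.Dense(32, activation="relu"),
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=5, batch_size=8)
```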
Regarding the non-time-series model, a Support Vector Machine (SVM) is fed with two different forms of the feature representation. The first is the statistical form, called stat-SVM in this dissertation. The problem with the statistical form is that it may discard important fault-related information and thus degrade gear fault detection. To address this, a concatenation of the frames of both features (MFCC and GTCC) is fed to the SVM (concat-SVM), achieving a high gear fault detection rate. A further investigation optimizes the features' hyperparameters, since both features were originally designed to extract features from speech signals. For this purpose, Grey Wolf Optimization (GWO) and the Fitness Dependent Optimizer (FDO) have been utilized to optimize three hyperparameters of both GTCC and MFCC. Optimizing the MFCC hyperparameters showed no improvement, whereas an improvement in GTCC performance from the same optimization process is observed and validated.
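A minimal sketch of the two non-time-series representations, using synthetic cepstral frames in place of real MFCC/GTCC features and omitting the GWO/FDO hyperparameter optimization, might look as follows.

```python
# Minimal sketch of the two non-time-series representations: frame statistics
# (stat-SVM) versus concatenated frames (concat-SVM). Synthetic cepstral frames
# stand in for real MFCC/GTCC features; GWO/FDO optimization is omitted.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.standard_normal((36, 40, 13))      # (records, frames, cepstral coefficients)
labels = np.tile(np.arange(6), 6)          # balanced stand-in gear-health labels

stat_X = np.concatenate([X.mean(axis=1), X.std(axis=1)], axis=1)  # stat-SVM input
concat_X = X.reshape(len(X), -1)                                  # concat-SVM input

for name, feats in [("stat-SVM", stat_X), ("concat-SVM", concat_X)]:
    scores = cross_val_score(SVC(kernel="rbf"), feats, labels, cv=4)
    print(name, round(scores.mean(), 3))
```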
All the proposed models have been evaluated on two public datasets, namely Prognostic Health Monitoring 2009 (PHM09) and Drivetrain Dynamic Simulator (DDS). Based on the results, the non-time-series models demonstrated superior performance compared to the time-series models, by a margin of 2 to 15% in accuracy.
- Erbil Technical Engineering College
- Information Systems Engineering
- Gear fault detection using machine learning algorithms.
- Omar Shirko Mustafa
- [email protected]
- 0750 363 3901
- 1-FINA~2
-
Quantum Key Distribution (QKD) represents a groundbreaking application of quantum physics for secure symmetric encryption key distribution. This method exploits the unique attributes of quantum mechanics, such as the no-cloning theorem and the Heisenberg uncertainty principle, to create inherently secure keys resistant to eavesdropping. However, the primary challenge is the exponential reduction in key distribution rates as distances increase. To extend the secure communication range of QKD networks, a Classic Trusted Relay (CTR) scheme has been proposed, introducing trusted intermediate nodes for enhanced security over distance. Nevertheless, concerns regarding trust requirements in relay nodes and communication channel reliability pose significant risks, potentially leading to CTR failures and overall system security compromise.
This dissertation presents a novel approach addressing CTR failure challenges and optimizing the utilization of generated keys. The solution integrates Software-Defined Networking (SDN) with QKD, capitalizing on SDN's flexibility and control for improved network management. By dividing the network into control and data planes, SDN offers unified management and programmability. To enhance QKD network resilience and reliability, the Software-Defined Quantum Trusted Relay Failure (SDQTRF) model is proposed. This model employs a new SDN controller function to orchestrate QKD network operations effectively. By incorporating SDN capabilities, the SDQTRF model enhances fault tolerance and the system's ability to recover from relay failures. The SDN controller actively monitors the QKD network, including relay node status and key distribution processes. Upon detecting a relay failure, the controller responds proactively by reconfiguring the network through key recycling using Q-learning. If recycling fails, the controller reroutes the key distribution process through alternative paths determined by the Q-learning method. This proactive approach minimizes the impact of relay failures, ensures continuous key distribution, and preserves system security.
To assess the SDQTRF model's effectiveness, extensive simulations were conducted on two distinct network topologies: the National Science Foundation Network (NSFNET) and the United States network (USNET). The simulations utilized a high-performance NVIDIA GeForce RTX 3060 Ti GPU and ran on the Windows 11 operating system, which provided stability. The proposed SDQTRF model was simulated using the JavaScript, PHP, and Python programming languages together with the NetworkX library, chosen for their flexibility and extensive libraries for scientific computing and network simulation. Simulation results indicate significant improvements, including a substantial increase in the key generation ratio, a remarkable enhancement in the key utilization rate, impressive recovery-after-failure rates, a considerable reduction in the avalanche effect, and a lower service blocking rate due to the SDQTRF model implementation.
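The sketch below is a minimal illustration, not the dissertation's SDQTRF controller, of how tabular Q-learning can select an alternative relay path on a NetworkX topology once a trusted relay fails; the ring-with-shortcuts topology, reward values, and hyperparameters are illustrative assumptions.

```python
# Minimal illustration (not the SDQTRF controller): tabular Q-learning that
# learns an alternative relay path on a NetworkX topology after one trusted
# relay fails. Topology, rewards, and hyperparameters are illustrative.
import random
import networkx as nx

G = nx.cycle_graph(8)                        # stand-in QKD relay topology
G.add_edges_from([(0, 4), (2, 6)])           # extra links giving alternative routes
failed, src, dst = 3, 0, 5                   # node 3 is the failed trusted relay

Q = {(n, m): 0.0 for n in G for m in G[n]}   # Q-table over (node, next-hop) pairs
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):                        # training episodes
    node = src
    for _ in range(20):
        nbrs = list(G[node])
        if random.random() < eps:            # epsilon-greedy exploration
            nxt = random.choice(nbrs)
        else:
            nxt = max(nbrs, key=lambda m: Q[(node, m)])
        # reward: reach destination +10, hit the failed relay -10, hop cost -1
        r = 10.0 if nxt == dst else (-10.0 if nxt == failed else -1.0)
        future = 0.0 if nxt in (dst, failed) else max(Q[(nxt, m)] for m in G[nxt])
        Q[(node, nxt)] += alpha * (r + gamma * future - Q[(node, nxt)])
        if nxt in (dst, failed):
            break
        node = nxt

# Greedy roll-out of the learned policy: the rerouted key-distribution path
path, node = [src], src
while node != dst and len(path) < 10:
    node = max(G[node], key=lambda m: Q[(node, m)])
    path.append(node)
print("rerouted path avoiding the failed relay:", path)
```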
- Erbil Technical Engineering College
- Information Systems Engineering
- Quantum cryptography