- Zrar Khald Abdul
- [email protected]
- 0773 064 8822
- Dissertation_Zrar Khald Abdul_toprint
-
Abstract:
Automatic fault detection in rotating machinery has emerged as a crucial factor in ensuring the high reliability of modern industrial systems, making its development a vital challenge for modern industry. This dissertation intends to develop an automatic, machine-learning-based model to detect gear faults. During the development of such a model, several issues were identified that must be addressed to establish a reliable monitoring system. Firstly, the literature has not yet explored the potential of representing vibration signals, despite their time-series nature, using both non-time-series and time-series features. Secondly, the vibration signal may have different channels depending on the type of accelerometer sensor, and there is a lack of studies showing the impact of the representation of these channels on the performance of the fault detection system. Thirdly, fault diagnosis for rotating machinery is challenging due to the non-stationary and non-linear characteristics that commonly arise from varying operating conditions; such conditions make it difficult for traditional linear models to capture the underlying fault patterns effectively.
To address these problems, several investigations are required: exploring various feature representations for fault detection, examining the nature of fault detection in terms of time-series versus non-time-series analysis, studying various forms of fusion, either through traditional concatenation or by adopting a multi-reservoir model for multi-channel signals, and analyzing the time consumption of the models.
Regarding the utilized features, Mel Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC) have been adopted, and various forms of these features have been used to feed two main types of model: time-series and non-time-series. For the time-series models, Long Short-Term Memory (LSTM) and the Echo State Network (ESN) have been adopted to classify gear faults. LSTM achieves high performance for gear fault classification despite being time-consuming during the training phase. To avoid this cost, the ESN is adopted; it consumes less time because some of its layer weights are non-trainable and selected randomly. A further investigation adopts a multi-channel reservoir, which leads to reliable gear fault detection.
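As an illustration of why the ESN trains quickly, the following is a minimal sketch of an echo state network in plain NumPy. The layer sizes, weight scaling, and ridge-regression readout are illustrative assumptions, not the dissertation's exact configuration: the input and reservoir weights are drawn randomly and never trained, and only the linear readout is fitted.

```python
# Minimal echo state network sketch in NumPy. The input and reservoir weights
# are random and fixed (never trained); only the linear readout is fitted,
# which is why training is cheap. Sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_classes = 13, 200, 6            # e.g. 13 cepstral coefficients per frame

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res)) # fixed random recurrent weights
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # keep spectral radius < 1

def reservoir_state(frames):
    """Drive the reservoir with a (T, n_in) feature sequence; return the final state."""
    x = np.zeros(n_res)
    for u in frames:
        x = np.tanh(W_in @ u + W_res @ x)
    return x

def fit_readout(sequences, labels, ridge=1e-3):
    """Train only the readout: ridge regression from final states to one-hot labels."""
    X = np.stack([reservoir_state(s) for s in sequences])
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

def predict(W_out, frames):
    return int(np.argmax(reservoir_state(frames) @ W_out))
```

Because fitting reduces to a single linear solve over the collected reservoir states, training cost stays low regardless of sequence length, in contrast to backpropagation through time in an LSTM.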
Regarding the non-time-series model, a Support Vector Machine (SVM) is fed with two different forms of feature representation, the first of which is the statistical form, referred to in this dissertation as stat-SVM. The problem with the statistical form is that it may lose some important features related to gear faults and thus degrade gear fault detection. To address this problem, we concatenate the frames of both features (MFCC and GTCC) and feed them to the SVM (concat-SVM); consequently, a high gear fault detection rate is achieved. A further investigation optimizes the features' hyperparameters, as both features were originally designed to extract features from speech signals. For this purpose, Grey Wolf Optimization (GWO) and the Fitness Dependent Optimizer (FDO) have been utilized to optimize three hyperparameters of both GTCC and MFCC. Optimizing the hyperparameters of MFCC has not shown any improvement; in contrast, an improvement in GTCC performance through the same optimization process is observed and validated.
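The difference between the two feature forms can be sketched as follows. librosa's MFCC extractor is real; librosa provides no GTCC function, so the GTCC matrix below is a stand-in, and the sampling rate, frame count, and SVM settings are illustrative assumptions rather than the dissertation's configuration.

```python
# Sketch of the two non-time-series feature forms fed to the SVM:
# stat-SVM compresses frames to summary statistics, concat-SVM keeps them.
import numpy as np
import librosa
from sklearn.svm import SVC

def stat_features(coeffs):
    """stat-SVM form: mean and std of each coefficient across all frames."""
    return np.hstack([coeffs.mean(axis=1), coeffs.std(axis=1)])

def concat_features(mfcc, gtcc, n_frames=100):
    """concat-SVM form: flatten and join a fixed number of MFCC and GTCC frames."""
    return np.concatenate([mfcc[:, :n_frames].ravel(), gtcc[:, :n_frames].ravel()])

sr = 20_000
y = np.random.default_rng(0).standard_normal(3 * sr)  # stand-in vibration signal
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, T) frame matrix
gtcc = mfcc.copy()                                    # placeholder for a GTCC extractor
x_stat = stat_features(mfcc)                          # 26 values: compact but lossy
x_concat = concat_features(mfcc, gtcc)                # 2600 values: frame detail kept
clf = SVC(kernel="rbf")                               # fit on stacked vectors per recording
```

The statistical form compresses each recording to a handful of summary values, which is exactly where fault-related frame detail can be lost; the concatenated form trades higher dimensionality for that detail.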
All the proposed models have been evaluated on two public datasets, namely Prognostic Health Monitoring 2009 (PHM09) and the Drivetrain Dynamic Simulator (DDS). Based on the results, the non-time-series models demonstrated superior performance compared to the time-series models, by a margin of 2 to 15% in accuracy.
- Erbil Technical Engineering College
- Information System Engineering
- Gear fault detection using machine learning algorithms
- Omar Shirko Mustafa
- [email protected]
- 0750 363 3901
- 1-FINA~2
-
Abstract:
Quantum Key Distribution (QKD) represents a groundbreaking application of quantum physics to the distribution of secure symmetric encryption keys. This method exploits unique attributes of quantum mechanics, such as the no-cloning theorem and the Heisenberg uncertainty principle, to create inherently secure keys resistant to eavesdropping. However, the primary challenge is the exponential reduction in the key distribution rate as distance increases. To extend the secure communication range of QKD networks, a Classic Trusted Relay (CTR) scheme has been proposed, introducing trusted intermediate nodes for enhanced security over distance. Nevertheless, concerns regarding the trust requirements of relay nodes and the reliability of communication channels pose significant risks, potentially leading to CTR failures and compromise of overall system security.
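For intuition about this decay (an illustration not taken from the dissertation): in fiber, the channel transmittance is eta = 10^(-alpha*L/10), and an ideal asymptotic key rate scales roughly linearly with eta, so under an assumed typical fiber loss of alpha ≈ 0.2 dB/km the achievable rate collapses exponentially with link length.

```python
# Illustration (not from the dissertation) of QKD's rate-distance problem:
# fiber transmittance is eta = 10**(-alpha * L / 10), and an ideal asymptotic
# key rate scales roughly linearly with eta.
alpha = 0.2                                # fiber loss in dB/km (typical value, assumed)
for L in (50, 100, 200, 400):              # link length in km
    eta = 10 ** (-alpha * L / 10)          # quantum-channel transmittance
    print(f"{L:>4} km: relative key rate ~ {eta:.0e}")
```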
This dissertation presents a novel approach addressing CTR failure challenges and optimizing the utilization of generated keys. The solution integrates Software-Defined Networking (SDN) with QKD, capitalizing on SDN's flexibility and control for improved network management. SDN, dividing the network into control and data planes, offers unified management and programmability.

To enhance QKD network resilience and reliability, the Software-Defined Quantum Trusted Relay Failure (SDQTRF) model is proposed. This model employs a new SDN controller function to orchestrate QKD network operations effectively. By incorporating SDN capabilities, the SDQTRF model enhances fault tolerance and the system's ability to recover from relay failures. The SDN controller actively monitors the QKD network, including relay node status and key distribution processes. Upon detecting a relay failure, the SDN controller responds proactively by reconfiguring the network through key recycling using Q-learning. If recycling fails, the controller reroutes the key distribution process through alternative paths determined by the Q-learning method. This proactive approach minimizes the impact of relay failures, ensures continuous key distribution, and preserves system security.

To assess the SDQTRF model's effectiveness, extensive simulations were conducted on two distinct network topologies: the National Science Foundation Network (NSFNET) and the United States network (USNET). The simulations ran on the Windows 11 operating system, which provided a stable environment, using a high-performance NVIDIA GeForce RTX 3060Ti GPU. The proposed SDQTRF model was simulated in the JavaScript, PHP, and Python programming languages, with the Python NetworkX library, chosen for their flexibility and extensive libraries for scientific computing and network simulation. Simulation results indicate significant improvements from the SDQTRF model, including a substantial increase in the key generation ratio, a remarkable enhancement of the key utilization rate, impressive recovery-after-failure rates, a considerable reduction in the avalanche effect, and a lower service blocking rate.
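How Q-learning can recover a path around a failed relay is pictured in the minimal sketch below, over a toy NetworkX graph. The topology, reward shaping, and hyperparameters are illustrative assumptions, not the SDQTRF controller's actual design: states are nodes, actions are next hops, entering the failed relay is penalized, and reaching the destination is rewarded.

```python
# Toy Q-learning rerouting sketch: learn next-hop choices from src to dst
# that avoid a failed relay node. All values here are illustrative.
import random
import networkx as nx

G = nx.cycle_graph(8)                          # stand-in ring topology (not NSFNET/USNET)
G.add_edges_from([(0, 3), (2, 6)])             # chords so alternative paths exist
src, dst, failed = 0, 4, 3                     # relay node 3 is assumed to have failed

Q = {(n, m): 0.0 for n in G for m in G[n]}     # Q-value per (node, next-hop) pair
lr, gamma, eps = 0.5, 0.9, 0.2                 # learning rate, discount, exploration

for _ in range(2000):                          # training episodes
    n = src
    for _ in range(20):                        # bounded walk length
        nbrs = list(G[n])
        m = random.choice(nbrs) if random.random() < eps else max(nbrs, key=lambda x: Q[(n, x)])
        reward = -100 if m == failed else (100 if m == dst else -1)
        future = 0.0 if m in (failed, dst) else max(Q[(m, k)] for k in G[m])
        Q[(n, m)] += lr * (reward + gamma * future - Q[(n, m)])
        if m in (failed, dst):
            break
        n = m

path, n = [src], src                           # greedy rollout of the learned policy
for _ in range(20):
    n = max(G[n], key=lambda x: Q[(n, x)])
    path.append(n)
    if n == dst:
        break
print(path)                                    # expected: a detour such as [0, 7, 6, 5, 4]
```

In the model described above, the SDN controller would play the role of this learner, computing the recovery path centrally and pushing it to the relays in the data plane.
- Erbil Technical Engineering College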
- Information System Engineering
- Quantum cryptography