
Strengthening of Reinforced Concrete Beams Having Circular Openings Using Near Surface Mounting Steel Bars

  • Hoshang Hayder Anwer
  • [email protected]
  • +9647504259106
  • Hoshang Hayder Anwer (MSc THESIS) 20-2-2024
  • ABSTRACT

    Reinforced concrete (RC) beams with circular openings present inherent structural challenges that compromise their load-bearing capacity and overall integrity. This thesis addresses the problem through an approach based on near-surface mounted (NSM) steel bars. Circular openings, often required by architectural and service designs, frequently reduce beam strength. This study investigated the effectiveness of NSM steel bars in restoring the shear strength of deep beams and of slender beams containing openings.

    The experimental work involved testing fourteen simply supported reinforced concrete beams, divided into two groups with different shear span-to-depth ratios (a/h = 1.5 and 3.65). Two specimens served as control samples, while the remaining beams had openings located at various positions. The openings were categorized as large or small, with opening height ratios (ho/h) of 0.4 and 0.2, respectively. In the second group, six specimens were strengthened using NSM steel bars arranged in three stirrup configurations: square, diamond, and parallelogram. All the beams had a cross-section of 100 mm × 200 mm and a total length of 2000 mm. The variables examined included the sizes and locations of the openings, the diameter of the bars, and the arrangement of the strengthening bars around the openings.

    The test results revealed that the presence of openings led to a reduction in the ultimate load. For specimens with large circular openings in the deep beam's shear zone, large circular openings in the slender beam subjected to shear, large circular openings subjected to combined shear and flexural loads, and small openings in the slender beam subjected to combined shear and flexural loads, the ultimate load decreased by approximately 45%, 18.7%, 14.6%, and 19.5%, respectively. Specimens strengthened with diamond stirrup bars showed an improvement in ultimate load of up to 33.1%, while the square and parallelogram configurations achieved improvements of up to 21.5% and 26.5%, respectively. Changing the bar diameter had only a slight effect, increasing the ultimate load by approximately 10% for the parallelogram scheme and 7% for the square scheme.
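    For orientation, the ratios quoted above imply the following nominal dimensions. The sketch below is only a back-of-the-envelope check derived from the stated ratios and cross-section, treating the opening height as the opening diameter; it is not taken from the thesis drawings.

```python
# Nominal dimensions implied by the ratios quoted in the abstract.
# Illustrative check only; the actual specimen drawings may differ.

h = 200.0   # overall beam depth, mm (from the 100 mm x 200 mm cross-section)

# Opening sizes from the opening-height ratios ho/h
large_opening = 0.4 * h   # 80 mm for the "large" circular opening
small_opening = 0.2 * h   # 40 mm for the "small" circular opening

# Shear spans from the shear span-to-depth ratios a/h
a_deep    = 1.5 * h       # 300 mm shear span for the deep-beam group
a_slender = 3.65 * h      # 730 mm shear span for the slender-beam group

print(large_opening, small_opening, a_deep, a_slender)
```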

  • Erbil Technical Engineering College
  • Civil Engineering Department.
  • Structural Engineering.

DETECTION AND CLASSIFICATION OF BONE FRACTURE BASED ON MACHINE LEARNING TECHNIQUES

  • Kosrat Dlshad Ahmed
  • [email protected]
  • +9647506307788
  • Bone Fracture - Thesis_compressed
  • Computers have demonstrated their significance in all areas of human existence, including financial services, e-commerce, communication, education, research, innovation, and healthcare. The use of computers to make medical diagnoses has expanded rapidly in recent years. An efficient approach uses modern technology while requiring fewer man-hours and materials to provide accurate diagnoses. When diagnosing an injury to the human body, doctors often obtain X-ray images of the affected area.

    Numerous advanced technological tools have been created to aid doctors and medical facilities in enhancing patient care. Conventional X-ray scanners, which often produce unclear images of bone structures, pose a risk of inaccurate fracture diagnoses when relied upon by surgeons.

    The approach detailed in this research outlines a thorough procedure aimed at improving the precision of image examination for diagnosing medical conditions from X-ray images. Real-life medical data often suffers from noise, inconsistency, and incompleteness, necessitating robust preprocessing. The initial step applies a Gaussian filter to the X-ray images for noise elimination, followed by adaptive histogram equalization to improve contrast. Edge detection is then performed on the contrast-enhanced images using the Canny technique.
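    A minimal sketch of this kind of preprocessing pipeline, assuming OpenCV, is shown below; the kernel size, clip limit, and Canny thresholds are illustrative placeholders rather than the values used in the thesis.

```python
import cv2

def preprocess_xray(path):
    """Denoise, enhance contrast, and extract edges from an X-ray image.

    Illustrative parameter values only; the thesis may use different settings.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # 1. Gaussian filter for noise elimination
    denoised = cv2.GaussianBlur(img, (5, 5), sigmaX=0)

    # 2. Adaptive histogram equalization (CLAHE) to improve contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)

    # 3. Canny edge detection on the enhanced image
    edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)

    return enhanced, edges
```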

    Feature extraction uses the Gray-Level Co-occurrence Matrix (GLCM) to derive textural features, yielding a set of 100 features for each image. To classify bone fractures, a range of machine learning algorithms (Decision Tree, Naïve Bayes, k-Nearest Neighbors, Random Forest, and Support Vector Machine) is employed alongside deep learning approaches, namely a Convolutional Neural Network (CNN) and transfer learning via VGG-16, to improve overall performance.
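    The sketch below illustrates GLCM texture features feeding a classifier with an 80/20 train-test split, assuming scikit-image and scikit-learn; the distances, angles, and SVM settings are placeholders and do not reproduce the 100-feature set or the full set of classifiers evaluated in the thesis.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def glcm_features(image):
    """Extract a small set of GLCM texture features from a grayscale (uint8) image.

    Illustrative distances/angles only; the thesis derives 100 features per image.
    """
    glcm = graycomatrix(image, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_and_evaluate(images, labels):
    """Train an SVM on GLCM features using an 80/20 train-test split."""
    X = np.array([glcm_features(img) for img in images])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=42)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```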

    The outlined approaches were applied to bone fracture detection using both national and international datasets. In this evaluation, 80% of the dataset was employed for training, while the remaining 20% was allocated for testing. The research assessed the effectiveness of the algorithms, including Naïve Bayes, Decision Tree, k-Nearest Neighbors, Random Forest, SVM, and CNN, with accuracy scores ranging from 0.64 to 0.96. Among these, the highest accuracies were achieved by SVM, Random Forest, and CNN.

  • Erbil Technical Engineering College
  • Information Systems Engineering
  • Machine Learning

Metaheuristic Optimization Algorithms in Applied Science and Engineering Applications

  • Azad Abdullah Ameen
  • [email protected]
  • +9647721068874
  • Metaheuristic Optimization Algorithms in Applied Science and Engineering Applications
  • The objective of this study is to delineate the challenges associated with solving complex optimization problems, with a specific focus on metaheuristic algorithms. A comprehensive investigation was undertaken to explore the principles and categories of these algorithms, to understand the issues they present, and to develop effective strategies for overcoming them. To address these issues, the study examines metaheuristic algorithms, which are known for their effectiveness on such problems; however, they often struggle with becoming stuck in local optima and with maintaining a balance between exploration and exploitation, and they can exhibit weak search capability and exploitation performance.

    To address these challenges, this research introduces three different algorithms: a modified version of child drawing development optimization (MCDDO), a hybrid algorithm combining child drawing development optimization with harmony search (CDDO-HS), and a novel metaheuristic called the social psychology interaction behavior algorithm (SPIBA), inspired by human social psychology interactions.

    The performance of these algorithms is evaluated using various benchmark test functions, including classical and CEC-C06 2019 benchmark functions. Statistical methods, such as ranking and the Wilcoxon rank-sum test, are used to compare their results with those of the original algorithms (CDDO and HS) and other popular algorithms.
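    A minimal sketch of this kind of statistical comparison, assuming SciPy, is given below; the run results are synthetic placeholders, not thesis data.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical best-fitness values from 30 independent runs of two algorithms
# on the same benchmark function (placeholder data, not thesis results).
rng = np.random.default_rng(0)
proposed = rng.normal(loc=0.10, scale=0.02, size=30)
baseline = rng.normal(loc=0.15, scale=0.03, size=30)

# Wilcoxon rank-sum test: a small p-value suggests the two result
# distributions differ significantly.
stat, p_value = ranksums(proposed, baseline)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.4f}")
```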

    In the beginning, two approaches were proposed, namely MCDDO and CDDO-HS. The main objective of both techniques is to overcome the issues faced by CDDO. CDDO is a human-based metaheuristic approach that may encounter challenges such as becoming trapped in local optima, performing suboptimally in the exploration phase, and stagnating around near-optimal solutions.

    The first proposed method, MCDDO, incorporates four key mechanisms: iterative pattern memory (PM) updating during the exploitation phase, where new experiences are compared with the child's current drawings; a change in the primary rule employed during the exploitation phase; parameter tuning to strike a balance between the exploration and exploitation phases; and preservation of the best solution obtained in each iteration, with new solutions compared against it during the exploration phase. Following the evaluation, the statistical findings indicate a consistent superiority of the proposed approach over the standard algorithms, evident in both the averages and the p-values. Specifically, out of the nineteen classical test functions and ten CEC-2019 benchmark test functions, the proposed approach performed better in thirteen and nine instances, respectively. These results were then compared with those obtained from the JAYA, SCA, ChOA, DA, GPSO, and BOA algorithms. The comparative analysis confirmed that the proposed approach outperformed all other metaheuristic algorithms in four out of the ten CEC-2019 benchmark test functions.
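    The best-solution preservation (elitism) mechanism described above is common to many population-based metaheuristics; the sketch below is an algorithm-agnostic illustration of it and is not the authors' MCDDO update rules.

```python
import numpy as np

def elitist_search(objective, lower, upper, pop_size=30, iterations=200, seed=0):
    """Generic population-based search that preserves the best solution found
    so far and compares every new candidate against it (elitism).

    Algorithm-agnostic illustration only, not the MCDDO update rules.
    """
    rng = rng_local = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    best_idx = fitness.argmin()
    best_x, best_f = pop[best_idx].copy(), fitness[best_idx]

    for t in range(iterations):
        step = 1.0 - t / iterations          # simple exploration-to-exploitation schedule
        for i in range(pop_size):
            candidate = pop[i] + step * rng.normal(size=dim) * (best_x - pop[i])
            candidate = np.clip(candidate, lower, upper)
            f = objective(candidate)
            if f < fitness[i]:               # accept improving moves
                pop[i], fitness[i] = candidate, f
            if f < best_f:                   # preserve the best solution each iteration
                best_x, best_f = candidate.copy(), f
    return best_x, best_f

# Example: minimize the sphere function in 10 dimensions
# best_x, best_f = elitist_search(lambda x: float(np.sum(x**2)),
#                                 lower=np.full(10, -5.0), upper=np.full(10, 5.0))
```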

    The second proposed method, CDDO-HS, is a hybridization of CDDO and HS that integrates two crucial mechanisms. Firstly, it relocates the PM to the algorithm's core, updating it with each iteration using the HS algorithm. Secondly, it sets the PM size at 80% of the overall population, aiming for optimal exploration. The statistical results reveal that the hybrid approach consistently outperforms the standard algorithms in both the averages and the p-values. Specifically, in comparison with CDDO, it achieves better results in eleven out of nineteen classical test functions and in all functions from the CEC-2019 benchmark. Compared with HS, the hybrid approach excels in sixteen out of nineteen classical test functions and seven out of ten CEC-2019 benchmark test functions. These results were then compared against the ChOA, BOA, FOX, GWO-WOA, WOA-BAT, and DCSO algorithms, and the comparison showed that the suggested method outperformed all of them in six of the ten CEC-2019 test functions.
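    The sketch below illustrates a harmony-memory-style update with the memory sized at 80% of the population, in the spirit of the mechanism described above; it is a schematic illustration of the HS step only, not the CDDO-HS implementation, and all parameter values are placeholders.

```python
import numpy as np

def update_pattern_memory(population, fitness, hmcr=0.9, par=0.3,
                          bandwidth=0.05, rng=None):
    """One HS-style improvisation over a pattern memory (PM) holding the best
    80% of the population. Schematic illustration; parameters are placeholders.
    """
    rng = rng or np.random.default_rng()
    pop_size, dim = population.shape
    pm_size = int(0.8 * pop_size)

    # Pattern memory = best 80% of the current population
    order = np.argsort(fitness)
    memory = population[order[:pm_size]]

    # Improvise a new vector: take each component from memory (with optional
    # pitch adjustment) or draw it at random within the observed range.
    new = np.empty(dim)
    lo, hi = population.min(axis=0), population.max(axis=0)
    for d in range(dim):
        if rng.random() < hmcr:
            new[d] = memory[rng.integers(pm_size), d]
            if rng.random() < par:
                new[d] += bandwidth * rng.uniform(-1.0, 1.0)
        else:
            new[d] = rng.uniform(lo[d], hi[d])
    return new
```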

    In the subsequent phase, SPIBA, a novel metaheuristic optimization algorithm inspired by social psychology interaction behavior and social interaction (processes involving the stimulus or response of two or more individuals), was developed. These ideas are incorporated into SPIBA's core, which operates as a single-objective, population-based algorithm. SPIBA's performance was compared with that of the ChOA, BOA, FOX, GWO-WOA, WOA-BAT, and DCSO algorithms, using exploration and convergence measures to assess its success. The results indicated that the proposed approach outperformed all other metaheuristic algorithms in six of the ten CEC-2019 benchmark test functions.

    Additionally, SPIBA was applied to real-world engineering and applied science problems, specifically pressure vessel design and the analysis of the pathological IgG fraction in the nervous system. In the pressure vessel design problem, compared with eight other algorithms (WOA, GWO, FDO, CFDO, WOAGWO, KMGWO, RFSO, and MFDO), SPIBA emerged as the top-performing algorithm, with an average solution quality of 6.01E-05 and the lowest standard deviation of 2.00E-04, securing the leading position. In the context of the "Nervous System's Pathological IgG Fraction" application problem, a comparison between SPIBA and Leo revealed a significant improvement in the proposed algorithm's performance.
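    The pressure vessel design task is a standard constrained benchmark in the metaheuristics literature. The sketch below gives a commonly used formulation as a penalized Python objective; it is the textbook version of the problem and may differ in detail from the formulation used in the thesis.

```python
import numpy as np

def pressure_vessel_cost(x, penalty=1e6):
    """Standard pressure vessel design benchmark: shell thickness Ts, head
    thickness Th, inner radius R, and cylinder length L, with constraint
    violations handled by a simple penalty term.

    Textbook formulation; the thesis may use a slightly different variant.
    """
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L
            + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L
            + 19.84 * Ts**2 * R)

    g = [
        -Ts + 0.0193 * R,                                            # shell thickness limit
        -Th + 0.00954 * R,                                           # head thickness limit
        -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1_296_000,  # minimum volume
        L - 240.0,                                                   # length limit
    ]
    violation = sum(max(0.0, gi) for gi in g)
    return cost + penalty * violation
```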

  • Erbil Technical Engineering College
  • Information Systems Engineering
  • AI-Optimization

Lagrange Elementary Optimization Algorithm Based on New Crossover Operator

  • Aso Mohammed Aladdin
  • [email protected]
  • +9647725426616
  • phd-dissertation-last-draft-Aso
  • Evolutionary methods can solve optimization problems; however, their effectiveness and scalability are challenged as problem complexity increases. Population-based evolutionary metaheuristic algorithms rely heavily on operators that determine their overall performance. These operators enhance exploration and exploitation, which are crucial for effective search and optimization. This research introduces a crossover operator, the Lagrangian Problem Crossover (LPX), to boost the performance of evolutionary algorithms on new optimization problems. Additionally, it presents Lagrange Elementary Optimization (LEO), a single-objective algorithm in which LPX plays a significant role.

    The crossover operator in population-based algorithms is crucial for selecting and generating suitable solutions during optimization. Its efficiency saves time, minimizes errors, and reduces costs in engineering applications. The initial phase of the study presents an overview of current crossover methods used in engineering operations and problem representation. It then presents LPX, a novel hybrid technique inspired by the principles of the Lagrangian Dual Function (LDF). Experimental evaluations compare LPX with other real-coded crossover standards such as Simulated Binary Crossover (SBX), Blended Crossover (BX), and Qubit-Crossover (Qubit-X). The results indicate that LPX generally outperforms the other methods and shows comparable performance in the remaining cases. Specifically, on TF7, LPX achieves superior performance and shorter computation time across all three random values, with a mean (α=0.2) of 0.0048, a standard deviation (α=0.2) of 0.0031, and a computation time (α=0.2) of 143.005 units. Statistical analysis validates the significance and reliability of LPX compared with the other crossover standards.
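    For context, Simulated Binary Crossover (SBX), one of the baselines listed above, is a standard real-coded operator; a minimal sketch follows. It illustrates the SBX baseline only, not the proposed LPX operator.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=2.0, rng=None):
    """Simulated Binary Crossover (SBX) for real-coded parent vectors p1 and p2.

    Sketch of the SBX baseline mentioned above, not the proposed LPX.
    eta controls how closely the offspring stay to their parents.
    """
    rng = rng or np.random.default_rng()
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```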

    In the second phase of the research, a novel evolutionary method named Leo is introduced. Leo is inspired by the vaccination process, making use of the human blood albumin quotient. It follows a self-adaptive approach, evolving intelligent agents through gene crossover based on fitness function values. The algorithm's accuracy and precision are extensively validated through rigorous testing on diverse benchmark functions, including both traditional and CEC-C06 2019 benchmarks. Leo's performance is benchmarked against well-known algorithms such as Dragonfly, Genetic Algorithm, Particle Swarm Optimization, and others across multiple functions. A comprehensive comparison evaluates Leo's effectiveness and efficiency in solving optimization problems against these established algorithms. In optimizing the multimodal test functions (TF8-TF13), particularly TF11, the proposed approach outperformed the other algorithms, with an average TF11 value of 2.7393E-08. Notably, across the composite test functions (TF14-TF19), the proposed method exhibited consistently high performance compared with the base algorithms. The statistical analysis supports the research conclusions, and real-world applications of Leo are also showcased. The stability of Leo is confirmed using standard metrics for exploration and exploitation.

  • Erbil Technical Engineering College
  • Information Systems Engineering
  • Information Systems Engineering