<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>ماجستير تكنولوجيا المعلومات Master’s in Information Technology</title>
<link>https://dspace.qou.edu/handle/194/2880</link>
<description/>
<pubDate>Tue, 28 Apr 2026 17:59:07 GMT</pubDate>
<dc:date>2026-04-28T17:59:07Z</dc:date>
<item>
<title>A Hybrid Model for Predicting Heart Disease Using DNN and ML Algorithms</title>
<link>https://dspace.qou.edu/handle/194/3069</link>
<description>A Hybrid Model for Predicting Heart Disease Using DNN and ML Algorithms
Ayad Salamah, Tharaa; Abuzir, Prof. Yousef
Cardiovascular diseases are among the most prominent health challenges facing medical systems worldwide, with high mortality rates recorded annually. This is largely due to the difficulty of early detection, as diagnosis often relies on clinical findings that appear only in advanced stages of the disease. In this context, artificial intelligence (AI) techniques, particularly machine learning and deep learning, can be employed to develop accurate predictive models that help identify patients at risk of heart disease early. This study develops a hybrid model that combines traditional machine learning algorithms—Support Vector Machines (SVMs), Random Forests, XGBoost, LightGBM, and Logistic Regression—with Deep Neural Networks (DNNs), which are used to extract advanced representative features from the data. The study uses a well-established medical dataset obtained from Kaggle, consisting of four databases: Cleveland, Hungary, Switzerland, and Long Beach. It contains 76 features, including the predicted feature, although all published experiments use a subset of 14 of them. The target field indicates the presence of heart disease in the patient as an integer value, where 0 = no disease and 1 = disease. The study introduces a multi-stage methodology for heart disease prediction, leveraging feature augmentation to enhance performance on a moderately sized dataset of 1,025 patient records. The process begins with preprocessing the clinical data, comprising 13 features (age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting blood sugar, resting electrocardiographic results, maximum heart rate, exercise-induced angina, oldpeak, slope, number of major vessels, and thal), followed by initial training of traditional machine learning models (SVM, Random Forest, XGBoost, LightGBM, and Logistic Regression) on these features. 
A Deep Neural Network (DNN) then extracts 32 high-level features from its penultimate layer, capturing complex, nonlinear patterns not evident in the original data. These DNN features are combined with the 13 clinical features to form a 45-feature set, significantly enriching the input space. A comprehensive set of evaluation indicators was used, including accuracy, the confusion matrix, precision, recall, F1 score, and the area under the ROC curve (AUC), to evaluate the models before and after feature combination. On the original features, the Random Forest model achieved the highest performance among the classification models, with an accuracy of 97.80%, a recall of 99.31%, and a predictive accuracy of 96.64%. By comparison, a previous study by Smith et al. (2018) that employed traditional machine learning algorithms reported a Support Vector Machine accuracy of 85.2%. The results also showed a significant improvement in the performance of all classification algorithms after combining the original features with those extracted by the DNN, with increased classification accuracy across all key indicators. The SVM algorithm achieved the highest AUC, 99.90%, demonstrating its ability to distinguish accurately between classes, while Random Forest, XGBoost, and LightGBM achieved identical overall accuracy (99.63%) and matching values on the other indicators. This improvement across every algorithm denotes consistency across models, not numerical accuracy alone, and reflects the effectiveness of the hybrid approach in enhancing predictive performance, especially given the challenges of class imbalance and small dataset sizes. 
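The feature-augmentation step described above can be sketched in a few lines: run a trained network's hidden layers forward, take the penultimate-layer activations, and concatenate them with the raw clinical features. The toy weights and layer sizes below are placeholders for illustration, not the thesis's trained DNN, which emits 32 penultimate features from 13 clinical inputs.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, biases):
    # one fully connected layer: y_j = sum_i x_i * W[i][j] + b_j
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(zip(*weights), biases)]

def penultimate_features(x, layers):
    """Forward-pass through the hidden layers and return the
    activations of the last hidden (penultimate) layer."""
    h = x
    for W, b in layers:
        h = relu(dense(h, W, b))
    return h

# toy "clinical" record with 3 features (the thesis uses 13)
x = [0.5, -1.2, 0.8]

# two tiny hidden layers standing in for the trained DNN
layers = [
    ([[0.1, -0.2], [0.3, 0.4], [-0.5, 0.6]], [0.8, 0.3]),  # 3 inputs, 2 units
    ([[0.7, 0.2], [-0.3, 0.5]], [0.0, 0.0]),               # 2 inputs, 2 units
]

deep = penultimate_features(x, layers)
hybrid = x + deep          # concatenate: original + DNN-extracted features
print(len(hybrid))         # 3 original + 2 deep = 5
```

The concatenated vector is what the traditional classifiers are retrained on; in the thesis this yields the 13 + 32 = 45-feature set.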
This study confirms that combining machine learning and deep learning techniques provides a promising path toward intelligent diagnostic tools capable of supporting medical decision-making, reducing false alarm rates, and improving opportunities for early treatment, especially in resource-constrained medical settings.
</description>
<pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://dspace.qou.edu/handle/194/3069</guid>
<dc:date>2026-02-06T00:00:00Z</dc:date>
</item>
<item>
<title>Securing Communication Standards of IoT-based Smart Irrigation Systems in Palestine</title>
<link>https://dspace.qou.edu/handle/194/3054</link>
<description>Securing Communication Standards of IoT-based Smart Irrigation Systems in Palestine
Younis AlShwieki, Samar; Jaloudi, Dr. Eng. Samer
The agricultural sector is one of the most vital sectors in Palestine, where water resource management faces increasing challenges due to water scarcity, limited infrastructure, and the continued use of inefficient traditional irrigation methods. In recent years, Internet of Things-based smart irrigation systems have emerged as an effective solution to improve water use efficiency. However, these systems are exposed to growing security threats, particularly at the communication level, especially in environments that rely on low-power wireless networks.&#13;
This thesis aims to secure communication standards for IoT-based smart irrigation systems in Palestine by designing and implementing a secure and flexible communication framework that considers the specific characteristics of the Palestinian environment, including the geographical distribution of agricultural lands, variable network coverage, and limited energy resources. A practical smart irrigation system was designed and implemented based on LoRa technology, integrating multiple security mechanisms such as data encryption, secure key management using hardware-based secure elements, and built-in LoRa protection features, such as message headers, to mitigate cyber-attacks, including replay attacks.&#13;
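The replay-attack mitigation mentioned above can be illustrated with a minimal sketch: each uplink frame carries a monotonically increasing counter plus a keyed tag, and the receiver rejects any frame whose tag does not verify or whose counter is not fresh. This mirrors, in simplified form, the frame-counter and message-integrity-code idea used by LoRaWAN; the key, frame layout, and truncated tag length here are illustrative assumptions, not the thesis's implementation.

```python
import hmac, hashlib, struct

# hypothetical per-device key; in the thesis this lives in a secure element
SECRET = b"device-key-from-secure-element"

def make_frame(counter, payload):
    """Build an uplink frame: 4-byte big-endian counter, payload,
    then a 4-byte truncated HMAC-SHA256 tag over both."""
    header = struct.pack(">I", counter)
    tag = hmac.new(SECRET, header + payload, hashlib.sha256).digest()[:4]
    return header + payload + tag

def accept_frame(frame, last_counter):
    """Return (payload, counter) if the frame is authentic and fresh,
    else None."""
    header, payload, tag = frame[:4], frame[4:-4], frame[-4:]
    expected = hmac.new(SECRET, header + payload, hashlib.sha256).digest()[:4]
    if not hmac.compare_digest(tag, expected):
        return None                      # forged or corrupted frame
    counter = struct.unpack(">I", header)[0]
    if not counter > last_counter:
        return None                      # replayed or stale frame
    return payload, counter

frame = make_frame(7, b"soil_moisture=41")
assert accept_frame(frame, last_counter=6) is not None
assert accept_frame(frame, last_counter=7) is None   # replay rejected
```

Because the counter is covered by the tag, an attacker can neither forge a fresh counter nor re-send an old frame unmodified.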
In addition, an alternative system based on the MQTT protocol was designed to operate when direct Internet connectivity is available, enhancing system flexibility and enabling a comparison of different communication standards in terms of performance and security. When deployed over TLS, MQTT provides encrypted communication, authentication, and data integrity with low latency, making it suitable for connecting smart irrigation systems to cloud servers and data analytics platforms.&#13;
The results indicate that combining long-range, low-power communication technologies such as LoRa with Internet-based protocols like MQTT improves the reliability and security of smart irrigation systems while maintaining efficient energy consumption. This hybrid approach enhances water resource management and supports data-driven decision-making and agricultural sustainability in Palestine.
</description>
<pubDate>Sun, 05 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://dspace.qou.edu/handle/194/3054</guid>
<dc:date>2025-01-05T00:00:00Z</dc:date>
</item>
<item>
<title>Empowering Palestinian Voices Using Text Mining Techniques to Overcome Social Media Expression Restrictions</title>
<link>https://dspace.qou.edu/handle/194/3045</link>
<description>Empowering Palestinian Voices Using Text Mining Techniques to Overcome Social Media Expression Restrictions
Abd Alqaher Srour, Maab; Dweib, Dr. Mohamed
In an era where digital communication shapes political discourse and collective memory, social media platforms have become both spaces of empowerment and instruments of control. This thesis investigates the algorithmic suppression of Palestinian digital expression on major platforms—Facebook, Instagram, and X (formerly Twitter)—through the integration of Natural Language Processing (NLP) and text mining techniques. The study situates itself within the interdisciplinary domains of artificial intelligence, digital rights, and computational social science, aiming to uncover how algorithmic bias operates within automated moderation systems.&#13;
Data were collected using Apify-based scrapers and analysed in Kaggle through multiple preprocessing and modelling stages. Techniques such as TF-IDF vectorization, sentiment and emotion analysis, and Named Entity Recognition (NER) were employed to identify linguistic and affective patterns correlated with content suppression. Comparative machine-learning experiments—including Logistic Regression, Naïve Bayes, Linear SVC, SGD, and transformer-based models (BERT and XLM-R)—revealed consistent evidence of algorithmic bias. Facebook demonstrated structural filtering and downranking of politically sensitive posts, Instagram exhibited emotional suppression of solidarity content, and X retained partial transparency but reflected selective engagement constraints.&#13;
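As a concrete illustration of the pipeline's first modelling stage, a plain TF-IDF weighting can be computed with nothing beyond the standard library. The documents and whitespace tokenizer below are toy stand-ins for the scraped posts; the thesis's actual vectorizer settings are not specified here.

```python
import math
from collections import Counter

docs = [
    "free palestine solidarity post",
    "daily photo post",
    "solidarity march photo",
]

def tf_idf(docs):
    """Plain TF-IDF: tf = term count / doc length, idf = log(N / df)."""
    tokenized = [d.split() for d in docs]
    n = len(tokenized)
    # document frequency: in how many docs each term appears
    df = Counter(t for doc in tokenized for t in set(doc))
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

vectors = tf_idf(docs)
# "palestine" appears in only one of three docs, so its idf weight
# exceeds that of the common term "post"
assert vectors[0]["palestine"] > vectors[0]["post"]
```

Terms concentrated in suppressed posts but rare elsewhere receive high weights, which is what makes TF-IDF useful for surfacing the vocabulary correlated with moderation decisions.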
The results confirm that algorithmic repression is not incidental but systematically embedded within platform architectures and moderation logic. Beyond quantitative findings, the research advances an Integrated Research Framework that combines computational rigor with ethical reflection, positioning data science as a form of digital resistance.&#13;
This thesis contributes to the emerging field of algorithmic justice by presenting empirical evidence of digital repression and proposing a context-aware, ethically grounded approach to AI design. It concludes that reclaiming visibility in the algorithmic age is not merely a technical challenge but a moral and political act—one that defines the future of digital freedom and equity.
</description>
<pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://dspace.qou.edu/handle/194/3045</guid>
<dc:date>2026-01-24T00:00:00Z</dc:date>
</item>
<item>
<title>Ransomware Detection: The Efficacy of Behavior-Based and Machine Learning Techniques</title>
<link>https://dspace.qou.edu/handle/194/3038</link>
<description>Ransomware Detection: The Efficacy of Behavior-Based and Machine Learning Techniques
Yousef Amro, Manar; Dweib, Dr. Mohammad
Ransomware remains one of the most pervasive cybersecurity threats, exploiting both technological and human vulnerabilities to inflict severe economic and operational damage. This thesis investigates the efficacy of hybrid detection methodologies that integrate behavior-based analysis with machine learning (ML) and deep learning Long Short-Term Memory (LSTM) approaches to improve detection accuracy and generalization across diverse ransomware variants.&#13;
The proposed framework unifies three behavioral dimensions—File System Monitoring (FSM), Process Behavior Analysis (PBA), and Network Behavior Analysis (NBA)—into a comprehensive dataset of 15,411 instances and 224 features, aligned through a Timestamp-Based Integration process. Multiple classifiers, including Random Forest, Naïve Bayes, Support Vector Machine (SVM), Gradient Boosting, and LSTM, were trained and evaluated. Two integration strategies—decision-level fusion (voting) and model-level stacking—were compared empirically to identify the most robust hybrid configuration.&#13;
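Decision-level fusion of the kind compared above can be sketched as a simple majority vote over the base models' per-sample labels. The three classifier outputs below are hypothetical, not results from the thesis.

```python
from collections import Counter

def majority_vote(predictions):
    """Decision-level fusion: each base model casts one vote per sample;
    the ensemble label is the most common vote for that sample."""
    fused = []
    for votes in zip(*predictions):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# hypothetical per-sample labels from three base classifiers
rf  = ["ransomware", "benign", "ransomware", "benign"]
nb  = ["ransomware", "ransomware", "benign", "benign"]
svm = ["ransomware", "benign", "ransomware", "ransomware"]

print(majority_vote([rf, nb, svm]))
# prints ['ransomware', 'benign', 'ransomware', 'benign']
```

Model-level stacking differs in that a meta-learner is trained on the base models' outputs instead of counting votes, which is what allowed the stacking ensemble to edge out voting in the experiments.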
Experimental results demonstrated that the stacking ensemble [BB, XGB, NB] achieved superior macro-average performance (F1 ≈ 0.93, AUPRC ≈ 0.91), validating the advantage of multi-model learning for ransomware detection. Additionally, Synthetic Minority Over-sampling Technique (SMOTE) balancing and probability calibration improved fairness and stability across minority ransomware families such as Ryuk, Sodinokibi, and LockBit.&#13;
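The SMOTE balancing step can be illustrated by its core operation: generating a synthetic minority sample by interpolating between a real minority sample and one of its nearest neighbours. This is a self-contained sketch of the interpolation idea only, not the full SMOTE algorithm or the thesis's configuration.

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """SMOTE-style oversampling sketch: each synthetic point lies on the
    segment between a minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(x, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()               # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + lam * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

# toy 2-D minority class (e.g. samples of a rare ransomware family)
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = smote_like(minority, n_new=4)
assert len(new) == 4
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority family already occupies.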
The study also incorporated statistical validation (McNemar’s test) and sensitivity analysis to ensure the reliability of results under variable conditions. Finally, ethical and policy considerations were highlighted to guide the responsible deployment of AI-driven cybersecurity systems.&#13;
This research bridges a major gap in ransomware detection studies by operationalizing a cross-domain hybrid framework that synchronizes host and network behavioral data, providing a replicable and scalable foundation for intelligent, interpretable, and ethically aligned ransomware defense systems.
</description>
<pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
<guid isPermaLink="false">https://dspace.qou.edu/handle/194/3038</guid>
<dc:date>2026-01-05T00:00:00Z</dc:date>
</item>
</channel>
</rss>
