Please use this identifier to cite or link to this item: http://theses.ncl.ac.uk/jspui/handle/10443/6614
Full metadata record
DC Field | Value | Language
dc.contributor.author | Alqattan, Duaa Salman M | -
dc.date.accessioned | 2025-12-04T15:12:56Z | -
dc.date.available | 2025-12-04T15:12:56Z | -
dc.date.issued | 2025 | -
dc.identifier.uri | http://hdl.handle.net/10443/6614 | -
dc.description | PhD Thesis | en_US
dc.description.abstract | Distributed and federated deep learning (DL) systems, operating across the client-edge-cloud continuum, have transformed real-time data processing in critical domains like smart cities, healthcare, and the industrial Internet of Things (IoT). By distributing DL training and inference tasks across multiple nodes, these systems enhance scalability, reduce latency, and improve efficiency. However, this decentralisation introduces significant security challenges, particularly concerning the availability and integrity of DL systems during training and inference. This thesis tackles these challenges through three main contributions.

• Edge-based Detection of Early-stage IoT Botnets: The first contribution employs Modular Neural Networks (MNN), a distributed DL approach, to develop an edge-based system for detecting early-stage IoT botnet activities and preventing DDoS attacks. By harnessing parallel computing on Multi-Access Edge Computing (MEC) servers, the system delivers rapid and accurate detection, ensuring uninterrupted service availability. This addresses the research gap in detecting early-stage IoT botnet activities as faults in network communication, enabling preventive measures before attacks escalate. Key findings include a significant reduction in false-negative rates and faster detection times (as low as 16 milliseconds), enabling early intervention in large-scale IoT environments.

• Security Assessment of Hierarchical Federated Learning (HFL): The second contribution is a security assessment of HFL, evaluating its resilience against data and model poisoning attacks during training and adversarial data manipulation during inference. Defence mechanisms such as Neural Cleanse (NC) and Adversarial Training (AT) are explored to improve model integrity in privacy-sensitive environments. This addresses the gap in systematically assessing the security vulnerabilities of HFL systems, particularly in detecting and mitigating targeted attacks in multi-level architectures. Key findings highlight that while HFL enhances scalability and recovery from untargeted attacks, it remains vulnerable to targeted backdoor attacks, especially in higher-level architectures, necessitating stronger defence mechanisms.

• Analysis of HFL Dynamics Under Attack: The third contribution examines HFL dynamics under attack using a Model Discrepancy score to analyse discrepancies in model updates. This study sheds light on the impact of adversarial attacks and data heterogeneity, providing insights for more robust aggregation methods in HFL. This addresses the gap in understanding the dynamics of HFL under adversarial attacks through model discrepancy phenomena. Key findings reveal that increased hierarchy and data heterogeneity can obscure the detection of malicious activity, emphasising the need for advanced aggregation methods tailored to complex, real-world scenarios.

Overall, this thesis enhances the security, availability, and integrity of distributed and federated DL systems by proposing novel detection and assessment methods, ultimately laying the foundation for more resilient DL-driven infrastructures. | en_US
dc.language.iso | en | en_US
dc.publisher | Newcastle University | en_US
dc.title | Security of distributed and federated deep learning systems | en_US
dc.type | Thesis | en_US
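
The abstract's second and third contributions rest on two mechanics: multi-level (hierarchical) aggregation of model updates, and a Model Discrepancy score computed over those updates. The metadata record does not give the thesis's exact formulation, so the following is a minimal sketch assuming two aggregation levels (clients to edge servers to cloud), FedAvg-style weighted averaging, and a discrepancy score defined as the cosine distance between a client's update and its edge server's aggregate. All names here (fedavg, model_discrepancy, and so on) are illustrative, not the thesis's own.

    import numpy as np

    def fedavg(updates, weights=None):
        """FedAvg-style aggregation: weighted mean of flattened update vectors."""
        weights = np.ones(len(updates)) if weights is None else np.asarray(weights, float)
        weights = weights / weights.sum()
        return sum(w * u for w, u in zip(weights, updates))

    def model_discrepancy(update, aggregate):
        """Assumed discrepancy score: cosine distance between a client's update
        and its edge-level aggregate. Ranges over [0, 2]; larger = more outlying."""
        cos = np.dot(update, aggregate) / (
            np.linalg.norm(update) * np.linalg.norm(aggregate) + 1e-12)
        return 1.0 - cos

    rng = np.random.default_rng(0)
    true_direction = rng.normal(size=16)        # shared "honest" gradient direction
    honest = lambda: true_direction + 0.1 * rng.normal(size=16)

    # Two edge servers: edge 0 has 4 honest clients; edge 1 has 3 honest clients
    # plus one sign-flipping (model-poisoning) client.
    edge_clients = [
        [honest() for _ in range(4)],
        [honest() for _ in range(3)] + [-true_direction + 0.1 * rng.normal(size=16)],
    ]

    # Level 1: each edge server aggregates its own clients.
    edge_models = [fedavg(clients) for clients in edge_clients]
    # Level 2: the cloud aggregates edge models, weighted by client counts.
    global_model = fedavg(edge_models, weights=[len(c) for c in edge_clients])

    # Discrepancy scores on the attacked edge: the sign-flipped client stands out.
    scores = [model_discrepancy(u, edge_models[1]) for u in edge_clients[1]]
    print([round(s, 2) for s in scores])        # last score is markedly larger

Under these assumptions, each extra aggregation level inserts another averaging step between a malicious client and the cloud, diluting exactly the signal such a score measures; that is one way to read the abstract's finding that increased hierarchy and data heterogeneity can obscure the detection of malicious activity.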
Appears in Collections: School of Computing

Files in This Item:
File | Description | Size | Format
Alqattan D S M 2025.pdf | Thesis | 2.33 MB | Adobe PDF
dspacelicence.pdf | Licence | 43.82 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.