Abstract
Federated Learning (FL) has emerged as a promising paradigm for collaborative machine learning without centralized data aggregation. This distributed approach allows multiple devices to train a shared model collaboratively while keeping sensitive data local. However, the unique characteristics of FL introduce challenges that affect the quality and performance of artefacts, such as models and gradients, produced during the learning process. This thesis introduces a novel approach to Zero Trust Federated Learning that integrates the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) decision-making method with the proposed Adaptive Trust Score Scaling System Filtering (ATSSSF) technique and Exponential Moving Average (EMA) smoothing to enhance model aggregation. The framework addresses key FL challenges: selecting the most reliable client updates in a distributed environment, mitigating the risk of data poisoning, and identifying malicious adversaries. By incorporating EMA, the system weights client contributions more stably over time, reducing the impact of noise, performance fluctuations, and malicious behavior. The ATSSSF technique also excludes clients that fall below a defined trust threshold, hardening the model against adversarial attacks. Initial experiments demonstrate that this combined scoring approach yields a more accurate, secure, and robust global model. The study evaluates performance metrics and historical data from Multi-access Edge Computing (MEC) nodes to ensure long-term learning stability, with results indicating significant improvements over traditional FL methods. A sensitivity analysis assesses the robustness of the resulting rankings and the overall effectiveness of the proposed framework.
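To make the scoring pipeline concrete, the sketch below shows one plausible reading of a single aggregation round: TOPSIS ranks clients by closeness to an ideal solution, EMA smooths those per-round scores into a trust score, and clients below a threshold are dropped before aggregation. The criteria (validation accuracy, loss, update norm), the smoothing factor `alpha`, the threshold `tau`, and all function names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def topsis_scores(X, weights, benefit):
    """Rank clients by TOPSIS closeness to the ideal solution.

    X       : (n_clients, n_criteria) decision matrix
    weights : (n_criteria,) criterion weights, summing to 1
    benefit : (n_criteria,) booleans; True = higher is better
    """
    # Vector-normalize each criterion column, then apply the weights.
    norm = np.linalg.norm(X, axis=0)
    norm[norm == 0] = 1.0                       # guard against all-zero columns
    V = (X / norm) * weights

    # Ideal best and ideal worst value per criterion.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

    # Euclidean distance to each ideal, then relative closeness C in [0, 1].
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst + 1e-12)

def ema_update(prev_trust, new_scores, alpha=0.3):
    """Smooth per-round TOPSIS scores into a stable trust score."""
    return alpha * new_scores + (1 - alpha) * prev_trust

# One round with three illustrative criteria per client:
# validation accuracy (benefit), loss (cost), update norm (cost).
X = np.array([
    [0.91, 0.35, 1.2],   # client 0
    [0.88, 0.40, 1.1],   # client 1
    [0.52, 1.90, 9.7],   # client 2: poor accuracy, huge update -> suspect
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])

scores = topsis_scores(X, weights, benefit)
trust = ema_update(prev_trust=np.full(3, 0.5), new_scores=scores)

# Threshold filtering: omit clients whose smoothed trust falls below tau.
tau = 0.4
selected = np.flatnonzero(trust >= tau)
print("TOPSIS closeness:", np.round(scores, 3))
print("Smoothed trust:  ", np.round(trust, 3))
print("Aggregating clients:", selected)
```

Under these assumptions, the EMA term is what supplies the long-term stability the abstract describes: a single noisy round cannot immediately eject an honest client, while a persistently low closeness score gradually drags a misbehaving client's trust below `tau`.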
| Original language | English |
|---|---|
| Qualification | Master of Science |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 25 Feb 2025 |
| Publisher | |
| Publication status | Published - 25 Feb 2025 |