A Multi-Objective Deep Reinforcement Learning Algorithm for Spatio-temporal Latency Optimization in Mobile IoT-enabled Edge Computing Networks

Access: info:eu-repo/semantics/embargoedAccess
Date: 2025
Authors: Khoshvaght, Parisa; Haider, Amir; Rahmani, Amir Masoud; Gharehchopogh, Farhad Soleimanian; Anka, Ferzat; Lansky, Jan; Hosseinzadeh, Mehdi
Citation
KHOSHVAGHT, Parisa, Amir HAIDER, Amir Masoud RAHMANI, Farhad Soleimanian GHAREHCHOPOGH, Ferzat ANKA, Jan LANSKY, Mehdi HOSSEINZADEH. "A Multi-Objective Deep Reinforcement Learning Algorithm for Spatio-temporal Latency Optimization in Mobile IoT-enabled Edge Computing Networks". Simulation Modelling Practice and Theory, 143 (2025): 1-26.

Abstract
The rapid increase in Mobile Internet of Things (IoT) devices requires novel computational
frameworks. These frameworks must meet strict latency and energy efficiency requirements in
Edge and Mobile Edge Computing (MEC) systems. Spatio-temporal dynamics, which include the
position of edge servers and the timing of task schedules, pose a complex optimization problem.
These challenges are further exacerbated by the heterogeneity of IoT workloads and the constraints
imposed by device mobility. Balancing computational overhead against communication cost is a further challenge. To address these issues, advanced methods are needed
for resource management and dynamic task scheduling in mobile IoT and edge computing environments.
In this paper, we propose a multi-objective Deep Reinforcement Learning (DRL) algorithm: a Double Deep Q-Learning (DDQN) framework enhanced with spatio-temporal
mobility prediction, latency-aware task offloading, and energy-constrained IoT device trajectory
optimization for federated edge computing networks. DDQN was chosen for its training stability
and reduced overestimation of Q-values. The framework employs a reward-driven optimization
model that dynamically prioritizes latency-sensitive tasks, minimizes task migration overhead,
and balances energy efficiency across devices and edge servers. It integrates dynamic resource
allocation algorithms to address random task arrival patterns and real-time computational demands.
Simulations demonstrate up to a 35% reduction in end-to-end latency, a 28%
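
The abstract does not spell out the update rule. As background for the reduced-overestimation claim above, the standard double Q-learning target used by DDQN is sketched below, with online parameters \theta_t and target parameters \theta_t^{-} (notation assumed here, not taken from the paper):

\[
y_t = r_t + \gamma \, Q\big(s_{t+1}, \arg\max_{a'} Q(s_{t+1}, a'; \theta_t);\ \theta_t^{-}\big)
\]

Selecting the next action with the online network while evaluating it with the target network is what curbs the overestimation bias of vanilla DQN.

Likewise, the abstract names three reward objectives (latency, migration overhead, energy) without giving their combination; a weighted scalarization is one plausible form, shown purely as an illustrative assumption rather than the paper's actual reward:

\[
R_t = -\big(w_1 L_t + w_2 M_t + w_3 E_t\big), \qquad w_1 + w_2 + w_3 = 1,
\]

where L_t is end-to-end latency, M_t is task migration overhead, E_t is energy consumption, and the hypothetical weights w_i would encode the dynamic prioritization of latency-sensitive tasks.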


















