
dc.contributor.author   Khoshvaght, Parisa
dc.contributor.author   Haider, Amir
dc.contributor.author   Rahmani, Amir Masoud
dc.contributor.author   Gharehchopogh, Farhad Soleimanian
dc.contributor.author   Anka, Ferzat
dc.contributor.author   Lansky, Jan
dc.contributor.author   Hosseinzadeh, Mehdi
dc.date.accessioned   2025-06-30T13:31:17Z
dc.date.available   2025-06-30T13:31:17Z
dc.date.issued   2025   en_US
dc.identifier.citation   KHOSHVAGHT, Parisa, Amir HAIDER, Amir Masoud RAHMANI, Farhad Soleimanian GHAREHCHOPOGH, Ferzat ANKA, Jan LANSKY, Mehdi HOSSEINZADEH. "A Multi-Objective Deep Reinforcement Learning Algorithm for Spatio-temporal Latency Optimization in Mobile IoT-enabled Edge Computing Networks". Simulation Modelling Practice and Theory, 143 (2025): 1-26.   en_US
dc.identifier.uri   https://hdl.handle.net/11352/5344
dc.description.abstract   The rapid increase in Mobile Internet of Things (IoT) devices requires novel computational frameworks. These frameworks must meet strict latency and energy efficiency requirements in Edge and Mobile Edge Computing (MEC) systems. Spatio-temporal dynamics, which include the position of edge servers and the timing of task schedules, pose a complex optimization problem. These challenges are further exacerbated by the heterogeneity of IoT workloads and the constraints imposed by device mobility. Balancing computational overhead against communication cost is a further challenge. To address these issues, advanced methods are needed for resource management and dynamic task scheduling in mobile IoT and edge computing environments. In this paper, we propose a multi-objective Deep Reinforcement Learning (DRL) algorithm: a Double Deep Q-Learning (DDQN) framework enhanced with spatio-temporal mobility prediction, latency-aware task offloading, and energy-constrained IoT device trajectory optimization for federated edge computing networks. DDQN was chosen for its improved stability and reduced overestimation of Q-values. The framework employs a reward-driven optimization model that dynamically prioritizes latency-sensitive tasks, minimizes task migration overhead, and balances energy efficiency across devices and edge servers. It integrates dynamic resource allocation algorithms to address random task arrival patterns and real-time computational demands. Simulations demonstrate up to a 35 % reduction in end-to-end latency, a 28 %   en_US
dc.language.iso   eng   en_US
dc.publisher   Elsevier   en_US
dc.relation.isversionof   10.1016/j.simpat.2025.103161   en_US
dc.rights   info:eu-repo/semantics/embargoedAccess   en_US
dc.subject   Mobile edge computing   en_US
dc.subject   Spatio-temporal optimization   en_US
dc.subject   Double deep Q-learning   en_US
dc.subject   Latency and energy efficiency   en_US
dc.title   A Multi-Objective Deep Reinforcement Learning Algorithm for Spatio-temporal Latency Optimization in Mobile IoT-enabled Edge Computing Networks   en_US
dc.type   article   en_US
dc.relation.journal   Simulation Modelling Practice and Theory   en_US
dc.contributor.department   FSM Vakıf Üniversitesi   en_US
dc.identifier.volume   143   en_US
dc.identifier.startpage   1   en_US
dc.identifier.endpage   26   en_US
dc.relation.publicationcategory   Article - International Refereed Journal - Institutional Faculty Member   en_US
dc.contributor.institutionauthor   Anka, Ferzat
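
The abstract describes a DDQN agent whose reward jointly weighs end-to-end latency, energy consumption, and task-migration overhead, and credits DDQN with reduced overestimation of Q-values. As a rough, non-authoritative sketch (the paper's network architecture, state/action encoding, and objective weights are not given in this record, so every function name, weight, and layer size below is an assumed placeholder), a double-DQN target combined with a scalarized multi-objective reward might look like this in PyTorch:

import torch
import torch.nn as nn

def multi_objective_reward(latency, energy, migration_cost,
                           w_latency=0.5, w_energy=0.3, w_migration=0.2):
    # Scalarize the three cost terms into a single reward; the weights are
    # illustrative placeholders, not values taken from the paper.
    return -(w_latency * latency + w_energy * energy + w_migration * migration_cost)

class QNetwork(nn.Module):
    # Small MLP mapping a state vector (e.g. device position, queue length,
    # channel state) to Q-values over candidate offloading actions.
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def ddqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    # Double-DQN target: the online network picks the next action and the
    # separate target network evaluates it.
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

The design point the abstract relies on is visible in ddqn_target: decoupling action selection (online network) from action evaluation (target network) is what curbs the overestimation bias of plain deep Q-learning.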

