Optimized Task Offloading in D2D-Assisted Cloud-Edge Networks Using Hybrid Deep Reinforcement Learning
-
https://doi.org/10.14419/xm2ebp25
Received date: May 21, 2025
Accepted date: June 14, 2025
Published date: July 3, 2025
-
Cloud-Edge-Device Networks; Deep Reinforcement Learning; Device-to-Device Communication; Operational Efficiency; Resource Allocation.
Abstract
Device-to-Device (D2D) communication has become an essential building block of modern networks, allowing devices to exchange information directly with one another. In cloud-edge-device networks, a task can be handled in several ways: a device operating at capacity may execute it locally, hand it to an idle neighboring device over a D2D link, offload it to an edge server, or send it directly to a cloud server. Existing methods do not fully exploit D2D-assisted offloading and fail to capture the benefits of combining cloud, edge, and device resources, which makes resource allocation a complex problem that traditional solutions struggle to handle efficiently. This work presents a task offloading mechanism that minimizes overall system cost, expressed in terms of completion time and energy consumption, by jointly optimizing four critical system factors: task offloading selection, transmission power, transmission rate, and computational resource allocation. The proposed hybrid approach combines Softmax Deep Double Deterministic Policy Gradients (SD3), a deep reinforcement learning (DRL) method, with numerical optimization techniques. The original problem is decomposed into smaller subproblems: the SD3-based DRL agent controls offloading decisions throughout the system, while the numerical techniques manage power and resource allocation. Extensive simulations covering seven scenarios compare the proposed method against four traditional solution approaches. The results demonstrate that the proposed solution reduces system cost, improves resource utilization, and yields better operational efficiency. The main contribution of this work is a novel hybrid DRL-based task offloading approach that strengthens cloud-edge-device collaboration through D2D communication and shows that coupling machine learning with numerical methods is an effective strategy for solving complex optimization tasks.
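As a rough illustration of the hybrid scheme outlined above, the Python sketch below separates the two layers described in the abstract: a placeholder policy (standing in for the SD3 actor, which is not implemented here) chooses an offloading target, and a simple proportional rule stands in for the numerical power and resource allocation step. All function names, constants, and the delay/energy cost model are assumptions made for illustration only; they are not taken from the paper.

import numpy as np

# Illustrative sketch only: a stand-in for the SD3 actor picks an offloading
# target, then a toy numerical step splits transmit power among tasks.
# Names, constants, and the cost model are assumptions, not the paper's model.

TARGETS = ["local", "d2d_peer", "edge", "cloud"]
rng = np.random.default_rng(0)

def policy_stub(state):
    """Placeholder for the SD3 actor: softmax over per-target scores."""
    scores = state @ rng.normal(size=(state.size, len(TARGETS)))
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs))            # greedy choice for the sketch

def allocate(task_bits, budget):
    """Toy numerical sub-step: share a budget proportionally to task size."""
    return budget * task_bits / task_bits.sum()

def system_cost(delay, energy, w=0.5):
    """Weighted delay-energy cost, mirroring the time/energy trade-off."""
    return float(w * delay.sum() + (1.0 - w) * energy.sum())

# One decision round over a small batch of tasks (all values assumed).
task_bits = np.array([2e6, 5e5, 1e6])        # task sizes in bits
state = np.array([0.7, 0.2, 0.9, 0.1])       # e.g. queue and channel features
target = TARGETS[policy_stub(state)]         # DRL layer: where to offload
power = allocate(task_bits, 1.0)             # numerical layer: power split (W)
delay = task_bits / (1e6 * (1.0 + power))    # crude transmission-delay model
energy = power * delay                       # crude energy model
print(target, round(system_cost(delay, energy), 3))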
-
How to Cite
Kailasam, N., Yalamati, S., Murthy, V. S. N., P, V. R., Kumar, R. A., & Kumar, K. J. (2025). Optimized Task Offloading in D2D-Assisted Cloud-Edge Networks Using Hybrid Deep Reinforcement Learning. International Journal of Basic and Applied Sciences, 14(2), 591-602. https://doi.org/10.14419/xm2ebp25
