Dynamic Resource Allocation and Energy Optimization in Cloud Data Centers Using Deep Reinforcement Learning
Abstract
This paper presents a new deep reinforcement learning (DRL) framework for resource allocation and energy optimization in cloud computing. The proposed method leverages a multi-agent DRL architecture to address the large-scale decision-making problems that arise in cloud environments. We formulate the problem as a Markov Decision Process, with a state space that captures resource utilization, workload characteristics, and energy consumption. The action space comprises virtual machine (VM) placement, VM migration, and physical host power-state decisions. A carefully designed reward function balances the goals of energy efficiency, performance, and resource utilization. We modify the Proximal Policy Optimization (PPO) algorithm to handle this heterogeneous action space and incorporate advanced training techniques such as prioritized experience replay and curriculum learning. Simulations driven by real-world workload traces show that our method outperforms both conventional heuristics and single-agent DRL methods, achieving a 25% reduction in energy consumption while keeping the SLA violation rate at 2.5%. The framework adapts to varying workload patterns and scales well to large data center environments. An ablation study further validates the proposed design, showing significant improvements in energy consumption and efficiency over existing commercial management systems.
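The abstract names three reward objectives (energy, performance, resource utilization) but does not give the reward function itself. The sketch below is an illustrative assumption of how such a multi-objective reward might be combined into a scalar for a PPO agent; the weight values and term names are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the paper's exact reward function is not given
# in the abstract, so the weights and terms below are assumptions.

def reward(energy_kwh: float, sla_violations: int, utilization: float,
           w_energy: float = 0.5, w_sla: float = 0.3,
           w_util: float = 0.2) -> float:
    """Scalarize the three objectives named in the abstract:
    penalize energy use and SLA violations, reward utilization."""
    return (-w_energy * energy_kwh
            - w_sla * sla_violations
            + w_util * utilization)

# A placement decision that draws less power and causes no SLA
# violations should score higher than a wasteful one.
good = reward(energy_kwh=1.0, sla_violations=0, utilization=0.8)
bad = reward(energy_kwh=2.0, sla_violations=3, utilization=0.8)
print(good > bad)  # True
```

In practice the weights would be tuned so that no single objective dominates; the abstract's reported trade-off (25% energy reduction at a 2.5% SLA violation rate) suggests the energy term is weighted heavily relative to the SLA penalty.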

This work is licensed under a Creative Commons Attribution 4.0 International License.
©2024 All rights reserved by the respective authors and JAIGC