1 Apex Institute of Technology, Chandigarh University, Kharar, Punjab 140301
2 Professor, AIT-CSE & UCRD, Chandigarh University, Kharar, Punjab 140301
Cloud computing is now widely used in the business world. As it becomes more and more popular, many individuals and businesses want to use and provide cloud computing services. The expansion of cloud computing services, however, can result in massive energy consumption and carbon dioxide emissions. Growing worries in recent years about greenhouse gas emissions and their effects on the environment have encouraged numerous researchers to work in the field of energy-efficient and environmentally conscious computing. This study proposes a "two phase carbon aware cloud broker", which takes data centres' energy and carbon efficiency into account in an effort to reduce energy consumption and carbon emissions.
In the ever-accelerating digital age, cloud computing has emerged as the linchpin of modern information technology infrastructure. The pervasive adoption of cloud services has revolutionized the way data is stored, processed, and accessed, ushering in unprecedented levels of convenience, efficiency, and scalability. Yet, amidst this digital transformation, there exists a significant and pressing concern, one that reverberates beyond the confines of the data centre walls and into the broader global ecosystem: the environmental impact of cloud computing.
With cloud computing, users pay for the infrastructure, platform, and applications they use on a pay-per-use basis. These services are known by their respective industry names: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [1]. Because of the combination of processing, networking, and storage hardware, as well as the energy needed to convey data to and from the user, temperature and energy management are the main problems in cloud computing systems [2]. Large-scale data centres with high operational costs, massive energy consumption, and significant carbon dioxide emissions have grown in number as a result of the expanding popularity of and demand for cloud services. According to research, the information and communication technologies (ICT) sector contributes 2% of the world's CO2 emissions, comparable to the aviation sector [3]. This share is growing at a pace of about 6% per year, and at that rate of expansion ICT could account for 12% of global emissions by 2020 [4]; meanwhile, a decline in emissions of 15-30% by 2020 is needed to keep the rise in global temperature below 2°C [5]. Additionally, human impacts on global climate change and the danger of fossil fuel depletion have accelerated social trends towards more environmentally friendly and energy-efficient lifestyles over the last thirty years [6]. According to a September 2008 International Data Corporation (IDC) poll [4], over half of the 459 European businesses assessed had implemented a green IT plan, with cost savings cited as the primary reason for turning green. Moving business applications to the cloud can help reduce an organisation's carbon footprint, according to a study by Accenture: small businesses saw the biggest reduction in emissions when using cloud resources, with up to a 90 percent reduction; large corporations can save at least 30 to 60 percent when using cloud applications; and mid-size businesses can save 60 to 90 percent [7]. According to the Data Centre Dynamics 2012 Global Census, between 2011 and 2012 the global total power consumption of data centres rose from 24 GW to 38 GW (63%) [8].
The remainder of the paper is structured as follows. Section 2 provides a synopsis of relevant literature. Section 3 presents a few commonly used metrics for assessing efficiency. The suggested method for resolving the carbon- and energy-aware allocation problem is presented in Section 4. The simulation design and the evaluation of the suggested approach are covered in Section 5. Section 6 concludes with recommendations for further development.
LITERATURE REVIEW
Researchers have conducted a great deal of research on power usage and green computing in recent years; some of these works are summarised below. Si-Yuan Jing and colleagues tackled the problem of energy usage in cloud computing, examined several energy-saving methods for infrastructure, including CPU, server, network, storage, and cooling systems, and proposed a number of workable solutions [5]. In order to achieve energy-efficient management in cloud computing settings, Anton Beloglazov and colleagues suggested an architectural framework and principles for energy-efficient cloud computing [1]; based on this framework and these principles, they present the vision, challenges, and resource provisioning and allocation algorithms. A carbon-aware green framework has been presented in another study by Saurabh Kumar Garg and colleagues; it tackles the environmental issue and aims to decrease the carbon footprint emitted by cloud computing [9]. Studying low-carbon private clouds, Fereydoun Farahi Moghaddam and colleagues concentrated on virtual machine migration in wide area networks [10]. The study most comparable to ours is [3], in which Atefeh Khosravi and colleagues introduced an algorithm called "ECE" that takes into account the power usage effectiveness (PUE) and carbon emissions of distributed data centres; unlike our work, however, they treated the VM placement problem as a bin-packing problem. A genetic algorithm for power-aware scheduling of resource allocation (GAPA) was presented by Nguyen Quang-Hung and colleagues [11] to address the static virtual machine allocation problem (SVMAP). A survey of various energy-saving techniques for resource efficiency was conducted in 2013, in which Amritpal Kaur and colleagues proposed a method to reduce the carbon impact and power consumption of data centres by taking into account the green factor of data centres, cloud computing concepts, and their core services [12]. In terms of energy efficiency, Toni Mastelic and colleagues conducted a thorough investigation in 2014 of the infrastructure supporting the cloud computing paradigm; their investigation concentrated on the energy efficiency of ICT hardware, including networks and servers, as well as the software that runs on top of it, including appliances and the Cloud Management System (CMS) [13]. A genetic algorithm framework for job scheduling to reduce energy consumption in cloud computing infrastructure has been presented in another paper by D. Kumar and colleagues [14]. F. Kong and X. Liu [15] review and categorise works that specifically take renewable energy and/or carbon emissions into account, and they examine the green-energy-aware power management challenge for contemporary data centres that incorporate renewable or green energy sources into their power supply. They divide green-energy-aware works into four categories: green-energy-aware workload scheduling, green-energy-aware virtual machine management, green-energy-aware energy capacity planning, and interdisciplinary works. The number of data centres involved determines how these categories are further divided into subcategories; under this categorisation, the present work falls among energy-aware, geo-distributed workload scheduling approaches. The state of the art in energy-efficient networking solutions in cloud-based environments was reviewed in another survey study by Fahimeh Alizadeh Moghaddam et al.
(2015), which revealed that the decision framework is the solution type most frequently investigated to achieve the energy efficiency goal [8]. A method to identify the server in a data centre with the lowest energy usage and/or carbon emissions and shift the workload there was proposed by Dang Minh Quan and colleagues [16] in 2012; the method is employed for resource management in a federated data centre. In order to lower data centre power consumption and enable online monitoring, live virtual machine migration, and VM placement optimisation by consolidating the workload, Liang Liu and colleagues [17] presented the GreenCloud architecture in 2009. GreenCloud is made up of a number of parts, including the Managed Environment, Monitoring Service, and Migration Manager. Another method for power-efficient resource allocation in cloud-based data centres was proposed in 2013 in [18]; since power-efficient virtual network provisioning optimisation is NP-hard, the authors provide a heuristic technique for cloud-based data centres. GreenSlot [19] is a parallel batch job scheduler for data centres that are partially powered by solar energy; it aims to reduce brown energy usage, financial cost, and environmental impact, and it schedules the use of green energy in data centres in a greedy manner. FORTE (Flow Optimisation based framework for request-routing and traffic engineering) was created by Peter Xiang Gao and colleagues [20] in order to manage the three-way trade-off between average job time, power cost, and carbon emissions. Federico Larumbe and Brunilde Sanso [21] described a cloud network planning problem and suggested a technique that enables planners to assess alternative solutions and adjust optimisation priorities.
BACKGROUND
1) Cloud Broker definition
"A cloud broker is an entity that manages the use, performance, and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers," states the National Institute of Standards and Technology (NIST) [22]. Serving as a middleman between customers and providers, the cloud broker may assist customers with understanding the complexities of cloud service offerings and even develop value-added cloud services [22]. The cloud broker's usage scenario is shown in Figure 1 [22].
Fig 1: Usage scenario for cloud brokers [22]
2) Power usage effectiveness (PUE)
Equation 1 defines PUE, which is one of the most well-known metrics for assessing the energy efficiency of cloud computing services. It is calculated by dividing the total power utilised by the data centre (Pt) by the power consumed by the ICT equipment (Ps) [6].
PUE = Pt/Ps (1)
It is impossible to have a PUE lower than 1, and the optimal value of 1 is achieved if all of the power delivered to the data centre is accounted for by the servers' power consumption [6]. According to statistics from surveys conducted by the Uptime Institute, the average PUE of data centres in use today is between 1.8 and 1.89 [6]. In [23], the authors benchmarked 22 data centre buildings and found that the average PUE value was 2.04, with values ranging from 1.33 to 3. A PUE ratio of 2.0 was taken to represent the average for all data centres in the United States, per the report to Congress on server and data centre energy efficiency [24]. Energy efficiency levels as a function of PUE are presented in [25].
It is theoretically feasible to reach a PUE of 1, or close to it, by using no energy for cooling. Free ambient cold-air, water-, and evaporation-based cooling economisers, such as those in the Facebook data centre, allow cooling energy to be almost negligible [26].
3) Carbon usage effectiveness (CUE)
For data centres that obtain all of their electricity from the electric power grid and have no local carbon footprint, CUE is defined by Equation 2 [27], where CEF is the carbon emission factor of the consumed electricity. Whereas PUE is given as a unitless number, the CUE metric is expressed in kilogrammes of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh).
CUE = CEF × PUE (2)
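As a quick illustration of these two metrics, the following minimal Java sketch computes PUE (Equation 1) and CUE (Equation 2) for a hypothetical data centre; the class name, method names, and numeric values are illustrative assumptions, not part of the proposed broker.

// Minimal sketch (not the paper's implementation): computing PUE (Eq. 1)
// and CUE (Eq. 2) for a hypothetical data centre.
public class EfficiencyMetrics {

    /** PUE = total facility power (Pt) / ICT equipment power (Ps). */
    static double pue(double totalFacilityPowerKw, double itEquipmentPowerKw) {
        return totalFacilityPowerKw / itEquipmentPowerKw;
    }

    /** CUE = CEF x PUE, where CEF is the carbon emission factor (kgCO2eq/kWh). */
    static double cue(double carbonEmissionFactor, double pue) {
        return carbonEmissionFactor * pue;
    }

    public static void main(String[] args) {
        // Example: 2,000 kW drawn by the whole facility, 1,250 kW by the IT equipment.
        double pue = pue(2000.0, 1250.0);   // 1.6
        double cue = cue(0.35, pue);        // 0.56 kgCO2eq per kWh of IT energy
        System.out.printf("PUE = %.2f, CUE = %.2f kgCO2eq/kWh%n", pue, cue);
    }
}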
PROPOSED APPROACH
1) Roles
The "two phase carbon aware cloud broker" strategy that has been suggested aims to lower data centres' electricity and carbon footprint. Three primary roles are taken into account in the suggested approach: user, cloud provider, and Green Cloud Broker.
User: Users ask the broker to execute their cloud jobs, or cloudlets, each with an anticipated duration. Each cloudlet has a length, expressed in Million Instructions (MI), which is the total number of its internal instructions.
Cloud provider: Pay-as-you-go cloud providers let users rent their services. They rent out their infrastructures under the Infrastructure as a Service (IaaS) model; data centres are collections of physical machines, each with its own resources (CPU, memory, network bandwidth, and storage space). Every cloud provider may have one or more data centres in various locations with various configurations. As a result, it is the duty of each cloud provider to maintain relevant metrics such as the PUE of its data centres, their carbon emission rates, and the availability of physical machines.
Green cloud broker: accountable for the same tasks as a regular cloud broker, except that it is also in charge of determining how much carbon is emitted during the execution of cloudlets. In the first phase, virtual machines (VMs) are assigned to physical servers in (possibly different) data centres based on the catalogue information obtained from the cloud providers. In the second phase, each cloudlet is assigned to a suitable virtual machine based on its duration, the required VM specification, and its deadline. The broker is also in charge of assessing power usage and carbon footprint at the end of each scheduling period in order to rank the providers and their data centres.
2) Power model
In data centres, the power consumption of computational nodes is primarily dictated by their CPU, memory, disk storage, and cooling systems [28]. Compared with the other components, the processor consumes the most power [1]; thus, only the CPU has been taken into account in this study. The authors of [1] demonstrated that there is a linear relationship between power usage and CPU utilisation, even when DVFS is applied. The power consumption of servers is therefore determined in this study using Equation 3 [1].
P(u) = k × Pmax + (1 − k) × Pmax × u (3)
where u is the CPU utilisation, k is the fraction of power used by a server while it is idle, and Pmax is the maximum power a server may draw at full utilisation. CPU utilisation may change with the workload and the time of day, so u(t) denotes CPU utilisation as a function of time. Consequently, as Equation 4 illustrates, the total energy consumption of a physical node may be stated as the integral of its power consumption over time [5].
E = ∫ P(u(t)) dt (4)
This, along with the complexity of the power consumption model of contemporary multi-core processors, makes the development of an accurate analytical model a challenging research topic. Thus, real power consumption data published for the SPECpower benchmark is utilised in place of an analytical power model of the server.
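As a minimal illustration of Equations 3 and 4 (not the SPECpower-based model actually used in this study), the following Java sketch evaluates the linear power model and approximates the energy integral with discrete utilisation samples; the idle fraction k = 0.7, the 250 W peak power, and the sample values are assumptions for the example only.

// Minimal sketch of the linear power model in Eq. 3 and a discrete
// approximation of the energy integral in Eq. 4.
public class PowerModel {

    /** P(u) = k*Pmax + (1 - k)*Pmax*u, with u in [0, 1]. */
    static double power(double k, double pMaxWatts, double utilization) {
        return k * pMaxWatts + (1.0 - k) * pMaxWatts * utilization;
    }

    /** E = integral of P(u(t)) dt, approximated with fixed time steps. */
    static double energyWh(double k, double pMaxWatts,
                           double[] utilizationSamples, double stepHours) {
        double energy = 0.0;
        for (double u : utilizationSamples) {
            energy += power(k, pMaxWatts, u) * stepHours;  // watt-hours per step
        }
        return energy;
    }

    public static void main(String[] args) {
        double[] hourlyUtilization = {0.2, 0.5, 0.9, 0.6}; // u(t) over four hours
        double e = energyWh(0.7, 250.0, hourlyUtilization, 1.0);
        System.out.printf("Energy over 4 h: %.1f Wh%n", e);
    }
}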
3) Scenario
In this scenario, virtual machines should first be allocated to the m physical machines, and p cloudlets should then be assigned to those VMs. As previously mentioned, users request that their cloud tasks (cloudlets) be executed by the broker, each with an estimated duration. During each scheduling interval, the broker uses the information in the catalogue to assign the VMs to physical servers in data centres. In the second phase, the broker assigns each cloudlet to an appropriate virtual machine based on its specifications. A data centre's power usage and carbon footprint are directly correlated if its energy source is unclean (fossil fuels, for example); if the energy source is completely clean, the amount of carbon produced is zero regardless of the amount of electricity used. This relationship is expressed in Equation 5.
Cd(t) = ρd(t) × Pd(t) (5)
where Cd(t) is the quantity of carbon emitted by data centre d at time t, ρd(t) is the exchange rate between carbon and power for data centre d at time t, and Pd(t) is the power consumption of data centre d at time t. Different data centres may be powered by different energy sources, and each of these energy sources has a unique carbon footprint [29]. As stated in Equation 6, ρd(t) depends on the kind of energy source used in the data centre; this is particularly relevant for data centres that are fed by multiple energy sources, where ρd(t) may change over time [10].
ρd(t) = [ Σsource ρd,source(t) × Pd,source(t) ] / [ Σsource Pd,source(t) ] (6)
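To make Equations 5 and 6 concrete, the following minimal Java sketch computes the power-weighted carbon intensity of a data centre fed by a mix of sources and the resulting carbon emission rate; the source names, intensity values, and power draws are illustrative assumptions, not data from the paper.

// Minimal sketch of Eq. 5 and Eq. 6: the carbon rate of a data centre fed by
// several energy sources is the power-weighted average of the per-source
// carbon intensities.
import java.util.Map;

public class CarbonModel {

    /** Eq. 6: rho_d(t) = sum(rho_source * P_source) / sum(P_source). */
    static double carbonIntensity(Map<String, double[]> sources) {
        double weighted = 0.0, totalPower = 0.0;
        for (double[] entry : sources.values()) {
            double intensity = entry[0];  // kgCO2eq per kWh for this source
            double power = entry[1];      // kW currently drawn from this source
            weighted += intensity * power;
            totalPower += power;
        }
        return totalPower == 0.0 ? 0.0 : weighted / totalPower;
    }

    /** Eq. 5: carbon rate = rho_d(t) * P_d(t), in kgCO2eq per hour. */
    static double carbonRate(double intensity, double powerKw) {
        return intensity * powerKw;
    }

    public static void main(String[] args) {
        Map<String, double[]> mix = Map.of(
                "grid (coal)", new double[]{0.9, 300.0},
                "solar",       new double[]{0.0, 100.0});
        double rho = carbonIntensity(mix);   // 0.675 kgCO2eq/kWh
        System.out.printf("rho = %.3f, carbon rate = %.1f kgCO2eq/h%n",
                rho, carbonRate(rho, 400.0));
    }
}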
4) First phase of proposed approach
Virtual machines are assigned to physical servers in data centres during the first phase. First, the minimum percentage of virtual machines that should be generated in each data centre is determined. The remaining virtual machines are then placed on active data centres based on the CUE parameter and the maximum load of each data centre; in this step, data centres with lower CUEs are given higher priority. Virtual machines are generated on the data centres in this manner. This stage is shown as pseudo-code in Figure 2.
Algorithm 1: Create VMs in Data Centre
Fig 2: Pseudo-code for the first phase
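Since the pseudo-code of Figure 2 is not reproduced here, the following minimal Java sketch illustrates only the CUE-ordered placement step described above (it omits the minimum-load provisioning step); the DataCenter class, its fields, and the capacity values are simplified stand-ins for CloudSim entities, not the paper's Algorithm 1.

// Minimal sketch of the first-phase idea: sort data centres by CUE (lowest
// first) and create VMs on them up to each data centre's maximum load.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FirstPhaseBroker {

    static class DataCenter {
        final String name;
        final double cue;    // kgCO2eq per kWh of IT energy
        final int maxVms;    // capacity implied by the maximum-load threshold
        int placedVms = 0;
        DataCenter(String name, double cue, int maxVms) {
            this.name = name; this.cue = cue; this.maxVms = maxVms;
        }
    }

    /** Returns, for each VM index, the name of the data centre chosen for it. */
    static List<String> placeVms(List<DataCenter> dataCenters, int vmCount) {
        dataCenters.sort(Comparator.comparingDouble((DataCenter dc) -> dc.cue));
        List<String> placement = new ArrayList<>();
        for (int vm = 0; vm < vmCount; vm++) {
            for (DataCenter dc : dataCenters) {
                if (dc.placedVms < dc.maxVms) {   // respect the maximum load
                    dc.placedVms++;
                    placement.add(dc.name);
                    break;
                }
            }
        }
        return placement;
    }

    public static void main(String[] args) {
        List<DataCenter> dcs = new ArrayList<>(List.of(
                new DataCenter("DC1", 0.19, 3),
                new DataCenter("DC2", 0.60, 3)));
        System.out.println(placeVms(dcs, 5));  // DC1 fills first, then DC2
    }
}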
5) Second phase of proposed approach
The goal of this phase is to allocate cloudlets to virtual machines while lowering energy consumption and carbon footprint. A Genetic Algorithm is used, with a fitness function based on the carbon footprint caused by executing the cloudlets on the virtual machines; the aim is to minimise this sum. Each cloudlet starts out with a predetermined length expressed in Millions of Instructions (MI), and the processing speed of each virtual machine is measured in Millions of Instructions Per Second (MIPS). Given a virtual machine, one can therefore estimate the execution time of a cloudlet, and given the data centre that hosts the virtual machine, the carbon footprint of executing the cloudlet can also be determined. The fitness function employed in this phase is governed by Equation 7.
f(x) = [ Σi=1..m Σj=1..n Σk=1..p CUEi × Cloudlet_Lenk / Vm_MIPSj ] / Counter (7)
where m is the number of data centres (with m > n), n is the number of virtual machines, p is the number of cloudlets, and Counter is the number of hosts that are under-utilised. The second phase of the suggested method is shown as pseudo-code in Figure 3.
Algorithm 2: Submit Cloudlets
Fig 3: Pseudo-code for the Second phase
The cloudlet_length is a property of a cloudlet, defined as its number of instructions; its unit is Million Instructions (MI).
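As the pseudo-code of Figure 3 is not reproduced here, the following minimal Java sketch shows one reading of the fitness value in Equation 7 for a single candidate chromosome that maps each cloudlet to a VM; it is not the full genetic algorithm, and the array names and sample values are illustrative assumptions rather than the paper's implementation.

// Minimal sketch of the Eq. 7 fitness value for one cloudlet-to-VM assignment.
public class FitnessSketch {

    /**
     * cueOfVm[j]       CUE of the data centre hosting VM j
     * cloudletLenMi[k] cloudlet length in Million Instructions
     * vmMips[j]        VM speed in MIPS
     * assignment[k]    index j of the VM chosen for cloudlet k (the chromosome)
     * underusedHosts   the "Counter" term: number of under-utilised hosts
     */
    static double fitness(double[] cueOfVm, double[] cloudletLenMi,
                          double[] vmMips, int[] assignment, int underusedHosts) {
        double total = 0.0;
        for (int k = 0; k < assignment.length; k++) {
            int j = assignment[k];
            // carbon-weighted cost of running cloudlet k on VM j
            total += cueOfVm[j] * cloudletLenMi[k] / vmMips[j];
        }
        // guard against division by zero when no host is under-utilised
        return total / Math.max(1, underusedHosts);  // lower fitness is better
    }

    public static void main(String[] args) {
        double[] cueOfVm = {0.19, 0.60};
        double[] vmMips = {500.0, 1000.0};
        double[] lenMi = {50000.0, 100000.0};
        int[] chromosome = {0, 1};  // cloudlet 0 -> VM 0, cloudlet 1 -> VM 1
        System.out.println(fitness(cueOfVm, lenMi, vmMips, chromosome, 3));
    }
}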
SIMULATION
CloudSim 3.0.2 [30] has been extended in NetBeans to carry out the simulations. A scenario with four data centres in various locations has been constructed to evaluate the suggested technique [3]; these data centres are shown in Table 1. Each data centre contains one hundred physical servers.
Tables 2 and 3 list the specifications of the virtual machines and physical servers, respectively.
Table 1. Data center Characteristics [3]
| Data center Site | PUE | Carbon Footprint Rate (Tons/MWh) |
| DC1 - Oregon, USA | 1.56 | 0.124 - 0.147 |
| DC2 - California, USA | 1.7 | 0.350 - 0.658 |
| DC3 - Virginia, USA | 1.9 | 0.466 - 0.782 |
| DC4 - Dallas, USA | 2.1 | 0.678 - 0.730 |
Table 2. Characteristics of Virtual Machine
| Virtual Machine | TYPE A | TYPE B |
| Number of cores | 1 | 1 |
| Processing speed (MIPS) | 500 | 1000 |
| Memory RAM (MB) | 1740 | 2048 |
| Storage space (GB) | 2.5 | 2.5 |
Applications are modelled as bags of tasks, and the arrival times of requests are drawn from an exponential distribution. The initial population was generated at random, up to a maximum generation size; the population size is 10, and the mutation probability is set at 0.01 [11]. All data centres have load limits configured to a minimum of 20% and a maximum of 90%. Six distinct workloads, designated {workload_0, workload_1, …, workload_5}, were produced using an exponential distribution with mean values of {50000, 70000, 100000, 120000, 150000, 200000}, respectively. The simulations were run using the parameters given above. Every experiment is conducted 30 times, and the mean value is reported. The suggested method's outcome was compared with the Round-Robin algorithm [31]. The simulation output demonstrates that employing the suggested strategy reduces both energy use and carbon footprint. Figures 4 and 5 display the simulation results for energy consumption and carbon footprint, respectively, and the improvements in energy usage and carbon footprint are displayed in Figs. 6 and 7. The comparison between the suggested technique's and Round-Robin's carbon footprint across the data centres is shown in Fig 8.
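As an illustration of how such workloads could be generated, the following minimal Java sketch draws cloudlet lengths from an exponential distribution with the stated means using inverse-transform sampling; the use of java.util.Random, the fixed seed, and the number of samples per workload are assumptions, not details taken from the paper.

// Minimal sketch: exponentially distributed cloudlet lengths for each workload mean.
import java.util.Random;

public class WorkloadGenerator {

    /** Draws one exponentially distributed value with the given mean (MI). */
    static double nextExponential(Random rng, double mean) {
        return -mean * Math.log(1.0 - rng.nextDouble());  // inverse-transform sampling
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[] means = {50000, 70000, 100000, 120000, 150000, 200000};
        for (double mean : means) {
            // ten sample cloudlet lengths for the workload with this mean
            StringBuilder sb = new StringBuilder("mean " + (int) mean + ":");
            for (int i = 0; i < 10; i++) {
                sb.append(' ').append(Math.round(nextExponential(rng, mean)));
            }
            System.out.println(sb);
        }
    }
}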
Table 3. Characteristics of Physical Machine
| Server | HP ProLiant ML110 G5 | HP ProLiant ML110 G4 |
| Processor name | Intel Xeon 3075 | Intel Xeon 3040 |
| Cores | 2 | 2 |
| Processor frequency (MHz) | 2660 | 1860 |
Fig 4: Comparison of energy consumption between “Two Phase Carbon Aware” and Round-Robin
Fig 5: Comparison of carbon footprints between “Two Phase Carbon Aware” and Round-Robin
Fig 6: Energy consumption improvement of “Two Phase Carbon Aware” compared with Round-Robin
Fig 7: Carbon footprint improvement of “Two Phase Carbon Aware” compared with Round-Robin
Fig 8: Comparison of carbon footprint between “Two Phase Carbon Aware” and Round-Robin
CONCLUSION AND FUTURE WORK
This study examines the crucial function of the cloud broker in cloud computing and proposes a "Two Phase Carbon Aware Cloud Broker" that aims to reduce energy consumption and carbon emissions by taking into account data centres' energy and carbon efficiency, which may vary geographically. A genetic algorithm has been used to choose and place the cloudlets on the appropriate virtual machines in order to realise the "Two Phase Carbon Aware Cloud Broker". CloudSim has been extended for simulation-based evaluation. According to the simulation data, the suggested strategy can cut energy consumption and carbon emissions by 15% and 20%, respectively, compared to Round-Robin. In future work, trade-offs between SLA, service provider income, and energy cost should be taken into account, and live VM migration strategies should be evaluated in order to further improve energy usage and achieve green cloud computing.