Browsing by Author "Barbosa, Jorge G."
Now showing 1 - 3 of 3
- Constructing Reliable Computing Environments on Top of Amazon EC2 Spot Instances
  Publication. Sampaio, Altino; Barbosa, Jorge G.
  Cloud provider Amazon Elastic Compute Cloud (EC2) gives access to resources in the form of virtual servers, also known as instances. EC2 spot instances (SIs) offer spare computational capacity at steep discounts compared to reliable, fixed-price on-demand instances. The drawback, however, is that the delay in acquiring spots can be incredibly high. Moreover, SIs may not always be available, as they can be reclaimed by EC2 at any given time with a two-minute interruption notice. In this paper, we propose a multi-workflow scheduling algorithm, allied with a container migration-based mechanism, to dynamically construct and readjust virtual clusters on top of non-reserved EC2 pricing model instances. Our solution leverages recent findings on the performance and behavior characteristics of EC2 spots. We conducted simulations by submitting real-life workflow applications, constrained by user-defined deadline and budget quality of service (QoS) parameters. The results indicate that our solution improves the rate of completed tasks by almost 20%, and the rate of completed workflows by at least 30%, compared with other state-of-the-art algorithms, for a worst-case scenario. (An illustrative sketch of the interruption-handling idea appears after this list.)
- PIASA: A power and interference aware resource management strategy for heterogeneous workloads in cloud data centers
  Publication. Sampaio, Altino M.; Barbosa, Jorge G.; Prodan, Radu
  Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources an even more difficult and challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%. (An illustrative placement sketch appears after this list.)
- A Study on Cloud Cost Efficiency by Exploiting Idle Billing Period Fractions
  Publication. Sampaio, Altino M.; Barbosa, Jorge G.
  In most current commercial Clouds, resources are billed on a time interval equal to one hour, as is the case for virtual machine (VM) instances on Amazon EC2. Such a time interval is usually long, and yet the user has to pay for the whole last hour even if only a fraction of it was used, contradicting the pay-as-you-go model of Clouds. In this paper, we analyse the advantages of adopting alternative scheduling policies that exploit idle last time intervals, in terms of service cost to Cloud users and operating costs to Cloud providers. Using a real-life astronomy workflow application, constrained by user-defined deadline and budget quality of service (QoS) parameters, a set of online state-of-the-art-based scheduling algorithms try different execution and resource provisioning plans. Our results show that exploiting partially idle last time intervals can reduce the cost of service to the end user and increase provider competitiveness by up to 21.6% through improved energy efficiency and the consequent lowering of operational costs. (A worked sketch of the idle-fraction idea appears after this list.)
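To make the first entry's interruption-handling idea concrete, here is a minimal Python sketch; it is not the paper's scheduling algorithm. It assumes a simplified cluster model (the Node and Container classes, the capacity figures, and the largest-free-capacity heuristic are all hypothetical) and shows containers being moved off a spot node that has received a two-minute reclamation notice.

```python
# Illustrative sketch only -- not the scheduling algorithm from the paper.
# It mimics the reaction to an EC2 spot interruption notice: containers on
# the reclaimed node are migrated to surviving nodes with spare capacity.
# The Node/Container classes and the capacity figures are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Container:
    name: str
    cpu: float  # vCPUs the container needs


@dataclass
class Node:
    name: str
    is_spot: bool
    cpu_capacity: float
    containers: list[Container] = field(default_factory=list)

    def free_cpu(self) -> float:
        return self.cpu_capacity - sum(c.cpu for c in self.containers)


def handle_interruption(victim: Node, cluster: list[Node]) -> None:
    """Move containers off a spot node that received a two-minute notice."""
    survivors = [n for n in cluster if n is not victim]
    for container in list(victim.containers):
        # Greedy choice: the survivor with the most free CPU. A real scheduler
        # would also weigh the workflow's deadline and budget QoS constraints.
        target = max(survivors, key=lambda n: n.free_cpu())
        if target.free_cpu() >= container.cpu:
            victim.containers.remove(container)
            target.containers.append(container)
            print(f"migrated {container.name}: {victim.name} -> {target.name}")
        else:
            print(f"no room for {container.name}; would provision a new instance")


if __name__ == "__main__":
    spot = Node("spot-1", True, 4.0, [Container("wf-task-a", 2.0)])
    on_demand = Node("ondemand-1", False, 8.0)
    handle_interruption(spot, [spot, on_demand])
```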
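For the second entry, the following is a minimal sketch of what an interference- and power-aware placement decision can look like; it is not PIASA itself. The linear power model, the toy deviation estimate, the 0.6/0.4 weights, and the alpha parameter are all assumptions made for illustration.

```python
# Illustrative sketch only -- not the PIASA mechanism. It ranks candidate
# hosts by a weighted sum of an estimated performance deviation
# (interference) and the incremental power a new VM would add.
# All models and weights here are assumptions made for illustration.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    cpu_util: float      # current CPU utilisation, 0..1
    net_util: float      # current network utilisation, 0..1
    idle_power_w: float
    peak_power_w: float

    def power(self, util: float) -> float:
        # Common linear power model: idle + (peak - idle) * utilisation.
        return self.idle_power_w + (self.peak_power_w - self.idle_power_w) * util


def estimated_deviation(host: Host, vm_cpu: float, vm_net: float) -> float:
    """Toy interference estimate: contention grows with co-located load."""
    return 0.6 * host.cpu_util * vm_cpu + 0.4 * host.net_util * vm_net


def place_vm(hosts: list[Host], vm_cpu: float, vm_net: float,
             alpha: float = 0.5) -> Host:
    """Pick the host minimising interference plus normalised extra power."""
    def cost(host: Host) -> float:
        new_util = min(host.cpu_util + vm_cpu, 1.0)
        extra_power = host.power(new_util) - host.power(host.cpu_util)
        return (alpha * estimated_deviation(host, vm_cpu, vm_net)
                + (1.0 - alpha) * extra_power / host.peak_power_w)
    return min(hosts, key=cost)


if __name__ == "__main__":
    hosts = [Host("busy", 0.7, 0.6, 100.0, 250.0),
             Host("quiet", 0.2, 0.1, 100.0, 250.0)]
    print("place new VM on:", place_vm(hosts, vm_cpu=0.2, vm_net=0.1).name)
```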
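The third entry's core observation lends itself to a small worked example: under hourly billing, the unused remainder of the last, already charged hour can run additional tasks at no extra cost. The sketch below assumes the classic 3600-second EC2 billing interval mentioned in the abstract; the helper names are hypothetical and the code is not one of the scheduling policies evaluated in the paper.

```python
# Illustrative sketch only -- not the scheduling policies evaluated in the
# paper. Under hourly billing, a started hour is charged in full, so the
# unused remainder of that hour is "paid but idle" and can host new tasks
# at zero additional cost. The 3600 s interval mirrors classic EC2 billing.
BILLING_INTERVAL_S = 3600  # one hour, charged in full once started


def paid_idle_seconds(elapsed_s: int) -> int:
    """Seconds remaining in the current, already-charged billing interval."""
    used = elapsed_s % BILLING_INTERVAL_S
    return 0 if used == 0 else BILLING_INTERVAL_S - used


def can_reuse_vm(elapsed_s: int, task_runtime_s: int) -> bool:
    """True if a new task fits inside the idle fraction of the paid interval."""
    return task_runtime_s <= paid_idle_seconds(elapsed_s)


if __name__ == "__main__":
    # A VM has run for 40 minutes of its charged hour: 20 minutes are paid but idle.
    print(paid_idle_seconds(40 * 60))      # 1200
    print(can_reuse_vm(40 * 60, 15 * 60))  # True: a 15-minute task runs for free
```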