• Title/Summary/Keyword: Spot instances

Search Results: 11

Optimal Bidding Strategy for VM Spot Instances for Cloud Computing (클라우드 컴퓨팅을 위한 VM 스팟 인스턴스 입찰 최적화 전략)

  • Choi, Yeongho;Lim, Yujin;Park, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.9 / pp.1802-1807 / 2015
  • A cloud computing service uses virtualization to provide physical IT resources to users as VM instances, and the users pay the service provider for those instances. An auction model on top of cloud computing offers the provider's available resources to users through an auction mechanism. Users bid for spot instances to complete a job before its deadline. If a user's bidding price is higher than the spot price, the service provider grants the user the spot instances. In this paper, we propose a new bidding strategy that minimizes the total cost of job completion. Typically, users bid as high as possible to obtain spot instances, which drives the spot price up. The proposed strategy lowers the spot price and minimizes the total cost of job completion. To evaluate the performance of our strategy, we compare the spot price and the total cost of job completion using real workload data.
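
The bidding model described above can be sketched as a toy simulation; the hourly billing, the price trace, and the function name below are illustrative assumptions, not details from the paper:

```python
# Sketch of the spot-instance cost model: an instance runs only while
# the bid covers the spot price, and is billed at the spot price.

def run_job_on_spot(prices, bid, work_hours):
    """Simulate a job on a spot instance, hour by hour.

    Hours in which the spot price exceeds the bid are out-of-bid
    interruptions: no progress is made and nothing is billed.
    Returns (total_cost, hours_completed).
    """
    cost, done = 0.0, 0
    for price in prices:
        if done >= work_hours:
            break
        if bid >= price:      # bid wins: instance keeps running
            cost += price     # billed at the spot price, not the bid
            done += 1
    return cost, done

# A lower bid can cut cost, at the risk of missing the deadline.
prices = [0.10, 0.12, 0.30, 0.11, 0.09, 0.10]
cost, done = run_job_on_spot(prices, bid=0.15, work_hours=4)
print(round(cost, 2), done)  # → 0.42 4
```

The trade-off the paper targets is visible here: a bid of 0.15 skips the 0.30 price spike and still finishes the four hours of work.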

A Workflow Scheduling Technique Using Genetic Algorithm in Spot Instance-Based Cloud

  • Jung, Daeyong;Suh, Taeweon;Yu, Heonchang;Gil, JoonMin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.9 / pp.3126-3145 / 2014
  • Cloud computing is a computing paradigm in which users can rent computing resources from service providers according to their requirements. A spot instance in cloud computing helps a user obtain resources at a lower cost. However, a crucial weakness of spot instances is that the resources can become unreliable at any time due to fluctuations in instance prices, resulting in increased failure times for users' jobs. In this paper, we propose a Genetic Algorithm (GA)-based workflow scheduling scheme that can find the optimal task size for each instance in a spot instance-based cloud computing environment without increasing users' budgets. Our scheme reduces total task execution time even if an out-of-bid situation occurs on an instance. The simulation results, based on a before-and-after GA comparison, reveal that our scheme reduces task execution time by 7.06% on average. Additionally, the cost of our scheme is similar to that when GA is not applied. Therefore, our scheme achieves better performance than the existing scheme by optimizing the task size allocated to each available instance throughout the evolutionary process of the GA.
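
The GA-based task-size idea can be illustrated with a minimal sketch. The instance speeds, fitness function (makespan), and crossover/mutation operators below are assumptions for illustration; the paper's actual encoding and operators differ:

```python
# Toy GA: evolve the fraction of a workload assigned to each instance
# so that the slowest instance finishes as early as possible.
import random

SPEEDS = [1.0, 2.0, 4.0]   # relative speed of each available instance
WORK = 120.0               # total task units to distribute

def makespan(shares):
    """Finish time of the slowest instance for a given share vector."""
    total = sum(shares)
    return max((WORK * s / total) / v for s, v in zip(shares, SPEEDS))

def evolve(pop_size=30, gens=200, seed=42):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 1.0) for _ in SPEEDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                      # fitness = makespan
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            i = rng.randrange(len(child))
            child[i] = max(0.01, child[i] * rng.uniform(0.5, 1.5))  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
# The theoretical optimum splits work in proportion to speed,
# giving a makespan lower bound of 120 / (1 + 2 + 4) ≈ 17.14.
print(round(makespan(best), 2))
```

The evolved shares should approach a work split proportional to instance speed, which is the behavior the paper's scheme optimizes under a budget constraint.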

CRUSHING CHARACTERISTIC OF DOUBLE HAT-SHAPED MEMBERS OF DIFFERENT MATERIALS JOINED BY ADHESIVE BONDING AND SELF-PIERCING RIVET

  • Lee, M.H.;Kim, H.Y.;Oh, S.I.
    • International Journal of Automotive Technology / v.7 no.5 / pp.565-570 / 2006
  • The development of light-weight vehicles is in great demand for the enhancement of fuel efficiency and dynamic performance. Vehicle weight can be reduced effectively by using lightweight materials such as aluminum and magnesium. However, if such materials are used in vehicles, there are often instances where dissimilar materials, such as aluminum and steel, need to be joined to each other. The conventional joining method, resistance spot welding, cannot be used to join dissimilar materials. Self-piercing riveting (SPR) and adhesive bonding, however, are good alternatives to resistance spot welding. This paper is concerned with crushing tests of double hat-shaped members made by resistance spot welding, SPR, and adhesive bonding. Various crashworthiness parameters are analyzed and evaluated. Based on these results, SPR and adhesive bonding are proposed as alternatives to resistance spot welding.

A Time Threshold-based Checkpointing Scheme for Cost-Efficient Spot Instances in Cloud Computing (클라우드 컴퓨팅에서 비용-효율적 스팟 인스턴스를 위한 시간 문턱치 기반의 검사점 기법)

  • Jung, Daeyong;Yu, HeonChang;Gil, Joon-Min
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.191-193 / 2011
  • In a cloud environment, spot instances allow users to utilize resources in the cloud at a bidding price the user proposes. However, when the price of cloud resources rises above the user's bidding price, job failures occur, causing delays in job completion time and degradation of service quality. To cope with this problem effectively, this paper proposes a time threshold-based checkpointing scheme and, through simulation, compares it with existing schemes in terms of job execution time and cost savings.
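
The time-threshold idea above can be sketched as a simple decision rule; the threshold value, price trace, and function below are illustrative assumptions rather than the paper's actual parameters:

```python
# Sketch of time threshold-based checkpointing: checkpoint only after a
# fixed amount of time has elapsed since the last checkpoint, bounding
# the work lost if the spot instance is revoked.

def checkpoint_times(price_trace, bid, threshold):
    """Return the times at which a checkpoint would be taken.

    A checkpoint is taken when the instance is still running (spot
    price <= bid) and at least `threshold` time units have passed
    since the last checkpoint.
    """
    taken, last = [], 0
    for t, price in enumerate(price_trace, start=1):
        if price > bid:   # out-of-bid: instance revoked, no checkpoint
            continue
        if t - last >= threshold:
            taken.append(t)
            last = t
    return taken

trace = [0.10, 0.11, 0.09, 0.25, 0.10, 0.12, 0.10, 0.11]
print(checkpoint_times(trace, bid=0.15, threshold=3))  # → [3, 6]
```

A larger threshold lowers checkpointing overhead but increases the work lost on failure; that trade-off is what the paper evaluates against job execution time and cost.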

A Study on Factor Extraction of Green-Network by Assessment Indicators -For the purpose of Biotop creation- (평가지표를 통한 녹지네트워크 인자도출에 관한 연구 -비오토프 조성을 위하여-)

  • Kim, E-Shin;Lee, Dong-Kun
    • Journal of the Korean Society of Environmental Restoration Technology / v.4 no.3 / pp.75-83 / 2001
  • This study selected Yangpyung as the target site because Yangpyung is an area of high value, blessed with well-preserved natural resources and beautiful scenery, where thoughtless development and malformed land use are in progress under the guise of hotels, accommodations, and sales facilities that serve the interests of landowners. As the method for this study, we examined domestic and international instances and the concepts of existing environmental indicators and suitability assessment, with a view to establishing assessment indicators, framing the concept through a theoretical investigation of green networks, and establishing a green network. Seventeen environmental indicators, such as aspect analysis, contour, greenbelt, DEM, NDVI, nature conservation area, reservoir, and agricultural promotion area, were chosen as the analysis data for setting up the assessment indicators. The analysis showed that, as anticipated, the green zones in the Yongmusan area within the Youngmun and Seojong areas form the core of the wide-area green network in Yangpyung; the areas restricted by the basin system, roads, and legal regulation were selected as spots; and finally, arable land and reservoirs were selected as the base connecting core with spot.


QTL Mapping of Resistance to Gray Leaf Spot in Ryegrass: Consistency of QTL between Two Mapping Populations

  • Curley, J.;Chakraborty, N.;Chang, S.;Jung, G.
    • Asian Journal of Turfgrass Science / v.22 no.1 / pp.85-100 / 2008
  • Gray leaf spot (GLS) is a serious fungal disease caused by Pyricularia oryzae Cavara, recently reported on the important turf and forage species perennial ryegrass (Lolium perenne L.). This fungus also causes rice blast, which is usually controlled by host resistance, but the durability of resistance is a problem. Few instances of GLS resistance have been reported in perennial ryegrass. However, two major QTL for GLS resistance have been detected on linkage groups 3 and 6 in an Italian x perennial ryegrass mapping population. To confirm that those QTL are still detectable in the next generation and can function in a different genetic background, a resistant segregant from this population was crossed with an unrelated susceptible perennial clone to form a new mapping population segregating for GLS resistance. QTL analysis was performed in the new population using two different ryegrass field isolates and RAPD, RFLP, and SSR marker-based linkage maps for each parent. Results indicate that the previously identified QTL on linkage group 3 is still significant in the new population, with LOD and percent of phenotypic variance explained ranging from 2.0 to 3.5 and 5% to 10%, respectively. Two QTL were also detected in the susceptible parent, with similar LOD and phenotypic variance explained. Although the linkage group 6 QTL was not detected, the major QTL on linkage group 3 appears to be confirmed. These results will add to our understanding of the genetic architecture of GLS resistance in ryegrass, which will facilitate its use in perennial ryegrass breeding programs.

Performance Evaluation and Analysis of Multiple Scenarios of Big Data Stream Computing on Storm Platform

  • Sun, Dawei;Yan, Hongbin;Gao, Shang;Zhou, Zhangbing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.2977-2997 / 2018
  • In the big data era, fresh data grows rapidly every day. More than 30,000 gigabytes of data are created every second, and the rate is accelerating. Many organizations rely heavily on real-time streaming, and big data stream computing helps them spot opportunities and risks in real-time big data. Storm, one of the most common online stream computing platforms, has been used for big data stream computing, with response times ranging from milliseconds to sub-seconds. The performance of Storm plays a crucial role in different application scenarios; however, few studies have evaluated it. In this paper, we investigate the performance of Storm under different application scenarios. Our experimental results show that the throughput and latency of Storm are greatly affected by the number of instances of each vertex in the task topology and the number of available resources in the data center. The fault-tolerant mechanism of Storm works well in most big data stream computing environments. As a result, we suggest that a dynamic topology, an elastic scheduling framework, and a memory-based fault-tolerant mechanism are necessary for providing high-throughput and low-latency services on the Storm platform.
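
The finding that per-vertex parallelism bounds throughput can be echoed in a toy capacity model. This is a hypothetical sketch, not Storm's API or the paper's measurements; all rates are made-up numbers:

```python
# Toy capacity model of a stream topology: each vertex sustains at most
# (instances * per-instance rate) tuples/sec, and the slowest vertex in
# the chain caps end-to-end throughput.

def vertex_throughput(arrival_rate, instances, per_instance_rate):
    """Sustainable tuples/sec through one vertex."""
    return min(arrival_rate, instances * per_instance_rate)

def topology_throughput(arrival_rate, vertices):
    """Push the spout's rate through each (instances, rate) stage."""
    rate = arrival_rate
    for instances, per_instance_rate in vertices:
        rate = vertex_throughput(rate, instances, per_instance_rate)
    return rate

# A spout feeding 10,000 tuples/s into a two-stage topology:
# stage 1 has 4 instances at 2,000 t/s, stage 2 has 2 at 3,000 t/s.
print(topology_throughput(10_000, [(4, 2_000), (2, 3_000)]))  # → 6000
```

In this model, adding instances to the bottleneck stage (here, stage 2) raises throughput, mirroring the paper's observation that instance counts and data-center resources dominate performance.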

Community Analysis and Pathogen Monitoring in Wild Cyprinid Fish and Crustaceans in the Geum River Estuary (금강 하구 자연수계 생물체의 군집 분석 및 질병 원인체 검사)

  • Kim, So Yeon;Hur, Jun Wook;Cha, Seung Joo;Park, Myoung Ae;Choi, Hye-Sung;Kwon, Joon Yeong;Kwon, Se Ryun
    • Korean Journal of Fisheries and Aquatic Sciences / v.51 no.3 / pp.248-253 / 2018
  • Freshwater farms are primarily located adjacent to rivers and lakes, facilitating the introduction and spread of pathogens into natural systems. Therefore, it is necessary to continuously monitor natural aquatic organisms, the breeding environment, and infection rates by pathogenic organisms. Fish and crustaceans were sampled 4 times in the Geum River estuary in 2016. The samples were analyzed for the presence of pathogens for reportable communicable diseases, including KHVD (koi herpesvirus disease), SVC (spring viraemia of carp), EUS (epizootic ulcerative syndrome) and WSD (white spot disease); parasite abundance was also examined. The dominant fish species were deep body bitterling Acanthorhodes macropterus (21.4%), followed by skygager Erythroculter erythropterus (12.7%). For crustaceans, Palaemon paucidens and Chinese mitten crab Eriocheir sinensis were dominant. Sixty fish and 36 crustacean species were examined for reportable communicable diseases. When using a specific primer set for each disease, PCR analysis did not detect any reportable communicable diseases in the samples. Some instances of Dactylogyrus, copepods, nematodes and metacercaria were detected. However, the PCR results indicated that the metacercaria were not Clonorchis sinensis.

OBSERVABILITY-IN-DEPTH: AN ESSENTIAL COMPLEMENT TO THE DEFENSE-IN-DEPTH SAFETY STRATEGY IN THE NUCLEAR INDUSTRY

  • Favaro, Francesca M.;Saleh, Joseph H.
    • Nuclear Engineering and Technology / v.46 no.6 / pp.803-816 / 2014
  • Defense-in-depth is a fundamental safety principle for the design and operation of nuclear power plants. Despite its general appeal, defense-in-depth is not without drawbacks: it can conceal the occurrence of hazardous states in a system and, more generally, render the system more opaque to its operators and managers, resulting in safety blind spots. This in turn shrinks the time window available for operators to identify an unfolding hazardous condition or situation and intervene to abate it. To prevent this drawback from materializing, we propose in this work a novel safety principle termed "observability-in-depth". We characterize it as the set of provisions (technical, operational, and organizational) designed to enable the monitoring and identification of emerging hazardous conditions and accident pathogens in real time and over different time scales. Observability-in-depth also requires monitoring the condition of all safety barriers that implement defense-in-depth; in so doing, it supports sensemaking of identified hazardous conditions and the understanding of the potential accident sequences that might follow (how they can propagate). Observability-in-depth is thus an information-centric principle, and its importance in accident prevention lies in the value of the information it provides and the actions or safety interventions it spurs. We examine several event reports from the U.S. Nuclear Regulatory Commission database that illustrate specific instances of violation of the observability-in-depth safety principle and the consequences that followed (e.g., unmonitored releases and loss of containment). We also revisit the Three Mile Island accident in light of the proposed principle, and identify causes and consequences of the lack of observability-in-depth related to this accident sequence. We illustrate both the benefits of adopting the observability-in-depth safety principle and the adverse consequences when it is violated or not implemented. This work constitutes a first step in the development of the observability-in-depth safety principle, and we hope it invites other researchers and safety professionals to further explore and develop this principle and its implementation.

Evaluating Computational Efficiency of Spatial Analysis in Cloud Computing Platforms (클라우드 컴퓨팅 기반 공간분석의 연산 효율성 분석)

  • CHOI, Changlock;KIM, Yelin;HONG, Seong-Yun
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.119-131 / 2018
  • The increase in high-resolution spatial data and methodological developments in recent years have enabled detailed analysis of individual experiences in space and over time. However, despite the increasing availability of data and technological advances, such individual-level analysis is not always possible in practice because of its computing requirements. To overcome this limitation, there has been a considerable amount of research on the use of high-performance public cloud computing platforms for spatial analysis and simulation. The purpose of this paper is to empirically evaluate the efficiency and effectiveness of spatial analysis on cloud computing platforms. We compare the computing speed for calculating a measure of spatial autocorrelation and performing geographically weighted regression analysis between a local machine and spot instances on clouds. The results indicate that there can be significant improvements in computing time when the analysis is performed in parallel on clouds.
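
A common measure of spatial autocorrelation of the kind the paper benchmarks is Moran's I. The minimal serial sketch below (the tiny example data and binary contiguity weights are illustrative assumptions) shows the O(n²) double sum whose row-wise chunks parallel cloud instances would divide up:

```python
# Moran's I: compares each location's deviation from the mean with the
# deviations of its spatially weighted neighbors.

def morans_i(values, weights):
    """Moran's I for values x_i under spatial weight matrix w_ij."""
    n = len(values)
    mean = sum(values) / n
    dev = [x - mean for x in values]
    # The O(n^2) cross-product term: this is the part worth parallelizing.
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# Four locations on a line with rook contiguity (neighbors share an edge).
vals = [1.0, 2.0, 3.0, 4.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i(vals, W), 3))  # → 0.333
```

The positive value reflects the smoothly increasing trend along the line; on realistic n, splitting the outer loop of the cross-product term across workers is the kind of parallelism the cloud comparison exploits.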