• Title/Summary/Keyword: Distributed Computing


Multi-queue Hybrid Job Scheduling Mechanism in Grid Computing (그리드 컴퓨팅의 다중 큐 하이브리드 작업스케줄링 기법)

  • Kang, Chang-Hoon;Choi, Chang-Yeol;Park, Kie-Jin;Kim, Sung-Soo
    • Journal of KIISE:Computer Systems and Theory / v.34 no.7 / pp.304-318 / 2007
  • Grid computing is a service that shares geographically distributed computing resources through a high-speed network. In this paper, we propose a hybrid scheduling scheme that considers not only meta-scheduling, which distributes jobs among the nodes of a grid computing system, but also job scheduling, which distributes jobs within a local node. According to the number of processors required and the expected execution time, high-priority jobs are allocated to the job queue, while low-priority and remote jobs are allocated to the backfill queue. We evaluate the proposed scheme through various experiments, and the results show that the utilization of the grid computing system increases while job slowdown decreases. A sketch of this queue-assignment rule appears below.
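
Below is a minimal sketch of the two-queue assignment rule described above. It is not the authors' implementation; the specific priority criterion (large, short, local jobs go to the job queue) and the numeric thresholds are assumptions chosen only to illustrate the idea.

```python
# Illustrative sketch (not the paper's scheduler): assigning jobs to a job queue
# or a backfill queue based on requested processors and expected runtime.
# The priority rule and thresholds below are assumptions for illustration.
from dataclasses import dataclass
from collections import deque

@dataclass
class Job:
    job_id: str
    processors: int          # number of processors requested
    expected_runtime: float  # expected execution time (seconds)
    remote: bool = False     # submitted from a remote grid node

job_queue: deque = deque()       # high-priority local jobs
backfill_queue: deque = deque()  # low-priority and remote jobs

def enqueue(job: Job, cpu_threshold: int = 8, runtime_threshold: float = 3600.0) -> None:
    """Large, short, local jobs get priority; small, long, or remote jobs are backfilled."""
    high_priority = (not job.remote
                     and job.processors >= cpu_threshold
                     and job.expected_runtime <= runtime_threshold)
    (job_queue if high_priority else backfill_queue).append(job)

enqueue(Job("j1", processors=16, expected_runtime=600.0))
enqueue(Job("j2", processors=2, expected_runtime=7200.0, remote=True))
print(len(job_queue), len(backfill_queue))  # -> 1 1
```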

A Study on the Distribute Authentication Method Scheme through Authentication and Right Mechanism Trend of the Ubiquitous Environment (유비쿼터스 환경의 인증 및 권한 메커니즘 동향을 통한 분산 인증기법 방안 연구)

  • Oh, Dong-Yeol;Sung, Kyung-Sang;Kim, Bae-Hyun;Oh, Hae-Seok
    • Convergence Security Journal / v.8 no.1 / pp.35-42 / 2008
  • As information systems and the administration of user-facing applications have become important concerns, systematic approaches and methodologies for their management have been discussed. Authentication technologies of various configurations are in use, but the inefficiency caused by complicated authentication administration and inappropriate operation hinders the successful expansion of new wired/wireless services. In addition, in mobile computing environments where different authentication methods coexist, it is difficult to expect flexible and continuous service. In the ubiquitous computing environment, it is therefore important to research and develop distributed authentication schemes that provide both compatibility and security. Accordingly, this paper surveys the requirements and authorization mechanisms for administering and operating distributed authentication, considering extensibility not only to fixed computing environments but also to mobile computing. We expect this work to encourage active adoption of distributed authentication techniques in a genuinely ubiquitous environment.


A Study On Recommend System Using Co-occurrence Matrix and Hadoop Distribution Processing (동시발생 행렬과 하둡 분산처리를 이용한 추천시스템에 관한 연구)

  • Kim, Chang-Bok;Chung, Jae-Pil
    • Journal of Advanced Navigation Technology / v.18 no.5 / pp.468-475 / 2014
  • Real-time recommendation is becoming more difficult for recommender systems as preference data sets grow, straining both computing power and the recommendation algorithms. For this reason, research on distributed processing of large preference data sets is being pursued actively. This paper studies a distributed processing method for large preference data sets using the Hadoop distributed processing platform and the Mahout machine learning library. The recommendation algorithm uses a co-occurrence matrix, similar to item-based collaborative filtering. The co-occurrence matrix can be processed in a distributed manner across many nodes of a Hadoop cluster; although it requires a large amount of computation, distributing the work reduces the load on each node. This paper simplifies the distributed computation of the co-occurrence matrix by reducing it from four stages to three. As a result, the number of MapReduce jobs is reduced while the recommendation file is still generated, processing is faster, and the map output data are smaller. A sketch of co-occurrence-based recommendation follows below.
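
The following sketch illustrates co-occurrence-based recommendation on a single machine with NumPy; the paper's actual pipeline performs the same matrix construction as Hadoop MapReduce stages with Mahout. The toy preference matrix is an assumption for illustration.

```python
# Illustrative sketch (not the paper's Hadoop/Mahout pipeline): building an item
# co-occurrence matrix from user preferences and scoring recommendations with it.
import numpy as np

# rows = users, cols = items; 1 means the user expressed a preference for the item
prefs = np.array([
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 0],
], dtype=float)

# co-occurrence[i, j] = number of users who preferred both item i and item j
cooccurrence = prefs.T @ prefs
np.fill_diagonal(cooccurrence, 0)  # ignore self co-occurrence

# recommendation scores for user 0: weight each candidate item by how often it
# co-occurs with the items the user already likes, then mask items already seen
user = prefs[0]
scores = cooccurrence @ user
scores[user > 0] = -np.inf
print("recommend item", int(np.argmax(scores)))  # -> item 2
```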

Work Allocation Methods and Performance Comparisons on the Virtual Parallel Computing System based on the IBM Aglets (IBM Aglets를 기반으로 하는 가상 병렬 컴퓨팅 시스템에서 작업 할당 기법과 성능 비교)

  • Kim, Kyong-Ha;Kim, Young-Hak;Oh, Gil-Ho
    • Journal of KIISE:Computing Practices and Letters / v.8 no.4 / pp.411-422 / 2002
  • Recently, there has been active research on the VPCS (Virtual Parallel Computing System) based on multiple agents. The VPCS uses personal computers or workstations dispersed across the Internet, rather than a high-cost supercomputer, to solve complex problems that require a huge number of calculations. It can be composed of either homogeneous or heterogeneous computers, depending on the resources available on the Internet. In this paper, we propose a new method to distribute worker agents and work packages efficiently on a VPCS based on IBM Aglets. Previous methods mainly use the master-slave pattern for distributing worker agents and work packages; however, with these methods the workload at the central master increases dramatically as the number of agents grows. As a solution, our method delegates the distribution of worker agents and work packages to the worker agents themselves. The proposed method is evaluated in several ways on the VPCS, and its results show noteworthy improvements over the previous approaches. A sketch of this delegated, tree-style distribution follows below.
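
The sketch below illustrates the general idea of delegating distribution to the workers themselves: instead of one master dispatching every work package, each worker hands part of its batch to further workers in a tree. The fan-out, the data, and the recursion scheme are assumptions for illustration; this is not the IBM Aglets code.

```python
# Illustrative sketch (not the paper's Aglets implementation): distributing work
# packages through a tree of worker agents instead of a single central master,
# so that no single node dispatches every package.
from typing import List

def distribute(packages: List[str], workers: List[str], fanout: int = 2) -> None:
    """The first worker takes one package, then delegates the rest to up to
    `fanout` sub-groups of workers, which recurse in the same way."""
    if not packages or not workers:
        return
    head, rest_workers = workers[0], workers[1:]
    print(f"{head} processes {packages[0]}")
    remaining = packages[1:]
    if not remaining or not rest_workers:
        return
    # split remaining packages and workers into `fanout` groups and delegate
    for i in range(fanout):
        distribute(remaining[i::fanout], rest_workers[i::fanout], fanout)

distribute([f"pkg{i}" for i in range(7)], [f"agent{i}" for i in range(7)])
```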

Applying Workload Shaping Toward Green Cloud Computing

  • Kim, Woongsup
    • International journal of advanced smart convergence / v.1 no.2 / pp.12-15 / 2012
  • Energy costs for operating and cooling computing resources in cloud infrastructure have risen to the point where they can surpass hardware purchasing costs, so reducing energy consumption can save a significant amount of management cost. One major approach is removing hardware over-provisioning. In this paper, we propose a technique that saves power by reducing resource over-provisioning based on virtualization technology. To this end, we use dynamic workload shaping to reschedule and redistribute job requests with overall power consumption in mind, and we distribute the shaped workloads across virtual machines and physical machines through virtualization. We generated synthetic workload data and evaluated the approach both in simulation and in a real implementation. Our simulation results show that the approach outperforms a baseline that uses no workload shaping. A sketch of the consolidation idea follows below.
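
As one illustration of reducing over-provisioning, the sketch below packs VM workloads onto as few physical machines as possible with a simple first-fit-decreasing heuristic so that idle hosts can be powered down. This heuristic, the capacities, and the loads are assumptions for illustration, not the paper's workload-shaping scheduler.

```python
# Illustrative sketch (not the paper's scheduler): packing VM workloads onto as
# few physical machines as possible so idle machines can be powered down.
from typing import Dict, List

def consolidate(vm_load: Dict[str, float], host_capacity: float) -> List[List[str]]:
    """Greedy first-fit-decreasing placement of VMs onto hosts."""
    hosts: List[List[str]] = []   # each entry is the list of VMs on one host
    free: List[float] = []        # remaining capacity of each host
    for vm, load in sorted(vm_load.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if load <= cap:                 # fits on an already-used host
                hosts[i].append(vm)
                free[i] -= load
                break
        else:                               # open a new host only if necessary
            hosts.append([vm])
            free.append(host_capacity - load)
    return hosts

placement = consolidate({"vm1": 0.6, "vm2": 0.3, "vm3": 0.5, "vm4": 0.2}, host_capacity=1.0)
print(placement)  # -> [['vm1', 'vm2'], ['vm3', 'vm4']]: only 2 hosts stay powered on
```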

Honey Bee Based Load Balancing in Cloud Computing

  • Hashem, Walaa;Nashaat, Heba;Rizk, Rawya
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.5694-5711 / 2017
  • Cloud computing technology is growing very quickly, so managing the process of resource allocation has become essential. In this paper, a load balancing algorithm based on honey bee behavior (LBA_HB) is proposed. Its main goal is to distribute workload across multiple network links in a way that avoids both underutilization and overutilization of resources. This is achieved by allocating each incoming task to a virtual machine (VM) that meets two conditions: the number of tasks currently being processed by this VM is less than the number being processed by the other VMs, and the deviation of this VM's processing time from the average processing time of all VMs is less than a threshold value. The proposed algorithm is compared with other scheduling algorithms: honey bee, ant colony, modified throttled, and round robin. The experimental results show the efficiency of the proposed algorithm in terms of execution time, response time, makespan, standard deviation of load, and degree of imbalance. A sketch of the VM-selection rule follows below.
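
A minimal sketch of the two-condition VM-selection rule is given below. The data structures, the threshold value, and the fallback when no VM satisfies both conditions are assumptions, not the authors' LBA_HB code.

```python
# Illustrative sketch of the VM-selection rule described in the abstract.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VM:
    name: str
    active_tasks: int        # tasks currently being processed
    processing_time: float   # current processing-time estimate

def select_vm(vms: List[VM], threshold: float = 0.5) -> Optional[VM]:
    """Pick a VM whose task count is below every other VM's and whose processing
    time deviates from the fleet average by less than `threshold`."""
    avg_time = sum(vm.processing_time for vm in vms) / len(vms)
    candidates = [
        vm for vm in vms
        if all(vm.active_tasks < other.active_tasks for other in vms if other is not vm)
        and abs(vm.processing_time - avg_time) < threshold
    ]
    # assumed fallback: least-loaded VM if no candidate satisfies both conditions
    return candidates[0] if candidates else min(vms, key=lambda vm: vm.active_tasks)

vms = [VM("vm1", 4, 2.0), VM("vm2", 2, 1.8), VM("vm3", 5, 2.4)]
print(select_vm(vms).name)  # -> vm2
```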

Development of CAE Service Platform Based on Cloud Computing Concept (클라우드 컴퓨팅기반 CAE서비스 플랫폼 개발)

  • Cho, Sang-Hyun
    • Journal of Korea Foundry Society / v.31 no.4 / pp.218-223 / 2011
  • Computer Aided Engineering (CAE) is a very helpful field for every manufacturing industry, including the foundry industry. It covers CAD, CAM, and simulation technology, and has become standard practice in developing new products and processes. In South Korea, more than 600 foundries exist, their average number of employees is less than 40, and the average age of their workers is rising. Software tools can be effective in addressing these conditions, and many commercial software tools have already been introduced, but their high costs and investment risks make it difficult for SMEs (Small and Medium size Enterprises) to adopt them. We therefore developed a cloud computing platform to spread CAE technologies to foundries. It includes HPC (High Performance Computing) resources, platforms, and software, so that users can try and utilize CAE software in cyberspace without any up-front investment. In addition, we developed platform APIs (Application Programming Interfaces) to bring not only our own CAE codes but also third-party packages onto the cloud computing platform. As a result, CAE developers can upload their products to the cloud platform and distribute them through the Internet.

New GPU computing algorithm for wind load uncertainty analysis on high-rise systems

  • Wei, Cui;Caracoglia, Luca
    • Wind and Structures / v.21 no.5 / pp.461-487 / 2015
  • In recent years, the Graphics Processing Unit (GPU) has become a competitive computing technology in comparison with the standard Central Processing Unit (CPU) due to reduced unit cost, energy, and computing time. This paper describes the derivation and implementation of GPU-based algorithms for the analysis of wind loading uncertainty on high-rise systems, in line with the research field of probability-based wind engineering. The study begins by presenting an application of GPU technology to basic linear algebra problems to demonstrate its advantages and limitations. Subsequently, Monte Carlo integration and synthetic generation of wind turbulence are examined. Finally, the GPU architecture is used for the dynamic analysis of three high-rise structural systems under uncertain wind loads. In the first example the fragility analysis of a single-degree-of-freedom structure is illustrated. Since fragility analysis employs sampling-based Monte Carlo simulation, it is feasible to distribute the evaluation of different random parameters among different GPU threads and to compute the results in parallel, as the sketch below illustrates. In the second case the fragility analysis is carried out on a continuum structure, i.e., a tall building, in which double integration is required to evaluate the generalized turbulent wind load and the dynamic response in the frequency domain. The third example examines the computation of the generalized coupled wind load and response on a tall building in both along-wind and cross-wind directions. It is concluded that the GPU can perform computational tasks on average 10 times faster than the CPU.
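
The sketch below illustrates the sampling-parallelism idea: each Monte Carlo draw of the uncertain load parameters is evaluated independently, so draws can be spread across parallel workers (GPU threads in the paper, CPU processes here). The toy limit-state model and the capacity threshold are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's GPU code): Monte Carlo fragility estimation
# with each random sample evaluated independently in a parallel worker.
import numpy as np
from multiprocessing import Pool

def response_exceeds_capacity(sample: np.ndarray) -> bool:
    """Toy limit-state check for one random draw of (wind speed, damping)."""
    wind_speed, damping = sample
    peak_response = 0.05 * wind_speed**2 / max(damping, 1e-3)  # placeholder model
    return peak_response > 2.0   # assumed capacity threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = np.column_stack([
        rng.normal(5.0, 1.0, 10_000),    # uncertain wind speed
        rng.uniform(0.5, 2.0, 10_000),   # uncertain damping
    ])
    # each sample is independent, so the map distributes cleanly across workers
    with Pool() as pool:
        failures = pool.map(response_exceeds_capacity, samples)
    print("fragility estimate:", np.mean(failures))
```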

Toward Energy-Efficient Task Offloading Schemes in Fog Computing: A Survey

  • Alasmari, Moteb K.;Alwakeel, Sami S.;Alohali, Yousef
    • International Journal of Computer Science & Network Security / v.22 no.3 / pp.163-172 / 2022
  • The interconnection of an enormous number of devices to the Internet at a massive scale is a consequence of the Internet of Things (IoT). As a result, offloading tasks from these IoT devices to remote cloud data centers becomes expensive and inefficient as the number of devices and the amount of data they emit grow exponentially. It is also a challenge to optimize IoT device energy consumption while meeting application time deadlines and data delivery constraints. Consequently, Fog computing was proposed to support efficient IoT task processing, as it offers lower service delay by being adjacent to the IoT nodes. However, cloud task offloading is still performed frequently because Fog computing has fewer resources than the remote cloud. Thus, optimized schemes are required to correctly characterize and distribute IoT device task offloading in a hybrid IoT, Fog, and cloud paradigm. In this paper, we present a detailed survey and classification of recently published research articles that address the energy efficiency of task offloading schemes in the IoT-Fog-Cloud paradigm. We also develop a taxonomy for classifying these schemes and provide a comparative study that identifies the advantages and disadvantages of each scheme, as well as its drawbacks and limitations. Finally, we state open research issues in the development of energy-efficient, scalable, optimized task offloading schemes for Fog computing.

Requirements for Future Digital Radiology System

  • Kim, Y.M.;Park, H.W.;Haynor, D.R.
    • Progress in Medical Physics / v.2 no.1 / pp.3-16 / 1991
  • An area of particularly rapid technological growth in the last 15 years has been medical imaging (conventional X-ray, ultrasound, X-ray computed tomography (CT), and magnetic resonance imaging (MRI)). As the number and complexity of imaging studies rise, it becomes ever more important to distribute these images and the associated diagnoses in a timely and cost-effective fashion. The purpose of this paper is to describe the requirements for a future digital radiology system that will efficiently handle the large volume of images generated, add new functionality to improve the productivity of physicians, technologists, and other health care providers, and provide enough flexibility to allow the system to grow as medical imaging technology grows.
