• Title/Summary/Keyword: Distributed Parallel Computing

A Scheduling Algorithm for Parsing of MPEG Video on the Heterogeneous Distributed Environment (이질적인 분산 환경에서의 MPEG비디오의 파싱을 위한 스케줄링 알고리즘)

  • Nam Yunyoung; Hwang Eenjun
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.673-681 / 2004
  • As digital video becomes more popular, there is an increasing demand for efficient browsing and retrieval of video. To support such operations, effective video indexing should be incorporated. One of the most fundamental steps in video indexing is parsing the video stream into shots and scenes. Parsing a video generally takes a long time in a traditional single-machine environment because of the huge amount of computation involved. Previous studies have widely used Round Robin scheduling, which allocates tasks to each slave for a time interval of one quantum; this scheduling is difficult to adapt to a heterogeneous environment. In this paper, we propose two parallel parsing algorithms, Size-Adaptive Round Robin and Dynamic Size-Adaptive Round Robin, for heterogeneous distributed computing environments. To show their performance, we perform several experiments and report some of the results.
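
A minimal sketch of the size-adaptive allocation idea described in this abstract, not the authors' implementation: each slave receives a chunk of video frames sized in proportion to its measured processing speed instead of a fixed quantum. The slave speeds and chunk budget below are illustrative assumptions.

```python
def size_adaptive_round_robin(total_frames, slave_speeds, base_chunk=300):
    """Return a list of (slave_id, start_frame, n_frames) assignments."""
    mean_speed = sum(slave_speeds) / len(slave_speeds)
    assignments = []
    frame = 0
    slave = 0
    while frame < total_frames:
        # Faster slaves receive proportionally larger chunks of the stream.
        chunk = int(base_chunk * slave_speeds[slave] / mean_speed)
        chunk = max(1, min(chunk, total_frames - frame))
        assignments.append((slave, frame, chunk))
        frame += chunk
        slave = (slave + 1) % len(slave_speeds)
    return assignments

if __name__ == "__main__":
    # Three heterogeneous slaves: the third is twice as fast as the first.
    print(size_adaptive_round_robin(3000, slave_speeds=[1.0, 1.5, 2.0]))
```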

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin; Cui, Yun; Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3182-3202 / 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome such limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated from banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the proposed system, we conducted three performance tests on a local testbed of twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk-size option. From the experiments, we found that our system shows better performance in processing unstructured log data.
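
A hedged sketch of the collection-and-categorization step the abstract describes, not the MdbULPS code itself: unstructured log lines are stored as schemaless MongoDB documents and summarized by category. It assumes a local mongod on the default port; database, collection, and field names are illustrative only.

```python
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["log_db"]["raw_logs"]

def ingest(lines):
    """Insert raw log lines as schemaless documents, tagging a rough category."""
    docs = []
    for line in lines:
        category = "error" if "ERROR" in line else "info"
        docs.append({
            "raw": line,
            "category": category,
            "collected_at": datetime.now(timezone.utc),
        })
    if docs:
        logs.insert_many(docs)

def count_by_category():
    """Aggregate counts per category, the kind of summary an analysis module needs."""
    return list(logs.aggregate([
        {"$group": {"_id": "$category", "n": {"$sum": 1}}}
    ]))

if __name__ == "__main__":
    ingest(["2015-01-01 ERROR login failed", "2015-01-01 INFO session opened"])
    print(count_by_category())
```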

Molecular Docking System using Parallel GPU (병렬 GPU를 이용한 분자 도킹 시스템)

  • Park, Sung-Jun
    • The Journal of the Korea Contents Association / v.8 no.12 / pp.441-448 / 2008
  • A molecular docking system needs a large amount of computation and requires supercomputing power. Since an experiment takes a large amount of time, it is usually conducted in a distributed or grid environment. Recently, research on using parallel GPUs, whose performance is far higher than that of CPUs, for scientific computing has been very active. CUDA is a technology that makes parallel GPU programming possible. This study proposes a molecular docking system using CUDA and an algorithm that parallelizes the energy-minimization computation. To verify the approach, this study compares the time required for molecular docking on a conventional CPU with the time and performance of the proposed parallel GPU-based molecular docking.
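
A hedged sketch of the parallelization idea only, not the paper's CUDA C code: a simple pairwise interaction energy is evaluated for many candidate ligand poses at once, one GPU thread per pose, using Numba's CUDA support as a stand-in. It requires a CUDA-capable GPU; the array sizes and the toy energy model are illustrative assumptions.

```python
import numpy as np
from numba import cuda

@cuda.jit
def pose_energy(ligand_poses, receptor, energies):
    p = cuda.grid(1)                                  # one thread per candidate pose
    if p < ligand_poses.shape[0]:
        e = 0.0
        for i in range(ligand_poses.shape[1]):        # ligand atoms
            for j in range(receptor.shape[0]):        # receptor atoms
                dx = ligand_poses[p, i, 0] - receptor[j, 0]
                dy = ligand_poses[p, i, 1] - receptor[j, 1]
                dz = ligand_poses[p, i, 2] - receptor[j, 2]
                r2 = dx * dx + dy * dy + dz * dz + 1e-6
                # Toy Lennard-Jones-like term; real docking scoring is richer.
                e += 1.0 / (r2 ** 6) - 2.0 / (r2 ** 3)
        energies[p] = e

if __name__ == "__main__":
    poses = np.random.rand(1024, 20, 3).astype(np.float32)   # 1024 poses, 20 atoms each
    receptor = np.random.rand(200, 3).astype(np.float32)
    d_poses = cuda.to_device(poses)
    d_receptor = cuda.to_device(receptor)
    d_energies = cuda.to_device(np.zeros(1024, dtype=np.float32))
    threads = 128
    blocks = (poses.shape[0] + threads - 1) // threads
    pose_energy[blocks, threads](d_poses, d_receptor, d_energies)
    energies = d_energies.copy_to_host()
    print("best pose:", int(np.argmin(energies)), float(energies.min()))
```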

InterCom : Design and Implementation of an Agent-based Internet Computing Environment (InterCom : 에이전트 기반 인터넷 컴퓨팅 환경 설계 및 구현)

  • Kim, Myung-Ho; Park, Kweon
    • The KIPS Transactions: Part A / v.8A no.3 / pp.235-244 / 2001
  • Advances in network and computer technology have led to many studies on using physically distributed computers as a single resource. Generally, these studies have focused on developing environments based on message passing. Such environments are mainly used to solve scientific computation problems in parallel by exploiting the internal parallelism of the given problems. They therefore provide high parallelism in general, but they are difficult to program and use, and they require user accounts on the distributed computers. If a given problem can be divided into completely independent subproblems, a more efficient environment can be provided. Such problems arise in bioinformatics, 3D animation, graphics, and so on, so developing a new environment for them is very important. We therefore propose a new environment called InterCom, based on proxy computing, which can solve these problems efficiently, and explain its implementation. The environment consists of an agent, a server, and a client. Its merits are easy programming, no need for user accounts on the distributed computers, and ease of use through automatic compilation of distributed code.
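
A conceptual sketch only (InterCom itself uses agents, a server, and a client over the network): the property the abstract emphasizes is that the job splits into completely independent subproblems, so each piece can be farmed out and collected without inter-task communication. Here a local process pool stands in for the remote agents, and render_frame is a hypothetical placeholder task.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_no):
    """Stand-in for an independent subproblem, e.g. one frame of a 3D animation."""
    return frame_no, sum(i * i for i in range(10_000))  # placeholder work

def run_job(n_frames, n_workers=4):
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # Each subproblem is independent, so completion order does not matter.
        return dict(pool.map(render_frame, range(n_frames)))

if __name__ == "__main__":
    results = run_job(16)
    print(len(results), "frames completed")
```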

Distributed AI Learning-based Proof-of-Work Consensus Algorithm (분산 인공지능 학습 기반 작업증명 합의알고리즘)

  • Won-Boo Chae; Jong-Sou Park
    • The Journal of Bigdata / v.7 no.1 / pp.1-14 / 2022
  • The proof-of-work consensus algorithm used by most blockchains causes a massive waste of computing resources in the form of mining. Useful proof-of-work consensus algorithms have been studied to reduce this waste, but resource waste and mining centralization problems remain when creating blocks. In this paper, the problem of resource waste in block generation is addressed by replacing the relatively inefficient computation process for block generation with distributed artificial intelligence model learning. In addition, by providing fair rewards to nodes participating in the learning process, nodes with weak computing power are motivated to participate, while performance similar to the existing centralized AI learning method is maintained. To show the validity of the proposed methodology, we implemented a blockchain network capable of distributed AI learning, experimented with reward distribution through resource verification, and compared the results of the existing centralized learning method and the blockchain-based distributed AI learning method. Finally, as future work, the paper discusses problems and development directions that may arise when scaling up the blockchain mainnet and the artificial intelligence model.
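
A toy sketch of the replacement idea, not the paper's protocol: instead of hashing, each node performs a verifiable training step on a shared model and is rewarded in proportion to the loss reduction it contributes. The data, the linear model, and the reward rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=256)

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

def training_step(w, lr):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def round_of_work(w, node_ids):
    """Each node proposes an update; rewards follow verified loss improvement."""
    base = loss(w)
    proposals = {n: training_step(w, lr=0.005 * (n + 1)) for n in node_ids}
    improvements = {n: max(base - loss(p), 0.0) for n, p in proposals.items()}
    total = sum(improvements.values()) or 1.0
    rewards = {n: imp / total for n, imp in improvements.items()}
    best = max(improvements, key=improvements.get)    # block producer for this round
    return proposals[best], rewards

if __name__ == "__main__":
    w = np.zeros(4)
    for _ in range(3):
        w, rewards = round_of_work(w, node_ids=[0, 1, 2])
        print("loss:", round(loss(w), 4), "rewards:", {k: round(v, 2) for k, v in rewards.items()})
```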

Parallel Implementation of Nonlinear Analysis Program of PSC Frame Using MPI (MPI를 이용한 PSC 프레임 비선형해석 프로그램의 병렬화)

  • 이재석; 최규천
    • Proceedings of the Computational Structural Engineering Institute Conference / 2001.04a / pp.61-68 / 2001
  • A parallel nonlinear analysis program for prestressed concrete frames is migrated to a PC cluster system and a massively parallel processing system, the CRAY T3E, using MPI. The PC cluster system is configured with Pentium III-class PCs and Fast Ethernet. The CRAY T3E system is composed of a set of nodes, each containing one processing element (PE) and a memory subsystem, connected by its distributed-memory interconnect network. Parallel computing algorithms are implemented for the element-wise processing parts, including calculation of the stiffness matrix and element stresses, determination of material states, checking for material failure, and calculation of unbalanced loads. The parallel performance of the migrated program is evaluated through typical numerical examples.
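
A minimal mpi4py sketch of the element-wise parallelization pattern the abstract describes, not the PSC frame code itself: elements are divided among ranks, each rank evaluates its elements' contributions, and the unbalanced load vector is assembled with a reduction. The element data and per-element computation are placeholders.

```python
# Run with, e.g.:  mpiexec -n 4 python psc_parallel_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_ELEMENTS, N_DOF = 1000, 30

def element_unbalanced_load(e):
    """Placeholder for stress evaluation / material-state update of element e."""
    contrib = np.zeros(N_DOF)
    contrib[e % N_DOF] = 1.0
    return contrib

# Each rank processes a round-robin share of the elements.
local = np.zeros(N_DOF)
for e in range(rank, N_ELEMENTS, size):
    local += element_unbalanced_load(e)

# Global assembly of the unbalanced load vector across all ranks.
unbalanced = np.zeros(N_DOF)
comm.Allreduce(local, unbalanced, op=MPI.SUM)

if rank == 0:
    print("assembled unbalanced load norm:", np.linalg.norm(unbalanced))
```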

Development of Parallel Distributed VHDL Simulator on SGI Origin 2000/Cray T3e/IBM SP2 Systems (SGI Origin 2000/Cray T3e /IBM SP2 시스템에서 병렬 분산 VHDL 시뮬레이터의 개발)

  • Jeong, Yeong-Sik
    • Journal of KIISE: Computing Practices and Letters / v.5 no.2 / pp.196-208 / 1999
  • In this paper, a Parallel Distributed VHDL Simulator (PDVS) is developed to speed up the simulation of digital circuits described in VHDL (Very high speed integrated circuit Hardware Description Language). The program is implemented with the standard communication library MPI (Message Passing Interface) so that it can also run in large-scale parallel programming environments. The paper presents the overall system organization of PDVS, the simulation protocol it uses, the global virtual time computation mechanism, the relationships among the internal components of a logical process, and the control flow of PDVS. To analyze the degree of parallelism of the parallel distributed simulation, performance results are presented for varying circuit sizes and numbers of processed events (grain size). For a circuit four times the base size, a speedup of 8 was obtained with 12 processors, and with a grain size of 200 events a speedup of 12 was obtained with 32 processors. Indirect performance comparisons are also given by applying the same method on the SGI Origin 2000, Cray T3e, and IBM SP2 systems.
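
A hedged sketch of one ingredient named in the abstract, the global virtual time (GVT) computation: in a parallel discrete-event simulator each logical process tracks a local virtual time and the timestamps of unacknowledged in-transit messages, and GVT is their global minimum. mpi4py stands in for the MPI calls; the local values are illustrative assumptions.

```python
# Run with, e.g.:  mpiexec -n 4 python gvt_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Illustrative per-process state for one logical process.
local_virtual_time = 100.0 + 10.0 * rank
in_transit_timestamps = [95.0 + rank] if rank % 2 else []

local_min = min([local_virtual_time] + in_transit_timestamps)
gvt = comm.allreduce(local_min, op=MPI.MIN)   # events below GVT can safely be committed

if rank == 0:
    print("estimated GVT:", gvt)
```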

A Study on Distributed Processing of Big Data and User Authentication for Human-friendly Robot Service on Smartphone (인간 친화적 로봇 서비스를 위한 대용량 분산 처리 기술 및 사용자 인증에 관한 연구)

  • Choi, Okkyung; Jung, Wooyeol; Lee, Bong Gyou; Moon, Seungbin
    • Journal of Internet Computing and Services / v.15 no.1 / pp.55-61 / 2014
  • Various human-friendly robot services have been developed, and mobile cloud computing is a real-time computing service that allows users to rent the IT resources they want over the Internet; it has become the new-generation computing paradigm of the information society. Enterprises and governments are actively building business processes on mobile cloud computing and are aware of the need to apply it to their practice, but it has weak points such as authentication services and distributed processing technologies for big data, and the objective of a cloud computing service is sometimes difficult to clarify. In this study, the vulnerability of authentication services in mobile cloud computing is analyzed, and a mobile cloud computing model is constructed for efficient and safe business processes. Based on this model, we also study how to process and analyze unstructured data in parallel, so that in the future it may be possible to provide customized information for individuals from unstructured data.
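
A generic sketch of one common hardening step for the kind of authentication service the abstract discusses, not the authors' scheme: issuing and verifying an HMAC-signed token so that a mobile client's requests to the distributed processing layer can be checked without storing passwords on each node. The secret key and payload format are illustrative assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"        # would come from secure configuration in practice

def issue_token(user_id, ttl=3600):
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return b".".join(base64.urlsafe_b64encode(p) for p in (payload, sig)).decode()

def verify_token(token):
    payload_b64, sig_b64 = token.encode().split(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None                   # signature mismatch: reject the request
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

if __name__ == "__main__":
    t = issue_token("alice")
    print(verify_token(t))
```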

Applying TIPC Protocol for Increasing Network Performance in Hadoop-based Distributed Computing Environment (Hadoop 기반 분산 컴퓨팅 환경에서 네트워크 I/O의 성능개선을 위한 TIPC의 적용과 분석)

  • Yoo, Dae-Hyun; Chung, Sang-Hwa; Kim, Tae-Hun
    • Journal of KIISE: Computer Systems and Theory / v.36 no.5 / pp.351-359 / 2009
  • Recently, with the growth of data on the Internet, platform technologies that can process huge amounts of data effectively, such as the Google platform and Hadoop, have been attracting attention. In this kind of platform, network I/O overheads arise from sending task outputs, due to the MapReduce operation, a programming model that supports parallel computation on large cluster systems. In this paper, we suggest applying the TIPC (Transparent Inter-Process Communication) protocol to reduce network I/O overheads and increase network performance in distributed computing environments. TIPC has a lightweight protocol stack and spends relatively less CPU time than TCP because of its simple connection establishment and logical addressing. We analyze the main features of the Hadoop-based distributed computing system and build an experimental model that can be used to compare the performance of various protocols. In the experimental results, TIPC shows higher bandwidth and lower CPU overhead than the other protocols.
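
A small sketch of the kind of harness one might build to compare transport protocols for the shuffle traffic the abstract describes, here with plain TCP sockets; on a Linux kernel with the tipc module loaded, the same structure could be repeated with socket.AF_TIPC for a side-by-side comparison. The message size, message count, and send-side-only timing are illustrative simplifications.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007
MSG = b"x" * 65536
N_MSGS = 2000

def server():
    """Drain everything the client sends, like a reducer pulling map outputs."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            received = 0
            while received < len(MSG) * N_MSGS:
                data = conn.recv(65536)
                if not data:
                    break
                received += len(data)

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                      # let the server start listening
    start = time.time()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        for _ in range(N_MSGS):
            cli.sendall(MSG)
    elapsed = time.time() - start
    print("send-side throughput: %.1f MB/s" % (len(MSG) * N_MSGS / elapsed / 1e6))
```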

Efficient Parallel Visualization of Large-scale Finite Element Analysis Data in Distributed Parallel Computing Environment (분산 병렬 계산환경에 적합한 초대형 유한요소 해석 결과의 효율적 병렬 가시화)

  • Kim, Chang-Sik; Song, You-Me; Kim, Ki-Ook; Cho, Jin-Yeon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.32 no.10 / pp.38-45 / 2004
  • In this paper, a parallel visualization algorithm is proposed for efficient visualization of the massive data generated from large-scale parallel finite element analysis, based on an investigation of the characteristics of parallel rendering methods. The proposed algorithm is designed to be highly compatible with the domain-wise computation of parallel finite element analysis by using the sort-last-sparse approach. In the proposed algorithm, a binary-tree communication pattern is utilized to reduce the network communication time in the image composition routine. Several benchmarking tests are carried out with the in-house software developed in this work, and the performance of the proposed algorithm is investigated.
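
A sketch of the binary-tree image composition step mentioned in the abstract (sort-last compositing with a z-buffer test), written with mpi4py; it is not the authors' renderer. Each rank starts with its own partial image and depth buffer for its subdomain, and after roughly log2(P) exchange rounds rank 0 holds the composited image. The image size and contents are illustrative assumptions.

```python
# Run with, e.g.:  mpiexec -n 4 python composite_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

H, W = 256, 256
rng = np.random.default_rng(rank)
rgb = rng.random((H, W, 3)).astype(np.float32)       # this rank's partial image
depth = rng.random((H, W)).astype(np.float32)        # this rank's z-buffer

step = 1
while step < size:
    if rank % (2 * step) == 0 and rank + step < size:
        partner = rank + step
        other_rgb = np.empty_like(rgb)
        other_depth = np.empty_like(depth)
        comm.Recv(other_rgb, source=partner, tag=0)
        comm.Recv(other_depth, source=partner, tag=1)
        closer = other_depth < depth                  # keep the nearer fragment per pixel
        rgb[closer] = other_rgb[closer]
        depth = np.minimum(depth, other_depth)
    elif rank % (2 * step) == step:
        comm.Send(rgb, dest=rank - step, tag=0)
        comm.Send(depth, dest=rank - step, tag=1)
        break                                         # this rank is done after sending
    step *= 2

if rank == 0:
    print("composited image mean depth:", float(depth.mean()))
```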