• Title/Summary/Keyword: Distributed Processing

A Design and Implementation of Distributed Object Group Platform for Supporting Real-Time Application in CORBA Environments (CORBA 환경에서 실시간 응용 지원을 위한 분산 객체그룹 플랫폼의 설계 및 구현)

  • Kim, Myeong-Hui;Lee, Jae-Wan;Ju, Su-Jong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.4
    • /
    • pp.1062-1072
    • /
    • 2000
  • Applications developed in distributed object computing environments face difficulties in managing large numbers of distributed objects. In addition, because most multimedia services, such as video and audio, must satisfy real-time constraints, users also need real-time mechanisms applied to distributed multimedia services. The goal of this paper is to solve the problems of managing distributed objects and to make it easier to develop complex applications that provide real-time services. To do this, we designed and implemented a real-time object group platform that sits between applications and CORBA. The platform extends the existing object group model [13,14] with scheduler and timer object components that support real-time behavior. We designed the platform components using James Rumbaugh's Object Modeling Technique, which consists of object, dynamic, and functional models, described the detailed interfaces of the components in IDL, and implemented the real-time object group platform using OrbixMT 2.2, IONA Technologies' ORB product. Finally, we show the execution procedures of the scheduler object for each component in the real-time object group platform.
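The abstract does not include the IDL or the scheduler's internals; the following is a minimal, hypothetical Java sketch of a deadline-ordered scheduler object of the kind such a platform's scheduler component might expose. The names (RealTimeScheduler, ServiceRequest) and the earliest-deadline-first policy are illustrative assumptions, not taken from the paper.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical sketch of a deadline-ordered scheduler object. Names and the
// earliest-deadline-first policy are illustrative assumptions, not the paper's design.
public class RealTimeScheduler {

    /** A client request annotated with an absolute deadline (epoch millis). */
    public record ServiceRequest(String objectId, Runnable invocation, long deadlineMillis) {}

    // Requests are released in earliest-deadline-first order.
    private final PriorityBlockingQueue<ServiceRequest> queue =
            new PriorityBlockingQueue<>(16, Comparator.comparingLong(ServiceRequest::deadlineMillis));

    public void enqueue(ServiceRequest request) {
        queue.put(request);
    }

    /** Dispatch loop: run requests in deadline order, skipping expired ones. */
    public void dispatchLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            ServiceRequest next = queue.take();
            if (System.currentTimeMillis() > next.deadlineMillis()) {
                // Deadline already missed: a timer object could notify the caller here.
                continue;
            }
            next.invocation().run();
        }
    }
}
```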


Delayed Block Replication Scheme of Hadoop Distributed File System for Flexible Management of Distributed Nodes (하둡 분산 파일시스템에서의 유연한 노드 관리를 위한 지연된 블록 복제 기법)

  • Ryu, Woo-Seok
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.2
    • /
    • pp.367-374
    • /
    • 2017
  • This paper discusses node-management problems in Hadoop, a platform for big data processing, and proposes a novel technique that enables flexible node management in the Hadoop Distributed File System. Hadoop cannot reconfigure a cluster dynamically because it judges temporarily unavailable nodes to have failed. The delayed block replication scheme proposed in this paper postpones the removal of an unavailable node as long as possible so that the node can easily rejoin the cluster. Experimental results show that the proposed scheme increases the flexibility of node management with little impact on distributed processing performance when the cluster size changes.
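As a reading aid, here is a minimal Java sketch of the decision the abstract describes: postpone re-replication of a silent node's blocks until a configurable grace period has elapsed. The class, field, and method names are invented for illustration; this is neither the paper's code nor HDFS source.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (not HDFS source): re-replication of a node's blocks is
// postponed for a grace period so a temporarily unavailable node can rejoin
// without triggering bulk block copies.
public class DelayedReplicationPolicy {

    private final long graceMillis;                     // how long to wait before declaring failure
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    public DelayedReplicationPolicy(long graceMillis) {
        this.graceMillis = graceMillis;
    }

    public void onHeartbeat(String nodeId) {
        lastHeartbeat.put(nodeId, System.currentTimeMillis());
    }

    /** Only schedule new replicas once the node has been silent longer than the grace period. */
    public boolean shouldReplicateBlocksOf(String nodeId) {
        Long seen = lastHeartbeat.get(nodeId);
        if (seen == null) return true;                  // never seen: treat as failed
        return System.currentTimeMillis() - seen > graceMillis;
    }
}
```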

Development of Big-data Management Platform Considering Docker Based Real Time Data Connecting and Processing Environments (도커 기반의 실시간 데이터 연계 및 처리 환경을 고려한 빅데이터 관리 플랫폼 개발)

  • Kim, Dong Gil;Park, Yong-Soon;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.4
    • /
    • pp.153-161
    • /
    • 2021
  • Real-time access is required to handle continuous, unstructured data, and management should remain flexible under dynamic conditions. A platform can be built to collect, store, and process data on a single server or across multiple servers. Although the former, centralized approach is easy to control, it creates an overload problem because all processing happens in one unit; the latter, distributed approach performs parallel processing, so it responds quickly and scales easily, but its design is more complex. This paper takes the distributed approach, providing data collection and processing on one platform so that significant insights can be derived from the various data held by an enterprise or agency and made intuitively available on dashboards, and it uses Spark to improve distributed processing performance. All services are deployed and managed as Docker containers. The data used in this study was collected entirely from Kafka; for a 4.4-gigabyte file, the data processing time in Spark cluster mode was 2 minutes 15 seconds, about 3 minutes 19 seconds faster than local mode.
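A minimal Java sketch of the Kafka-to-Spark path the abstract describes, using Spark Structured Streaming's Kafka source. The broker address, topic name, and console sink are placeholder assumptions, not details from the paper.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

// Sketch of the Kafka -> Spark path described in the abstract. Broker address,
// topic name, and the console sink are placeholders for illustration.
public class KafkaSparkIngest {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("realtime-ingest")
                .getOrCreate();

        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "kafka:9092")   // placeholder broker
                .option("subscribe", "sensor-events")              // placeholder topic
                .load();

        // Kafka records arrive as binary key/value columns; cast the value to text.
        Dataset<Row> values = stream.selectExpr("CAST(value AS STRING) AS payload");

        StreamingQuery query = values.writeStream()
                .format("console")
                .start();
        query.awaitTermination();
    }
}
```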

FAST Design for Large-Scale Satellite Image Processing (대용량 위성영상 처리를 위한 FAST 시스템 설계)

  • Lee, Youngrim;Park, Wanyong;Park, Hyunchun;Shin, Daesik
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.25 no.4
    • /
    • pp.372-380
    • /
    • 2022
  • This study proposes a distributed parallel processing system, called the Fast Analysis System for remote sensing daTa (FAST), for large-scale satellite image processing and analysis. FAST organizes jobs as vertices and sequences and distributes and processes them concurrently. FAST manages data with the Hadoop Distributed File System, controls overall jobs with Apache Spark, and performs tasks in parallel on multiple slave nodes using a Docker container design. FAST enables high-performance processing of progressively accumulated large volumes of satellite images. Because unit tasks run in Docker containers, existing source code can be reused to design and implement them. Additionally, the system is robust against software and hardware faults. To prove the capability of the proposed system, we performed an experiment converting raw satellite images into ortho-images, a pre-processing step for all image analyses. In the experiment, when FAST was configured with eight slave nodes, processing a satellite image took less than 30 seconds. These results demonstrate the suitability and practical applicability of the FAST design.
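The abstract's "Spark schedules unit tasks, each unit task runs in a Docker container" design can be sketched roughly as below. The image name, command-line arguments, and scene identifiers are invented placeholders, and the real FAST job graph is certainly more elaborate.

```java
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Rough sketch of "Spark schedules unit tasks; each task runs in a Docker
// container". Image name, command line, and scene identifiers are placeholders.
public class OrthoImageDriver {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("ortho-batch");
        try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
            List<String> scenes = Arrays.asList("scene-001", "scene-002", "scene-003");

            // Each element becomes one containerized unit task on a slave node.
            jsc.parallelize(scenes).foreach(scene -> {
                Process p = new ProcessBuilder(
                        "docker", "run", "--rm",
                        "ortho-worker:latest",               // placeholder image
                        "--input", "hdfs:///raw/" + scene,
                        "--output", "hdfs:///ortho/" + scene)
                        .inheritIO()
                        .start();
                if (p.waitFor() != 0) {
                    throw new IllegalStateException("ortho processing failed for " + scene);
                }
            });
        }
    }
}
```

Running the unit task through an external container, as sketched here, is what allows existing source code to be reused without porting it into Spark itself.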

High-Performance Korean Morphological Analyzer Using the MapReduce Framework on the GPU

  • Cho, Shi-Won;Lee, Dong-Wook
    • Journal of Electrical Engineering and Technology
    • /
    • v.6 no.4
    • /
    • pp.573-579
    • /
    • 2011
  • To meet the scalability and performance requirements of data analyses, which often involve voluminous data, efficient parallel or concurrent algorithms and frameworks are essential. We present a high-performance Korean morphological analyzer that employs the MapReduce framework on the graphics processing unit (GPU). MapReduce is a programming framework introduced by Google to aid the development of web search applications on large numbers of central processing units (CPUs). GPUs were designed as special-purpose co-processors, and their programming interfaces are typically formulated for graphics applications. Compared to CPUs, GPUs have greater computation power and memory bandwidth; however, they are more difficult to program because of their architectural design. The performance of the Korean morphological analyzer using the MapReduce framework on the GPU is evaluated against a CPU-based model. The proposed Korean morphological analyzer shows promising, scalable performance for distributed computing on the GPU.
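For orientation only, the map/reduce decomposition such an analyzer relies on can be sketched on the CPU as below. The paper's kernels run on the GPU and are not plain Java, and the whitespace split here is a trivial stand-in for real morphological analysis; the sentences are arbitrary examples.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// CPU-side sketch of the map/reduce decomposition only: the paper's kernels run on
// the GPU, and a trivial whitespace split stands in for real morphological analysis.
public class MorphemeCount {
    public static void main(String[] args) {
        List<String> sentences = Arrays.asList("아버지가 방에 들어가신다", "방에 불이 켜져 있다");

        Map<String, Long> counts = sentences.parallelStream()
                // map phase: each sentence emits its candidate tokens
                .flatMap(s -> Arrays.stream(s.split("\\s+")))
                // reduce phase: identical tokens are grouped and counted
                .collect(Collectors.groupingBy(tok -> tok, Collectors.counting()));

        counts.forEach((token, n) -> System.out.println(token + "\t" + n));
    }
}
```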

Performance Optimization of Big Data Center Processing System - Big Data Analysis Algorithm Based on Location Awareness

  • Zhao, Wen-Xuan;Min, Byung-Won
    • International Journal of Contents
    • /
    • v.17 no.3
    • /
    • pp.74-83
    • /
    • 2021
  • A location-aware algorithm is proposed in this study to address the low data reliability and poor application performance of distributed systems that process big data. Compared with previous algorithms, the location-aware data block placement algorithm uses block placement and node data recovery strategies to improve application performance and data reliability. Simulation and actual cluster tests showed that the proposed location-aware placement algorithm greatly improves data reliability and shortens the processing time of application I/O in real time.
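The abstract does not spell out the placement rules, so the Java sketch below only illustrates the general shape of a location-aware placement decision: prefer nodes on the writer's rack, then the least-loaded nodes. All names and the scoring are invented, not the paper's algorithm.

```java
import java.util.Comparator;
import java.util.List;

// Generic illustration of a location-aware placement decision: prefer nodes on the
// writer's rack, then the least-loaded nodes. Names and scoring are invented.
public class LocationAwarePlacement {

    public record DataNode(String id, String rack, double usedRatio) {}

    /** Pick replica targets for a block written from {@code writerRack}. */
    public static List<DataNode> chooseTargets(List<DataNode> candidates, String writerRack, int replicas) {
        return candidates.stream()
                .sorted(Comparator
                        // same-rack nodes first (false sorts before true, hence the negation)
                        .comparing((DataNode n) -> !n.rack().equals(writerRack))
                        // then the nodes with the most free capacity
                        .thenComparingDouble(DataNode::usedRatio))
                .limit(replicas)
                .toList();
    }
}
```

A production policy would also spread some replicas across racks for fault tolerance; this sketch only shows the locality-first ranking step.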

An Efficient Data Distribution Method on a Distributed Shared Memory Machine (분산공유 메모리 시스템 상에서의 효율적인 자료분산 방법)

  • Min, Ok-Gee
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1433-1442
    • /
    • 1996
  • Data distribution in the SPMD (Single Program Multiple Data) pattern is one of the main features of HPF (High Performance Fortran). This paper describes design issues for such data distribution and an efficient execution model on the TICOM IV computer, named SPAX (Scalable Parallel Architecture computer based on X-bar network). SPAX has a hierarchical clustering structure that uses distributed shared memory (DSM). With such a memory structure, applying either SMDD (Shared Memory Data Distribution) or DMDD (Distributed Memory Data Distribution) uniformly cannot fully utilize the system. We therefore propose another data distribution model, called DSMDD (Distributed Shared Memory Data Distribution), based on a hierarchical masters-slaves scheme. In this model, a remote master and slaves are designated in each node; a shared-address scheme is used within a node and a message-passing scheme between nodes. In our simulation, assuming a node size at which system performance degradation is minimized, DSMDD is more effective than SMDD and DMDD. In particular, the larger the number of logical processors and the weaker the data dependency between distributed data, the better the performance obtained.
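A very rough Java analogy to the hierarchical masters-slaves idea: the outer loop stands in for message passing between node masters, while the threads inside a "node" share the same array segment. The original work targets HPF on SPAX, so everything below, including the names and the stand-in computation, is an illustrative assumption rather than the paper's scheme.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Rough analogy to DSMDD's hierarchy: the outer loop stands in for message passing
// between node masters, and threads inside a "node" share the array (shared memory).
public class HierarchicalDistribution {
    public static void main(String[] args) throws InterruptedException {
        double[] data = new double[1_000_000];
        int nodes = 4;              // node masters (inter-node: message passing in the real system)
        int slavesPerNode = 8;      // slave threads per node (intra-node: shared memory)
        int nodeChunk = data.length / nodes;

        for (int node = 0; node < nodes; node++) {
            int nodeStart = node * nodeChunk;
            ExecutorService slaves = Executors.newFixedThreadPool(slavesPerNode);
            int slaveChunk = nodeChunk / slavesPerNode;
            for (int s = 0; s < slavesPerNode; s++) {
                int from = nodeStart + s * slaveChunk;
                int to = from + slaveChunk;
                slaves.submit(() -> {
                    for (int i = from; i < to; i++) data[i] = Math.sqrt(i);  // stand-in computation
                });
            }
            slaves.shutdown();
            slaves.awaitTermination(1, TimeUnit.MINUTES);
        }
    }
}
```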


Comparison of Design and Implementation for Distributed Active Objects based on RMI and CORBA environment (RMI와 CORBA 환경하의 분산 액티브 객체의 설계 및 구현에 대한 비교 분석)

  • Lee, Do-Hak;Kim, Shik;Hyun, Mu-Yong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2721-2731
    • /
    • 1997
  • Distributed programming can be greatly simplified by language support for distributed communication. Many web browsers now offer some form of active objects, and their number and types are growing daily in interesting and innovative ways. Java applets are a well-known kind of active object tied to web browsers. This paper focuses on distributed active objects, active objects that can communicate with other active objects located on different machines across the Internet. Java RMI and CORBA IDL are two major programming environments for distributed active objects, and they are not compatible with each other. To make the discussion concrete, we introduce a single application implemented in both environments: HORB, which adopts an RMI mechanism, and OrbixWeb 2.0.1, which adopts the CORBA specification. Binding, inheritance, polymorphism, object passing, and callbacks across machine boundaries in distributed programming environments are examined. The results show that differences in the implementation of distributed active objects can have a significant impact on how distributed applications are structured. The comparison of the two implementations will be the basis for building a translation system from HORB to OrbixWeb and vice versa.
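A minimal Java RMI sketch of the callback pattern the abstract discusses, where a remote object calls back into the client across the machine boundary. The interface and class names are illustrative, and the CORBA-side (OrbixWeb) counterpart would declare equivalent operations in IDL.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Minimal RMI sketch of the callback pattern discussed in the abstract; the CORBA
// counterpart would declare equivalent operations in IDL. Names are illustrative.
public class ActiveObjectDemo {

    /** Callback interface implemented by the client and passed across the wire. */
    public interface ProgressCallback extends Remote {
        void progress(int percent) throws RemoteException;
    }

    /** The distributed active object's remote interface. */
    public interface Worker extends Remote {
        void startJob(String jobName, ProgressCallback callback) throws RemoteException;
    }

    /** Server-side implementation; exporting it makes it remotely invocable. */
    public static class WorkerImpl extends UnicastRemoteObject implements Worker {
        public WorkerImpl() throws RemoteException { super(); }

        @Override
        public void startJob(String jobName, ProgressCallback callback) throws RemoteException {
            for (int pct = 0; pct <= 100; pct += 25) {
                callback.progress(pct);   // call back into the client across the machine boundary
            }
        }
    }
}
```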


A Java Distributed Batch-processing System using Network of Workstation (워크스테이션 네트워크를 이용한 자바 분산 배치 처리 시스템)

  • Jeon, Jin-Su;Kim, Jeong-Seon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.5
    • /
    • pp.583-594
    • /
    • 1999
  • With the advance of VLSI and network technologies, it has become common practice to deploy various forms of distributed computing environments. A study shows that many networked computers sit idle for considerable amounts of time, depending on the type of user and the time of day. If we can take full advantage of those idle computers, we can obtain enormous combined processing power without further costly investment. In this paper, we present a distributed batch-processing system, called the Java Distributed Batch-processing System (JDBS), which executes CPU-intensive, independent jobs across a pool of idle workstations on top of existing distributed computing environments. Because JDBS is implemented in the Java programming language, it not only extends the range of machine types that can join the pool but also makes the entire system much easier to build. In addition, JDBS is scalable and fault-tolerant thanks to its multi-cluster organization and intelligent strategies. A graphical user interface is also provided to facilitate registration and unregistration, job submission, and job monitoring.
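The abstract does not give JDBS's API; purely as an illustration of the kind of interface such a batch system exposes, here is a hypothetical RMI job-submission interface with a serializable job type. Every name below is an assumption.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical illustration only: the abstract does not give JDBS's API. This is
// the general shape of a job-submission interface such a batch system might expose.
public interface BatchService extends Remote {

    /** A CPU-intensive, independent unit of work shipped to an idle workstation. */
    interface BatchJob extends Serializable {
        Serializable run();
    }

    /** Submit a job to the pool; returns an identifier for later monitoring. */
    String submit(BatchJob job) throws RemoteException;

    /** Poll the job's status, e.g. QUEUED, RUNNING, DONE, or FAILED. */
    String status(String jobId) throws RemoteException;

    /** Retrieve the result once the job has finished. */
    Serializable result(String jobId) throws RemoteException;
}
```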