• Title/Summary/Keyword: distributed memory system


Backend of a Parallelizing Compiler for a Heterogeneous Parallel System

  • Kwon, Dae-Suk;Kim, Hsung-Hwan;Han, Sang-Yong
    • Journal of KIISE: Computer Systems and Theory, v.27 no.8, pp.710-718, 2000
  • Many multiprocessing systems have been developed to exploit parallelism and improve performance. However, naive multiprocessing schemes were not as successful as many researchers expected, owing to the heavy communication and synchronization costs that parallelization introduces. In this paper, we identify the reasons for the poor performance and the compiler requirements for improving it. We found that multiprocessing decisions should be driven by overhead information, and we applied this idea to the automatic parallelizing compiler SUIF. We replaced the original backend of SUIF with our own MPI-based backend and gave it the ability to validate parallelization decisions against overhead parameters. This backend converts intermediate code containing specifications of parallelizable regions into a distributed-memory parallel program with MPI function calls, avoiding excessive parallelization that may degrade performance.

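The decision this backend validates can be pictured with a simple cost model: parallelize a region only when the estimated parallel time, including MPI communication and synchronization overhead, beats serial execution. The sketch below is a hypothetical illustration of such an overhead check, not the paper's actual model; all cost parameters are assumed values.

```python
# Hypothetical sketch of an overhead-guided parallelization decision:
# parallelize only when estimated parallel time, including communication
# and synchronization costs, beats serial execution. All costs are assumed.

def should_parallelize(work_per_iter: float, iterations: int,
                       num_procs: int, msg_bytes: int,
                       latency: float = 1e-4, bandwidth: float = 1e8,
                       sync_cost: float = 5e-5) -> bool:
    serial_time = work_per_iter * iterations
    compute_time = serial_time / num_procs
    # Simple linear communication model: startup latency plus transfer time.
    comm_time = num_procs * (latency + msg_bytes / bandwidth)
    parallel_time = compute_time + comm_time + sync_cost * num_procs
    return parallel_time < serial_time

# A small loop stays serial because overhead dominates; a large one does not.
print(should_parallelize(1e-6, 100, 8, 4096))         # False
print(should_parallelize(1e-6, 10_000_000, 8, 4096))  # True
```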

A Vertical Partitioning Algorithm based on Fuzzy Graph

  • Son, Jin-Hyun;Choi, Kyung-Hoon;Kim, Myoung-Ho
    • Journal of KIISE: Databases, v.28 no.3, pp.315-323, 2001
  • The concept of vertical partitioning has been studied with the objective of improving query execution performance and system throughput. It applies wherever the match between data and queries affects performance, including the partitioning of individual files in centralized environments, data distribution in distributed databases, and the division of data among different levels of the memory hierarchy. In general, a vertical partitioning algorithm should support n-ary partitioning as well as a globally optimal solution for the generation of all meaningful fragments. Most previous methods, however, have limitations in supporting both efficiently. Because the vertical partitioning problem inherently involves fuzziness, this fuzziness must be managed properly. In this paper we propose an efficient vertical α-partitioning algorithm based on fuzzy theory. The method can not only generate all meaningful fragments but also support n-ary partitioning without complex mathematical computations.

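One way to picture α-partitioning on a fuzzy graph is to apply an α-cut to the attribute-affinity edges and read each connected component as a vertical fragment; varying α then yields n-ary partitionings of different granularity. The sketch below is a hedged illustration under that reading, not the paper's algorithm, and the affinity values are invented.

```python
# Alpha-cut sketch: keep edges with affinity >= alpha, then each connected
# component of the remaining graph is one candidate vertical fragment.

from collections import defaultdict

def alpha_partition(attributes, affinity, alpha):
    """affinity: dict mapping frozenset({a, b}) -> membership degree in [0, 1]."""
    adj = defaultdict(set)
    for pair, degree in affinity.items():
        if degree >= alpha:
            a, b = tuple(pair)
            adj[a].add(b)
            adj[b].add(a)
    seen, fragments = set(), []
    for attr in attributes:
        if attr in seen:
            continue
        stack, component = [attr], set()
        while stack:  # depth-first search over the alpha-cut graph
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        fragments.append(sorted(component))
    return fragments

affinity = {frozenset({"name", "addr"}): 0.9,
            frozenset({"addr", "phone"}): 0.7,
            frozenset({"salary", "dept"}): 0.8,
            frozenset({"name", "salary"}): 0.2}
print(alpha_partition(["name", "addr", "phone", "salary", "dept"], affinity, 0.6))
# [['addr', 'name', 'phone'], ['dept', 'salary']]
```

Lowering α merges fragments and raising it splits them, which is how a single algorithm can cover n-ary partitionings.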

A Method for Real-time Complex Event Detection in Unstructured Big Data

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research, v.20 no.5, pp.99-109, 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data has increased considerably through heavy use of SNS (Social Network Services). Accordingly, the concept of big data has emerged, and many researchers are seeking ways to make the best use of it. To maximize the creative value of the big data held by many companies, it must be combined with existing data; because the physical and logical storage structures of these data sources differ, a system that can integrate and manage them is needed. MapReduce was developed to process big data and processes it quickly through distributed processing. However, it is difficult to build and store such a system for every keyword, the store-then-search workflow makes real-time processing difficult, and processing complex events over heterogeneous data incurs extra cost. Existing Complex Event Processing (CEP) systems address part of this problem: a CEP system takes data from different sources and combines them, enabling complex event processing that is particularly useful for real-time processing of stream data. Nevertheless, unstructured data based on text from SNS and Internet articles is managed as plain text, so strings must be compared at every query execution, which degrades performance. We therefore extend a complex event processing system to manage unstructured data and process queries quickly. The data-complex function is extended to give strings a logical schema by converting string keywords into integers through filtering against a keyword set. In addition, by processing stream data in memory in real time within the CEP system, we reduce the time spent reading data back from disk during query processing.
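
The keyword-to-integer idea can be illustrated with a simple dictionary encoding: strings in the keyword set are mapped to integer codes once, so per-event predicates compare integers rather than strings. This is a minimal sketch under that assumption, with a hypothetical keyword set, not the paper's implementation.

```python
# Dictionary-encoding sketch: keyword strings become integer codes once,
# so stream predicates evaluate integer membership instead of strings.

KEYWORD_SET = ["earthquake", "flood", "typhoon"]          # hypothetical keywords
KEYWORD_TO_ID = {kw: i for i, kw in enumerate(KEYWORD_SET)}
WATCHED_IDS = {KEYWORD_TO_ID["earthquake"], KEYWORD_TO_ID["flood"]}

def encode_event(text: str):
    """Filter step: keep only keyword-set tokens, emit their integer codes."""
    return [KEYWORD_TO_ID[tok] for tok in text.lower().split()
            if tok in KEYWORD_TO_ID]

def matches(event_ids) -> bool:
    """Query step: integer set membership instead of string comparison."""
    return any(i in WATCHED_IDS for i in event_ids)

print(matches(encode_event("earthquake reported near the coast")))  # True
print(matches(encode_event("sunny weather all week")))               # False
```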

Design and Implementation of Efficient Web Service Data Processing Using a Hadoop-Based Big Data Processing Technique

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society, v.16 no.1, pp.726-734, 2015
  • Relational databases, which store data in structured form, are currently the most widely used approach to data management. In relational databases, however, service slows as the amount of data grows, because of constraints on the read and write operations used to store and query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configurations of hardware, CPU, memory, and network, to keep operating smoothly. In this paper, to improve web information services that slow down as relational databases grow, we implement a model that extracts large amounts of data quickly and safely by sending the data to the Hadoop Distributed File System (HDFS) and then unifying, reconstructing, and processing the HDFS files. We implemented our model in a web-based civil affairs system that stores image files, an irregular data-processing workload. The proposed system's data processing was found to be 0.4 seconds faster than that of a relational database system. We thus found that a Hadoop-based big data processing technique can support web information services that process large amounts of data, as conventional relational databases do. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that provide fast information processing for organizations that must process big data efficiently as their conventional relational databases grow.
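
For intuition, the unify-and-reconstruct step over HDFS files follows the usual map/shuffle/reduce flow; the sketch below simulates that flow in memory with fabricated records rather than running an actual Hadoop job.

```python
# In-memory simulation of the map/shuffle/reduce flow used to aggregate
# records stored in HDFS. Record layout and contents are hypothetical.

from itertools import groupby

records = ["image\t/civil/123.jpg", "text\t/civil/124.txt", "image\t/civil/125.jpg"]

# Map: emit (key, 1) pairs keyed by record type.
mapped = [(rec.split("\t")[0], 1) for rec in records]

# Shuffle: Hadoop sorts and groups pairs by key between the two phases.
mapped.sort(key=lambda kv: kv[0])

# Reduce: sum the counts per key.
for key, group in groupby(mapped, key=lambda kv: kv[0]):
    print(key, sum(v for _, v in group))
# image 2
# text 1
```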

Design and Implementation of the Extended SLDS for Real-time Location Based Services

  • Lee, Seung-Won;Kang, Hong-Koo;Hong, Dong-Suk;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society, v.7 no.2 s.14, pp.47-56, 2005
  • Recently, with the rapid development of mobile computing and wireless positioning technologies and the spread of the wireless Internet, LBS (Location Based Services), which exploit the location information of moving objects, are being deployed in many fields. To provide LBS efficiently, a location data server that periodically stores the location data of moving objects is required. Traditionally, GIS servers have been used to store this data, but they are not suitable for the purpose because they were designed to store static data. Therefore, in this paper, we design and implement an extended SLDS (Short-term Location Data Subsystem) for real-time location-based services. The extended SLDS builds on the SLDS, a subsystem of the GALIS (Gracefully Aging Location Information System) architecture, which was proposed as a cluster-based distributed computing architecture for managing the location data of moving objects. The extended SLDS guarantees real-time service through the TMO (Time-triggered Message-triggered Object) programming scheme and efficiently manages large volumes of location data by distributing moving-object data over multiple nodes. It also incurs little search and update overhead because it manages location data in main memory. In addition, our performance evaluation shows that the extended SLDS stores location data and distributes load more efficiently than the original SLDS.

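The load-distribution idea, keeping location data in main memory and spreading moving objects over multiple nodes, can be sketched as hash partitioning of object IDs across per-node in-memory tables. The node count and record layout below are assumptions for illustration; the real SLDS additionally relies on the TMO scheme for its timing guarantees.

```python
# Hash-partitioned in-memory location store sketch: each moving object's
# updates and lookups touch exactly one node's table. Values are illustrative.

NUM_NODES = 4
node_tables = [dict() for _ in range(NUM_NODES)]   # per-node in-memory stores

def node_for(object_id: int) -> int:
    return hash(object_id) % NUM_NODES

def update_location(object_id: int, x: float, y: float, ts: float) -> None:
    node_tables[node_for(object_id)][object_id] = (x, y, ts)

def current_location(object_id: int):
    return node_tables[node_for(object_id)].get(object_id)

update_location(42, 127.03, 37.50, 1000.0)
update_location(42, 127.04, 37.51, 1001.0)   # short-term store keeps the latest fix
print(current_location(42))                  # (127.04, 37.51, 1001.0)
```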

Pre-Filtering Based Post-Load Shedding Method for Improving Spatial Query Accuracy in GeoSensor Environments

  • Kim, Ho;Baek, Sung-Ha;Lee, Dong-Wook;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society, v.12 no.1, pp.18-27, 2010
  • In the u-GIS environment, GeoSensor applications require fusing dynamic data captured from various sensors with static 2D or 3D feature information. GeoSensors, the core of this environment, are scattered sporadically over a wide area and continuously produce data of arbitrary volume. As a result, the restricted memory of a DSMS (Data Stream Management System) can be exhausted. Many studies have addressed this problem, typically with one of three methods: random load shedding, semantic load shedding, and sampling. Random load shedding deletes randomly chosen data. Semantic load shedding prioritizes data and deletes the lowest-priority data first. Sampling computes a sampling rate using statistical operations and sheds load accordingly. However, these traditional methods achieve low accuracy because they do not consider spatial characteristics. In this paper, 'pre-filtering based post-load shedding' is proposed to improve the accuracy of spatial queries while limiting load shedding in a DSMS. The method first limits unnecessary growth of the stream queue with pre-filtering, and then performs post-load shedding that considers the data and their spatial status to preserve the accuracy of the result. The proposed method effectively reduces how often load shedding is performed and improves the accuracy of spatial queries.
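
As a rough illustration: pre-filtering drops tuples that fall outside every registered query region before they enter the stream queue, and post-load shedding, triggered when the queue still exceeds its limit, keeps the highest-value tuples. Query regions, the queue limit, and the priority values below are assumptions, and the real method's spatial-status heuristics are richer than this sketch.

```python
# Pre-filter, then shed: admit only spatially relevant tuples, and if the
# queue still overflows, evict the lowest-priority ones. Values are assumed.

from collections import deque

QUERY_REGIONS = [(0, 0, 50, 50), (60, 60, 100, 100)]   # hypothetical (x1, y1, x2, y2)
QUEUE_LIMIT = 4
queue = deque()

def in_some_region(x, y):
    return any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in QUERY_REGIONS)

def pre_filter(tup):
    """Admit only tuples that can contribute to some spatial query result."""
    x, y, priority = tup
    if in_some_region(x, y):
        queue.append(tup)

def post_load_shed():
    """If the queue still exceeds its limit, drop the lowest-priority tuples."""
    if len(queue) > QUEUE_LIMIT:
        survivors = sorted(queue, key=lambda t: t[2], reverse=True)[:QUEUE_LIMIT]
        queue.clear()
        queue.extend(survivors)

for tup in [(10, 10, 0.9), (200, 200, 0.5), (70, 70, 0.8),
            (20, 30, 0.3), (80, 90, 0.7), (40, 40, 0.6)]:
    pre_filter(tup)           # (200, 200, ...) never enters the queue
post_load_shed()              # the admitted tuple with priority 0.3 is shed
print(list(queue))
```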

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.3, pp.669-678, 2018
  • This study developed information technology infrastructure for building a driving-environment analysis platform using various kinds of big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed processing of big data was developed. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was implemented as a collection interface using Kafka, Flume, and Sqoop. The storage software divides data between the Hadoop Distributed File System and a Cassandra DB according to how the data is used. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying a grid-index method. The analysis software was built as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various kinds of driving-environment information. The performance evaluation derived the number of executors, the optimal memory capacity, and the number of cores for the development server, and showed computation performance superior to that of other cloud-computing environments.
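
The grid-index method used for spatial-unit matching can be sketched as mapping each coordinate to a fixed-size cell key and joining on that key. The cell size, origin, and cell-to-link table below are illustrative assumptions, not values from the paper.

```python
# Grid-index sketch: a coordinate maps to a (column, row) cell key, and
# sensor readings join road links through a precomputed cell -> link table.

CELL = 100.0          # cell edge length in meters (assumed)
ORIGIN = (0.0, 0.0)   # grid origin (assumed)

def grid_key(x: float, y: float) -> tuple[int, int]:
    """Map a coordinate to the (column, row) of its grid cell."""
    return int((x - ORIGIN[0]) // CELL), int((y - ORIGIN[1]) // CELL)

readings = [(120.0, 40.0, "speed=62"), (130.0, 55.0, "speed=58"),
            (900.0, 10.0, "speed=90")]
road_links = {(1, 0): "link-A", (9, 0): "link-B"}   # hypothetical cell -> link table

for x, y, payload in readings:
    print(road_links.get(grid_key(x, y)), payload)
# link-A speed=62
# link-A speed=58
# link-B speed=90
```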

SPQUSAR: A Large-Scale Qualitative Spatial Reasoner Using Apache Spark

  • Kim, Jongwhan;Kim, Jonghoon;Kim, Incheol
    • KIISE Transactions on Computing Practices, v.21 no.12, pp.774-779, 2015
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner using Apache Spark, an in-memory high-speed cluster computing environment that is effective for sequencing and iterating the component reasoning jobs. The proposed reasoner can not only check the integrity of a large-scale spatial knowledge base representing topological and directional relationships between spatial objects, but also expand the given knowledge base by deriving new facts highly efficiently. In general, qualitative reasoning over topological and directional relationships involves a large number of composition operations on every possible pair of disjunctive relations. The proposed reasoner improves computational efficiency by determining the minimal set of disjunctive relations needed for spatial reasoning and reducing the composition table to cover only that set. Additionally, to improve performance, the reasoner is designed to minimize disk I/O during distributed reasoning jobs, which are performed on a Hadoop cluster system. In experiments with both artificial and real spatial knowledge bases, the proposed Spark-based spatial reasoner showed higher performance than an existing MapReduce-based one.
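
The core reasoning step, composing disjunctive relations through a composition table, can be sketched with a toy algebra. The three-relation table below is invented for illustration and is not the reasoner's actual table; real algebras such as RCC-8 are much larger, which is why shrinking the table to a minimal relation set pays off.

```python
# Toy composition-table sketch: COMPOSE[(r, s)] lists the possible relations
# of x~z given x r y and y s z. This miniature algebra is made up.

COMPOSE = {
    ("in", "in"): {"in"},
    ("in", "overlap"): {"in", "overlap"},
    ("in", "disjoint"): {"disjoint"},
    ("overlap", "in"): {"in", "overlap"},
    ("overlap", "overlap"): {"in", "overlap", "disjoint"},
    ("overlap", "disjoint"): {"overlap", "disjoint"},
    ("disjoint", "in"): {"disjoint"},
    ("disjoint", "overlap"): {"overlap", "disjoint"},
    ("disjoint", "disjoint"): {"in", "overlap", "disjoint"},
}

def compose(rs: set, ss: set) -> set:
    """Compose two disjunctive relations: union over all member pairs."""
    out = set()
    for r in rs:
        for s in ss:
            out |= COMPOSE[(r, s)]
    return out

# Deriving a new fact: A in B and B disjoint C entail A disjoint C.
print(compose({"in"}, {"disjoint"}))              # {'disjoint'}
# Larger disjunctions compose to weaker (bigger) result sets.
print(compose({"in", "overlap"}, {"disjoint"}))   # {'disjoint', 'overlap'}
```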

Analysis of Factors for Korean Women's Cancer Screening through Hadoop-Based Public Medical Information Big Data Analysis

  • Park, Min-hee;Cho, Young-bok;Kim, So Young;Park, Jong-bae;Park, Jong-hyock
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.10, pp.1277-1286, 2018
  • In this paper, we provide an Apache Hadoop-based cloud environment for analyzing public medical-information big data, with flexible scalability of computing resources. The environment can quickly and flexibly extend storage, memory, and other resources as log data accumulates or grows over time. In addition, when real-time analysis of the accumulated unstructured log data is required, the system adopts a Hadoop-based analysis module to overcome the processing limits of existing analysis tools, providing fast and reliable parallel distributed processing of large volumes of log data. For the analysis itself, frequency analysis and chi-square tests were performed, and the variables significant at the 0.05 level (p < 0.05) were then entered into a multivariate logistic regression, which was performed for each of the three models.
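
As a small illustration of the statistical step, a chi-square test on a contingency table flags candidate variables for the subsequent logistic regression. The counts below are fabricated placeholders, not the study's data.

```python
# Chi-square test of independence on a 2x2 contingency table (fabricated
# counts), as a stand-in for the screening-factor significance tests.

from scipy.stats import chi2_contingency

#                 screened  not screened
observed = [[240, 160],    # e.g. one population group (hypothetical)
            [180, 220]]    # e.g. another group (hypothetical)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
# Variables with p < 0.05 would then enter the multivariate logistic regression.
```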

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.159-172, 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in many applications such as recommending web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused on improving CF performance, the theoretical justification of CF algorithms is lacking; we do not know much about how and why CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain gives customers poor-quality recommendations that quickly annoy them. Predicting the performance of CF algorithms in advance is therefore of practical importance. In this study, we propose an efficient approach to predicting CF performance, using Social Network Analysis (SNA) and an Artificial Neural Network (ANN) to build the prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed from an analysis of network topology measures: network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how much denser the social network is than barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model, and the recommendation accuracy, measured by the F1-measure, as the output variable. To evaluate the effectiveness of the ANN model, we used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 experimental samples were gathered, of which 40%, 40%, and 20% were used for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of the experiments. The input-variable measurement process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has 92.61% estimated accuracy and an RMSE of 0.0049. Our prediction model thus helps decide whether CF is useful for a given application with certain data characteristics.
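
The pipeline, topological measures in and predicted F1 out, can be sketched by computing two of the five measures with networkx and fitting a small scikit-learn neural network. The graphs and F1 targets below are random stand-ins, not the department-store data, and the paper's actual tooling was NetMiner, UCINET, and Clementine.

```python
# SNA-features-to-ANN sketch: network measures of random graphs regress
# against placeholder F1 scores. Data here is synthetic for illustration.

import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def features(g: nx.Graph) -> list[float]:
    # Two of the paper's five inputs; inclusiveness, centralization, and
    # Krackhardt's efficiency would be added to this vector the same way.
    return [nx.density(g), nx.average_clustering(g)]

graphs = [nx.gnp_random_graph(30, p, seed=i)
          for i, p in enumerate(rng.uniform(0.05, 0.4, 50))]
X = np.array([features(g) for g in graphs])
y = rng.uniform(0.2, 0.8, 50)            # placeholder F1 scores

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))              # predicted F1 for the first three networks
```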