• Title/Summary/Keyword: hadoop

Search Results: 397

Rapid Management Mechanism Against Harmful Materials of Agri-Food Based on Big Data Analysis (빅 데이터 분석 기반 농 식품 위해인자 신속관리 방법)

  • Park, Hyeon;Kang, Sung-soo;Jeong, Hoon;Kim, Se-Han
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.6
    • /
    • pp.1166-1174
    • /
    • 2015
  • There have been attempts to prevent the spread of harmful materials in agri-food through bar-code-based record tracking of products, partial tracking of agri-food storage and delivery vehicles, and intuition-based temperature control. However, these attempts suffered from insufficient information, information distortion, and the independent information networks of individual distribution companies, making it difficult to prevent the spread of harmful materials over the life cycle of agri-food. To solve these problems, we propose a mechanism that uses big data processing to perform context awareness and to predict and track harmful materials in agri-food.

Structural Change Detection Technique for RDF Data in MapReduce (맵리듀스에서의 구조적 RDF 데이터 변경 탐지 기법)

  • Lee, Taewhi;Im, Dong-Hyuk
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.8
    • /
    • pp.293-298
    • /
    • 2014
  • Detecting and understanding changes between RDF data sets is crucial for evolution, synchronization, and versioning systems on the web of data. However, existing change detection approaches remain unsatisfactory: they neither handle large-scale RDF data nor produce accurate RDF deltas. In this paper, we propose a scalable and effective change detection technique using the MapReduce framework, which has been widely used to process and analyze large volumes of data. In particular, we focus on structure-based change detection, which adopts a strategy for comparing blank nodes in RDF data. Our method consists of two MapReduce jobs. The first job partitions the triples containing blank nodes by grouping triples that share the same blank node ID and computes the incoming path to each blank node. The second job partitions the triples by path and matches blank nodes using the Hungarian method. Experiments show that our approach is more accurate and effective than the previous approach.
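  • A minimal sketch of the two-stage idea described above, in plain Python rather than MapReduce; the toy triples, the symmetric-difference cost, and SciPy's linear_sum_assignment as the Hungarian solver are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict
from scipy.optimize import linear_sum_assignment  # Hungarian-method solver

# Toy RDF triples: (subject, predicate, object); IDs starting with "_:" are blank nodes.
old_triples = [("ex:Alice", "ex:address", "_:b1"), ("_:b1", "ex:city", "Seoul")]
new_triples = [("ex:Alice", "ex:address", "_:x7"), ("_:x7", "ex:city", "Busan")]

def group_by_blank_node(triples):
    """First stage: group the triples touching each blank node and record
    the incoming (subject, predicate) path that reaches it."""
    groups, paths = defaultdict(list), {}
    for s, p, o in triples:
        for node in (s, o):
            if node.startswith("_:"):
                groups[node].append((s, p, o))
        if o.startswith("_:"):
            paths[o] = (s, p)            # incoming path to the blank node
    return groups, paths

def match_blank_nodes(old, new):
    """Second stage: pair blank nodes that share an incoming path, choosing the
    pairing that minimizes the number of differing triples (Hungarian assignment)."""
    old_groups, old_paths = group_by_blank_node(old)
    new_groups, new_paths = group_by_blank_node(new)
    old_ids, new_ids = list(old_groups), list(new_groups)
    cost = [[len(set(old_groups[a]) ^ set(new_groups[b]))
             if old_paths.get(a) == new_paths.get(b) else 10**6
             for b in new_ids] for a in old_ids]
    rows, cols = linear_sum_assignment(cost)
    return {old_ids[r]: new_ids[c] for r, c in zip(rows, cols)}

print(match_blank_nodes(old_triples, new_triples))   # {'_:b1': '_:x7'}
```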

Data Access Frequency based Data Replication Method using Erasure Codes in Cloud Storage System (클라우드 스토리지 시스템에서 데이터 접근빈도와 Erasure Codes를 이용한 데이터 복제 기법)

  • Kim, Ju-Kyeong;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.85-91
    • /
    • 2014
  • A cloud storage system uses a distributed file system to store and manage data. Traditional distributed file systems store three replicas of each data block so that data lost in a disk failure can be restored. However, this replication scheme increases storage consumption and causes extra I/O operations during the replication process. In this paper, we propose a data replication method using erasure codes in a cloud storage system to improve storage space efficiency and I/O performance. In particular, the proposed method reduces the number of replicas according to data access frequency while maintaining the same data recovery capability by using erasure codes. Experimental results show that the proposed method outperforms HDFS by 40% in storage efficiency, 11% in read throughput, and 10% in write throughput.
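  • A minimal sketch of the placement idea, assuming a hypothetical access-frequency threshold and a toy XOR parity in place of a real erasure code such as Reed-Solomon; the HDFS integration itself is not shown:

```python
from functools import reduce

REPLICATION_FACTOR = 3        # HDFS-style triplication kept only for hot data
HOT_ACCESS_THRESHOLD = 100    # hypothetical accesses-per-day cutoff

def xor_parity(stripes):
    """Toy single-parity erasure code: any one lost stripe can be rebuilt by
    XOR-ing the parity with the surviving stripes (stand-in for Reed-Solomon)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*stripes))

def place_block(block, accesses_per_day, k=4):
    """Hot blocks keep full replicas for fast reads; cold blocks are split into
    k data stripes plus one parity stripe (~1.25x overhead instead of 3x)."""
    if accesses_per_day >= HOT_ACCESS_THRESHOLD:
        return {"scheme": "replicate", "copies": [block] * REPLICATION_FACTOR}
    stripe_len = -(-len(block) // k)                   # ceiling division
    stripes = [block[i * stripe_len:(i + 1) * stripe_len].ljust(stripe_len, b"\0")
               for i in range(k)]
    return {"scheme": "erasure", "stripes": stripes, "parity": xor_parity(stripes)}

layout = place_block(b"cold-archive-block-payload", accesses_per_day=3)
print(layout["scheme"], len(layout["stripes"]), "data stripes + 1 parity stripe")
```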

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.17-28
    • /
    • 2017
  • Recently, studies on data have been actively conducted as the value of data has grown. Web crawlers, programs for data collection, have attracted attention because they can be applied in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For big data processing, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and have performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information matching a user's keyword from the data gathered by the crawler. If search engines adopt a Spark-based web crawler instead of a traditional MapReduce-based one, data collection can be performed more rapidly.
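  • A minimal PySpark sketch of the crawl-then-index step that the abstract contrasts with MapReduce; the seed URLs and the regular-expression tokenizer are placeholders, not the paper's implementation:

```python
from pyspark.sql import SparkSession
import re, urllib.request

spark = SparkSession.builder.appName("toy-crawler-index").getOrCreate()
sc = spark.sparkContext

seed_urls = ["https://example.com/", "https://example.org/"]   # hypothetical frontier

def fetch(url):
    """Fetch one page; return (url, html) or (url, "") on failure."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return url, resp.read().decode("utf-8", errors="ignore")
    except OSError:
        return url, ""

# Fetch pages in parallel across executors and keep the results in memory,
# so later stages (link extraction, indexing) reuse them without re-reading disk.
pages = sc.parallelize(seed_urls).map(fetch).cache()

# Inverted index: keyword -> list of URLs containing it.
inverted_index = (pages
    .flatMap(lambda kv: [(w.lower(), kv[0]) for w in re.findall(r"[A-Za-z]+", kv[1])])
    .distinct()
    .groupByKey()
    .mapValues(list))

print(inverted_index.take(5))
spark.stop()
```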

Design and Implementation of Efficient Storage and Retrieval Technology of Traffic Big Data (교통 빅데이터의 효율적 저장 및 검색 기술의 설계와 구현)

  • Kim, Ki-su;Yi, Jae-Jin;Kim, Hong-Hoi;Jang, Yo-lim;Hahm, Yu-Kun
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.207-220
    • /
    • 2019
  • Recent developments in information and communication technology have enabled the deployment of sensor-based data for real-time services. In Korea, the Korea Transportation Safety Authority collects driving information from all commercial vehicles through fitted digital tachographs (DTG). The information gathered by DTGs can be utilized in various ways in the field of transportation. Notably, in autonomous driving, real-time analysis of this information can be used to prevent or respond to dangerous driving behavior. However, a traditional database system cannot process such a large amount of data at a level suitable for real-time services. Because of this technical limitation, processing large quantities of traffic big data for real-time analysis of commercial vehicle operation information had never been attempted in Korea. To solve this problem, this study optimized a new database server system and confirmed that a real-time service is possible. The constructed database system is expected to be used to secure the base data needed to establish digital twin and autonomous driving environments.
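  • The abstract does not detail the storage design, so the following is only a toy illustration of one common approach for such workloads: keying DTG records by vehicle and time bucket so that a near-real-time range query scans only the buckets it needs. All names, bucket sizes, and record fields are hypothetical:

```python
import bisect
from collections import defaultdict

class DtgStore:
    """In-memory stand-in for a time-partitioned store of DTG driving records."""
    def __init__(self, bucket_seconds=3600):
        self.bucket_seconds = bucket_seconds
        # (vehicle_id, time_bucket) -> sorted list of (timestamp, record)
        self.buckets = defaultdict(list)

    def insert(self, vehicle_id, ts, record):
        key = (vehicle_id, ts // self.bucket_seconds)
        bisect.insort(self.buckets[key], (ts, record))

    def query(self, vehicle_id, start_ts, end_ts):
        """Return one vehicle's records in [start_ts, end_ts), scanning only
        the time buckets that overlap the interval."""
        out = []
        for b in range(start_ts // self.bucket_seconds,
                       end_ts // self.bucket_seconds + 1):
            rows = self.buckets.get((vehicle_id, b), [])
            lo = bisect.bisect_left(rows, (start_ts,))
            hi = bisect.bisect_left(rows, (end_ts,))
            out.extend(rows[lo:hi])
        return out

store = DtgStore()
store.insert("truck-001", 1_700_000_050, {"speed_kmh": 92, "rpm": 2100})
store.insert("truck-001", 1_700_003_700, {"speed_kmh": 110, "rpm": 2600})
print(store.query("truck-001", 1_700_000_000, 1_700_004_000))
```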

Study on the Application Methods of Big Data at a Corporation -Cases of A and Y corporation Big Data System Projects- (기업의 빅데이터 적용방안 연구 -A사, Y사 빅데이터 시스템 적용 사례-)

  • Lee, Jae Sung;Hong, Sung Chan
    • Journal of Internet Computing and Services
    • /
    • v.15 no.1
    • /
    • pp.103-112
    • /
    • 2014
  • In recent years, the rapid diffusion of smart devices and the growth of internet usage and social media have led to the constant production of huge amounts of valuable data, including personal information, buying patterns, and location information. IT and production infrastructure have also started to produce their own data with the rise of M2M (Machine-to-Machine) communication and the IoT (Internet of Things). This study examines the effects of applying structured and unstructured big data in various business circumstances and aims to identify value creation methods for corporations through case studies of structured and unstructured big data. The results demonstrate that corporations seeking an optimized big data utilization plan can maximize the value they create by utilizing unstructured and structured big data generated both inside and outside the corporation.

Matrix-based Filtering and Load-balancing Algorithm for Efficient Similarity Join Query Processing in Distributed Computing Environment (분산 컴퓨팅 환경에서 효율적인 유사 조인 질의 처리를 위한 행렬 기반 필터링 및 부하 분산 알고리즘)

  • Yang, Hyeon-Sik;Jang, Miyoung;Chang, Jae-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.7
    • /
    • pp.667-680
    • /
    • 2016
  • As distributed computing platforms such as Hadoop MapReduce have developed, conventional query processing techniques, originally designed for a single machine, need to be performed efficiently in distributed computing environments. In particular, studies have been conducted on similarity join query processing in distributed computing environments, where a similarity join retrieves all data pairs with high similarity between two given data sets. However, the existing similarity join query processing schemes for distributed computing environments suffer from skewed computing load among clusters because they consider only the data transmission cost. In this paper, we propose a matrix-based load-balancing algorithm for efficient similarity join query processing in a distributed computing environment. To balance the load across clusters uniformly, the proposed algorithm estimates the expected computing cost using a matrix and generates partitions based on the estimated cost. In addition, it reduces computing load by filtering out data that are not used in query processing in each cluster. Finally, our performance evaluation shows that the proposed algorithm outperforms the existing scheme in query processing performance.
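  • A minimal sketch of the matrix-based idea, assuming a simple record-length filter and a product-of-sizes cost estimate as stand-ins for the paper's filtering and cost model; the greedy assignment only approximates partition generation:

```python
import numpy as np

# Hypothetical partitions of two string data sets, bucketed by record length.
R = {3: ["cat", "dog"], 4: ["lion", "bear"], 7: ["giraffe"]}
S = {3: ["bat", "cow"], 5: ["tiger", "zebra"], 7: ["dolphin"]}
LENGTH_TOLERANCE = 1   # pairs whose lengths differ more than this cannot be similar

# Cost matrix: estimated comparisons for each (R-bucket, S-bucket) cell,
# with impossible cells filtered to zero before any data is shipped.
r_keys, s_keys = sorted(R), sorted(S)
cost = np.array([[len(R[r]) * len(S[s]) if abs(r - s) <= LENGTH_TOLERANCE else 0
                  for s in s_keys] for r in r_keys])

def assign_cells(cost, workers=2):
    """Greedy longest-processing-time assignment: give each surviving cell
    to the currently least-loaded worker, approximating uniform load."""
    loads = [0] * workers
    plan = [[] for _ in range(workers)]
    cells = sorted(((cost[i, j], i, j) for i in range(cost.shape[0])
                    for j in range(cost.shape[1]) if cost[i, j] > 0), reverse=True)
    for c, i, j in cells:
        w = loads.index(min(loads))
        loads[w] += c
        plan[w].append((r_keys[i], s_keys[j]))
    return plan, loads

plan, loads = assign_cells(cost)
print("per-worker cells:", plan, "estimated loads:", loads)
```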

Implementation of Data processing of the High Availability for Software Architecture of the Cloud Computing (클라우드 서비스를 위한 고가용성 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup;Park, Junho;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.2
    • /
    • pp.32-43
    • /
    • 2013
  • These days, more and more IT research institutions foresee cloud services as the predominant IT service of the near future, and leading IT vendors already provide such services. Regardless of the physical location of the service and the system environment, a cloud service can provide users with storage, data, and software. On the other hand, cloud services also face challenges. Although cloud computing allows IT resources to be utilized freely without hardware constraints, availability remains a problem to be solved. Hence, this paper addresses the prerequisites of cloud computing for distributed file systems, the open source Hadoop distributed file system, in-memory database technology, and high availability database systems. The authors also present a high availability, large-scale distributed data management architecture from the cloud service perspective, based on the distributed file systems currently used in the cloud computing market.

Distributed File Systems Architectures of the Large Data for Cloud Data Services (클라우드 데이터 서비스를 위한 대용량 데이터 처리 분산 파일 아키텍처 설계)

  • Lee, Byoung-Yup;Park, Jun-Ho;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.2
    • /
    • pp.30-39
    • /
    • 2012
  • These days, several IT vendors have already entered the cloud computing market, and they are expanding their territory in it based on their hardware and software technology and on collaboration between hardware and software vendors. The distributed file system is a core technology for cloud computing: it must guarantee performance and safety for high-level service requests as well as data storage. This paper introduces distributed file systems for cloud computing and related technologies such as in-memory databases, the Hadoop file system, and high availability database systems. It also defines a reference architecture for very large scale distributed processing, based on the kinds of distributed file systems and technologies used in the cloud computing market.

DETECTING VARIABILITY IN ASTRONOMICAL TIME SERIES DATA: APPLICATIONS OF CLUSTERING METHODS IN CLOUD COMPUTING ENVIRONMENTS

  • Shin, Min-Su;Byun, Yong-Ik;Chang, Seo-Won;Kim, Dae-Won;Kim, Myung-Jin;Lee, Dong-Wook;Ham, Jae-Gyoon;Jung, Yong-Hwan;Yoon, Jun-Weon;Kwak, Jae-Hyuck;Kim, Joo-Hyun
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.36 no.2
    • /
    • pp.131.1-131.1
    • /
    • 2011
  • We present applications of clustering methods to detect variability in massive astronomical time series data. Focusing on the variability of bright stars, we use clustering methods to separate possible variable sources from other time series data, which include intrinsically non-variable sources and data with common systematic patterns. We have already finished the analysis of the Northern Sky Variability Survey data, which includes about 16 million light curves, and present candidate variable sources with their associations to other data at different wavelengths. We also apply our clustering method to the light curves of bright objects in the SuperWASP Data Release 1. For the analysis of the SuperWASP data, we exploit an elastically configurable cloud computing environment that the KISTI Supercomputing Center is deploying. Two quite different configurations are incorporated in our cloud computing test bed. One system uses Hadoop distributed processing with its distributed file system, exploiting data locality; the other adopts Condor and the Lustre network file system. We present test results, considering the performance of processing a large number of light curves and of finding clusters of variable and non-variable objects.
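  • An illustrative sketch of the clustering step on synthetic light curves, using scikit-learn's KMeans on two simple variability features; the feature choices, cluster count, and synthetic data are assumptions, not the pipeline used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def variability_features(flux):
    """Two toy per-light-curve features: scatter relative to point-to-point noise,
    and the lag-1 correlation of residuals (both small for non-variable sources)."""
    resid = flux - flux.mean()
    lag1 = np.mean(resid[:-1] * resid[1:]) / (resid.std() ** 2 + 1e-12)
    return [flux.std() / (np.median(np.abs(np.diff(flux))) + 1e-12), lag1]

# Synthetic stand-ins: 200 constant stars with noise, 20 sinusoidal variables.
t = np.linspace(0, 10, 200)
curves = [1.0 + 0.01 * rng.normal(size=t.size) for _ in range(200)]
curves += [1.0 + 0.1 * np.sin(2 * np.pi * t / rng.uniform(0.5, 3)) +
           0.01 * rng.normal(size=t.size) for _ in range(20)]

X = np.array([variability_features(np.asarray(c)) for c in curves])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# In this toy setup the smaller cluster is the variable-candidate group.
candidate = min(set(labels), key=list(labels).count)
print("variable candidates found:", int(np.sum(labels == candidate)), "of", len(curves))
```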
