• Title/Summary/Keyword: Hadoop Server

Adaptive User and Topic Modeling based Automatic TV Recommender System for Big Data Processing (빅 데이터 처리를 위한 적응적 사용자 및 토픽 모델링 기반 자동 TV 프로그램 추천시스템)

  • Kim, EunHui;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015 Summer Conference of the Korean Society of Broadcast Engineers / pp.195-198 / 2015
  • With the recent rapid growth in TV service subscribers and TV program content, the need for recommender systems suited to big data processing is increasing. For designing a recommender system based on users' implicit feedback data, this paper proposes a topic-modeling-based algorithm that efficiently learns from newly generated user viewing-history data without storing the accumulated history, and that can track changes in user preferences and TV program schedules over time. Distributed algorithms are unavoidable for big data processing; in prior work on parallel-distributed inference for topic models, the core challenge is synchronizing global-variable data while training on data partitioned across multiple machines. Algorithms have been proposed that account for Hadoop-based systems for parallel-distributed processing across multiple computers, server-client mediation, and fault tolerance, but synchronization delays that grow with data volume remain unavoidable due to limited network bandwidth. Accordingly, this paper clusters users for big data processing, performs global-data synchronization with the proposed algorithm per cluster, and runs inference using local data. The results show that recommendation performance improves further when a latent-topic assignment table is maintained per cluster and region for each TV-program viewing token, confirming the efficiency and soundness of the proposed recommender system design.
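
The synchronization bottleneck discussed in this abstract is exactly what online (incremental) topic-model training sidesteps: each new batch updates the model and is then discarded. Below is a minimal sketch of that idea using gensim's online LDA; it is an illustrative analogue, not the paper's algorithm, and the program tokens, batch contents, and topic count are hypothetical.

```python
# Sketch: incremental topic-model updates that learn from newly generated
# viewing histories without retaining the full accumulated log. Illustrative
# analogue via gensim's online LDA, not the paper's own algorithm.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Each "document" is one user's recent viewing history (program tokens).
initial_histories = [
    ["news9", "drama_a", "drama_a", "sports_live"],
    ["kids_show", "animation_b", "kids_show"],
]
dictionary = Dictionary(initial_histories)
corpus = [dictionary.doc2bow(h) for h in initial_histories]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10)

# When a new batch arrives, update the model in place and discard the raw
# batch afterwards -- no cumulative storage needed. (In this sketch the
# vocabulary is fixed at initialization; unseen tokens are ignored.)
new_batch = [["news9", "sports_live"], ["drama_a", "kids_show"]]
lda.update([dictionary.doc2bow(h) for h in new_batch])

# A per-user topic distribution would drive the recommendation step.
print(lda.get_document_topics(dictionary.doc2bow(["news9", "drama_a"])))
```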

The Distributed Encryption Processing System for Large Capacity Personal Information based on MapReduce (맵리듀스 기반 대용량 개인정보 분산 암호화 처리 시스템)

  • Kim, Hyun-Wook;Park, Sung-Eun;Euh, Seong-Yul
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 18, No. 3 / pp.576-585 / 2014
  • Collecting and utilizing huge amounts of personal data has caused severe security issues, such as the leakage of personal information. Encryption algorithms have been widely adopted to protect collected personal information against such problems. In this paper, a novel MapReduce-based algorithm is proposed for encrypting such private information. Furthermore, a test environment was built to verify the performance of the distributed encryption processing method. Test results show that average processing time improved by 15.3% compared to token-server encryption processing and by 3.13% compared to parallel processing.
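
To make the map side concrete, here is a minimal sketch of a Hadoop Streaming mapper that encrypts personal-information fields as records stream through. The tab-separated field layout, the environment-variable key handling, and the use of Fernet (an AES-based scheme) are assumptions for illustration, not the paper's actual algorithm; such a script would be launched with the standard hadoop-streaming jar.

```python
#!/usr/bin/env python3
# Sketch: a Hadoop Streaming mapper that encrypts personal-information
# fields per input record. Field layout (id, name, phone) is hypothetical.
import os
import sys

from cryptography.fernet import Fernet

# In practice the key would come from a key-management service, not an
# environment variable; this is for illustration only.
fernet = Fernet(os.environ["PII_AES_KEY"].encode())

for line in sys.stdin:
    record_id, name, phone = line.rstrip("\n").split("\t")
    # Encrypt the sensitive fields; keep the record id as the output key
    # so downstream reducers (or the HDFS output) stay keyed by record.
    enc_name = fernet.encrypt(name.encode()).decode()
    enc_phone = fernet.encrypt(phone.encode()).decode()
    print(f"{record_id}\t{enc_name}\t{enc_phone}")
```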

Recommendation of Best Empirical Route Based on Classification of Large Trajectory Data (대용량 경로데이터 분류에 기반한 경험적 최선 경로 추천)

  • Lee, Kye Hyung;Jo, Yung Hoon;Lee, Tea Ho;Park, Heemin
    • KIISE Transactions on Computing Practices / Vol. 21, No. 2 / pp.101-108 / 2015
  • This paper presents the implementation of a system that recommends empirical best routes based on the classification of large trajectory data. As location-based services become widely used, we expect location and trajectory data to grow into big data, from which the best empirical routes can be extracted. Large trajectory data is clustered into groups of similar routes using the Hadoop MapReduce framework. The clustered route groups are stored and managed by a DBMS, which supports rapid responses to end-user requests. We aim to find the best routes based on collected real data, not the ideal shortest path on a map. We implemented 1) an Android application that collects trajectories from users, 2) an Apache Hadoop MapReduce program that clusters large trajectory data, and 3) a service application that queries start-destination pairs from a web server and displays the recommended routes on mobile phones. We validated our approach using real data collected over five days and compared the results with commercial navigation systems. Experimental results show that the empirical best route is better than the routes recommended by commercial navigation systems.
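
The grouping-then-selection logic can be illustrated in miniature. The sketch below groups trajectories by a start/destination grid-cell pair and picks the fastest observed route per group; the grid resolution, record layout, and "fastest = best" criterion are simplifying assumptions, whereas the paper performs this clustering as a Hadoop MapReduce job over large trajectory repositories.

```python
# Sketch: group trajectories sharing a start/destination cell pair, then
# pick the empirically best (fastest) route in each group.
from collections import defaultdict

def cell(lat, lon, grid=0.01):
    """Snap a coordinate to a coarse grid cell so nearby points group together."""
    return (round(lat / grid), round(lon / grid))

# (trajectory points, travel time in seconds); the first and last points
# are the start and destination. Values are hypothetical.
trajectories = [
    ([(37.5665, 126.9780), (37.5700, 126.9830), (37.5796, 126.9770)], 840),
    ([(37.5663, 126.9782), (37.5712, 126.9900), (37.5794, 126.9768)], 910),
]

# "Map" step: emit (start_cell, dest_cell) -> (travel_time, route).
groups = defaultdict(list)
for points, seconds in trajectories:
    key = (cell(*points[0]), cell(*points[-1]))
    groups[key].append((seconds, points))

# "Reduce" step: the empirical best route per group is the fastest observed.
best_routes = {key: min(routes)[1] for key, routes in groups.items()}
for key, route in best_routes.items():
    print(key, "->", route)
```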

Implement on Search Machine using Open Source Framework (오픈 소스 프레임워크를 활용한 검색엔진 구현)

  • Song, Hyun-Ok;Kim, A-Yong;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 19, No. 3 / pp.552-557 / 2015
  • With the development of IT and the increased use of smart devices, the amount of data produced and consumed on the internet has grown rapidly. Although this makes information retrieval technology increasingly important, implementing it is difficult because it requires a great deal of background knowledge. The emergence of Lucene, however, makes it possible to implement a search engine even without deep expertise in search technology. In this paper, we propose implementing a search engine using a framework stack developed on top of Lucene. The proposed framework uses Hadoop, Nutch, Solr, and ZooKeeper to guarantee distributed processing, distributed storage, and high availability in a server environment.
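
Once Nutch has crawled pages into a SolrCloud index coordinated by ZooKeeper, queries go through Solr's standard HTTP search API. A minimal sketch follows, assuming a hypothetical host, collection name, and schema fields.

```python
# Sketch: querying the Solr layer of such a stack over its standard HTTP
# search API. Host, collection, and fields are hypothetical; Nutch would
# populate the index and ZooKeeper would coordinate the SolrCloud nodes.
import requests

SOLR_URL = "http://solr.example.com:8983/solr/webpages/select"

params = {
    "q": "content:하둡",   # full-text query against the crawled page body
    "fl": "id,title,url",  # fields to return
    "rows": 10,
    "wt": "json",
}
resp = requests.get(SOLR_URL, params=params, timeout=10)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc.get("title"), doc.get("url"))
```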

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 21, No. 1 / pp.17-28 / 2017
  • Recently, research on data has been active because data has become increasingly valuable. Web crawlers, programs for data collection, have drawn attention because they can be applied in various fields. A web crawler can be defined as a tool that traverses web servers in an automated manner, analyzing web pages and collecting URLs. For processing big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and have performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information matching a user's keyword from the data the crawler has gathered. If search engines adopt a Spark-based web crawler instead of a traditional MapReduce-based one, data collection becomes more rapid.
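
The in-memory iteration that the abstract contrasts with MapReduce looks roughly like the following PySpark sketch: a frontier RDD of URLs is expanded in parallel, hop by hop, without materializing intermediate results to disk. The seed URL is a placeholder, and politeness, cross-hop deduplication, and robots.txt handling are omitted.

```python
# Sketch: the core of a Spark-based crawler -- fetch a frontier of URLs in
# parallel and extract outgoing links, caching intermediate RDDs in memory
# instead of spilling each round to disk as MapReduce would.
import re
from urllib.request import urlopen

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-crawler-sketch").getOrCreate()
sc = spark.sparkContext

def fetch_links(url):
    """Download one page and return the absolute links found in it."""
    try:
        html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
    except Exception:
        return []
    return re.findall(r'href="(https?://[^"]+)"', html)

frontier = sc.parallelize(["https://example.com/"])
for _ in range(2):  # two crawl hops, for illustration
    frontier = frontier.flatMap(fetch_links).distinct().cache()

print(frontier.take(10))
spark.stop()
```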

Design and Implementation of Efficient Storage and Retrieval Technology of Traffic Big Data (교통 빅데이터의 효율적 저장 및 검색 기술의 설계와 구현)

  • Kim, Ki-su;Yi, Jae-Jin;Kim, Hong-Hoi;Jang, Yo-lim;Hahm, Yu-Kun
    • The Journal of Bigdata / Vol. 4, No. 2 / pp.207-220 / 2019
  • Recent developments in information and communication technology have enabled sensor-based data to be deployed for real-time services. In Korea, the Korea Transportation Safety Authority collects the driving information of all commercial vehicles through fitted digital tachographs (DTG). The information gathered by DTGs can be utilized in various ways in the field of transportation. Notably, in autonomous driving, real-time analysis of this information can be used to prevent or respond to dangerous driving behavior. However, a traditional database system is limited in its ability to process such a large amount of data at a level suitable for real-time services. Due to this technical limitation, processing large quantities of traffic big data for real-time analysis of commercial vehicle operation had never been attempted in Korea. To solve this problem, this study optimized a new database server system and confirmed that a real-time service is possible. The constructed database system is expected to be used to secure the base data needed to establish digital twin and autonomous driving environments.
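
The paper does not name its storage engine, so the following is only one plausible reading of an "optimized database server system" for this workload: time-ordered DTG records in HBase (via the happybase client), with a row key that makes per-vehicle time-range scans contiguous. The table name, column layout, and key scheme are all assumptions for illustration.

```python
# Sketch: one plausible layout for time-ordered DTG records in a NoSQL
# store (HBase via happybase). All names and the row-key scheme are
# assumptions; the paper does not specify its storage engine.
import happybase

conn = happybase.Connection("hbase.example.com")
table = conn.table("dtg_records")

# Row key: zero-padded vehicle id + event time -> rows sort by vehicle,
# then chronologically, so a time-range query becomes a contiguous scan.
row_key = b"veh00042#20191001T083015"
table.put(row_key, {
    b"d:speed_kmh": b"87.5",
    b"d:rpm": b"2300",
    b"d:brake": b"0",
})

# Retrieve one vehicle's records for a morning rush-hour window.
for key, data in table.scan(row_start=b"veh00042#20191001T0700",
                            row_stop=b"veh00042#20191001T0900"):
    print(key, data[b"d:speed_kmh"])
```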

Service Delivery Time Improvement using HDFS in Desktop Virtualization (데스크탑 가상화에서 HDFS를 이용한 서비스 제공시간 개선 연구)

  • Lee, Wan-Hee;Lee, Bong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 16, No. 5 / pp.913-921 / 2012
  • The current PC-based desktop environment is being converted into a server-based virtual desktop environment due to security, mobility, and lower upgrade costs. In this paper, a desktop virtualization system is implemented using an open source-based cloud computing platform and hypervisor, and applied to the virtualization of university computer labs. To reduce the image transfer time, we propose a solution using HDFS. In addition, an image management structure needed for desktop virtualization is designed, implemented, and applied to a real computer lab accommodating 30 PCs. The performance of the proposed system is evaluated in various aspects, including implementation cost, power saving rate, license cost reduction, and management cost. The experimental results show that the proposed system considerably reduces the image transfer time for the desktop service.
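
As a rough illustration of HDFS-based image distribution, the sketch below stages a golden desktop image in HDFS (via the Python WebHDFS client) so that lab hosts can pull its blocks from several datanodes in parallel rather than from a single image server. The endpoint, paths, and replication factor are hypothetical.

```python
# Sketch: staging a golden desktop image in HDFS for parallel distribution
# to virtualization hosts. Endpoint, paths, and replication are assumptions.
from hdfs import InsecureClient

client = InsecureClient("http://namenode.example.com:9870", user="vdi")

# Upload the master image once; HDFS replicates its blocks across nodes.
client.upload("/vdi/images/win10-lab.qcow2", "/srv/images/win10-lab.qcow2",
              overwrite=True)
# A higher replication factor spreads read load when 30 lab PCs boot at once.
client.set_replication("/vdi/images/win10-lab.qcow2", replication=3)

# Each host downloads the image at provisioning time.
client.download("/vdi/images/win10-lab.qcow2", "/var/lib/libvirt/images/",
                overwrite=True)
```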

A Study on Big Data Based Method of Patient Care Analysis (빅데이터 기반 환자 간병 방법 분석 연구)

  • Park, Ji-Hun;Hwang, Seung-Yeon;Yun, Bum-Sik;Choe, Su-Gil;Lee, Don-Hee;Kim, Jeong-Joon;Moon, Jin-Yong;Park, Kyung-won
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 20, No. 3 / pp.163-170 / 2020
  • With the development of information and communication technologies, the volume of data is increasing exponentially, raising interest in big data. As related technologies have matured, big data is being collected, stored, processed, analyzed, and utilized in many fields. Big data analytics in the health care sector, in particular, is receiving much attention because it can have a huge social and economic impact. It is predicted that big data technology can be used to analyze patients' diagnostic data and reduce the money spent on simple hospital care. Therefore, in this thesis, patient data is analyzed to provide close care guidelines to patients who are unable to go to the hospital and to caregivers who lack medical expertise. First, the collected patient data is stored in HDFS, then processed and classified using R, a big data processing and analysis tool, in the Hadoop environment. Finally, the results are visualized on a web server using R Shiny, which makes various functions of R available on the web.
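
The paper performs this processing step with R on Hadoop and visualizes the results with R Shiny; as a language-neutral illustration of the classification idea, here is a minimal Python/pandas analogue with hypothetical fields and a toy rule set, reading a CSV exported from HDFS.

```python
# Sketch: the flavor of the analysis step, as a Python/pandas analogue of
# the paper's R workflow. Fields and rules are hypothetical.
import pandas as pd

# e.g. fetched beforehand with: hdfs dfs -get /care/patients.csv .
df = pd.read_csv("patients.csv")  # assumed columns: age, temp_c, sys_bp, diagnosis

def care_level(row):
    """Toy rule-based grouping standing in for the paper's classification."""
    if row["temp_c"] >= 38.5 or row["sys_bp"] >= 160:
        return "consult-doctor"
    if row["temp_c"] >= 37.5:
        return "monitor-closely"
    return "home-care"

df["care_level"] = df.apply(care_level, axis=1)

# Aggregate guideline counts per diagnosis -- the kind of summary the web
# dashboard would visualize.
print(df.groupby(["diagnosis", "care_level"]).size())
```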

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / Vol. 14, No. 6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for massive amounts of unstructured log data, together with the many functions needed to categorize and analyze the stored data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, databases with strict schemas cannot easily expand across nodes when rapidly growing data must be distributed to multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log data insert performance evaluations for various chunk sizes.
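
A minimal sketch of how the MongoDB module might store and query schema-free log documents with pymongo follows; the connection string, database and collection names, and document fields are hypothetical, and in the paper the collection is auto-sharded across nodes as volume grows.

```python
# Sketch: storing and querying schema-free log documents with pymongo.
# Names and fields are hypothetical.
from datetime import datetime, timezone

from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://mongos.example.com:27017")
logs = client["banklogs"]["events"]

# Documents in the same collection need not share a schema.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "branch": "seoul-01",
     "kind": "transfer", "amount": 150000},
    {"ts": datetime.now(timezone.utc), "branch": "busan-03",
     "kind": "login_fail", "client_ip": "10.1.2.3"},
])

# Index on timestamp so per-interval aggregation (the graph module's
# workload) stays fast.
logs.create_index([("ts", ASCENDING)])

# All failed logins in the latest aggregation window, newest first.
for doc in logs.find({"kind": "login_fail"}).sort("ts", -1).limit(100):
    print(doc["branch"], doc.get("client_ip"))
```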