• Title/Abstract/Keyword: Big Data Cluster

Search results: 208 items

실내 환경 모니터링을 위한 빅데이터 클러스터 설계 및 구현 (Design and Implementation of Big Data Cluster for Indoor Environment Monitoring)

  • 전병찬;고민구
    • 디지털산업정보학회논문지
    • /
    • Vol. 13, No. 2
    • /
    • pp.77-85
    • /
    • 2017
  • Due to the expansion of living space caused by population growth and lifestyle changes, most people spend their time indoors except when traveling. Indoor environmental change is therefore very important: it affects people's health and the economical use of resources. Yet most people do not recognize the importance of the indoor environment. A monitoring system for maintaining and managing the indoor environment systematically is needed, and big data clusters should be used to store and manage the large volume of sensor data collected from many spaces. In this paper, we design and implement a big data cluster for indoor environment monitoring that stores sensor data and monitors each unit of a large building. The cluster-based system is built on a distributed file system with Hadoop and HBase for big data processing and analysis. Various sensor data are collected and stored, and effective indoor environment management and improved health through monitoring are expected.
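
A rough sketch of how such sensor readings could be laid out in HBase for this kind of monitoring cluster is shown below, using the happybase Python client. The table name, column family, and row-key scheme (space ID plus a reversed timestamp so that recent readings scan first) are illustrative assumptions, not the schema used in the paper.

```python
# Sketch: storing indoor-environment sensor readings in HBase via happybase.
# Assumed schema: table "indoor_env" with one column family "m" (measurements),
# row key = "<space_id>#<zero-padded reversed timestamp>" so the most recent
# readings for a space sort first on a prefix scan.
import time
import happybase

REVERSE_BASE = 10**13  # larger than any epoch-millis value we expect

def row_key(space_id: str, ts_millis: int) -> bytes:
    return f"{space_id}#{REVERSE_BASE - ts_millis:013d}".encode()

connection = happybase.Connection("hbase-master.example.org")  # hypothetical host
table = connection.table("indoor_env")  # assumed to exist with family "m"

# Write one reading (temperature, humidity, CO2) for a monitored space.
now = int(time.time() * 1000)
table.put(row_key("bldg1-room-203", now), {
    b"m:temp": b"23.4",
    b"m:humidity": b"41.0",
    b"m:co2": b"612",
})

# Scan the latest readings for that space (prefix scan returns newest first).
for key, data in table.scan(row_prefix=b"bldg1-room-203#", limit=10):
    print(key, {k.decode(): v.decode() for k, v in data.items()})
```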

Scalable Prediction Models for Airbnb Listing in Spark Big Data Cluster using GPU-accelerated RAPIDS

  • Muralidharan, Samyuktha;Yadav, Savita;Huh, Jungwoo;Lee, Sanghoon;Woo, Jongwook
    • Journal of Information and Communication Convergence Engineering
    • /
    • Vol. 20, No. 2
    • /
    • pp.96-102
    • /
    • 2022
  • We aim to build predictive models for Airbnb prices using GPU-accelerated RAPIDS in a big data cluster. The Airbnb Listings datasets are used for the predictive analysis. Several machine-learning algorithms were adopted to build models that predict the price of Airbnb listings. We compare the results of traditional and big data approaches to machine learning for price prediction and discuss the performance of the models. We built big data models using a Databricks Spark cluster, a distributed parallel computing system. Furthermore, we implemented models on multiple GPUs using RAPIDS in the Spark cluster. This model was developed using the XGBoost algorithm, whereas the other models were developed using traditional central processing unit (CPU)-based algorithms. This study compared all models in terms of accuracy metrics and computing time. We observed that the XGBoost model with GPU-accelerated RAPIDS achieved the best accuracy and computing time.
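
The comparison above contrasts CPU-based models with GPU-accelerated XGBoost. As a minimal sketch of that kind of comparison outside Databricks/Spark-RAPIDS (which the paper actually used), the snippet below times XGBoost training on a pandas DataFrame with the CPU `hist` and GPU `gpu_hist` tree methods; the file name and column names for the Airbnb listings data are assumptions.

```python
# Sketch: comparing CPU and GPU XGBoost training time for Airbnb price prediction.
# Assumes a CSV with numeric feature columns and a "price" target; the path and
# columns are placeholders. "gpu_hist" needs a CUDA-enabled XGBoost build
# (recent XGBoost versions express this as tree_method="hist", device="cuda").
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("airbnb_listings.csv")          # hypothetical path
features = ["accommodates", "bedrooms", "bathrooms", "latitude", "longitude"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["price"], test_size=0.2, random_state=42)

for tree_method in ("hist", "gpu_hist"):         # CPU vs. GPU histogram algorithm
    model = XGBRegressor(n_estimators=300, tree_method=tree_method)
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    r2 = model.score(X_test, y_test)
    print(f"{tree_method}: {elapsed:.1f}s, R^2={r2:.3f}")
```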

비용 효율적 맵리듀스 처리를 위한 클러스터 규모 설정 (Scaling of Hadoop Cluster for Cost-Effective Processing of MapReduce Applications)

  • 류우석
    • 한국전자통신학회논문지
    • /
    • Vol. 15, No. 1
    • /
    • pp.107-114
    • /
    • 2020
  • This paper studies how to size a cluster for cost-effective big data analysis on the Hadoop platform. In the case of medical institutions, demand for cloud-based big data analysis is growing now that medical records may be stored outside the hospital. We analyze Amazon EMR, a widely used cloud service framework, and present a model for estimating the cluster size needed to operate Hadoop cost-effectively. We then analyze the factors that affect MapReduce execution through experiments under various conditions. By selecting the cluster configuration whose processing time per cost is most efficient, the efficiency of big data analysis can be increased.
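
The paper's own sizing model is not reproduced here; the sketch below is only a generic, assumed cost model of the kind such an analysis might use. Job runtime is approximated as a serial portion plus a parallel portion that shrinks with the node count, and the EMR-style cost is node-hours times an hourly price, so one can scan node counts for the cheapest configuration that still meets a deadline. All constants are placeholders.

```python
# Sketch: picking a cost-effective cluster size under an assumed runtime model.
# runtime(n) = serial_part + parallel_part / n  (a simple Amdahl-style guess),
# cost(n)    = n * hourly_price * runtime_hours.  All constants are placeholders.
HOURLY_PRICE = 0.26        # assumed $/instance-hour
SERIAL_MIN = 6.0           # minutes that do not parallelize (setup, reduce tail)
PARALLEL_MIN = 480.0       # minutes of map/reduce work at n = 1
DEADLINE_MIN = 90.0        # required completion time

def runtime_minutes(n_nodes: int) -> float:
    return SERIAL_MIN + PARALLEL_MIN / n_nodes

def cost_dollars(n_nodes: int) -> float:
    return n_nodes * HOURLY_PRICE * runtime_minutes(n_nodes) / 60.0

candidates = [(n, runtime_minutes(n), cost_dollars(n)) for n in range(1, 33)]
feasible = [c for c in candidates if c[1] <= DEADLINE_MIN]
best = min(feasible, key=lambda c: c[2])
print(f"nodes={best[0]}, runtime={best[1]:.1f} min, cost=${best[2]:.2f}")
```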

A Container Orchestration System for Process Workloads

  • Jong-Sub Lee;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 15, No. 4
    • /
    • pp.270-278
    • /
    • 2023
  • We propose a container orchestration system for process workloads that combines big data and machine learning technologies to integrate enterprise process-centric workloads. The proposed system analyzes big data generated from industrial automation to identify hidden patterns and build a machine learning prediction model. For each machine learning case, training data is loaded into a data store and preprocessed for model training. An appropriate model is then selected and applied using the training data, and the model is evaluated with test data; this step, called model construction, can be performed in a deployment framework. Additionally, a visual hierarchy is constructed to display prediction results and facilitate big data analysis. To implement parallel computation of PCA in the proposed system, the evaluation and analysis environment was built by creating multiple virtual machines to form the required big data cluster. The proposed system is modeled as layers of individual components that can be connected together; the advantage of this design is that components can be added, replaced, or reused without affecting the rest of the system.
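
For the PCA step that the system above parallelizes across virtual machines, the sketch below shows only the single-node computation being distributed, using NumPy's SVD; the data shape is stand-in material for illustration.

```python
# Sketch: principal component analysis via SVD on one node. In the described
# system this computation would be split across VMs in the big data cluster;
# here the 10,000 x 50 matrix is random stand-in data for process features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))          # rows = samples, cols = features

X_centered = X - X.mean(axis=0)            # center each feature
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5                                      # keep the top-k principal components
components = Vt[:k]                        # principal axes (k x 50)
scores = X_centered @ components.T         # projected data (10,000 x k)
explained = (S[:k] ** 2) / (S ** 2).sum()  # fraction of variance per component
print("explained variance ratios:", np.round(explained, 3))
```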

A Study on FIFA Partner Adidas of 2022 Qatar World Cup Using Big Data Analysis

  • Kyung-Won, Byun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 15, No. 1
    • /
    • pp.164-170
    • /
    • 2023
  • The purpose of this study is to analyze big data about the Adidas brand, a FIFA partner of the 2022 Qatar World Cup, in order to extract useful information, semantic connections, and context from unstructured data. To this end, the study collected big data generated during the World Cup about Adidas, which sponsored the 2022 Qatar World Cup as a FIFA partner, from major portal sites. According to the text mining analysis, 'Adidas' appeared most frequently with 3,340 occurrences, followed by 'World Cup', 'Qatar World Cup', 'Soccer', 'Lionel Messi', 'Qatar', 'FIFA', 'Korea', and 'Uniform'. In the TF-IDF ranking, the top keywords were 'Qatar World Cup', 'Soccer', 'Lionel Messi', 'World Cup', 'Uniform', 'Qatar', 'FIFA', 'Ronaldo', 'Korea', and 'Nike'. Semantic network analysis and CONCOR analysis produced four groups. First, Cluster A was named 'Qatar World Cup Sponsor' because words such as 'Adidas', 'Nike', 'Qatar World Cup', 'Sponsor', 'Sponsor Company', 'Marketing', 'Nation', 'Launch', 'Official', 'Commemoration', and 'National Team' formed a group. Second, Cluster B was named 'Group Stage' because words such as 'Qatar', 'Uruguay', 'FIFA', and 'group stage' formed a group. Third, Cluster C was named 'Winning' because words such as 'World Cup Winning', 'Champion', 'France', 'Argentina', 'Lionel Messi', 'Advertising', and 'Photograph' formed a group. Fourth, Cluster D was named 'Official Ball' because words such as 'Official Ball', 'World Cup Official Ball', 'Soccer Ball', 'All Times', 'Al Rihla', 'Public', and 'Technology' formed a group.
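
As a small, hedged illustration of the keyword-frequency and TF-IDF ranking step described above (the study itself worked on Korean portal-site text with tools such as Textom), the sketch below ranks terms across a toy list of English documents with scikit-learn.

```python
# Sketch: term frequency and TF-IDF keyword ranking with scikit-learn.
# The three toy documents stand in for the collected portal-site posts.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "adidas unveils the qatar world cup official ball al rihla",
    "messi lifts the world cup as argentina beat france",
    "adidas and nike sponsor national team uniforms for the world cup",
]

# Raw term frequency summed across the corpus.
cv = CountVectorizer(stop_words="english")
tf = np.asarray(cv.fit_transform(docs).sum(axis=0)).ravel()
tf_rank = sorted(zip(cv.get_feature_names_out(), tf), key=lambda x: -x[1])

# TF-IDF weight summed over documents.
tv = TfidfVectorizer(stop_words="english")
tfidf = np.asarray(tv.fit_transform(docs).sum(axis=0)).ravel()
tfidf_rank = sorted(zip(tv.get_feature_names_out(), tfidf), key=lambda x: -x[1])

print("top TF:", tf_rank[:5])
print("top TF-IDF:", tfidf_rank[:5])
```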

Big Data Analysis on the Perception of Home Training According to the Implementation of COVID-19 Social Distancing

  • Hyun-Chang Keum;Kyung-Won Byun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 15, No. 3
    • /
    • pp.211-218
    • /
    • 2023
  • With the implementation of COVID-19 social distancing, interest in 'home training' and the number of its users are rapidly increasing. The purpose of this study is therefore to identify how 'home training' is perceived through big data analysis of social media channels and to provide basic data to the related business sector. Big data was collected from the news and social content provided on the Naver and Google sites, covering the three years from March 22, 2020, the date COVID-19 distancing was implemented in Korea. The collected data comprised 4,000 Naver blog posts, 2,673 news articles, 4,000 cafe posts, 3,989 Knowledge iN posts, and 953 Google news items. TF and TF-IDF were computed from these data through text mining, and semantic network analysis was then conducted on 70 keywords; big data analysis programs such as Textom and UCINET were used for the social big data analysis, and NetDraw was used for visualization. The text mining analysis found 'home training' most frequently in terms of TF, with 4,045 occurrences, followed by 'exercise', 'Homt', 'house', 'apparatus', 'recommendation', and 'diet'. For TF-IDF, the main keywords were 'exercise', 'apparatus', 'home', 'house', 'diet', 'recommendation', and 'mat'. Based on these results, the 70 most frequent keywords were extracted, and semantic indicator and centrality analyses were conducted. Finally, CONCOR analysis clustered the keywords into a 'purchase cluster', 'equipment cluster', 'diet cluster', and 'exercise method cluster'. Based on consumers' main perceptions of 'home training' and the semantic network analysis of these four clusters, basic data for the 'home training' business sector were presented.
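
CONCOR groups keywords by the similarity of their co-occurrence profiles. As a loose approximation (not the UCINET procedure used in the study), the sketch below builds a keyword co-occurrence matrix from toy posts and clusters the keywords by the correlation of their rows.

```python
# Sketch: a rough CONCOR-style grouping of keywords. A co-occurrence matrix is
# built from toy posts, keyword rows are compared by correlation, and
# hierarchical clustering splits them into groups. UCINET's actual CONCOR
# procedure (iterated correlations with block splits) is not reproduced here.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "home training mat and dumbbell recommendation",
    "diet food plan with home training exercise",
    "bought a yoga mat and a dumbbell for home exercise",
    "exercise method and diet routine for home training",
]

cv = CountVectorizer()
X = cv.fit_transform(posts).toarray()        # documents x keywords
cooc = X.T @ X                               # keyword x keyword co-occurrence counts
np.fill_diagonal(cooc, 0)

corr = np.corrcoef(cooc)                     # similarity of co-occurrence profiles
dist = squareform(1.0 - corr, checks=False)  # condensed distance matrix
labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")

for g in sorted(set(labels)):
    members = [w for w, lab in zip(cv.get_feature_names_out(), labels) if lab == g]
    print(f"group {g}: {members}")
```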

A Classification Algorithm Based on Data Clustering and Data Reduction for Intrusion Detection System over Big Data

  • Wang, Qiuhua;Ouyang, Xiaoqin;Zhan, Jiacheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 7
    • /
    • pp.3714-3732
    • /
    • 2019
  • With the rapid development of networks, Intrusion Detection Systems (IDS) play an increasingly important role in network applications. Many data mining algorithms are used to build an IDS. However, with the advent of the big data era, massive amounts of data are generated. When dealing with large-scale data sets, most data mining algorithms suffer from a high computational burden, which makes the IDS much less efficient. To build an efficient IDS over big data, we propose a classification algorithm based on data clustering and data reduction. In the training stage, the training data are divided into clusters of similar size by the Mini Batch K-Means algorithm, and the center of each cluster is used as its index. Then, we select representative instances for each cluster to perform data reduction and use the clusters of representative instances to build a K-Nearest Neighbor (KNN) detection model. In the detection stage, we sort clusters according to the distances between the test sample and the cluster indexes and obtain the k nearest clusters, in which we find the k nearest neighbors. Experimental results show that searching for neighbors by cluster index reduces the computational complexity significantly, and that classification with the reduced data of representative instances not only improves efficiency but also maintains high accuracy.
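
A minimal sketch of the clustering-plus-reduction idea described above, assuming synthetic data in place of an intrusion-detection dataset: training points are grouped with MiniBatchKMeans, each cluster keeps only a subset of representative instances, and at detection time only the clusters whose centers are nearest to the test sample are searched with KNN. The representative-selection rule here (a random subsample per cluster) is a simplification of the paper's method.

```python
# Sketch: cluster-indexed KNN with per-cluster data reduction (simplified).
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 20))           # stand-in for network features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stand-in for attack/normal labels

# 1) Partition the training data and keep cluster centers as the index.
km = MiniBatchKMeans(n_clusters=50, random_state=0, n_init=3).fit(X)
assign = km.labels_

# 2) Data reduction: keep a few representative instances per cluster
#    (here a random subsample; the paper uses a more careful selection).
reps_X, reps_y, reps_c = [], [], []
for c in range(km.n_clusters):
    idx = np.where(assign == c)[0]
    keep = rng.choice(idx, size=min(100, len(idx)), replace=False)
    reps_X.append(X[keep])
    reps_y.append(y[keep])
    reps_c.append(np.full(len(keep), c))
reps_X, reps_y, reps_c = map(np.concatenate, (reps_X, reps_y, reps_c))

# 3) Detection: search only the k_clusters nearest clusters for neighbors.
def classify(sample, k_clusters=3, k_neighbors=5):
    d = np.linalg.norm(km.cluster_centers_ - sample, axis=1)
    nearest = np.argsort(d)[:k_clusters]
    mask = np.isin(reps_c, nearest)
    knn = KNeighborsClassifier(n_neighbors=k_neighbors).fit(reps_X[mask], reps_y[mask])
    return knn.predict(sample.reshape(1, -1))[0]

print(classify(rng.normal(size=20)))
```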

사이클 선수들의 체형 특성에 관한 연구 (Investigation on the Korean Cyclists' Body Type Through Anthropometric Measurements)

  • 최미성;정성필
    • 한국의류학회지
    • /
    • Vol. 28, No. 7
    • /
    • pp.1019-1028
    • /
    • 2004
  • The purpose of this study was to compare the body measurements of cyclists and non-cyclists and to classify cyclists' body types in order to offer basic information to bicycle apparel manufacturers in Korea. Anthropometric data, including both direct and indirect measurements, were collected from 81 cyclists (40 female, 41 male) aged 19 to 24. The measurements were analyzed using percentiles, t-tests, factor analysis, and cluster analysis. The results were as follows. The comparison of anthropometric data between cyclists and non-cyclists showed that cyclists are larger than non-cyclists; the thigh circumference in particular shows a big difference. Factor analysis extracted five factors, explaining 74% of the variance, from all items for male and female cyclists. Cluster analysis classified the body types into three groups. Among the three female cyclist groups, Cluster 1 has the biggest torso and an erect back; Cluster 2 is the smallest and has drooping shoulders; Cluster 3 has shoulders bent forward and a protruding back. Among the male cyclists, Cluster 1 has a thin body type owing to large height measurements and small girth measurements; Cluster 2 has the biggest torso and thigh circumference; Cluster 3 has a large forward shoulder angle and a protruding back, like the female cyclists.
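
A brief sketch of the factor-then-cluster workflow the study applies to anthropometric measurements, with random stand-in data (the real study extracted 5 factors and 3 body-type clusters from measured cyclists):

```python
# Sketch: factor analysis followed by cluster analysis on body measurements.
# The 40 x 30 matrix of random numbers stands in for measured subjects.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
measurements = rng.normal(size=(40, 30))          # subjects x measurement items

Z = StandardScaler().fit_transform(measurements)  # standardize each item
factor_scores = FactorAnalysis(n_components=5, random_state=0).fit_transform(Z)

body_types = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(factor_scores)
print("cluster sizes:", np.bincount(body_types))
```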

Hadoop 클러스터에서 네임 노드와 데이터 노드가 빅 데이터처리 성능에 미치는 영향에 관한 연구 (A Study on the Effect of the Name Node and Data Node on the Big Data Processing Performance in a Hadoop Cluster)

  • 이영훈;김용일
    • 스마트미디어저널
    • /
    • Vol. 6, No. 3
    • /
    • pp.68-74
    • /
    • 2017
  • Big data processing handles data in various forms such as files, images, and video to solve problems and provide insightful, useful information. Although various platforms are available for big data processing today, many institutions and companies use Hadoop because of its simplicity, productivity, scalability, and fault tolerance. Hadoop can build a cluster on a variety of hardware platforms and processes big data by dividing roles between the name node (master) and data nodes (slaves). In this paper, we use the fully distributed mode employed by real institutions and companies and build a Hadoop cluster with low-power, low-cost single-board computers for convenient testing. To analyze the performance impact of the name node, we compare a single-board computer and a laptop as the name node while processing the same data; to analyze the impact of the number of data nodes, we increase the number of single-board data nodes up to twice the size of the original cluster and analyze their effect.
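
To compare configurations like those in this study (different name-node hardware, varying numbers of data nodes), one would rerun the same MapReduce job and record its wall-clock time for each setup. The sketch below is a hedged harness around the standard Hadoop examples jar; the jar path and HDFS directories are placeholders.

```python
# Sketch: timing the same MapReduce job under different cluster configurations.
# Assumes a running Hadoop cluster and the bundled examples jar; the paths below
# are placeholders to adjust for the actual installation.
import subprocess
import time

EXAMPLES_JAR = "/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples.jar"
INPUT_DIR = "/bench/input"       # HDFS path with the test data already loaded
OUTPUT_DIR = "/bench/output"

def run_wordcount() -> float:
    """Run the wordcount example once and return elapsed seconds."""
    subprocess.run(["hdfs", "dfs", "-rm", "-r", "-f", OUTPUT_DIR], check=False)
    start = time.perf_counter()
    subprocess.run(
        ["hadoop", "jar", EXAMPLES_JAR, "wordcount", INPUT_DIR, OUTPUT_DIR],
        check=True,
    )
    return time.perf_counter() - start

# Repeat a few times per configuration (e.g., 4 vs. 8 data nodes) and average.
times = [run_wordcount() for _ in range(3)]
print(f"mean job time: {sum(times) / len(times):.1f}s over {len(times)} runs")
```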

대용량 분산처리 플랫폼 공유 모델 연구 (Shared Distributed Big-Data Processing Platform Model: a Study)

  • 정환진;강태호;김규석;신영호;정진규
    • 정보과학회 컴퓨팅의 실제 논문지
    • /
    • Vol. 22, No. 11
    • /
    • pp.601-613
    • /
    • 2016
  • Demand for big data analysis has recently been increasing in a variety of fields. Distributed processing systems are used for effective big data analysis, but building such a system consumes considerable money and time. A way to reduce system construction cost is therefore needed, and providing a big data analysis platform service can save users this cost. Multi-tenancy refers to an environment in which multiple users share one service, and it has the advantage of improving system resource utilization compared with a single-tenant environment. This paper presents two models for a shared distributed big-data processing platform and describes how to support multi-tenancy. The first model has multiple users share a single Hadoop platform, exploiting Hadoop's built-in multi-tenancy support; the other provides each user with an individual virtual Hadoop cluster on a virtualized cloud computing environment. We built prototypes of both models, compared their performance, and verified multi-tenancy on the Hadoop platform.
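
For the first model (multiple users sharing one Hadoop platform), Hadoop-level multi-tenancy is commonly expressed as YARN scheduler queues. The paper does not specify its exact configuration, so the sketch below merely generates an assumed capacity-scheduler.xml fragment that gives each tenant its own queue with an equal capacity share.

```python
# Sketch: generating a capacity-scheduler.xml fragment for per-tenant YARN queues.
# This is an assumed illustration of Hadoop-level multi-tenancy, not the
# configuration used in the paper.
tenants = ["tenant_a", "tenant_b", "tenant_c"]
share = 100.0 / len(tenants)

def prop(name: str, value: str) -> str:
    return (f"  <property>\n    <name>{name}</name>\n"
            f"    <value>{value}</value>\n  </property>")

lines = ["<configuration>",
         prop("yarn.scheduler.capacity.root.queues", ",".join(tenants))]
for t in tenants:
    lines.append(prop(f"yarn.scheduler.capacity.root.{t}.capacity", f"{share:.1f}"))
    lines.append(prop(f"yarn.scheduler.capacity.root.{t}.maximum-capacity", "100"))
lines.append("</configuration>")

print("\n".join(lines))
```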