• Title/Summary/Keyword: Apache Spark analysis

Search Results: 17

Real-Time Stock Price Prediction using Apache Spark (Apache Spark를 활용한 실시간 주가 예측)

  • Dong-Jin Shin; Seung-Yeon Hwang; Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.79-84 / 2023
  • Apache Spark, which provides the fastest processing speed among recent distributed and parallel processing technologies, offers both real-time streaming and machine learning functions. Although official documentation is provided for each of these functions, no guide describes how to combine them to predict a specific value in real time. In this paper, we therefore study how to predict data values in real time by fusing these functions. In the overall configuration, stock price data are first collected by downloading them with the Python programming language. A regression model is then built with the machine learning function, and the adjusted closing price is predicted in real time by combining the real-time streaming function with this model.
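
The sketch below illustrates, under assumptions, the kind of fusion this abstract describes: a Spark MLlib regression model trained on historical prices is applied to a Structured Streaming source to predict the adjusted closing price. The file name history.csv, the socket source on port 9999, and the column names (open, high, low, volume, adj_close) are placeholders, not details from the paper.

```python
# Hedged sketch (not the paper's code): train a Spark MLlib regression model
# on historical prices, then apply it to a Structured Streaming source to
# predict the adjusted closing price. File name, socket source, and column
# names are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("realtime-stock-regression").getOrCreate()

# 1) Offline training on historical prices (e.g. downloaded with a Python library).
hist = spark.read.csv("history.csv", header=True, inferSchema=True)   # assumed file
assembler = VectorAssembler(inputCols=["open", "high", "low", "volume"],
                            outputCol="features")
model = LinearRegression(labelCol="adj_close").fit(assembler.transform(hist))

# 2) Apply the trained model to a stream of incoming quotes.
quotes = (spark.readStream.format("socket")
          .option("host", "localhost").option("port", 9999).load()
          .select(F.split("value", ",").alias("v"))
          .select(*[F.col("v")[i].cast("double").alias(c)
                    for i, c in enumerate(["open", "high", "low", "volume"])]))

predictions = model.transform(assembler.transform(quotes))
predictions.writeStream.format("console").start().awaitTermination()
```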

Using IoT and Apache Spark Analysis Technique to Monitoring Architecture Model for Fruit Harvest Region (IoT 기반 Apache Spark 분석기법을 이용한 과수 수확 불량 영역 모니터링 아키텍처 모델)

  • Oh, Jung Won; Kim, Hangkon
    • Smart Media Journal / v.6 no.4 / pp.58-64 / 2017
  • Modern society is characterized by a rapidly increasing world population, an aging rural population, and a decrease in cultivated area due to industrialization, so the food problem is becoming an important issue for farmers and rural regions. Recently, research on smart farms has been actively carried out to increase the profitability of rural areas. Existing smart-farm research mainly monitors the cultivation environment of crops in greenhouses; when conditions deteriorate, a control system is automatically activated to keep the cultivation environment in an optimal state. Such research focuses on crops cultivated indoors, and there are few studies applied to crops grown outdoors. In this paper, we propose a method to improve the yield of poor-harvest areas by monitoring them with big data analysis and by precisely predicting the harvest timing of fruit trees growing in orchards. Harvest-related factors such as fruit color and fruit weight are collected in real time as correlation data and analyzed with the Apache Spark engine, which performs well for real-time data analysis as well as high-volume batch analysis. The user devices receiving the service support both PC and smartphone users, and an Arduino board is used as the sensing device because only simple processing is required to receive sensed data and transmit it to the server. To regulate the harvest time so that good-quality fruit is produced, it is necessary to identify poor-harvest areas and concentrate effort on them. In this paper, we therefore also present an architecture model that identifies poor fruit-harvest areas using big data analysis.
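
A minimal sketch of the kind of Spark analysis step this architecture describes follows, assuming hypothetical field names (region_id, color_index, fruit_weight, yield_score), an HDFS path, and a placeholder threshold; the paper's actual schema is not given here.

```python
# Hedged sketch of correlating real-time harvest factors (fruit color, fruit
# weight) with harvest quality per orchard region in Spark; field names,
# path, and threshold are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orchard-monitoring").getOrCreate()

readings = spark.read.json("hdfs:///orchard/readings.json")   # assumed sensor dump
by_region = (readings.groupBy("region_id")
             .agg(F.avg("color_index").alias("color"),
                  F.avg("fruit_weight").alias("weight"),
                  F.avg("yield_score").alias("yield")))

# Per-factor correlation with yield, used to flag poor-harvest regions.
print(by_region.stat.corr("color", "yield"),
      by_region.stat.corr("weight", "yield"))

poor = by_region.filter(F.col("yield") < 0.5)   # threshold is a placeholder
poor.show()
```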

S-PARAFAC: Distributed Tensor Decomposition using Apache Spark (S-PARAFAC: 아파치 스파크를 이용한 분산 텐서 분해)

  • Yang, Hye-Kyung; Yong, Hwan-Seung
    • Journal of KIISE / v.45 no.3 / pp.280-287 / 2018
  • Recently, the use of recommendation systems and of tensor analysis over high-dimensional data is increasing, as they allow us to analyze tensors and extract latent elements and patterns. However, due to their large size and complexity, tensors must be decomposed before they can be analyzed. Several tools are available for tensor decomposition, such as rTensor, pyTensor, and MATLAB, but because they run on a single machine they cannot handle large data. Distributed tensor decomposition tools based on Hadoop can handle scalable tensors, but their computing speed is too slow. In this paper, we propose S-PARAFAC, a tensor decomposition tool based on Apache Spark in a distributed in-memory environment. We converted the PARAFAC algorithm into an Apache Spark version that enables rapid processing of tensor data, and we compared the performance of a Hadoop-based tensor tool and S-PARAFAC. The results showed that S-PARAFAC is approximately 4 to 25 times faster than the Hadoop-based tool.
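
For orientation, the following is a minimal single-machine sketch of the PARAFAC (CP) decomposition via alternating least squares, i.e., the computation that S-PARAFAC distributes with Spark. It is not the paper's distributed implementation; the rank and iteration count are arbitrary.

```python
# Single-machine CP-ALS sketch of the PARAFAC decomposition of a 3-way tensor.
import numpy as np
from scipy.linalg import khatri_rao   # columnwise Kronecker product

def unfold(X, mode):
    """Mode-n matricization (Kolda & Bader convention)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1, order="F")

def cp_als(X, rank, n_iter=50, seed=0):
    """Decompose a 3-way tensor X into factor matrices A, B, C of width `rank`."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        # Alternating least-squares updates, one factor matrix at a time.
        A = unfold(X, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = unfold(X, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = unfold(X, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Toy usage on a random 30x20x10 tensor.
X = np.random.default_rng(1).random((30, 20, 10))
A, B, C = cp_als(X, rank=2)
```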

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee; Park, Jaehong; Kim, Hanjoo; Yoon, Sungroh
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.299-303 / 2017
  • By piling up hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. At the same time, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed across a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning (SparkNet and DeepSpark) are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but in terms of training accuracy, DeepSpark delivered a classification accuracy approximately 15% higher. For some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.
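
The core idea behind SparkNet-style acceleration is data-parallel training with periodic parameter averaging. The sketch below illustrates that scheme on Spark with a toy logistic-regression model standing in for a deep network; it does not use the SparkNet or DeepSpark APIs, and the data, learning rate, and round counts are arbitrary.

```python
# Hedged sketch of parameter averaging: each partition runs a few local SGD
# steps, then the driver averages the resulting weights.
import numpy as np
from pyspark.sql import SparkSession

def local_sgd(rows, w_init, lr=0.1, steps=5):
    """Run a few logistic-regression SGD steps on one partition's data."""
    rows = list(rows)
    if not rows:
        return iter([])
    X = np.array([r[0] for r in rows])
    y = np.array([r[1] for r in rows])
    w = w_init.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return iter([w])

spark = SparkSession.builder.appName("param-averaging-sketch").getOrCreate()
rng = np.random.default_rng(0)
data = [(rng.standard_normal(10), float(i % 2)) for i in range(1000)]   # toy data
rdd = spark.sparkContext.parallelize(data, numSlices=4)

w = np.zeros(10)
for _ in range(20):                                   # communication rounds
    bw = spark.sparkContext.broadcast(w)
    local = rdd.mapPartitions(lambda it: local_sgd(it, bw.value)).collect()
    w = np.mean(local, axis=0)                        # average partition weights
```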

A Deep Learning Approach for Intrusion Detection

  • Roua Dhahbi; Farah Jemili
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.89-96 / 2023
  • Intrusion detection has been widely studied in both industry and academia, but cybersecurity analysts always want more accuracy and global threat analysis to secure their systems in cyberspace. Big data represents a great challenge for intrusion detection systems, making it hard to monitor and analyze such large volumes of data with traditional techniques. Recently, deep learning has emerged as a new approach that enables the use of big data with low training time and a high accuracy rate. In this paper, we propose an IDS approach based on cloud computing that integrates big data and deep learning techniques to detect different attacks as early as possible. To demonstrate the efficacy of this system, we implement it on the Microsoft Azure cloud, which provides both processing power and storage capabilities, using a convolutional neural network (CNN-IDS) with the distributed computing environment Apache Spark, integrated with the Keras deep learning library. We study the performance of the model for two categories of classification (binary and multiclass) using the CSE-CIC-IDS2018 dataset. Our system showed great performance thanks to the integration of the deep learning technique and the Apache Spark engine.
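
A hedged sketch of a CNN-IDS-style model in Keras is shown below; the 78-feature input and the layer sizes are assumptions rather than the paper's exact architecture, and the Spark-based data pipeline for CSE-CIC-IDS2018 is omitted.

```python
# Hedged sketch of a 1-D CNN over network-flow features for binary
# intrusion detection (benign vs. attack); sizes are assumptions.
import tensorflow as tf

n_features = 78   # assumed number of flow features per record
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),      # each flow as a 1-D "signal"
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```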

Network Traffic Measurement Analysis using Machine Learning

  • Hae-Duck Joshua Jeong
    • Korean Journal of Artificial Intelligence / v.11 no.2 / pp.19-27 / 2023
  • In recent times, an exponential increase in Internet traffic has been observed as a result of the advancing development of the Internet of Things, mobile networks with sensors, and communication functions within various devices. Further, the COVID-19 pandemic has inevitably led to an explosion of social network traffic. Within this context, considerable attention has been drawn to research on network traffic analysis based on machine learning. In this paper, we design and develop a new machine learning framework for network traffic analysis in which normal and abnormal traffic are distinguished from one another. To achieve this, we combine well-known machine learning algorithms with network traffic analysis techniques. Using KDD CUP'99, one of the most widely used datasets, in the Weka and Apache Spark environments, we compare and investigate results obtained from time-series analysis of various aspects, including malicious code, feature extraction, data formalization, and network traffic measurement tool implementation. Experimental analysis showed that while both the logistic regression and support vector machine algorithms performed well, the logistic regression algorithm performed better. The quantitative results of our proposed machine learning framework show that the approach is reliable and practical, and its performance is compared and analyzed against that reported in another paper. In addition, we found that the framework exhibits much faster processing in the Apache Spark environment than in Weka as the amount of data used to build and classify with the machine learning models grows.
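
A minimal sketch of the Spark side of such a framework follows: a logistic-regression pipeline over KDD CUP'99-style records. The file path and the chosen feature columns are illustrative assumptions, not the paper's feature set.

```python
# Hedged sketch: binary normal/attack classification of KDD CUP'99-style
# records with a Spark MLlib logistic-regression pipeline.
from pyspark.sql import SparkSession, functions as F
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("traffic-lr-sketch").getOrCreate()
df = spark.read.csv("kddcup99.csv", header=True, inferSchema=True)   # assumed path
# KDD labels end with a period (e.g. "normal."); map to binary target.
df = df.withColumn("target", F.when(F.col("label") == "normal.", 0.0).otherwise(1.0))

feature_cols = ["duration", "src_bytes", "dst_bytes", "count", "srv_count"]  # subset
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="target"),
])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
auc = BinaryClassificationEvaluator(labelCol="target").evaluate(model.transform(test))
print(f"test AUC = {auc:.3f}")
```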

A Performance Comparison Study on Data Analysis Tool -Applying Machine Learning- (데이터 분석 도구 성능 비교 연구 -기계 학습을 적용하여-)

  • Kwon, Tae-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.34-37 / 2016
  • With the arrival of the big data era, vast and diverse data are being produced on a scale incomparable to the past, and existing data analysis tools have reached their limits. Accordingly, data analysis tools that are more efficient and more accurate than existing ones are needed, and many studies have been conducted on analysis tools capable of processing big data. R and Apache Spark are representative data analysis tools that provide machine learning functionality. In this paper, we compare the data analysis performance of the two widely known tools, R and Apache Spark, using machine learning, in order to identify the more efficient and accurate tool.
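
As a rough illustration of the Spark half of such a comparison, the sketch below times an MLlib model fit so the result can be set against an equivalent R run; the dataset path and feature names are placeholders.

```python
# Hedged sketch: time a Spark MLlib fit as one side of an R-vs-Spark benchmark.
import time
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("r-vs-spark-benchmark").getOrCreate()
df = spark.read.csv("benchmark.csv", header=True, inferSchema=True)   # assumed file
features = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")

start = time.time()
LogisticRegression(labelCol="label").fit(features.transform(df))
print(f"Spark MLlib fit time: {time.time() - start:.2f}s")
```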

Implementation of Parallel Local Alignment Method for DNA Sequence using Apache Spark (Apache Spark을 이용한 병렬 DNA 시퀀스 지역 정렬 기법 구현)

  • Kim, Bosung; Kim, Jinsu; Choi, Dojin; Kim, Sangsoo; Song, Seokil
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.608-616 / 2016
  • The Smith-Waterman (SW) algorithm is a local alignment algorithm and one of the important operations in DNA sequence analysis. The SW algorithm finds the optimal local alignment with respect to the scoring system being used, but it suffers from long execution times. To address this, several methods for performing SW in a distributed and parallel manner have been proposed. ADAM, a distributed and parallel processing framework for DNA sequences, includes a parallel SW; however, it does not take into account that SW is a dynamic programming method, which limits its performance. In this paper, we propose a method to enhance the parallel SW of ADAM. The proposed parallel SW (PSW) is performed in two phases. In the first phase, the PSW splits a DNA sequence into partitions and assigns them to multiple nodes; the original Smith-Waterman algorithm is then performed in parallel at each node. In the second phase, the PSW estimates the portions of the sequence that need to be recalculated, and the recalculation is performed on those portions in parallel at each node. In the experiments, we compare the proposed PSW to the parallel SW of ADAM to show the superiority of the PSW.
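
For reference, a minimal single-process sketch of the Smith-Waterman kernel that the PSW parallelizes across partitions is given below; the scoring values (match = 2, mismatch = -1, gap = -1) are illustrative, not those used in the paper.

```python
# Hedged single-process sketch of the Smith-Waterman local-alignment kernel.
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment never drops below zero.
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

print(smith_waterman("GGTTGACTA", "TGTTACGG"))   # toy example
```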

Performance Factor of Distributed Processing of Machine Learning using Spark (스파크를 이용한 머신러닝의 분산 처리 성능 요인)

  • Ryu, Woo-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.19-24 / 2021
  • In this paper, we study the performance factors of machine learning in a distributed environment using Apache Spark and present an efficient distributed processing method through experiments. This work first identifies the performance factors involved in performing machine learning on a distributed cluster, classifying them into cluster performance, data size, and Spark engine configuration. In addition, a performance study of regression analysis using Spark MLlib running on a Hadoop cluster is performed while varying the node configuration and the number of Spark executors. The experiments confirmed that the effective number of executors is affected by the number of data blocks, but that, depending on the cluster size, its maximum and minimum values are limited by the number of cores and the number of worker nodes, respectively.
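
The executor-related knobs discussed above are typically set when the Spark session (or spark-submit) is configured. The sketch below shows one way to do so; the specific values and the HDFS path are placeholders, not the paper's experimental setup.

```python
# Hedged sketch of configuring executor-related factors for a Spark MLlib job
# on a Hadoop/YARN cluster; values and path are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("mllib-regression-scaling")
         .config("spark.executor.instances", "8")    # number of executors
         .config("spark.executor.cores", "4")        # cores per executor
         .config("spark.executor.memory", "4g")
         .getOrCreate())

# The number of HDFS blocks in the input bounds how many tasks (and hence how
# many executors) can do useful work in the first stage.
df = spark.read.parquet("hdfs:///data/training.parquet")   # assumed path
print("input partitions:", df.rdd.getNumPartitions())
```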

Improving Performance based on Processing Analysis of Big data log file (빅데이터 로그파일 처리 분석을 통한 성능 개선 방안)

  • Lee, Jaehan; Yu, Heonchang
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.539-541 / 2016
  • Recently, Apache Hadoop-based ecosystems have been widely used for big data analysis. In this paper, we carry out a performance evaluation aimed at efficiently handling the process of transforming collected log data and loading it into a database. Based on this, we propose a way to improve the performance of the process in which log data from text files is inserted into an Oracle database via JDBC by a program written in Java. To process large log files efficiently, we improve processing speed by using the Hadoop ecosystem, and we compare its performance against processing with Apache Spark, which has recently attracted attention for its fast in-memory processing. Through this study, we propose how to build an optimal log data processing system.
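
A hedged sketch of the Spark alternative evaluated here follows: reading a log file as a DataFrame and bulk-loading it into Oracle over JDBC. The log format, JDBC URL, table name, and credentials are placeholders, and the Oracle JDBC driver is assumed to be on the Spark classpath.

```python
# Hedged sketch: parse a whitespace-delimited log with Spark and write it to
# an Oracle table over JDBC; all connection details are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("log-to-oracle").getOrCreate()

logs = (spark.read.text("hdfs:///logs/app.log")              # assumed path
        .select(F.split("value", " ").alias("f"))
        .select(F.col("f")[0].alias("ts"),
                F.col("f")[1].alias("level"),
                F.col("f")[2].alias("message")))

(logs.write.format("jdbc")
     .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")   # placeholder URL
     .option("dbtable", "APP_LOG")                            # placeholder table
     .option("user", "scott").option("password", "tiger")     # placeholder creds
     .option("batchsize", 10000)                              # batch the inserts
     .mode("append")
     .save())
```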