• Title/Summary/Keyword: Memory-based Learning (메모리 기반 학습)

An Efficient Multidimensional Scaling Method based on CUDA and Divide-and-Conquer (CUDA 및 분할-정복 기반의 효율적인 다차원 척도법)

  • Park, Sung-In; Hwang, Kyu-Baek
    • Journal of KIISE: Computing Practices and Letters, v.16 no.4, pp.427-431, 2010
  • Multidimensional scaling (MDS) is a widely used dimensionality-reduction method whose purpose is to represent high-dimensional data in a low-dimensional space while preserving the distances among objects as much as possible. MDS has mainly been applied to data visualization and feature selection. Among the various MDS methods, classical MDS is not readily applicable on ordinary desktop computers to data with large numbers of objects because of its computational complexity: it must solve an eigenpair problem on a dissimilarity matrix based on Euclidean distance. The running time and memory requirements of classical MDS therefore grow rapidly with n (the number of objects), restricting its use in large-scale domains. In this paper, we propose an efficient approximation algorithm for classical MDS based on divide-and-conquer and CUDA. Through a set of experiments, we show that our approach is highly efficient and effective for the analysis and visualization of data consisting of several thousand objects.
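
The eigenpair bottleneck described in this abstract is easy to see in the classical-MDS core. Below is a minimal NumPy sketch of that core (double centering of the squared-distance matrix followed by an eigendecomposition); the divide-and-conquer partitioning and the CUDA acceleration proposed in the paper are not reproduced here, and the function name `classical_mds` is our own.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed objects into k dimensions from an n x n pairwise distance matrix D.

    This is the textbook classical-MDS step whose O(n^3) eigendecomposition
    is the bottleneck the paper sidesteps with divide-and-conquer and CUDA.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                 # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)        # eigenpair problem (the costly part)
    idx = np.argsort(eigvals)[::-1][:k]         # k largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))  # guard against tiny negatives
    return eigvecs[:, idx] * scale              # n x k embedding

# toy usage: 100 random 10-D points embedded into 2-D
X = np.random.rand(100, 10)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, k=2)
print(Y.shape)  # (100, 2)
```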

K Nearest Neighbor Joins for Big Data Processing based on Spark (Spark 기반 빅데이터 처리를 위한 K-최근접 이웃 연결)

  • JIAQI, JI; Chung, Yeongjee
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.9, pp.1731-1737, 2017
  • K Nearest Neighbor Join (KNN Join) is a simple yet effective method in machine learning that has traditionally been used on small datasets. As data volumes grow, memory and time restrictions make it infeasible to run on a single machine in real applications. MapReduce, a popular batch-processing model that runs on clusters with large numbers of computers, is now widely used for large-scale data processing. Hadoop is a framework that implements MapReduce, but its performance can be further improved by a newer framework, Spark. In the present study, we provide a KNN Join implementation based on Spark. Thanks to Spark's in-memory computation capability, it is faster and more effective than a Hadoop-based approach. In our experiments, we study the influence of different factors on running time and demonstrate the robustness and efficiency of our approach.
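
The paper's Spark-based KNN Join is not spelled out in the abstract, so the following is only a naive PySpark baseline: a full cartesian join followed by a per-key top-k selection. The partitioning strategies that make KNN joins scale are omitted, and the variable names are ours.

```python
import heapq
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("naive-knn-join").getOrCreate()
sc = spark.sparkContext

# R: query points, S: reference points, each as (id, feature vector)
R = sc.parallelize([(1, (0.0, 0.0)), (2, (5.0, 5.0))])
S = sc.parallelize([(10, (1.0, 1.0)), (11, (4.0, 4.0)), (12, (9.0, 9.0))])
k = 2

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# pair every r with every s, then keep the k closest s for each r
knn = (R.cartesian(S)
         .map(lambda p: (p[0][0], (sq_dist(p[0][1], p[1][1]), p[1][0])))
         .groupByKey()
         .mapValues(lambda ds: [sid for _, sid in heapq.nsmallest(k, ds)]))

print(knn.collect())   # e.g. [(1, [10, 11]), (2, [11, 10])]
spark.stop()
```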

A Quality Identification System for Molding Parts Using HTM-Based Sound Recognition (HTM 기반의 소리 인식을 이용한 부품의 양·불량 판별 시스템)

  • Bae, Sun-Gap; Han, Chang-Young; Seo, Dae-Ho; Kim, Sung-Jin; Bae, Jong-Min; Kang, Hyun-Syug
    • Journal of Korea Multimedia Society, v.13 no.10, pp.1494-1505, 2010
  • A wide variety of sounds occur in small and medium-sized factories that produce many kinds of parts in small quantities on a single press. We developed a system that identifies the quality of parts using HTM (Hierarchical Temporal Memory)-based sound recognition. HTM, proposed by Jeff Hawkins, is a theory that applies the operating principles of the human neocortex to computing. It hierarchically memorizes the temporal and spatial patterns of the real world and, in many cases, shows recognition performance superior to earlier recognition technologies. By applying the HTM model to sound recognition, we developed a system that classifies molding parts as good or defective. To verify its performance, we recorded the various sounds produced at the moment parts are made in a real factory, constructed an HTM network for the sound, and identified the quality of parts through repeated learning and training. The results show that the system produces accurate results even in a noisy factory environment.
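
The abstract does not describe the HTM network's structure, so the sketch below only illustrates the surrounding pass/fail pipeline: framing a recorded press sound into features and training a classifier on good/defective labels. A scikit-learn classifier stands in for the HTM network purely for illustration, and the file layout and labels are hypothetical.

```python
import glob
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sound_features(path, sr=22050):
    """Summarize a recorded press sound as a fixed-length MFCC feature vector."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# hypothetical directory layout: good/*.wav and defect/*.wav
X, y = [], []
for label, pattern in [(0, "good/*.wav"), (1, "defect/*.wav")]:
    for path in glob.glob(pattern):
        X.append(sound_features(path))
        y.append(label)

# stand-in classifier in place of the paper's HTM network
clf = RandomForestClassifier(n_estimators=200).fit(np.array(X), np.array(y))
print(clf.predict([sound_features("new_part.wav")]))  # 0 = good, 1 = defective
```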

Detecting code reuse attack using RNN (RNN을 이용한 코드 재사용 공격 탐지 방법 연구)

  • Kim, Jin-sub; Moon, Jong-sub
    • Journal of Internet Computing and Services, v.19 no.3, pp.15-23, 2018
  • A code reuse attack is a technique that executes arbitrary code without injecting code directly onto the stack, by chaining together executable code fragments that already exist in program memory. The ROP (Return-Oriented Programming) attack is the typical type of code reuse attack, and several defense techniques have been proposed against it. However, because existing methods detect attacks with rule-based approaches built on specific rules, ROP attacks that do not match the previously defined rules cannot be detected. In this paper, we introduce a method that detects ROP attacks by learning the instruction patterns used in ROP attack code with an RNN (Recurrent Neural Network). We also show that the proposed method effectively detects ROP attacks by measuring the false positive ratio, false negative ratio, and accuracy when discriminating between normal code and ROP attack code.
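
The abstract does not specify the RNN architecture or the feature encoding, so the following is only a minimal Keras sketch of the general idea: instruction sequences encoded as integer token IDs pass through an embedding layer and an LSTM, and a sigmoid output discriminates normal versus ROP sequences. The vocabulary size, sequence length, and layer sizes are arbitrary placeholders, and the training data here is random.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB = 256      # hypothetical number of distinct instruction tokens
SEQ_LEN = 100    # hypothetical fixed length of each instruction trace

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(input_dim=VOCAB, output_dim=32),  # token -> dense vector
    layers.LSTM(64),                                   # summarizes the sequence
    layers.Dense(1, activation="sigmoid"),             # 1 = ROP-like, 0 = normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy random data standing in for real instruction traces
x = np.random.randint(0, VOCAB, size=(1000, SEQ_LEN))
y = np.random.randint(0, 2, size=(1000,))
model.fit(x, y, epochs=2, batch_size=64, validation_split=0.2)
```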

A Study on the Index Estimation of Missing Real Estate Transaction Cases Using Machine Learning (머신러닝을 활용한 결측 부동산 매매 지수의 추정에 대한 연구)

  • Kim, Kyung-Min; Kim, Kyuseok; Nam, Daisik
    • Journal of the Economic Geographical Society of Korea, v.25 no.1, pp.171-181, 2022
  • The real estate price index plays a key role as quantitative data in real estate market analysis. International organizations including the OECD publish real estate price indexes by country, and the Korea Real Estate Board announces metropolitan- and municipal-level indexes. However, when the index is computed for spatial units smaller than the metropolitan or municipal level, missing values become a problem: as the spatial scope narrows, some unit periods have few or no transactions, which makes index calculation difficult or even impossible. This study proposes a supervised machine learning model to compensate for the missing values that arise when no transaction occurs in a specific area and period. The proposed models are verified by the accuracy with which they predict both existing and missing index values.
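
The abstract does not identify the features or the learner used, so this is only a generic scikit-learn sketch of the idea: fit a regressor on the area/period combinations where the small-area index exists and predict it where transactions were missing. The column names and the choice of a random forest are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# hypothetical panel: one row per (area, month); local_index is NaN where no sales occurred
df = pd.DataFrame({
    "area_id":      [1, 1, 1, 2, 2, 2],
    "month":        [1, 2, 3, 1, 2, 3],
    "parent_index": [100.0, 101.2, 102.5, 100.0, 101.2, 102.5],   # municipal-level index
    "local_index":  [100.0, np.nan, 103.0, 99.5, 100.8, np.nan],  # small-area index with gaps
})

features = ["area_id", "month", "parent_index"]
observed = df.dropna(subset=["local_index"])
missing = df[df["local_index"].isna()]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(observed[features], observed["local_index"])

# fill the gaps with model predictions
df.loc[df["local_index"].isna(), "local_index"] = model.predict(missing[features])
print(df)
```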

Development of Commercial Game Engine-based Low Cost Driving Simulator for Researches on Autonomous Driving Artificial Intelligent Algorithms (자율주행 인공지능 알고리즘 연구를 위한 상용 게임 엔진 기반 초저가 드라이빙 시뮬레이터 개발)

  • Im, Ji Ung; Kang, Min Su; Park, Dong Hyuk; Won, Jong hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.20 no.6, pp.242-263, 2021
  • This paper presents a method for implementing a low-cost driving simulator for developing autonomous driving algorithms. It is built on GTA V, commercial game software based on a physics engine, and includes functions that emulate the output and data of various sensors used for autonomous driving. For this, the native functions (NF) of Script Hook V are used to acquire ground-truth (GT) data by accessing the game engine's internal data, from which various sensor data for autonomous driving are generated. We present the overall functionality of the developed driving simulator and verify its individual functions. We explain the process of acquiring GT data through direct access to the game engine's internal memory in order to build an autonomous driving algorithm development environment. Finally, an example of processing the emulated sensor output for artificial neural network training and performance evaluation is included.
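
The Script Hook V interface itself is a C++ plugin API and is not reproduced here; the sketch below only illustrates, in plain NumPy, the kind of post-processing the abstract describes: turning ground-truth object positions obtained from the engine into noisy range/bearing "sensor" measurements relative to the ego vehicle. Every function name, parameter, and value here is hypothetical.

```python
import numpy as np

def emulate_range_bearing(ego_xy, ego_heading, gt_objects_xy,
                          max_range=80.0, noise_std=0.2, rng=None):
    """Convert ground-truth object positions into noisy range/bearing readings.

    ego_xy        : (2,) ego-vehicle position taken from the game engine (GT)
    ego_heading   : ego yaw in radians (GT)
    gt_objects_xy : (N, 2) GT positions of surrounding objects
    returns       : list of (range, bearing) tuples within sensor range
    """
    rng = rng or np.random.default_rng()
    rel = np.asarray(gt_objects_xy, dtype=float) - np.asarray(ego_xy, dtype=float)
    dist = np.linalg.norm(rel, axis=1)
    bearing = np.arctan2(rel[:, 1], rel[:, 0]) - ego_heading
    keep = dist <= max_range                       # objects beyond range are not "seen"
    noisy_dist = dist[keep] + rng.normal(0.0, noise_std, keep.sum())
    return list(zip(noisy_dist, bearing[keep]))

# toy usage with made-up GT values
print(emulate_range_bearing((0.0, 0.0), 0.0, [(10.0, 5.0), (200.0, 0.0)]))
```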

Distributed Processing of Big Data Analysis based on R using SparkR (SparkR을 이용한 R 기반 빅데이터 분석의 분산 처리)

  • Ryu, Woo-Seok
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.17 no.1, pp.161-166, 2022
  • In this paper, we analyze the problems that arise when performing big data analysis with R as the data analysis tool, and we demonstrate the usefulness of analyzing data with SparkR, which connects R and Spark to support distributed processing of big data effectively. First, we examine R's memory allocation problem, which occurs when large amounts of data are loaded and operated on, together with the characteristics and programming environment of SparkR. We then compare execution performance when linear regression analysis is performed in each environment. The analysis shows that data can be analyzed with R through SparkR without learning an additional language, and that code written in R is processed effectively in a distributed manner as the number of nodes in the cluster increases.
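
The paper runs its linear regression through SparkR; to keep the examples in this listing in a single language, the sketch below shows the equivalent distributed linear regression through PySpark's spark.ml API instead, which the SparkR interface wraps for R users. The file path and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("distributed-lm").getOrCreate()

# hypothetical CSV with numeric feature columns x1, x2 and target y
df = spark.read.csv("bigdata.csv", header=True, inferSchema=True)

# assemble the feature columns into the single vector column spark.ml expects
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df).select("features", "y")

# distributed linear regression; work is spread across the cluster's executors
lr = LinearRegression(featuresCol="features", labelCol="y")
model = lr.fit(train)
print(model.coefficients, model.intercept)

spark.stop()
```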

Deep Learning-based Rheometer Quality Inspection Model Using Temporal and Spatial Characteristics

  • Jaehyun Park; Yonghun Jang; Bok-Dong Lee; Myung-Sub Lee
    • Journal of the Korea Society of Computer and Information, v.28 no.11, pp.43-52, 2023
  • Rubber produced by rubber companies undergoes quality suitability inspection through a rheometer test before secondary processing into automobile parts. However, the rheometer test is conducted by humans and has the disadvantage of depending heavily on experts. To address this problem, this paper proposes a deep learning-based rheometer quality inspection system. The proposed system combines an LSTM (Long Short-Term Memory) network and a CNN (Convolutional Neural Network) to exploit the temporal and spatial characteristics of the rheometer signal. In addition, the material composition of each rubber compound is used as an auxiliary input so that a single model can inspect the quality conformity of various rubber products. The proposed method was evaluated on 30,000 validation samples, achieving an average F1-score of 0.9940 and demonstrating its effectiveness.
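
The exact architecture is not given in the abstract, so the following Keras sketch only shows the general shape of such a model: a Conv1D branch and an LSTM branch over the rheometer curve, concatenated with an auxiliary vector describing the compound composition, feeding a pass/fail output. All layer sizes and input names are placeholders.

```python
from tensorflow.keras import layers, Model

SEQ_LEN, N_MATERIALS = 300, 8   # hypothetical curve length and composition size

curve = layers.Input(shape=(SEQ_LEN, 1), name="rheometer_curve")
aux   = layers.Input(shape=(N_MATERIALS,), name="compound_composition")

# spatial (local-shape) features of the curve
c = layers.Conv1D(32, 5, activation="relu")(curve)
c = layers.GlobalMaxPooling1D()(c)

# temporal features of the curve
t = layers.LSTM(32)(curve)

# merge both views with the auxiliary composition input
h = layers.Concatenate()([c, t, aux])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="pass_fail")(h)

model = Model(inputs=[curve, aux], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```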

Spark based Scalable RDFS Ontology Reasoning over Big Triples with Confidence Values (신뢰값 기반 대용량 트리플 처리를 위한 스파크 환경에서의 RDFS 온톨로지 추론)

  • Park, Hyun-Kyu; Lee, Wan-Gon; Jagvaral, Batselem; Park, Young-Tack
    • Journal of KIISE, v.43 no.1, pp.87-95, 2016
  • Recently, with the development of the Internet and electronic devices, the amount of available knowledge and information has increased enormously. Alongside this growth, studies on large-scale ontological reasoning have been actively carried out. In general, a machine learning program or a knowledge engineer measures and assigns a degree of confidence to each triple in a large ontology. However, the collected ontology data contains inherent uncertainty, and reasoning over such data can make the reasoning results vague. To address this uncertainty, we propose an RDFS reasoning approach that utilizes confidence values indicating the degree of uncertainty in the collected data. Unlike conventional reasoning approaches, which do not take data uncertainty into account, our approach uses the in-memory cluster computing framework Spark and applies uncertainty estimation methods to compute confidence values for the data inferred through RDFS-based reasoning, so that the computed confidence values represent the uncertainty of the inferred data. To evaluate the approach, ontology reasoning was carried out over the LUBM standard benchmark dataset with arbitrary confidence values added to the ontology triples. The experimental results indicate that the proposed system can process the largest dataset, LUBM3000, in 1,179 seconds while inferring 350K triples.
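
The abstract does not state how confidences are combined, so the PySpark sketch below only illustrates the flavor of the approach with a single RDFS rule (rdfs9: if x rdf:type C and C rdfs:subClassOf D, then x rdf:type D), propagating confidence by taking the product of the two premises' confidences. The combination rule and the toy triples are our assumptions, not the paper's method.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdfs9-with-confidence").getOrCreate()
sc = spark.sparkContext

# (subject, predicate, object, confidence) triples with confidence values
triples = sc.parallelize([
    ("alice", "rdf:type", "GraduateStudent", 0.9),
    ("GraduateStudent", "rdfs:subClassOf", "Student", 0.8),
])

type_facts = (triples.filter(lambda t: t[1] == "rdf:type")
                     .map(lambda t: (t[2], (t[0], t[3]))))    # class -> (instance, conf)
subclass   = (triples.filter(lambda t: t[1] == "rdfs:subClassOf")
                     .map(lambda t: (t[0], (t[2], t[3]))))    # class -> (superclass, conf)

# rdfs9: join on the shared class, multiply the confidences of the two premises
inferred = (type_facts.join(subclass)
            .map(lambda kv: (kv[1][0][0], "rdf:type", kv[1][1][0],
                             kv[1][0][1] * kv[1][1][1])))

print(inferred.collect())   # [('alice', 'rdf:type', 'Student', 0.72...)]
spark.stop()
```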

Comparative Analysis of CNN Deep Learning Model Performance Based on Quantification Application for High-Speed Marine Object Classification (고속 해상 객체 분류를 위한 양자화 적용 기반 CNN 딥러닝 모델 성능 비교 분석)

  • Lee, Seong-Ju; Lee, Hyo-Chan; Song, Hyun-Hak; Jeon, Ho-Seok; Im, Tae-ho
    • Journal of Internet Computing and Services, v.22 no.2, pp.59-68, 2021
  • As artificial intelligence (AI) technologies, which have grown rapidly in recent years, began to be applied to marine environments such as ships, research on CNN-based models specialized for digital video has become active. In the E-Navigation service, which combines various technologies to detect floating objects posing a collision risk, reduce human error, and prevent fires inside ships, real-time processing is critically important. Adding more functions, however, requires higher-performance processors, which raises prices and imposes a cost burden on shipowners. This study therefore proposes a method that maintains accuracy while processing information at high speed by applying quantization techniques to a deep learning model. First, videos were pre-processed for the detection of floating objects at sea so that video data could be transmitted efficiently to the deep learning input. Second, quantization, one of the lightweight techniques for deep learning models, was applied to reduce memory usage and increase processing speed. Finally, the proposed deep learning model with video pre-processing and quantization applied was deployed on various embedded boards to measure its accuracy and processing speed and to test its performance. The proposed method reduced memory usage to about one quarter and improved processing speed roughly four to five times while maintaining the original recognition accuracy.
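
The abstract does not say which quantization toolchain was used on the embedded boards, so the following is only a generic post-training integer-quantization sketch using TensorFlow Lite; the Keras model and the representative-dataset generator are placeholders standing in for the marine-object classifier and its calibration videos.

```python
import numpy as np
import tensorflow as tf

# placeholder CNN standing in for the marine-object classifier
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    """Yield calibration samples so activation ranges can be estimated."""
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # enable quantization
converter.representative_dataset = representative_data        # full-integer calibration
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_int8 = converter.convert()

with open("classifier_int8.tflite", "wb") as f:
    f.write(tflite_int8)   # 8-bit weights/activations: roughly 1/4 the float32 size
```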