• Title/Summary/Keyword: Distributed neural network

Design of data mining IDS for transformed intrusion pattern (변형 침입 패턴을 위한 데이터 마이닝 침입 탐지 시스템 설계)

  • 김용호;정종근;이윤배;김판구;염순자
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2001.10a / pp.479-482 / 2001
  • IDS research has focused mainly on detection decisions and the collection of audit data. The detection component must decide whether successive behaviors constitute intrusions, while audit-data collection must gather exactly the data needed for that decision. Artificial intelligence methods such as rule-based systems and neural networks have recently been introduced to address this problem, but they assume simple host structures and cannot detect transformed intrusion patterns. We therefore propose a data mining method that can retrieve and estimate patterns of user behavior across distributed, heterogeneous hosts.
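
The abstract does not spell out the mining procedure, but the general idea of profiling normal behavior and flagging deviations from it can be sketched as below; the n-gram profiling, event names, and scoring are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: profile normal behavior as frequent short subsequences
# (n-grams) of audit events, then score a new session by how many of its
# n-grams never appear in that profile. Events and data are toy examples.
from collections import Counter

def ngrams(seq, n=3):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def build_profile(normal_sessions, n=3):
    profile = Counter()
    for session in normal_sessions:
        profile.update(ngrams(session, n))
    return profile

def anomaly_score(session, profile, n=3):
    grams = ngrams(session, n)
    unseen = sum(1 for g in grams if profile[g] == 0)
    return unseen / max(len(grams), 1)   # 0.0 = familiar, 1.0 = novel

normal = [["open", "read", "close", "open", "write", "close"]]
profile = build_profile(normal)
test = ["open", "read", "close", "exec", "socket", "send"]
print(anomaly_score(test, profile))      # high score suggests a transformed pattern
```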

Design of Intrusion Detection System applying for data mining agent (데이터 마이닝 에이전트를 적용한 침입 탐지 시스템 설계)

  • 정종근;구제영;김용호;오근탁;이윤배
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.619-622 / 2002
  • IDS research has focused mainly on detection decisions and the collection of audit data. The detection component must decide whether successive behaviors constitute intrusions, while audit-data collection must gather exactly the data needed for that decision. Artificial intelligence methods such as rule-based systems and neural networks have recently been introduced to address this problem, but they assume simple host structures and cannot detect transformed intrusion patterns. We therefore propose a method using a data mining agent that can retrieve and estimate patterns of user behavior across distributed, heterogeneous hosts.
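
The agent side can be sketched separately; the host identifiers, record fields, and in-process queue below are placeholders standing in for whatever collection transport the paper uses across its distributed hosts.

```python
# Hypothetical sketch of the agent side: each host runs a lightweight
# collector that ships audit records to a central mining service.
import json
import queue

class AuditAgent:
    def __init__(self, host_id, sink: queue.Queue):
        self.host_id = host_id
        self.sink = sink                 # stands in for a network channel

    def collect(self, record: dict):
        record["host"] = self.host_id    # tag records with their origin host
        self.sink.put(json.dumps(record))

central = queue.Queue()
for host in ("hostA", "hostB"):
    AuditAgent(host, central).collect({"event": "login", "user": "root"})
while not central.empty():
    print(central.get())                 # the central miner would consume these
```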

The Automatic Coordination Model for Multi-Agent System Using Learning Method (학습기법을 이용한 멀티 에이전트 시스템 자동 조정 모델)

  • Lee, Mal-Rye;Kim, Sang-Geun
    • The KIPS Transactions: Part B / v.8B no.6 / pp.587-594 / 2001
  • Multi-agent systems suit distributed, open Internet environments. In a multi-agent system, agents must cooperate with each other through a coordination procedure when conflicts arise; such conflicts occur because each agent acts toward its own goal without coordination. Previous research on coordination methods in multi-agent systems, however, cannot properly solve the cooperation problem among agents that pursue different goals in a dynamic environment. In this paper, we propose an automatic coordination model for multi-agent systems that uses a neural network and reinforcement learning in a dynamic environment. We ran competitive experiments among agents with diverse activities in a complex environment, and analyzed and evaluated the effects of their activities. The results show that the proposed method is effective.
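
The abstract does not state the learning rule, but a tabular Q-learning update of the kind such a coordination model could build on is sketched below; the states, actions, and reward scheme are toy assumptions.

```python
# Toy conflict: two agents want one resource; reward +1 if exactly one
# proceeds, -1 if they collide or both yield. Hyperparameters illustrative.
import random

Q = {}                                   # (state, action) -> estimated value
ACTIONS = ["yield", "proceed"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def choose(state):
    if random.random() < EPS:            # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(s, a, r, s_next):
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

for _ in range(2000):
    a1, a2 = choose("conflict"), choose("conflict")
    r = 1.0 if (a1 == "proceed") != (a2 == "proceed") else -1.0
    update("conflict", a1, r, "done")
    update("conflict", a2, r, "done")

print(Q)                                 # learned coordination values
```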

Reliability analysis of simply supported beam using GRNN, ELM and GPR

  • Jagan, J;Samui, Pijush;Kim, Dookie
    • Structural Engineering and Mechanics / v.71 no.6 / pp.739-749 / 2019
  • This article applies reliability analysis to determine the safety of a simply supported beam under a uniformly distributed load. Reliability analysis was adopted to account for the uncertainties left by existing methods. To this end, Generalized Regression Neural Network (GRNN), Extreme Learning Machine (ELM), and Gaussian Process Regression (GPR) models are developed. Reliability analysis is a probabilistic approach to determining the possibility of failure-free operation of a structure; applying probabilistic mathematics to the quantitative aspects of a structure also improves its qualitative assessment. The dataset used to construct the GRNN, ELM, and GPR models contains the modulus of elasticity (E), load intensity (w), and performance function (δ), where E and w are inputs and δ is the output. The performance of the developed models was weighed by various statistical parameters, the most fundamental being the coefficient of determination (R²), which reached 0.998 for training and 0.989 for testing. The GRNN outperforms the ELM and GPR models. Additional statistical measures of error and prediction performance were computed to justify the capability of the developed models.
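
GRNN prediction is essentially Nadaraya-Watson kernel regression and can be sketched in a few lines of NumPy; the synthetic data and bandwidth sigma below are illustrative stand-ins for the paper's dataset.

```python
# GRNN sketch: a Gaussian kernel weights every training pattern, and the
# prediction is the weighted average of training outputs.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    # squared Euclidean distances between queries and training patterns
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)   # weighted-average output

X = np.random.rand(50, 2)                  # columns: E, w (normalized, synthetic)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1]          # stand-in performance function delta
print(grnn_predict(X, y, np.array([[0.4, 0.6]])))
```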

Effectiveness of satellite-based vegetation index on distributed regional rainfall-runoff LSTM model (분포형 지역화 강우-유출 LSTM 모형에서의 위성기반 식생지수의 유효성)

  • Jeonghun Lee;Dongkyun Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.230-230 / 2023
  • With the advent of LSTM (Long Short-Term Memory) networks, which overcome the long-term dependency problem of simple RNNs (Recurrent Neural Networks) in retaining past information, studies building rainfall-runoff models for individual basins have been increasing. However, a regionalized rainfall-runoff model that predicts runoff for all basins with a single model is difficult to build, because it must learn the differences in hydrological behavior that arise from differences in vegetation, topography, and other characteristics between basins. In this study, we therefore built an LSTM-based distributed regionalized rainfall-runoff model for 12 basins in South Korea and examined how auxiliary inputs beyond rainfall affect its accuracy. As inputs we used 10-minute radar rainfall over 49 grid cells (4 km²) for seven years (2012.01.01-2018.12.31), the Normalized Difference Vegetation Index (NDVI) derived from MODIS satellite imagery, 10-minute temperature, mean basin slope, and simple channel slope; 10-minute discharge records served as the output. To validate the trained model, we computed the Nash-Sutcliffe Model Efficiency Coefficient (NSE) on three basins withheld from training. With the vegetation index as an auxiliary input, the proposed model predicted streamflow in the three validation basins with high accuracy, indicating that a deep learning model can learn dynamic processes such as canopy interception and soil infiltration from satellite data.
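
A minimal PyTorch sketch of the model family described, an LSTM mapping sequences of rainfall and auxiliary inputs to discharge, is given below; the layer sizes, input ordering, and sequence length are assumptions.

```python
# Sketch: per time step the inputs could be radar rainfall, NDVI,
# temperature, mean basin slope, and channel slope; output is discharge.
import torch
import torch.nn as nn

class RainfallRunoffLSTM(nn.Module):
    def __init__(self, n_inputs=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # 10-minute discharge

    def forward(self, x):                   # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # predict from the last time step

x = torch.randn(8, 144, 5)                  # 8 basins, 144 ten-minute steps
print(RainfallRunoffLSTM()(x).shape)        # torch.Size([8, 1])
```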

Bias-correction of Dual Polarization Radar rainfall using Convolutional Autoencoder

  • Jung, Sungho;Le, Xuan Hien;Oh, Sungryul;Kim, Jeongyup;Lee, GiHa
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.166-166 / 2020
  • Recently, as localized heavy rainfall has become more frequent, the use of high-resolution radar data has been increasing. Radar rainfall estimates still show spatial and temporal gaps relative to gauge observations, and many studies apply statistical techniques to correct them. In this study, rainfall from the S-band dual-polarization radar used in flood forecasting was corrected using ConvAE, a convolutional autoencoder algorithm from the convolutional neural network family. The ConvAE model was trained on radar data sets with a 10-minute temporal resolution: radar rainfall and gauge rainfall over 790 minutes of the July 2017 Cheongju flood event. Validation showed that the corrected radar rainfall had smaller gaps relative to gauge rainfall, and spatial correction was achieved as well. We therefore judge that radar rainfall corrected with ConvAE will increase the reliability of the gridded rainfall data used in various physically based distributed hydrodynamic models.
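
A small convolutional autoencoder of the kind named in the abstract can be sketched in PyTorch as follows; the grid size, channel counts, and loss target are assumptions for illustration.

```python
# ConvAE sketch: radar rainfall fields in, corrected fields out.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):                   # x: (batch, 1, H, W) radar field
        return self.decoder(self.encoder(x))

radar = torch.rand(4, 1, 64, 64)            # four 10-minute radar fields
# training would minimize e.g. MSE against gauge-derived reference fields
print(ConvAE()(radar).shape)                # torch.Size([4, 1, 64, 64])
```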

Hybrid All-Reduce Strategy with Layer Overlapping for Reducing Communication Overhead in Distributed Deep Learning (분산 딥러닝에서 통신 오버헤드를 줄이기 위해 레이어를 오버래핑하는 하이브리드 올-리듀스 기법)

  • Kim, Daehyun;Yeo, Sangho;Oh, Sangyoon
    • KIPS Transactions on Computer and Communication Systems / v.10 no.7 / pp.191-198 / 2021
  • Because training datasets have grown large and models have grown deeper to achieve high accuracy, deep neural network training requires heavy computation and takes too long on a single node. Distributed deep learning has therefore been proposed to reduce training time by spreading computation across multiple nodes. In this study, we propose a hybrid all-reduce strategy that considers the characteristics of each layer, together with a communication-computation overlapping technique for synchronization in distributed deep learning. Because a convolution layer has fewer parameters than a fully connected layer and sits in the upper part of the network, it allows only a short overlapping window, so butterfly all-reduce is used to synchronize convolution layers, while fully connected layers are synchronized with ring all-reduce. Experiments on PyTorch show that the proposed scheme reduces training time by up to 33% compared to the PyTorch baseline.
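
The overlapping idea can be sketched with per-parameter backward hooks that launch an asynchronous all-reduce as soon as each gradient is ready; torch.distributed does not expose a per-call choice of butterfly versus ring topology, and process-group setup is omitted here, so the paper's hybrid policy appears only as a comment.

```python
# Sketch only: real frameworks bucket gradients; this shows the bare idea
# of launching communication per layer during the backward pass.
import torch.nn as nn
import torch.distributed as dist

def attach_overlapped_allreduce(model: nn.Module):
    handles = []                            # (work handle, grad tensor) pairs
    for p in model.parameters():
        def hook(grad):
            # the hybrid policy would branch here: butterfly all-reduce for
            # convolution parameters, ring all-reduce for fully connected ones
            work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)
            handles.append((work, grad))
            return grad
        p.register_hook(hook)               # fires when this grad is computed
    return handles

def wait_and_average(handles, world_size):
    # block until all asynchronous reductions finish, then average,
    # before the optimizer step
    for work, grad in handles:
        work.wait()
        grad /= world_size
    handles.clear()
```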

A Research about Time Domain Estimation Method for Greenhouse Environmental Factors based on Artificial Intelligence (인공지능 기반 온실 환경인자의 시간영역 추정)

  • Lee, JungKyu;Oh, JongWoo;Cho, YongJin;Lee, Donghoon
    • Journal of Bio-Environment Control / v.29 no.3 / pp.277-284 / 2020
  • To increase the use of intelligent methodologies in smart-farm management, estimation models are needed that can assess crop and environmental changes in real time. For an essential environmental factor such as CO2, establishing a reliable time-domain estimation model is challenging in indoor agricultural facilities, where many correlated variables are tightly coupled. This study therefore developed an artificial neural network that reduces time complexity by using environmental information from adjacent time periods as inputs and CO2 as the output. Environmental factors in the smart farm were measured continuously with sensor-integrated devices. Two models were built to predict CO2: Model 1, trained on mean data over the experiment period, and Model 2, trained on day-by-day data. Model 2, which learned from the previous day's data, performed better than Model 1, which used the 60-day average. Up to 30 days, the coefficients of determination mostly ranged between 0.70 and 0.88, with Model 2 about 0.05 higher; after 30 days, both models fell below 0.50. Comparing the coefficients of determination across the two approaches showed that, at the points requiring prediction, data from adjacent time periods yielded relatively high performance compared with a fixed neural network model.
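
The estimation setup, readings from adjacent time steps in and CO2 at the target time out, can be sketched with scikit-learn on synthetic data; the feature set, window length, and toy relation are assumptions.

```python
# Sketch: predict CO2 at time t from temperature and humidity readings at
# times t-lag .. t-1. Synthetic data stands in for the greenhouse sensors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lag = 500, 3
temp = 20 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.3, n)
hum = 60 + 10 * np.cos(np.linspace(0, 20, n)) + rng.normal(0, 0.5, n)
co2 = 400 + 3 * temp - 0.5 * hum + rng.normal(0, 2, n)   # toy relation

X = np.stack([np.column_stack([temp[i - lag:i], hum[i - lag:i]]).ravel()
              for i in range(lag, n)])
y = co2[lag:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))     # coefficient of determination R^2
```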

A Comparative Study of Reservoir Surface Area Detection Algorithm Using SAR Image (SAR 영상을 활용한 저수지 수표면적 탐지 알고리즘 비교 연구)

  • Jeong, Hagyu;Park, Jongsoo;Lee, Dalgeun;Lee, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1777-1788 / 2022
  • Reservoirs are a major water supply source in the domestic agricultural environment, and monitoring their water storage is important for utilizing and managing agricultural water resources. Remote sensing via satellite imagery can be an effective method for regularly monitoring widely distributed objects such as reservoirs. In this study, image classification and image segmentation algorithms were applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery to detect water bodies in 53 reservoirs in South Korea. Six algorithms were used: Neural Network (NN), Support Vector Machine (SVM), Random Forest (RF), Otsu, Watershed (WS), and Chan-Vese (CV), and the detection results were evaluated against in-situ images taken by drones. The correlations between the in-situ water surface area and the detected water surface area were NN 0.9941, SVM 0.9942, RF 0.9940, Otsu 0.9922, WS 0.9709, and CV 0.9736, and the larger the reservoir, the stronger the linear correlation. WS showed low recall due to undetected water bodies, while NN, SVM, and RF showed low precision due to over-detection. For water body detection from SAR imagery, we found that aquatic plants and artificial structures can cause water bodies to go undetected.
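
The simplest detector in the comparison, Otsu thresholding on the (dark) water backscatter, together with the precision/recall evaluation mentioned, can be sketched as follows; the synthetic image stands in for Sentinel-1 data.

```python
# Sketch: water appears dark in SAR backscatter, so pixels below the Otsu
# threshold are labeled water and compared against a reference mask.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
truth = np.zeros((100, 100), bool)
truth[30:70, 30:70] = True                       # reference water mask
sar = np.where(truth, rng.normal(-18, 1.5, truth.shape),
                      rng.normal(-8, 1.5, truth.shape))   # dB backscatter

water = sar < threshold_otsu(sar)                # dark pixels -> water
tp = (water & truth).sum()
precision = tp / water.sum()                     # penalizes over-detection
recall = tp / truth.sum()                        # penalizes missed water
print(f"precision={precision:.3f} recall={recall:.3f}")
```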

Real-time Hand Gesture Recognition System based on Vision for Intelligent Robot Control (지능로봇 제어를 위한 비전기반 실시간 수신호 인식 시스템)

  • Yang, Tae-Kyu;Seo, Yong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2180-2188 / 2009
  • This paper presents a vision-based real-time hand gesture recognition system for intelligent robot control. We propose a recognition system using the PCA and BP algorithms: recognition consists of a preprocessing step using PCA and a classification step using BP. PCA is a technique for reducing multidimensional data sets to lower dimensions for effective analysis; here it is applied to compute feature projection vectors for a given hand image. The BP algorithm performs parallel distributed processing and computes quickly owing to its parallel structure, and it recognizes hand gestures in real time after learning the trained eigen hand gestures. The proposed combination of PCA and BP shows improved recognition compared with PCA alone.
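
The two-stage pipeline, PCA projection followed by a backpropagation-trained network, can be sketched with scikit-learn; scikit-learn's digits dataset stands in for the hand images, and the component and layer sizes are assumptions.

```python
# Sketch: PCA reduces images to feature projection vectors ("eigen hands"),
# then a backprop-trained MLP classifies the gestures.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)              # stand-in image dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipeline = make_pipeline(
    PCA(n_components=20),                        # projection step
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
pipeline.fit(X_tr, y_tr)
print(pipeline.score(X_te, y_te))                # classification accuracy
```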