• Title/Summary/Keyword: 1D Convolutional Neural Network

Pixel-level prediction of velocity vectors on hull surface based on convolutional neural network (합성곱 신경망 기반 선체 표면 유동 속도의 픽셀 수준 예측)

  • Jeongbeom Seo;Dayeon Kim;Inwon Lee
    • Journal of the Korean Society of Visualization
    • /
    • v.21 no.1
    • /
    • pp.18-25
    • /
    • 2023
  • Recently, neural-network-based prediction of high-dimensional data has shown compelling results in many fields, including engineering. In particular, many variants of the convolutional neural network are widely used to build pixel-level prediction models for high-dimensional data such as images or physical field values from sensors. In this study, the velocity vector field of the ideal flow over a ship hull surface is estimated at the pixel level by a U-Net. First, potential flow analysis was conducted for a set of hull form data generated by a hull form transformation method. Four different neural networks with a U-shaped structure were then configured to learn the velocity vectors at the node positions of the pre-processed hull form data. For the test hull forms, the network with short skip connections gave the most accurate predictions of streamlines and velocity magnitude, and the results agreed well with the potential flow analysis. However, for cases whose speed or shape differ substantially from the training data, the network shows relatively high error in regions of large curvature.
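
A minimal sketch of a U-shaped network with short (concatenating) skip connections, of the kind the abstract describes. The channel counts, depth, and two-channel velocity output are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal U-Net-style sketch with short skip connections (PyTorch).
# Channel counts, depth, and the 2-channel (u, v) output are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):          # e.g. hull geometry in, (u, v) out
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)              # 128 = 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # short skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # short skip connection
        return self.head(d1)

velocity = SmallUNet()(torch.randn(1, 1, 64, 64))   # -> (1, 2, 64, 64) velocity map
```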

Connection stiffness reduction analysis in steel bridge via deep CNN and modal experimental data

  • Dang, Hung V.;Raza, Mohsin;Tran-Ngoc, H.;Bui-Tien, T.;Nguyen, Huan X.
    • Structural Engineering and Mechanics
    • /
    • v.77 no.4
    • /
    • pp.495-508
    • /
    • 2021
  • This study devises a novel approach, namely a quadruple 1D convolutional neural network, for detecting connection stiffness reduction in a steel truss bridge structure using experimental and numerical modal data. The method draws on expertise in two domains: first, in structural health monitoring, mode shapes and their higher-order derivatives, including the second, third, and fourth derivatives, are accurate indicators for assessing damage; second, in the machine learning literature, deep convolutional neural networks can extract relevant features from input data and then perform classification tasks with high accuracy and reduced time complexity. The efficacy and effectiveness of the present method are supported by an extensive case study of the railway Nam O bridge. It delivers highly accurate results in assessing damage localization and damage severity for single as well as multiple damage scenarios. In addition, the robustness of the method is tested in the presence of white noise, reflecting the unavoidable uncertainties of signal processing and modeling in practice. The proposed approach provides stable results with data corrupted by noise of up to 10%.
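
A hedged sketch of the feature-construction idea the abstract describes: stacking a mode shape with its second, third, and fourth derivatives (here, finite differences) as parallel 1D input channels for a small 1D CNN. The synthetic mode shape, channel layout, and class labels are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: stack a mode shape with its 2nd-4th finite-difference derivatives
# as channels for a 1D CNN. The synthetic mode shape, channel layout, and
# damage classes are illustrative assumptions, not the paper's exact pipeline.
import numpy as np
import torch
import torch.nn as nn

x = np.linspace(0.0, 1.0, 128)
mode_shape = np.sin(np.pi * x)                    # synthetic first bending mode
d2 = np.gradient(np.gradient(mode_shape, x), x)   # 2nd derivative (curvature)
d3 = np.gradient(d2, x)                           # 3rd derivative
d4 = np.gradient(d3, x)                           # 4th derivative

features = np.stack([mode_shape, d2, d3, d4])     # (4 channels, 128 nodes)
inputs = torch.tensor(features, dtype=torch.float32).unsqueeze(0)

# Small 1D CNN mapping the stacked channels to a damage-class score.
classifier = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 3),                             # e.g. intact / moderate / severe
)
scores = classifier(inputs)                       # -> (1, 3)
```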

Design of a 1-D CRNN Model for Prediction of Fine Dust Risk Level (미세먼지 위험 단계 예측을 위한 1-D CRNN 모델 설계)

  • Lee, Ki-Hyeok;Hwang, Woo-Sung;Choi, Myung-Ryul
    • Journal of Digital Convergence
    • /
    • v.19 no.2
    • /
    • pp.215-220
    • /
    • 2021
  • To reduce the harmful effects on the human body caused by the recent increase in fine dust generation in Korea, technology is needed to help predict fine dust levels and take precautions. In this paper, we propose a 1D Convolutional-Recurrent Neural Network (1-D CRNN) model to predict the level of fine dust in Korea. The proposed model combines a CNN and an RNN, and uses domestic and foreign fine dust, wind direction, and wind speed data for prediction. It achieved an accuracy of about 76% (up to 84% in some cases). The model is intended as a prediction framework for time series data sets that must take multiple data sources into account.
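
A minimal PyTorch sketch of a 1-D CRNN: a Conv1d feature extractor followed by a recurrent layer and a classification head. The input variables, hidden sizes, GRU choice, and number of risk levels are assumptions for illustration rather than the paper's design.

```python
# Minimal 1-D CRNN sketch: Conv1d feature extractor -> GRU -> risk-level head.
# Input variables, hidden sizes, and the four risk levels are illustrative assumptions.
import torch
import torch.nn as nn

class CRNN1D(nn.Module):
    def __init__(self, n_features=6, n_levels=4):
        super().__init__()
        # Convolution over the time axis; channels = input variables
        # (e.g. domestic/foreign fine dust, wind direction, wind speed).
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_levels)

    def forward(self, x):              # x: (batch, n_features, time)
        h = self.conv(x)               # (batch, 64, time)
        h = h.transpose(1, 2)          # (batch, time, 64) for the GRU
        _, last = self.rnn(h)          # last hidden state: (1, batch, 64)
        return self.head(last.squeeze(0))

logits = CRNN1D()(torch.randn(8, 6, 24))   # 8 samples, 6 variables, 24 time steps
```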

The Impact of the PCA Dimensionality Reduction for CNN based Hyperspectral Image Classification (CNN 기반 초분광 영상 분류를 위한 PCA 차원축소의 영향 분석)

  • Kwak, Taehong;Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.959-971
    • /
    • 2019
  • The convolutional neural network (CNN) is a representative deep learning algorithm that can extract high-level spatial and spectral features and has been applied to hyperspectral image classification. However, one significant drawback of applying CNNs to hyperspectral images is the high dimensionality of the data, which increases training time and processing complexity. To address this problem, several CNN-based hyperspectral image classification studies have exploited principal component analysis (PCA) for dimensionality reduction. One limitation is that spectral information of the original image can be lost through PCA. Although it is clear that the use of PCA affects accuracy and CNN training time, its impact on CNN-based hyperspectral image classification has been understudied. The purpose of this study is to analyze the quantitative effect of PCA in CNNs for hyperspectral image classification. The hyperspectral images were first transformed by PCA and fed into the CNN model while varying the size of the reduced dimensionality. In addition, 2D-CNN and 3D-CNN frameworks were applied to analyze the sensitivity of PCA with respect to the convolution kernel in the model. Experimental results were evaluated in terms of classification accuracy, training time, explained variance ratio, and training behavior. The reduced dimensionality was most efficient when the explained variance ratio reached 99.7%-99.8%. Because the 3D-kernel CNN achieved higher classification accuracy with the original data than with the PCA-reduced data, unlike the 2D-kernel CNN, the results indicate that dimensionality reduction is relatively less effective for 3D kernels.
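
A hedged scikit-learn sketch of the PCA step, with the number of components chosen by a target explained variance ratio (the abstract's 99.7%-99.8% regime). The synthetic cube dimensions and the reshaping for a CNN input are illustrative assumptions.

```python
# Sketch: PCA on the spectral axis of a hyperspectral cube, keeping enough
# components to reach a target explained variance ratio (~99.7%), then
# reshaping back for a CNN. Cube size and threshold are illustrative.
import numpy as np
from sklearn.decomposition import PCA

H, W, B = 64, 64, 200                       # height, width, spectral bands
cube = np.random.rand(H, W, B)              # stand-in for a real hyperspectral image

pixels = cube.reshape(-1, B)                # (H*W, B): one spectrum per pixel
pca = PCA(n_components=0.997)               # keep >= 99.7% explained variance
reduced = pca.fit_transform(pixels)         # (H*W, k), k chosen automatically

k = reduced.shape[1]
reduced_cube = reduced.reshape(H, W, k)     # CNN input with k "bands" instead of B
print(k, pca.explained_variance_ratio_.sum())
```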

A Study on Estimation of Lying Posture at Multiple Angles Using Single Frequency Modulated Continuous Wave (FMCW) Radar-Based CNNs (FMCW 레이더 및 CNN을 이용한 다양한 각도로 누운 자세 추정 연구)

  • Jang, Kyongseok;Zhou, Junhao;Kim, Youngok
    • Proceedings of the Korean Society of Disaster Information Conference
    • /
    • 2023.11a
    • /
    • pp.349-350
    • /
    • 2023
  • In this paper, an FMCW (Frequency Modulated Continuous Wave) radar is used to identify the condition or estimate the location of a person lying down at various angles in a disaster situation. Data for three lying postures were preprocessed and converted into images, and a 1D CNN (Convolutional Neural Network) model was trained on them to examine whether the lying postures can be distinguished at various angles; in the analysis, the 1D CNN model achieved an accuracy of 99.27%.
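
The abstract does not detail the preprocessing; a common FMCW step is a range FFT over each chirp's beat signal. Below is a hedged NumPy sketch of that step on a synthetic beat signal, purely to illustrate how a 1D range profile might be obtained before classification. All radar parameters are assumed, not taken from the paper.

```python
# Sketch of a typical FMCW preprocessing step (range FFT of the beat signal).
# The radar parameters and synthetic beat signal are illustrative assumptions;
# the abstract does not specify the preprocessing actually used.
import numpy as np

fs = 1.0e6                 # ADC sample rate [Hz] (assumed)
n_samples = 256            # samples per chirp (assumed)
slope = 30e12              # chirp slope [Hz/s] (assumed)
c = 3e8

t = np.arange(n_samples) / fs
target_range = 2.0                                  # metres (synthetic target)
beat_freq = 2 * slope * target_range / c
beat = np.cos(2 * np.pi * beat_freq * t)            # synthetic beat signal

window = np.hanning(n_samples)
range_fft = np.fft.rfft(beat * window)              # range FFT
range_profile = np.abs(range_fft)                   # 1D input for a 1D CNN
range_axis = np.fft.rfftfreq(n_samples, 1 / fs) * c / (2 * slope)
print(range_axis[np.argmax(range_profile)])         # peak near 2 m
```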

Combining 2D CNN and Bidirectional LSTM to Consider Spatio-Temporal Features in Crop Classification (작물 분류에서 시공간 특징을 고려하기 위한 2D CNN과 양방향 LSTM의 결합)

  • Kwak, Geun-Ho;Park, Min-Gyu;Park, Chan-Won;Lee, Kyung-Do;Na, Sang-Il;Ahn, Ho-Yong;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.681-692
    • /
    • 2019
  • In this paper, a hybrid deep learning model, called 2D convolution with bidirectional long short-term memory (2DCBLSTM), is presented that can effectively combine spatial and temporal features for crop classification. In the proposed model, 2D convolution operators are first applied to extract spatial features of crops, and the extracted spatial features are then used as inputs to a bidirectional LSTM model that can effectively process temporal features. To evaluate the classification performance of the proposed model, a case study of crop classification was carried out using multi-temporal unmanned aerial vehicle images acquired in Anbandegi, Korea. For comparison, we applied conventional deep learning models, including a two-dimensional convolutional neural network (CNN) using spatial features, an LSTM using temporal features, and a three-dimensional CNN using spatio-temporal features. An impact analysis of hyper-parameters on classification performance showed that using both spatial and temporal features greatly reduced crop misclassification, and the proposed hybrid model achieved the best classification accuracy compared with the conventional models that considered either spatial or temporal features alone. Therefore, the proposed model is expected to be effectively applicable to crop classification owing to its ability to consider the spatio-temporal features of crops.
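
A minimal sketch of the hybrid idea: a shared 2D convolutional extractor applied to each acquisition date of a multi-temporal patch, followed by a bidirectional LSTM over the per-date features. Patch size, band count, feature dimensions, and class count are assumptions for illustration.

```python
# Sketch of a 2D-CNN + bidirectional-LSTM hybrid: a shared 2D conv extractor
# runs on each acquisition date of a multi-temporal patch, and a BiLSTM fuses
# the per-date features. Patch size, channels, and class count are assumed.
import torch
import torch.nn as nn

class CNN2DBiLSTM(nn.Module):
    def __init__(self, in_ch=4, n_classes=5):        # e.g. 4 UAV bands, 5 crop classes
        super().__init__()
        self.spatial = nn.Sequential(                 # shared across dates
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 32-d feature per date
        )
        self.temporal = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                 # x: (batch, time, channels, H, W)
        b, t = x.shape[:2]
        feats = self.spatial(x.flatten(0, 1))         # (b*t, 32)
        feats = feats.view(b, t, -1)                  # (b, t, 32)
        out, _ = self.temporal(feats)                 # (b, t, 128)
        return self.head(out[:, -1])                  # classify from last time step

logits = CNN2DBiLSTM()(torch.randn(2, 6, 4, 16, 16))  # 2 patches, 6 dates
```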

CNN Architecture for Accurately and Efficiently Learning a 3D Triangular Mesh (3차원 삼각형 메쉬를 정확하고 효율적으로 학습하기 위한 CNN 아키텍처)

  • Hong Eun Na;Jong-Hyun Kim
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.01a
    • /
    • pp.369-372
    • /
    • 2023
  • This paper presents a new, highly accurate learning representation technique that applies a convolutional neural network (CNN) to 3D meshes composed of triangles. Learning is based on the local features of the edges and faces of the polygons that make up the mesh. Deep learning generally refers to connecting artificial neural networks in many layers, and its main targets have been 1D and 2D data such as audio files and images. As research on artificial intelligence has continued, 3D deep learning has been introduced, but unlike conventional learning, 3D data are not easy to obtain. The 3D modeling market is growing with the expansion of mixed reality and the metaverse, and technical advances have created ways to acquire data, yet it is still not easy to use 3D data directly for learning. Therefore, this paper constructs training data by organizing mesh structures used in industry into triangles, the smallest polygon unit, and proposes a learning technique with higher accuracy than existing methods.
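
A hedged NumPy sketch of the kind of per-face local features such a mesh CNN might consume (edge lengths, face area, unit normal for each triangle). The specific feature set is an assumption, since the abstract does not enumerate the features actually used.

```python
# Sketch: per-triangle local features (edge lengths, area, unit normal) from a
# vertex/face mesh representation. The feature set is an illustrative
# assumption; the abstract does not enumerate the exact features used.
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])   # a tetrahedron

tri = vertices[faces]                        # (n_faces, 3, 3) triangle corners
e0 = tri[:, 1] - tri[:, 0]
e1 = tri[:, 2] - tri[:, 1]
e2 = tri[:, 0] - tri[:, 2]
edge_lengths = np.stack([np.linalg.norm(e, axis=1) for e in (e0, e1, e2)], axis=1)
normals = np.cross(e0, -e2)                  # un-normalized face normals
areas = 0.5 * np.linalg.norm(normals, axis=1)
unit_normals = normals / (2.0 * areas)[:, None]

# (n_faces, 3 + 1 + 3) feature matrix a mesh CNN could take as per-face input.
face_features = np.concatenate([edge_lengths, areas[:, None], unit_normals], axis=1)
print(face_features.shape)                   # (4, 7)
```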

Super-resolution based on multi-channel input convolutional residual neural network (다중 채널 입력 Convolution residual neural networks 기반의 초해상화 기법)

  • Youm, Gwang-Young;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.37-39
    • /
    • 2016
  • Recently, the Super-Resolution Convolutional Neural Network (SRCNN), a CNN-based super-resolution method, has been reported to achieve good PSNR performance [1]. However, like many proposed methods that show limits in restoring high-frequency components, SRCNN also has limitations in high-frequency restoration. It is also widely known that making the SRCNN network deeper improves PSNR, but deeper networks tend to make parameter learning difficult: as the layers deepen, the gradients diverge or vanish toward the earlier (backward) layers, so the network parameters are not trained properly. Therefore, instead of deepening the network, this paper proposes composing the input as multiple channels to give the network additional information about high-frequency components. Noting that many super-resolution methods lack the ability to restore high-frequency components, we assumed that the network needs more information about them, and therefore constructed the low-resolution input images so that high-frequency components enter the network at several different strengths. We also introduced residual networks so that parameter learning can focus on restoring the high-frequency components. To verify the effectiveness of the method, experiments were conducted on the Set5 and Set14 data: compared with SRCNN, the average PSNR improved by 0.29, 0.35, and 0.17 dB for scale factors 2, 3, and 4 on Set5, and by 0.20 dB for scale factor 3 on Set14.
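
A minimal sketch of the multi-channel-input idea: stacking the upscaled image with high-frequency-emphasized versions at different strengths, and predicting a residual that is added back to the input. The high-pass construction, channel count, and network size are assumptions for illustration, not the authors' design.

```python
# Sketch: multi-channel input (upscaled image + high-frequency versions at
# different strengths) feeding a small residual CNN that predicts the detail
# to add back. Filters, strengths, and network size are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def highpass(img, strength):
    """Subtract a box-blurred copy to emphasize high-frequency content."""
    blur = F.avg_pool2d(img, kernel_size=3, stride=1, padding=1)
    return img + strength * (img - blur)

class MultiChannelResidualSR(nn.Module):
    def __init__(self, n_channels=3):                 # upscaled + 2 high-pass variants
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, upscaled):                       # bicubic-upscaled luminance
        x = torch.cat([upscaled,
                       highpass(upscaled, 1.0),
                       highpass(upscaled, 2.0)], dim=1)
        return upscaled + self.body(x)                 # residual: predict details only

low_res = torch.rand(1, 1, 32, 32)
upscaled = F.interpolate(low_res, scale_factor=3, mode='bicubic', align_corners=False)
high_res = MultiChannelResidualSR()(upscaled)          # (1, 1, 96, 96)
```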

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.31-38
    • /
    • 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collect about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and thus estimates Manhattan coordinate systems for new images that have not been seen during training. Experimental results show that our method estimates Manhattan coordinate systems with a median error of 3.157° for Google Street View images of untrained scenes used as the test set. In addition, compared with an existing calibration method [3], the proposed method shows lower intermediate errors on the test set.
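
The abstract does not specify how the Manhattan frame is parameterized at the network output; a common choice is a unit quaternion for the frame's rotation. The hedged sketch below swaps the classification head of torchvision's GoogLeNet for a 4-dimensional quaternion regressor, purely as an assumed illustration of the overall setup rather than the authors' architecture.

```python
# Sketch: GoogLeNet backbone with its classifier replaced by a quaternion head
# that regresses the rotation of the Manhattan frame. The quaternion output
# parameterization is an assumption; the paper may use a different encoding.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Randomly initialized backbone (no pretrained weights), auxiliary heads off.
backbone = models.googlenet(aux_logits=False, init_weights=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)   # 4 = unit quaternion

class ManhattanFrameNet(nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, image):
        q = self.net(image)
        return F.normalize(q, dim=1)        # project onto the unit quaternion sphere

model = ManhattanFrameNet(backbone).eval()
with torch.no_grad():
    quat = model(torch.rand(1, 3, 224, 224))   # -> (1, 4) unit quaternion
```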

Research for Radar Signal Classification Model Using Deep Learning Technique (딥 러닝 기법을 이용한 레이더 신호 분류 모델 연구)

  • Kim, Yongjun;Yu, Kihun;Han, Jinwoo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.22 no.2
    • /
    • pp.170-178
    • /
    • 2019
  • Classification of radar signals in the field of electronic warfare is the problem of discriminating threat types by analyzing enemy threat radar signals, such as those of aircraft, radars, and missiles, received through electronic warfare equipment. Recent radar systems have adopted a variety of modulation schemes that differ from those used in conventional systems and are often difficult to analyze with existing algorithms. It is also necessary to design an algorithm that is robust to signals received in real environments, given environmental influences and measurement errors arising from hardware characteristics. In this paper, we propose a radar signal classification method based on deep learning techniques that is not affected by radar signal modulation schemes or noise.
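
The abstract does not describe the network in detail; the sketch below shows one plausible setup under stated assumptions: a small 1D CNN over sampled radar signal sequences, with Gaussian noise injected during training to encourage robustness to measurement noise. Signal length, noise level, and the five threat classes are assumptions.

```python
# Sketch: a small 1D CNN over sampled radar signal sequences, with Gaussian
# noise injected at training time for robustness. The signal length, noise
# level, and five threat classes are illustrative assumptions.
import torch
import torch.nn as nn

class RadarSignalCNN(nn.Module):
    def __init__(self, n_classes=5, noise_std=0.05):
        super().__init__()
        self.noise_std = noise_std
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, signal_length)
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)   # noise augmentation
        return self.classifier(self.features(x))

model = RadarSignalCNN()
logits = model(torch.randn(4, 1, 1024))          # 4 signals, 1024 samples each
```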