• Title/Summary/Keyword: 3-D CNN

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk;Park, Jeong Jun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.239-245 / 2020
  • Recently, BIM (Building Information Modeling) has been widely utilized in the construction industry. However, most structures constructed in the past do not have BIM. For structures without BIM, applying SfM (Structure from Motion) techniques to 2D images obtained from a camera allows 3D point cloud data to be generated and BIM to be established. However, since the generated point cloud data do not contain semantic information, it is necessary to manually classify which elements of the structure they represent. Therefore, in this study, deep learning was applied to automate the process of classifying structural components. Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, was used as the deep learning network, and the components of bridge structures were learned through transfer learning. When classifying components using the data collected to verify the developed system, the components of the bridge were classified with an accuracy of 96.13%.
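
The transfer-learning setup described above can be sketched briefly. Below is a minimal illustration in Keras, assuming the bridge-component photos are organized in class-named folders; the number of classes, image size, head layers, and training settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch with Inception-ResNet-v2 (illustrative settings only).
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical component classes, e.g. deck, girder, pier, abutment, railing

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze the ImageNet features; only the new head is trained

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "bridge_components/train" is a hypothetical directory of class-named image folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bridge_components/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=10)
```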

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from previously stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based intrusion detection systems, host-based intrusion detection systems have the disadvantage of having to observe the entire system, inside and outside, but they have the advantage of detecting intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the host-based model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. In the performance evaluation, 1D vector data were converted into 3D image data so that the similarity between samples could be measured and used to determine whether each sample is normal or abnormal. Deep learning models also have the drawback of needing to be retrained whenever a new cyber attack method appears, which is inefficient because learning a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) that uses Few-Shot Learning, which performs well while learning from only a small amount of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. Accuracy was calculated using the Few-Shot Learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN was compared to confirm the performance of Siamese-CNN. Measuring Accuracy, Precision, Recall, and F1-Score showed that the recall of the proposed Siamese-CNN model increased by about 6% over the Vanilla-CNN model.
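
The core idea of comparing attack samples by similarity can be sketched as follows: a minimal Siamese-CNN in PyTorch with a contrastive loss. The layer sizes, the 32x32 single-channel "images", and the margin value are illustrative assumptions rather than the paper's actual network.

```python
# Minimal Siamese-CNN sketch: a shared encoder embeds two samples and their distance
# acts as a similarity score (small distance -> likely the same attack type).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(64))

    def forward(self, x1, x2):
        return F.pairwise_distance(self.encoder(x1), self.encoder(x2))

def contrastive_loss(distance, same_label, margin=1.0):
    # same_label = 1 for pairs of the same class, 0 otherwise
    return torch.mean(same_label * distance.pow(2) +
                      (1 - same_label) * F.relu(margin - distance).pow(2))

model = SiameseCNN()
a, b = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)   # dummy image pairs
loss = contrastive_loss(model(a, b), torch.randint(0, 2, (8,)).float())
```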

A Study on Combine Artificial Intelligence Models for multi-classification for an Abnormal Behaviors in CCTV images (CCTV 영상의 이상행동 다중 분류를 위한 결합 인공지능 모델에 관한 연구)

  • Lee, Hongrae;Kim, Youngtae;Seo, Byung-suk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.498-500 / 2022
  • CCTV protects people and assets by identifying dangerous situations and enabling prompt responses. However, it is difficult to continuously monitor the growing number of CCTV feeds, so a device is needed that monitors CCTV images continuously and issues a notification when abnormal behavior occurs. Recently, many studies have applied artificial intelligence models to image data analysis. This study learns spatial and temporal characteristics of image data simultaneously to classify the various abnormal behaviors that can be observed in CCTV images. As the artificial intelligence model used for learning, we propose a multi-classification deep learning model that combines an end-to-end 3D convolutional neural network (CNN) and ResNet.
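
To illustrate the clip-level classification idea, a minimal 3D CNN in PyTorch is sketched below; the clip size (16 frames of 112x112 RGB), the channel widths, and the five behavior classes are illustrative assumptions, not the architecture proposed in the paper (which additionally combines ResNet).

```python
# Minimal 3D-CNN sketch: convolutions over (time, height, width) capture spatial and
# temporal information jointly for clip-level multi-class prediction.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                      # pool space only at first
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # then pool time and space
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                  # clip: (batch, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

model = Simple3DCNN()
logits = model(torch.randn(2, 3, 16, 112, 112))   # two 16-frame clips -> (2, 5) scores
```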

Restoring Motion Capture Data for Pose Estimation (자세 추정을 위한 모션 캡처 데이터 복원)

  • Youn, Yeo-su;Park, Hyun-jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.5-7 / 2021
  • Motion capture data files for pose estimation may contain inaccurate data depending on the surrounding environment and the degree of movement, so correction is necessary. In the past, inaccurate data were restored manually in post-processing, but recently various kinds of neural networks, such as LSTM and R-CNN, have been used to automate this. However, since neural network-based restoration methods require substantial computing resources, this paper proposes a method that reduces the required computing resources while maintaining a restoration rate comparable to neural network-based methods. The proposed method automatically restores inaccurate motion capture data using posture measurement data (c3d). In experiments, the data restoration rate ranged from 89% to 99% depending on the degree of inaccuracy of the data.
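
The paper does not spell out its restoration rule, but one low-cost, non-neural way to restore dropped marker samples can be sketched as simple temporal interpolation. The (frames, markers, xyz) array layout and the use of NaN for missing samples are assumptions for illustration, and parsing of the c3d file itself (e.g. with a library such as ezc3d) is omitted.

```python
# Fill short gaps in marker trajectories by linear interpolation over time (sketch only).
import numpy as np

def restore_markers(positions):
    """positions: (frames, markers, 3) array with NaN where capture failed."""
    frames = np.arange(positions.shape[0])
    restored = positions.copy()
    for m in range(positions.shape[1]):
        for axis in range(3):
            track = restored[:, m, axis]          # view into `restored`
            bad = np.isnan(track)
            if bad.any() and (~bad).sum() >= 2:
                track[bad] = np.interp(frames[bad], frames[~bad], track[~bad])
    return restored

data = np.random.rand(100, 20, 3)
data[40:45, 3, :] = np.nan                        # simulate a short dropout of one marker
fixed = restore_markers(data)
```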

Implementation of CNN-based Masking Algorithm for Post Processing of Aerial Image

  • CHOI, Eunsoo;QUAN, Zhixuan;JUNG, Sangwoo
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.7-14 / 2021
  • Purpose: To solve urban problems, empirical research is actively being conducted to implement smart cities based on various ICT technologies, and digital twin technology is needed to implement a smart city effectively. A digital twin is essential for the realization of a smart city: it is a 3D-based virtual environment that intuitively visualizes multidimensional data from the real world. A digital twin is implemented on the premise of the convergence of GIS and BIM, and much of the time in the data construction process is spent on data pre-processing and labeling. For a digital twin, data quality is prioritized for consistency with reality, but visual inspection of the data has limits. Therefore, to reduce the time required for digital twin construction and improve its quality, we attempted to detect buildings in aerial images using Mask R-CNN, a deep learning-based masking algorithm. If the results of this study are advanced and used to build digital twin data, a high-quality smart city could be realized.
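
As a rough sketch of the building-masking step, the snippet below runs an off-the-shelf Mask R-CNN from torchvision on an aerial tile; the COCO-pretrained weights, the file name, and the 0.7 score threshold are stand-ins, since building detection would require fine-tuning on labeled aerial imagery as in the study.

```python
# Instance segmentation on one aerial tile with torchvision's Mask R-CNN (sketch only).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained stand-in
model.eval()

tile = convert_image_dtype(read_image("aerial_tile.png"), torch.float)  # hypothetical file
with torch.no_grad():
    output = model([tile])[0]            # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.7            # arbitrary confidence threshold
masks = output["masks"][keep] > 0.5      # binary masks of detected instances
```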

Deep Learning Based Digital Staining Method in Fourier Ptychographic Microscopy Image (Fourier Ptychographic Microscopy 영상에서의 딥러닝 기반 디지털 염색 방법 연구)

  • Seok-Min Hwang;Dong-Bum Kim;Yu-Jeong Kim;Yeo-Rin Kim;Jong-Ha Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.2 / pp.97-106 / 2022
  • H&E staining is necessary to distinguish cells, but staining them directly requires considerable money and time. The purpose of this study is to convert the phase image of unstained cells into the amplitude image of stained cells. Phase and amplitude images were generated in Matlab from image data taken with FPM, and visually distinguishable images were obtained through normalization. Using a GAN algorithm, a fake amplitude image similar to the real amplitude image was generated from the phase image, and cells were distinguished by applying Mask R-CNN to the fake amplitude image for object-level segmentation. In the experiments, the maximum and minimum values were 3.3e-1 and 6.8e-2 for the D loss, 6.9e-2 and 2.9e-2 for the G loss, 5.8e-1 and 1.2e-1 for the A loss, and 1.9e0 and 3.2e-1 for Mask R-CNN.
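
The phase-to-amplitude translation can be sketched as a conditional GAN training step; the tiny generator and discriminator below and the L1 weight of 100 are illustrative stand-ins (in the spirit of Pix2Pix), not the network used in the paper.

```python
# One conditional-GAN training step: D judges (phase, amplitude) pairs, G maps phase -> amplitude.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())            # fake amplitude
D = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)) # real/fake logit

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

phase, real_amp = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)       # dummy batch

# Discriminator step: real pairs labeled 1, generated pairs labeled 0.
fake_amp = G(phase).detach()
d_loss = bce(D(torch.cat([phase, real_amp], 1)), torch.ones(4, 1)) + \
         bce(D(torch.cat([phase, fake_amp], 1)), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D and stay close to the real amplitude image.
fake_amp = G(phase)
g_loss = bce(D(torch.cat([phase, fake_amp], 1)), torch.ones(4, 1)) + 100 * l1(fake_amp, real_amp)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```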

Global lifelog media cloud development and deployment (글로벌 라이프로그 미디어 클라우드 개발 및 구축)

  • Song, Hyeok;Choe, In-Gyu;Lee, Yeong-Han;Go, Min-Su;O, Jin-Taek;Yu, Ji-Sang
    • Broadcasting and Media Magazine / v.22 no.1 / pp.35-46 / 2017
  • A global lifelog media cloud service requires network technology, cloud technology, multimedia app technology, and highlighting engine technology. This paper presents the results of developing the technologies for the media cloud service and the service itself. The highlighting engine includes facial expression recognition, image classification, saliency map generation, motion analysis, video analysis, face recognition, and audio analysis technologies. For facial expression recognition, an optimized version of AlexNet achieved 1.82% better recognition performance than AlexNet and was 28 times faster in processing speed. For action recognition, the proposed 3D CNN method improved results by 0.8% over existing recognition methods based on 2D CNNs and LSTMs. Pandora TV Co., Ltd. has developed a cloud-based lifelog video generation service and is currently running a test service.

Face Morphing Using Generative Adversarial Networks (Generative Adversarial Networks를 이용한 Face Morphing 기법 연구)

  • Han, Yoon;Kim, Hyoung Joong
    • Journal of Digital Contents Society / v.19 no.3 / pp.435-443 / 2018
  • Recently, with the explosive growth of computing power, various methods such as RNNs and CNNs have been proposed under the name of deep learning and have solved many computer vision problems. The Generative Adversarial Network (GAN), introduced in 2014, showed that computer vision problems can be addressed with unsupervised learning and that the generative domain can also be studied using learned generators. GANs are being developed in various forms in combination with various models. Machine learning has difficulty with data collection: if a data set is too large, it is hard to refine it into an effective data set by removing noise, and if it is too small, small differences become large noise and learning is not easy. In this paper, we apply a deep CNN model that extracts the facial region from image frames as a preprocessing filter for a GAN model, and propose a method that learns stably from a limited collection of data for two persons to produce composite images of various facial expressions.
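
The face-region preprocessing step can be sketched as below, using MTCNN from the facenet-pytorch package as one readily available CNN-based face detector; the package choice, crop size, and directory names are assumptions, not necessarily the model used in the paper.

```python
# Crop and align face regions from video frames before GAN training (sketch only).
from pathlib import Path
from PIL import Image
from facenet_pytorch import MTCNN

detector = MTCNN(image_size=128, margin=10)        # returns an aligned face crop, or None

def build_face_dataset(frame_dir, out_dir):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i, path in enumerate(sorted(Path(frame_dir).glob("*.jpg"))):
        face = detector(Image.open(path).convert("RGB"),
                        save_path=str(Path(out_dir) / f"face_{i:05d}.jpg"))
        if face is None:
            continue                               # skip frames with no detected face

build_face_dataset("video_frames/person_a", "faces/person_a")   # hypothetical paths
```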

Estimation of Significant Wave Heights from X-Band Radar Based on ANN Using CNN Rainfall Classifier (CNN 강우여부 분류기를 적용한 ANN 기반 X-Band 레이다 유의파고 보정)

  • Kim, Heeyeon;Ahn, Kyungmo;Oh, Chanyeong
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.3 / pp.101-109 / 2021
  • Wave observations using a marine X-band radar are conducted by analyzing the radar signal backscattered from the sea surface. Wave parameters are extracted using a Modulation Transfer Function obtained from 3D wavenumber-frequency spectra, which are calculated by 3D FFT of time series of sea surface images (42 images per minute). The accuracy of the estimated significant wave height therefore depends critically on the quality of the radar images. Wave observations during Typhoons Maysak and Haishen in the summer of 2020 showed large errors in the estimated significant wave heights because the radar images were degraded by raindrops falling on the sea surface. This paper presents an algorithm developed to increase the accuracy of wave height estimation from radar images by adopting a convolutional neural network (CNN) that automatically classifies radar images into rain and non-rain cases. An algorithm for deriving Hs is then proposed by creating different ANN models and selectively applying them to the rain and non-rain cases. The developed algorithm was applied to heavy rain cases during the typhoons and showed substantially improved results.
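
The two-stage structure (a CNN rain classifier gating separate Hs models) can be sketched as follows; the layer sizes, the 10-dimensional spectral feature vector, and the plain MLP regressors are illustrative assumptions standing in for the paper's ANN models.

```python
# A rain/non-rain CNN gates which regressor is used to estimate the significant wave height Hs.
import torch
import torch.nn as nn

rain_classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2))                 # logits: [non-rain, rain]

hs_model_norain = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
hs_model_rain   = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

def estimate_hs(radar_image, spectral_features):
    """radar_image: (1, 1, H, W) tensor; spectral_features: (1, 10) tensor."""
    is_rain = rain_classifier(radar_image).argmax(dim=1).item() == 1
    model = hs_model_rain if is_rain else hs_model_norain
    return model(spectral_features)

hs = estimate_hs(torch.rand(1, 1, 64, 64), torch.rand(1, 10))   # dummy inputs
```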

3D Clothes Modeling of Virtual Human for Metaverse (메타버스를 위한 가상 휴먼의 3차원 의상 모델링)

  • Kim, Hyun Woo;Kim, Dong Eon;Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.5 / pp.638-653 / 2022
  • In this paper, we propose a new method of creating a 3D virtual human that reflects the pattern of the clothes worn by a person in a high-resolution full-body frontal image together with the person's body shape data. To obtain the clothes pattern, we perform instance segmentation and clothes parsing using Cascade Mask R-CNN. We then use Pix2Pix to blur the boundaries and estimate the background color, and obtain the UV map of the 3D clothes mesh through UV-map-based warping. We also obtain body shape data using SMPL-X and deform the original clothes and body mesh. With the clothes UV map and the deformed clothes and body mesh, the user can finally see an animation of a 3D virtual human reflecting the user's appearance, rendered with a state-of-the-art game engine, i.e., Unreal Engine.
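
The UV-map-based warping step can be sketched with OpenCV: given a per-pixel UV map that tells where each texture pixel should sample from in the source photo, cv2.remap pulls the clothes colors into the texture atlas. The UV map itself (produced by the segmentation and mesh pipeline) is assumed to be given, and the file names are hypothetical.

```python
# Pull clothes colors from the photo into a texture atlas using a given UV map (sketch only).
import cv2
import numpy as np

photo = cv2.imread("front_view.jpg")          # hypothetical segmented clothes photo
uv = np.load("clothes_uv_map.npy")            # hypothetical (H, W, 2) map in [0, 1]

map_x = (uv[..., 0] * (photo.shape[1] - 1)).astype(np.float32)
map_y = (uv[..., 1] * (photo.shape[0] - 1)).astype(np.float32)

texture = cv2.remap(photo, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("clothes_texture.png", texture)
```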