• Title/Summary/Keyword: deep-learning


Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun;Gang, Su Myung;Kim, Hae Sung;Lee, Joon Jae
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.5
    • /
    • pp.626-637
    • /
    • 2018
  • Humans exchange information not only through words but also through body and hand gestures, which can be used to build effective interfaces in mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss caused by projecting 3D information onto 2D. Because recognition in 3D space covers a wider range than in 2D, the complexity of gesture recognition increases. In this paper, we propose a real-time deep learning model and application for gesture recognition in 3D space. First, to recognize gestures in 3D space, data are constructed and acquired using the Unity game engine. Second, the input vectors are normalized for training the 3D gesture recognition model. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
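As a rough illustration of the third step, the following is a minimal sketch (not taken from the paper) of a fully connected classifier that uses SELU as its activation function. The input size, layer widths, and number of gesture classes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch: a fully connected gesture classifier using SELU activations.
# The input size (e.g., 3 coordinates x 22 joints x 8 frames) and the number of
# gesture classes are hypothetical placeholders, not values from the paper.
class GestureNet(nn.Module):
    def __init__(self, input_dim=3 * 22 * 8, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.SELU(),              # SELU keeps activations approximately self-normalizing
            nn.AlphaDropout(0.05),  # dropout variant designed to pair with SELU
            nn.Linear(256, 128),
            nn.SELU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = GestureNet()
dummy = torch.randn(4, 3 * 22 * 8)  # batch of 4 normalized gesture vectors
print(model(dummy).shape)            # torch.Size([4, 10])
```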

Wavelet-like convolutional neural network structure for time-series data classification

  • Park, Seungtae;Jeong, Haedong;Min, Hyungcheol;Lee, Hojin;Lee, Seungchul
    • Smart Structures and Systems
    • /
    • v.22 no.2
    • /
    • pp.175-183
    • /
    • 2018
  • Time-series data often contain some of the most valuable information in many fields, including manufacturing. Because time-series data are relatively cheap to acquire, they (e.g., vibration signals) have become a crucial part of big data even on manufacturing shop floors. Recently, deep-learning models have shown state-of-the-art performance in analyzing big data because of their sophisticated structures and considerable computational power. Traditional models for machinery-monitoring systems have relied heavily on features selected by human experts, and the representational power of such models fails as the data distribution becomes complicated. Deep-learning models, on the other hand, automatically select highly abstracted features during the optimization process, and their representational power is better than that of traditional neural network models. However, the applicability of deep-learning models to the field of prognostics and health management (PHM) has not yet been well investigated. This study integrates the "residual fitting" mechanism inherently embedded in the wavelet transform into a convolutional neural network deep-learning structure. As a result, the architecture combines signal smoothing and classification into a single model. Validation results on rotor vibration data demonstrate that our model outperforms all other off-the-shelf feature-based models.
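The "residual fitting" idea can be pictured as a block with a smoothing branch and a detail branch, as in one level of a wavelet decomposition. The sketch below is only an illustration of that idea, not the authors' architecture; channel counts and kernel widths are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of a wavelet-like block: one branch smooths the signal (approximation)
# and the other keeps the residual (detail), mimicking one level of a wavelet
# decomposition. Channel sizes and kernel widths are illustrative only.
class WaveletLikeBlock(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.smooth = nn.Conv1d(channels, channels, kernel_size=9, padding=4)
        self.pool = nn.AvgPool1d(2)

    def forward(self, x):
        approx = self.smooth(x)   # low-frequency, "smoothed" part of the signal
        detail = x - approx       # residual fitting: what the smoother missed
        return self.pool(approx), self.pool(detail)

block = WaveletLikeBlock()
signal = torch.randn(2, 8, 1024)        # batch of vibration-like signals
approx, detail = block(signal)
print(approx.shape, detail.shape)        # (2, 8, 512) each
```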

Prediction of Static and Dynamic Behavior of Truss Structures Using Deep Learning (딥러닝을 이용한 트러스 구조물의 정적 및 동적 거동 예측)

  • Sim, Eun-A;Lee, Seunghye;Lee, Jaehong
    • Journal of Korean Association for Spatial Structures
    • /
    • v.18 no.4
    • /
    • pp.69-80
    • /
    • 2018
  • In this study, an algorithm applying deep learning to truss structures was proposed. Deep learning raises the accuracy of machine learning by building neural networks, which consist of input, hidden, and output layers. Numerous previous studies have introduced neural networks only under limited examples and conditions; this study focuses on two- and three-dimensional truss structures to prove the effectiveness of the algorithm. The training phase was divided into training models according to dataset size and number of epochs. For each case, specific data values were selected and the error rate was obtained by comparing the actual values with the predicted values; the error rate decreased as the dataset size and the number of hidden layers increased. Consequently, the results show that when the deep learning technique is applied to structural analysis, results can be predicted quickly and accurately without using a numerical analysis program.
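In spirit, this is a regression network mapping truss parameters to response quantities. The sketch below is a generic example of such a surrogate model; the feature and output dimensions, layer widths, and training settings are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Sketch of a regression network for truss response prediction: inputs might be
# member areas or load parameters, outputs nodal displacements. Sizes and the
# hidden-layer widths are placeholders, not the paper's settings.
model = nn.Sequential(
    nn.Linear(10, 64),   # 10 hypothetical input features per truss sample
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 4),    # 4 hypothetical response values (e.g., displacements)
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 10)     # synthetic training inputs
y = torch.randn(256, 4)      # synthetic target responses
for epoch in range(5):       # a few epochs just to show the training loop
    pred = model(x)
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```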

Development of microfluidic green algae cell counter based on deep learning (딥러닝 기반 녹조 세포 계수 미세 유체 기기 개발)

  • Cho, Seongsu;Shin, Seonghun;Sim, Jaemin;Lee, Jinkee
    • Journal of the Korean Society of Visualization
    • /
    • v.19 no.2
    • /
    • pp.41-47
    • /
    • 2021
  • Rivers and streams are important water supply sources in our lives. Eutrophication causes excessive growth of green algae, including Microcystis, which is harmful to ecosystems and human health, so a water purification process to remove green algae is essential. In Korea, a green algae alarm system is operated according to the concentration of green algae cells in rivers and streams, and monitoring systems are used to keep algal growth under control. However, an unmanned, compact, and automatic monitoring system would be preferable. In this study, we developed a 3D-printed device that measures the concentration of green algae cells using a microfluidic droplet generator and deep learning. The deep learning network was trained by transfer learning from a pre-trained network. The newly developed microfluidic cell counter has sufficient accuracy to be applicable to the green algae alarm system.
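A minimal sketch of the transfer-learning step is shown below. The abstract does not name the backbone, so ResNet-18 from torchvision and the two output classes are assumptions made only for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of transfer learning for algae cell images. The paper does not state
# its backbone; ResNet-18 and the two output classes here are assumptions.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                        # freeze pre-trained features

backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # e.g., cell vs. no cell

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
dummy_batch = torch.randn(8, 3, 224, 224)              # droplet image crops (placeholder)
logits = backbone(dummy_batch)
print(logits.shape)                                    # torch.Size([8, 2])
```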

Development of Virtual Simulator and Database for Deep Learning-based Object Detection (딥러닝 기반 장애물 인식을 위한 가상환경 및 데이터베이스 구축)

  • Lee, JaeIn;Gwak, Gisung;Kim, KyongSu;Kang, WonYul;Shin, DaeYoung;Hwang, Sung-Ho
    • Journal of Drive and Control
    • /
    • v.18 no.4
    • /
    • pp.9-18
    • /
    • 2021
  • This study proposes a method for creating training datasets to recognize obstacles using deep learning algorithms in automated construction machinery or autonomous vehicles. Recently, many researchers and engineers have developed various recognition algorithms based on deep learning, following the increase in computing power. In particular, image classification and image segmentation are representative deep learning recognition algorithms and are used to identify obstacles that interfere with the driving of an autonomous vehicle. Various organizations and companies have therefore started distributing open datasets, but it is unlikely that they will perfectly match the user's target environment. In this study, we created a virtual simulator interface with which users can easily create their desired training dataset. In addition, the customized dataset was further refined using an RDBMS, and the recognition rate was improved.
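To give a concrete picture of managing simulator-generated annotations in an RDBMS, the following is a minimal sketch using SQLite. The table layout, file paths, and class names are hypothetical examples, not the schema used in the study.

```python
import sqlite3

# Sketch of storing simulator-generated annotations in an RDBMS (SQLite here).
# The table layout and values are hypothetical examples, not the paper's schema.
conn = sqlite3.connect("dataset.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS annotations (
        image_path TEXT,
        class_name TEXT,
        x_min REAL, y_min REAL, x_max REAL, y_max REAL
    )
""")
conn.execute(
    "INSERT INTO annotations VALUES (?, ?, ?, ?, ?, ?)",
    ("sim/frame_0001.png", "pedestrian", 120.0, 80.0, 180.0, 240.0),
)
conn.commit()

# Pull a class-filtered subset, e.g., to assemble a customized training split.
rows = conn.execute(
    "SELECT image_path, x_min, y_min, x_max, y_max "
    "FROM annotations WHERE class_name = ?", ("pedestrian",)
).fetchall()
print(rows)
```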

A Study on Worker Risk Reduction Methods using the Deep Learning Image Processing Technique in the Turning Process (선삭공정에서 딥러닝 영상처리 기법을 이용한 작업자 위험 감소 방안 연구)

  • Bae, Yong Hwan;Lee, Young Tae;Kim, Ho-Chan
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.20 no.12
    • /
    • pp.1-7
    • /
    • 2021
  • A deep learning image processing technique was used to prevent accidents in lathe work caused by worker negligence. During lathe operation, it is very dangerous if the operator's hand is near the chuck while the chuck is rotating. However, if the chuck is stopped, it is not dangerous for the operator's hand to be close to the chuck for workpiece measurement, chip removal, or tool change. We used YOLO (You Only Look Once), a deep learning image processing framework, for object detection and classification. Lathe work images of the operator's hand, a rotating chuck, and a stopped chuck were used for training, object detection, and classification. In the experiment, object detection and class classification were performed with a success probability of over 80% at a confidence score of 0.5. Thus, we conclude that this artificial intelligence deep learning image processing technique can be effective in preventing incidents resulting from worker negligence in future manufacturing systems.
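The abstract does not state which YOLO version or framework was used; the sketch below uses the Ultralytics Python API purely to illustrate running detection with a 0.5 confidence threshold. The weight file and image path are hypothetical.

```python
from ultralytics import YOLO  # assumes the ultralytics package; the paper's
                              # exact YOLO version and framework are not stated

# Hypothetical custom weights trained on hand / chuck-rotating / chuck-stopped
# classes; "lathe_classes.pt" and the image path are placeholders.
model = YOLO("lathe_classes.pt")
results = model("lathe_frame.jpg", conf=0.5)   # keep detections with score >= 0.5

for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]
    print(cls_name, float(box.conf))
    # e.g., trigger a safety alarm if "hand" is detected near a rotating chuck
```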

A Study on Detection of Abnormal Patterns Based on AI·IoT to Support Environmental Management of Architectural Spaces (건축공간 환경관리 지원을 위한 AI·IoT 기반 이상패턴 검출에 관한 연구)

  • Kang, Tae-Wook
    • Journal of KIBIM
    • /
    • v.13 no.3
    • /
    • pp.12-20
    • /
    • 2023
  • Deep learning-based anomaly detection technology is used in various fields such as computer vision, speech recognition, and natural language processing; it is applied, for example, to monitoring manufacturing equipment for abnormalities, detecting financial fraud, detecting network intrusions, and detecting anomalies in medical images. In the field of construction and architecture, however, research on deep learning-based data anomaly detection is difficult because domain knowledge has not been digitized owing to the field's late digital transition, training data are scarce, and field data are difficult to collect and process in real time. From the viewpoint of monitoring for the environmental management of architectural spaces, this study acquires the necessary data through IoT (Internet of Things) sensors, stores them in a database, trains a deep learning model, and proposes an implementation process that detects anomaly patterns using AI (Artificial Intelligence) deep learning-based anomaly detection. The results suggest an effective solution architecture for detecting environmental anomaly patterns in the environmental management of architectural spaces and demonstrate its feasibility. The proposed method enables a quick response through real-time processing and analysis of data collected from IoT devices. To confirm its effectiveness, a performance analysis was carried out on a prototype implementation.
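The abstract does not specify the anomaly detection model, so the following is only a generic sketch of one common approach: a small autoencoder trained to reconstruct normal IoT sensor windows, with large reconstruction error flagged as an anomaly. The window size, layer sizes, and threshold rule are assumptions.

```python
import torch
import torch.nn as nn

# Generic sketch of reconstruction-based anomaly detection on IoT sensor windows
# (e.g., temperature/humidity/CO2 readings). The autoencoder layout and the
# threshold rule are assumptions; the abstract does not specify the model.
class SensorAutoencoder(nn.Module):
    def __init__(self, window=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 8), nn.ReLU())
        self.decoder = nn.Linear(8, window)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SensorAutoencoder()
normal = torch.randn(64, 32)                 # windows of "normal" readings
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                         # train to reconstruct normal data
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad()
    loss.backward()
    opt.step()

threshold = 2 * float(loss)                  # simple threshold on reconstruction error
new_window = torch.randn(1, 32) * 3          # an unusually large excursion
error = float(nn.functional.mse_loss(model(new_window), new_window))
print("anomaly" if error > threshold else "normal")
```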

Indirect Inspection Signal Diagnosis of Buried Pipe Coating Flaws Using Deep Learning Algorithm (딥러닝 알고리즘을 이용한 매설 배관 피복 결함의 간접 검사 신호 진단에 관한 연구)

  • Sang Jin Cho;Young-Jin Oh;Soo Young Shin
    • Transactions of the Korean Society of Pressure Vessels and Piping
    • /
    • v.19 no.2
    • /
    • pp.93-101
    • /
    • 2023
  • In this study, a deep learning algorithm was used to diagnose electric potential signals obtained through CIPS and DCVG, which are indirect inspection methods used to confirm the soundness of buried pipes. The deep learning algorithm consists of a CNN (Convolutional Neural Network) model for diagnosing the electric potential signal and Grad-CAM (Gradient-weighted Class Activation Mapping) for indicating the predicted flaw location. The CNN model classifies input data as normal or abnormal according to the presence or absence of a flaw in the buried pipe, and for abnormal data Grad-CAM generates a heat map that visualizes the part of the pipe where the flaw is predicted. The CIPS/DCVG signals and piping layout obtained from a 3D finite element model were used as input data for training the CNN. The trained CNN classified the normal/abnormal data with 93% accuracy, and Grad-CAM localized flaws with an average error of 2 m. These results confirm that the electric potential signals of buried pipes can be diagnosed using a CNN-based deep learning algorithm.
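As a minimal sketch of the Grad-CAM idea for a 1D-signal classifier, the example below weights the feature maps of a convolutional layer by the gradient of the "abnormal" logit and reads off the most responsible location. The tiny network and the signal length are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

# Sketch of Grad-CAM for a 1D CNN that classifies potential signals as
# normal/abnormal. The network and the signal length (512) are placeholders.
conv = nn.Sequential(nn.Conv1d(1, 16, 7, padding=3), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))

signal = torch.randn(1, 1, 512, requires_grad=True)
features = conv(signal)                      # (1, 16, 512) feature maps
features.retain_grad()
score = head(features)[0, 1]                 # logit of the "abnormal" class
score.backward()

weights = features.grad.mean(dim=2, keepdim=True)           # per-channel importance
cam = torch.relu((weights * features).sum(dim=1)).detach()  # (1, 512) heat map
flaw_index = int(cam.argmax())               # position most responsible for "abnormal"
print(flaw_index)
```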

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology
    • /
    • v.22 no.2
    • /
    • pp.168-178
    • /
    • 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and measurement by the manual and deep learning method (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: The performance of efficient segmentation and diameter measurement of TBADs based on the 3D deep CNN was both accurate and stable. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
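The Dice coefficient reported above (0.958/0.961/0.932) is the standard overlap measure between a predicted mask and a reference mask. A short, self-contained sketch of the computation on toy 3D volumes:

```python
import torch

# Sketch: Dice coefficient between a predicted 3D segmentation mask and a
# reference mask, the metric behind the scores reported in the abstract.
def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Both inputs are binary volumes of identical shape (D, H, W)."""
    intersection = (pred * target).sum()
    return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))

pred = torch.randint(0, 2, (64, 128, 128)).float()    # toy predicted lumen mask
target = torch.randint(0, 2, (64, 128, 128)).float()  # toy reference mask
print(dice(pred, target))
```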

Performance Improvement of a Deep Learning-based Object Recognition using Imitated Red-green Color Blindness of Camouflaged Soldier Images (적록색맹 모사 영상 데이터를 이용한 딥러닝 기반의 위장군인 객체 인식 성능 향상)

  • Choi, Keun Ha
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.23 no.2
    • /
    • pp.139-146
    • /
    • 2020
  • Camouflage patterns are difficult to distinguish from the surrounding background, so it is difficult to separate the object from the background when color images are used as training data for deep learning. In this paper, we propose a red-green color blindness image transformation method based on the principle that people with red-green color blindness distinguish green tones better than people with normal color vision. Experimental results show that camouflaged-soldier recognition performance is improved by the proposed ensemble deep learning model, which uses both the imitated red-green color-blind image data and the original color image data.
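Red-green color blindness is commonly simulated by projecting pixels through an LMS color-space transform. The sketch below shows one standard deuteranopia simulation of this kind as a generic illustration; it is not necessarily the exact transform used in the paper.

```python
import numpy as np

# Generic sketch of simulating red-green color blindness (deuteranopia) with an
# LMS color-space projection. The matrices are commonly published approximations
# of this simulation, not necessarily the transform used in the paper.
RGB_TO_LMS = np.array([[17.8824, 43.5161, 4.11935],
                       [3.45565, 27.1554, 3.86714],
                       [0.0299566, 0.184309, 1.46709]])
DEUTERANOPIA = np.array([[1.0, 0.0, 0.0],
                         [0.494207, 0.0, 1.24827],
                         [0.0, 0.0, 1.0]])

def simulate_deuteranopia(rgb_image: np.ndarray) -> np.ndarray:
    """rgb_image: float array in [0, 1] with shape (H, W, 3)."""
    lms = rgb_image @ RGB_TO_LMS.T             # project RGB into LMS cone space
    lms_sim = lms @ DEUTERANOPIA.T             # collapse the red-green axis
    rgb_sim = lms_sim @ np.linalg.inv(RGB_TO_LMS).T
    return np.clip(rgb_sim, 0.0, 1.0)

image = np.random.rand(4, 4, 3)                # toy color image
print(simulate_deuteranopia(image).shape)      # (4, 4, 3)
```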